Author:
Melinda Pintér, Pázmány Péter Catholic University, Budapest, Hungary


Abstract. The study seeks to answer the question of what security concerns arise from the fact that discourse on online platforms is increasingly moderated by artificial intelligence (AI). To answer this question, it briefly presents the findings of the relevant literature and empirical research, together with the most important interpretive frameworks and concepts. It finds that, with regard to the moderation of online discourse by AI, security concerns and threats can be identified in two directions: "inside-out" and "outside-in". Inside-out threats arise from the practical operation of the AI itself, and thus from its role in shaping discourse, while outside-in threats stem from the decisions of the people behind the AI.

Summary. The premise of this study is that online platforms and other public spaces in cyberspace can provide a framework in which citizens discuss issues affecting society as a whole or wider social groups, thereby strengthening democratic processes. The study asks what security concerns arise from the fact that discourse on online platforms is increasingly moderated by artificial intelligence (AI). To answer this question, it briefly reviews the Hungarian and international literature and relevant empirical research, together with the theoretical, conceptual and interpretive frameworks whose presentation is essential for treating the topic. It introduces the concepts of online platform, online discourse and moderation, and presents how AI is applied in content moderation, with particular attention to the related security concerns and their interpretation. The study then examines aspects of AI-driven moderation of platform discourse that broaden the security interpretation of the issue. It finds that threats connected to the moderation of online discourse by AI can be identified in two directions: as "inside-out" and as "outside-in" security concerns. Inside-out threats arise from the practical operation of AI and thus from its role in influencing discourse; as such, they can also endanger the world beyond platform discourse, including society, politics and even the economy. Outside-in threats, by contrast, are not due to flaws built into the algorithm or to the particularities of its operation; they come from the decisions of the owners, developers and managers of the AI, that is, from the people behind it. The study concludes that, for discourse moderation in AI-based decision-making systems such as online platforms, it is not enough to base accountability on the platforms' conscience and self-restraint. Platforms that use AI in content moderation must be subjected to a higher level of external and objective control, and a legal framework must be created that defines the main guidelines for AI-moderated discourse on online platforms and also enforces those guidelines.
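To make the two directions of threat more tangible, the sketch below is a purely illustrative toy moderation pipeline in Python. It is not the study's method and not any platform's real system; the classifier, the token list, the threshold and the owner blocklist are all invented for the example. It only marks where the two risk surfaces sit: removals driven by the model's own scoring (the "inside-out" direction, including false positives that silence lawful speech) and removals driven by the operator's policy choices (the "outside-in" direction).

```python
# Illustrative sketch only: a toy moderation pipeline, not the system analysed in the study.
# It separates two risk surfaces discussed in the summary:
#   1) "inside-out": errors intrinsic to the model and its threshold (false positives remove lawful speech),
#   2) "outside-in": policy choices made by the people operating the system (owner-defined blocklists).
from dataclasses import dataclass
from typing import List


@dataclass
class Decision:
    text: str
    removed: bool
    reason: str


def toy_toxicity_score(text: str) -> float:
    """Stand-in for an ML classifier: counts 'risky' tokens. Real systems use trained models."""
    risky = {"hate", "attack", "idiot"}
    tokens = text.lower().split()
    return sum(t.strip(".,!?") in risky for t in tokens) / max(len(tokens), 1)


def moderate(posts: List[str], threshold: float, owner_blocklist: List[str]) -> List[Decision]:
    decisions = []
    for post in posts:
        lowered = post.lower()
        # "Outside-in" lever: removal driven by the operator's policy, not by the model itself.
        if any(term in lowered for term in owner_blocklist):
            decisions.append(Decision(post, True, "owner policy (blocklist)"))
            continue
        # "Inside-out" lever: removal driven by the model's own scoring and the chosen threshold.
        score = toy_toxicity_score(post)
        decisions.append(Decision(post, score >= threshold, f"classifier score={score:.2f}"))
    return decisions


if __name__ == "__main__":
    posts = [
        "We should attack this problem together.",    # benign use of 'attack': likely false positive
        "You are an idiot and I hate you.",           # likely true positive
        "The new regulation deserves public debate.", # benign political speech
    ]
    # The blocklist entry is deliberately provocative to show an owner decision suppressing debate.
    for d in moderate(posts, threshold=0.15, owner_blocklist=["regulation"]):
        print(f"removed={d.removed!s:5} | {d.reason:28} | {d.text}")
```

Running the example removes the benign first post because of a false positive on the word "attack" (inside-out), and removes the third post solely because of the hypothetical owner blocklist (outside-in), mirroring the two kinds of concern distinguished in the study.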





