Abstract. The study seeks to answer the question of what security concerns arise from the fact that discourse on online platforms is increasingly monitored by artificial intelligence (AI). To answer this question, it briefly presents the findings of the relevant literature and of relevant empirical research, together with the most important interpretive frameworks and concepts. It finds that, with regard to the moderation of online discourse by AI, two kinds of security concerns and threats can be identified: those directed from the inside out and those directed from the outside in. Inside-out threats arise from the practical operation of AI and thus from its role in influencing discourse, while outside-in threats stem from the decisions of the people behind the AI.
Summary. The premise of this study is that online platforms and other public spaces in cyberspace can provide a framework for citizens to discuss issues that affect society as a whole or wider groups within it, thereby contributing to the strengthening of democratic processes. The study seeks to answer the question of what security concerns are posed by the fact that discourse on online platforms is increasingly moderated by artificial intelligence. To answer this question, it briefly presents the findings of the Hungarian and international literature and of relevant empirical research, and reviews the theoretical, conceptual and interpretive frameworks whose presentation, together with the definition of the relevant concepts, is essential for treating the topic. It introduces the concepts of the online platform, online discourse and moderation, and surveys the applications of artificial intelligence in content moderation, with particular attention to the related security concerns and their interpretation. The study then examines those aspects of the moderation of discourse on online platforms by artificial intelligence that may broaden the horizons of the security interpretation of the issue. It finds that, in connection with the moderation of online discourse by artificial intelligence, threats can be identified in two directions: in the form of "inside-out" and "outside-in" security concerns and threats. Inside-out threats are factors that arise from the practical operation of artificial intelligence and thus from its role in influencing discourse; in this way they can also endanger the world beyond the discourse on online platforms, such as society, politics or even the economy. Outside-in threats, by contrast, are not due to flaws built into the algorithm, that is, to the particularities of its operation; they come instead from the decisions of the owners, developers and managers of artificial intelligence, the people behind it. The study concludes that in the moderation of discourse by artificial-intelligence-based decision-making systems such as online platforms, it is not enough to base accountability on the platforms' conscience and self-restraint. It is necessary to subject platforms that use artificial intelligence in content moderation to a higher level of external and objective control and, in this connection, to create a legal framework that defines the main guidelines for discourse moderated by artificial intelligence on online platforms and also enforces those guidelines.