Az online platformok diskurzusának moderációja és biztonsági aggályai

Moderation and security concerns about discourses on online platforms

Scientia et Securitas
Author: Melinda Pintér

Abstract. The study seeks to answer the question of what security concerns arise from the fact that the discourse on online platforms is increasingly monitored by artificial intelligence (AI). To answer this question, it briefly presents the findings of the literature on the topic and of the relevant empirical research, as well as the most important interpretive frameworks and concepts. It finds that, with regard to the AI moderation of online discourses, both inside-out and outside-in security concerns and threats can be identified. The inside-out threats arise from the practical operation of AI and thus from its role in influencing discourse, while the outside-in threats stem from the decisions of the people behind the AI.

Summary. The premise of this study is that online platforms and other public spaces in cyberspace can provide a framework for citizens to discuss issues affecting society as a whole or wider groups within it, thereby contributing to the strengthening of democratic processes. The study seeks to answer the question of what security concerns arise from the fact that the discourse on online platforms is increasingly moderated by artificial intelligence. To answer this question, it briefly presents the findings of the Hungarian and international literature on the topic and of the relevant empirical research, and reviews the theoretical, conceptual and interpretive frameworks whose presentation, together with the conceptualization of the relevant terms, is essential for treating the subject. It introduces the concepts of the online platform, online discourse and moderation, and describes the applications of artificial intelligence in content moderation, with particular attention to the related security concerns and their interpretation. The study then reviews those aspects of the AI moderation of discourse on online platforms that may also broaden the horizons of the security interpretation of the issue. It finds that, in connection with the moderation of online discourses by artificial intelligence, threats can be identified in two directions: as "inside-out" and as "outside-in" security concerns and threats. The inside-out threats are factors arising from the practical operation of artificial intelligence and thus from its role in influencing discourse; in this way they can also endanger the world beyond the discourses on online platforms, such as society, politics or even the economy. The outside-in threats, by contrast, are not due to flaws built into the algorithm, i.e. to the particularities of its operation; they come instead from the decisions of the owners, developers and managers of artificial intelligence, the people behind it. The study concludes that in artificial intelligence-based decision-making systems that moderate discourse, such as online platforms, it is not enough to base accountability on the platforms' conscience and self-restraint. It is absolutely necessary to subject platforms that use artificial intelligence in content moderation to a higher level of external and objective control, and to this end to create a legal framework that defines the main guidelines for AI-moderated discourse on online platforms and also enforces those guidelines.

Open access