Author:
György Kampis, Eötvös Loránd University, Faculty of Science, Department of Ethology, Budapest, Hungary

Open access

Összefoglalás (translated). This paper is based on my lecture on the topic. I first address general questions, then turn to the planned “EU AI Act”, briefly present a VW project, discuss “explainable AI”, and then mention a Hungarian initiative of my own, the Alphie (Alfi) project. An outlook closes the paper.

Summary. This paper is based on a lecture on the topic. In my other (German) affiliation I am the manager of a large-scale EU project called “HumanE AI Net” (funded with 12m Euro), comprising 53 leading EU institutions, including large universities (UCL London, LMU Munich, Sorbonne, Sussex, and ELTE), networks of research institutes (Fraunhofer, Max Planck Gesellschaft, INRIA, and CNR Italy), and large international companies (ING Bank, SAP, Philips, Airbus). In the paper I discuss general issues related to humane AI, the planned EU AI Act, social credit systems, explainable AI, and the Alphie project, respectively.

In April 2021, the European Commission proposed a regulation on artificial intelligence, known as the AI Act. The regulation aims at human-centred AI in a European dimension. Although it is still only a draft, the stakes are high. The planned law, however, has faults (I argue here) that should be corrected before the text passes into law.

Another subject to discuss is the study – and prohibition (at least in Europe) – of social credit systems. The original “Social Credit System” is a national credit rating and blacklist developed by the Government of the People’s Republic of China. Proponents of the system claim that it helps regulate social behaviour, improves citizens’ ‘trustworthiness’ (which includes paying taxes and bills on time) and promotes the spread of traditional moral values. Critics of the system, however, argue that it goes far beyond the rule of law and violates the legitimate rights of people – in particular, the right to reputation, privacy and personal dignity – and that it can be a tool for extensive government surveillance and suppression of dissent.

“Explainable AI” (XAI) has become a hot topic in recent years. AI applications are mostly “opaque”: this is especially true for learning systems, and by definition for neural networks (NN). The current fashion, “deep learning”, usually means the application of a particularly opaque NN anyway. Typically, we do not know what the system is doing or why. So, let us change that! With this tenet, XAI was born. I review some solutions to the problem.
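
As one concrete illustration of a model-agnostic XAI technique, permutation feature importance probes an opaque model purely from the outside: shuffle one input feature at a time and measure how much predictive accuracy drops. The sketch below is a minimal, self-contained example with an invented toy “black box” and synthetic data (it is not code from any of the projects discussed):

```python
import random

# A toy "opaque" classifier: we only see predict(), not its internals.
def predict(x):
    # Hidden rule: the label depends mostly on feature 0, slightly on feature 1,
    # and not at all on feature 2.
    return 1 if 2.0 * x[0] + 0.3 * x[1] > 1.0 else 0

def accuracy(data, labels):
    return sum(predict(x) == y for x, y in zip(data, labels)) / len(data)

def permutation_importance(data, labels, n_features, seed=0):
    """Post-hoc XAI: shuffle one feature column at a time and record the
    drop in accuracy. A large drop means the model relies on that feature."""
    rng = random.Random(seed)
    base = accuracy(data, labels)
    drops = []
    for j in range(n_features):
        col = [x[j] for x in data]
        rng.shuffle(col)
        perturbed = [list(x) for x in data]
        for i, v in enumerate(col):
            perturbed[i][j] = v
        drops.append(base - accuracy(perturbed, labels))
    return drops

# Synthetic inputs, labelled by the black box itself (so base accuracy is 1.0).
rng = random.Random(1)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
y = [predict(x) for x in X]
imp = permutation_importance(X, y, 3)
```

Here `imp[0]` comes out largest (the model leans on feature 0), `imp[1]` is small, and `imp[2]` is exactly zero, since the model ignores feature 2. The technique explains the model's behaviour without ever opening the black box, which is precisely why it is popular for opaque deep networks.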

In the paper I also mention an application, Alphie, the first version of which was developed in the OTKA project “Good Mobile” and which is now supported by the MI National Laboratory. Alphie is a science-based, playful application for children that helps them use digital tools more consciously and within limits, while developing a variety of skills. It performs the functions of a ‘grandmother’ who shows emotions towards the child: it can be, for example, angry or loving. The application makes the corresponding sounds and facilitates real social interactions (e.g. it sends the child out to play football).





Scientia et Securitas
Language: Hungarian, English
Size: A4
Year of Foundation: 2020
Volumes per Year: 1
Issues per Year: 4
Founder: Academic Council of Home Affairs and Association of Hungarian PhD and DLA Candidates
Founder's Address: H-2090 Remeteszőlős, Hungary, Nagykovácsi út 3.; H-1055 Budapest, Hungary, Falk Miksa utca 1.
Publisher: Akadémiai Kiadó
Publisher's Address: H-1117 Budapest, Hungary, 1516 Budapest, PO Box 245.
Responsible Publisher: Chief Executive Officer, Akadémiai Kiadó
Applied Licenses: CC-BY 4.0, CC-BY-NC 4.0
ISSN: 2732-2688 (online), 3057-9759 (print)
