Priscian’s Institutiones grammaticae is a systematic exposition of Latin grammar. Among the writings of George of Trebizond (Georgius Trapezuntius, 1395–ca. 1474) there is a grammar entitled De partibus orationis ex Prisciano compendium, a grammatical catechism written in Venice in the early 1430s. The primary aim of the present paper is to analyse Trebizond’s procedure in “condensing” Priscian, and to compare the Institutiones grammaticae with the De partibus orationis ex Prisciano compendium.
Summary. Artificial intelligence has undergone enormous development in recent
years, as a result of which it can now be found in some form in many different
fields and has become an integral part of much research. This is mainly due to
ever-improving learning algorithms and to the Big Data environment, which can
supply enormous amounts of training data. The aim of this article is to
summarize the current state of the technology. It reviews the history of
artificial intelligence and a large part of the application areas in which
artificial intelligence is a central element. It also points out the various
security vulnerabilities of artificial intelligence, as well as its
applicability in the field of cybersecurity. The article presents a slice of
current artificial intelligence applications that illustrate the wide range of
uses of the technology.
Summary. In recent years artificial intelligence has seen substantial
improvements, which have driven its adoption in many different areas and made
it the focus of much research. This can be attributed to advances in learning
algorithms and in Big Data techniques, which can provide tremendous amounts of
training data.
The goal of this paper is to summarize the current state of artificial intelligence.
We present its history, introduce the terminology used, and show technological
areas that use artificial intelligence as a core part of their applications.
The paper also introduces the security concerns related to artificial
intelligence solutions, and highlights how the technology can be used to
enhance security in different applications. Finally, we present future
opportunities and possible improvements. The paper shows some general
artificial intelligence applications that demonstrate the wide range of uses
of the technology.
Many applications are built around artificial intelligence technologies, and
there are many services that a developer can use to achieve intelligent
behavior. The foundation of the different approaches is a well-designed
learning algorithm, while the key to every learning algorithm is the quality
of the data set used during the learning phase. Some applications focus on
image processing, such as face detection or gesture detection to identify a
person. Other solutions compare signatures, while still others detect objects
or license plates (for example, the automatic parking system of an office
building). Artificial intelligence combined with accurate data handling can
also be used for anomaly detection in a real-time system. For example, there
is ongoing research into anomaly detection at the ZalaZone autonomous car test
field based on the collected sensor data. There are also more general
applications, such as user profiling and automatic content recommendation
based on behavior analysis techniques.
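As a small illustration of sensor-stream anomaly detection of the kind mentioned above, the sketch below flags readings that deviate strongly from a trailing window of recent values. The window size, threshold, and function name are illustrative assumptions, not the ZalaZone system's actual method.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=20, k=3.0):
    """Flag readings more than k standard deviations from a trailing window."""
    history = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            flags.append(sigma > 0 and abs(x - mu) > k * sigma)
        else:
            flags.append(False)   # not enough history to judge yet
        history.append(x)
    return flags
```

For example, `detect_anomalies([10.0, 11.0] * 15 + [100.0])` flags only the final spike; production systems would typically replace the z-score rule with a learned model over multivariate sensor data.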
However, artificial intelligence technology also has security risks that need
to be mitigated before an application is deployed publicly. One concern is the
generation of fake content, which must be detected with other algorithms that
focus on small but noticeable differences. It is also essential to protect the
data used by the learning algorithm and to protect the logic flow of the
solution. Network security can help to protect these applications.
Artificial intelligence can also help strengthen the security of a solution,
as it is able to detect network anomalies and signs of a security issue.
Therefore, the technology is widely used in IT security to prevent different
types of attacks.
As Big Data technologies, computational power, and storage capacity improve
over time, there is room for improved artificial intelligence solutions that
can learn from large, real-time data sets. Advancements in sensors can also
help to provide more precise data for different solutions. Finally, advanced
natural language processing can help with communication between humans and
computer-based systems.
Authors: Khalid Kahloot, Kristof Csorba, and Peter Ekler
The study of respiration forms a major area of application and publication in the medical field. It characterizes abnormalities in the breathing pattern, which assists in selecting the appropriate treatment methods. In some cases, respiratory characteristics unravel and point out potential diseases. A medical team gathered data from randomly selected recruits. A huge dataset was prepared, which captures the volume and velocity of the air breathed by the target recruits. This paper presents the results of applying signal processing, dimension reduction, and data mining techniques to this dataset. In particular, a convolution filter, singular value decomposition, and density-based spatial clustering of applications with noise (DBSCAN) were applied. Inhalation behaviors were categorized into nine groups, albeit with stochastic noise. Some of the groups are large and distinct enough to evaluate the velocity-versus-volume model of inhalation with the use of eight types of inhalers. The results will be considered by the medical team when choosing, from five types of inhalers, the one appropriate for each group.
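The pipeline named in the abstract above (convolution filtering, SVD-based dimension reduction, DBSCAN clustering) can be sketched roughly as follows. This is a minimal illustration under assumed data shapes and parameter values, not the authors' implementation; the `dbscan` here is a bare-bones rendering of the standard algorithm.

```python
import numpy as np

def smooth(signal, width=5):
    """Convolution (moving-average) filter to suppress measurement noise."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def reduce_svd(X, k=2):
    """Reduce each row of X to k coordinates via singular value decomposition."""
    Xc = X - X.mean(axis=0)                       # center the signal matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # project onto top-k directions

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: one cluster label per point, -1 marks noise."""
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    neighbors = lambda i: np.where(
        np.linalg.norm(points - points[i], axis=1) <= eps)[0]
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(neighbors(i))
        if len(seeds) < min_pts:
            continue                              # noise, unless claimed later
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster               # border/core point joins cluster
            if not visited[j]:
                visited[j] = True
                nb = neighbors(j)
                if len(nb) >= min_pts:
                    seeds.extend(nb)              # expand from core points
        cluster += 1
    return labels
```

Applied to a matrix of smoothed inhalation curves (one row per recording), `reduce_svd` yields low-dimensional points that `dbscan` then groups, with the label `-1` marking recordings dominated by stochastic noise.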