Authors:
Szuzina Fazekas (https://orcid.org/0000-0002-7683-0305), Bettina Katalin Budai (https://orcid.org/0000-0002-3982-7887), Róbert Stollmayer (https://orcid.org/0000-0003-4673-7588), Pál Novák Kaposi (https://orcid.org/0000-0002-7150-3495), and Viktor Bérczi (https://orcid.org/0000-0003-4386-2527)

Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary

Open access

Abstract

The field of artificial intelligence (AI) is developing rapidly. In medicine, an enormous amount of data is created every day, and because images and reports are quantifiable, radiology is well placed to deliver better, more efficient clinical care. AI means the simulation of human intelligence by a system or machine. It has been developed to enable machines to “think”, that is, to learn, reason, predict, categorize, and solve problems involving large amounts of data, and to make decisions more effectively than before. Different AI methods can help radiologists with pre-screening images and identifying features. In this review, we summarize the basic concepts needed to understand AI. As AI methods are expected to exceed the threshold for clinical usefulness soon, the use of AI in medicine will become inevitable in the near future.


Introduction

Artificial intelligence (AI) means the simulation of human intelligence by a system or machine [1]. It has been developed to enable machines to “think”, that is, to learn, reason, predict, categorize, and solve problems involving large amounts of data, and to make decisions more effectively than before (Fig. 1).

Fig. 1.

Hierarchy of artificial intelligence. Artificial intelligence (AI) means the simulation of human intelligence by a system or machine. The term “machine learning” (ML) stands for a subgroup of AI methods that learn patterns without being explicitly programmed. Artificial neural networks are inspired by biological neurons and form a more complex subgroup of ML methods. Deep learning methods do not require human intelligence for information extraction; they use deep layers to learn data representations


The areas of potential AI applications in medicine include drug discovery, prediction of epidemics, personalized medicine, patient monitoring, biomarker discovery, omics data analysis and image classification, detection, and segmentation. In the field of radiology, an exponentially increasing amount of data has been created over the years. As this medical data is quantifiable, it is possible to use AI to decrease the extra workload of doctors with the help of more automated systems and computer-aided diagnostics.

Main types of machine learning algorithms

The input of an AI system is medical data; it is processed by specified algorithms, and the output is a medical decision. The term “machine learning” (ML) stands for a field of AI in which computers learn patterns without being explicitly programmed [2]. This means that the program is not written to match a specific pattern but to find patterns in any given dataset (the desired pattern is not specified in advance, and the algorithm knows nothing about the type of pattern it should look for). ML algorithms need to be trained, which can be done in an unsupervised, supervised, or reinforced manner.

The most relevant and most widely used types of artificial intelligence in radiology are convolutional neural networks and deep neural networks. This article aims to explain their core principles, but a short introduction to conventional machine learning methods is also included.

Conventional machine learning algorithms

Unsupervised learning is used when the available training data consist of data points that are neither classified nor labelled. The algorithm uses different methods – such as k-means clustering, principal component analysis, Gaussian mixture models, or hidden Markov models – to identify previously unknown patterns [3]. As an example, images of cats and dogs can be fed to the algorithm without labels (so we do not tell the algorithm which is a cat and which is a dog). If the algorithm can distinguish cats from dogs with high accuracy, it means that it was able to identify the dominant pattern unique to cats and to separate the two groups without human assistance. An example in radiology may be the classification of kidney tumors. In this case, the images of the tumors will be grouped, but we cannot know how (based on which property) the groups are formed. We cannot choose the desired separation criterion (for example, we cannot force a separation based on histology), so the algorithm may distinguish based on contrast agent enhancement, the size of the necrotic part, or any other property.
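
As a minimal sketch of this idea (in Python, with hypothetical two-feature data standing in for imaging descriptors), k-means clustering groups unlabelled cases without ever being told what the groups mean:

```python
# Minimal unsupervised-learning sketch: k-means on unlabelled feature vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic "lesion" populations described by two features,
# e.g. mean enhancement and necrotic fraction (illustrative values only).
group_a = rng.normal(loc=[40.0, 0.1], scale=2.0, size=(50, 2))
group_b = rng.normal(loc=[80.0, 0.6], scale=2.0, size=(50, 2))
features = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
# The output is cluster indices, not diagnoses: the algorithm chose the
# grouping criterion itself, just as described above.
print(kmeans.labels_[:5], kmeans.labels_[-5:])
```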

Supervised learning means that categorized input and output data are provided to the algorithm in the learning phase. For classification problems (when we want to predict qualitative outputs), support vector machines, discriminant analysis, naive Bayesian classifiers, or nearest neighbour methods can be used. In supervised learning with conventional machine learning methods, the algorithm builds a mathematical model from the training data points that, in the case of a classification problem, yields the smallest number of wrong classifications. For regression problems (when we want to predict quantitative outputs), linear regression, support vector regression, ensemble methods, or decision trees can be used [3]. If we want to solve a regression problem, the mathematical model is created based on algorithm-specific regularization parameters. An error term can be defined (for example, the mean squared error), and the algorithm chooses its model by minimizing this error term [4]. As an example, we can feed the algorithm a large number of labelled images of cats and dogs (so we tell the algorithm whether each image shows a cat or a dog) and then give it a mixed set of input images. The algorithm will be able to determine whether a cat is present in a given image, so it will separate the data into two classes: one group of cats and another group of not-cats. In radiology, supervised learning makes it possible to train an algorithm to classify tumorous and healthy tissues. This requires feeding the algorithm a large number of images of tumorous organs and a large number of images of healthy organs. Based on the learned pattern, the algorithm will be able to determine whether a tumor is present in a new image.
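
A minimal sketch of such a classifier (again with hypothetical two-feature data; the random forest here is just one of the many supervised models named above):

```python
# Minimal supervised-learning sketch: labelled data in, a trained classifier out.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
healthy = rng.normal([30.0, 0.05], 3.0, size=(100, 2))   # synthetic "healthy" cases
tumour = rng.normal([70.0, 0.40], 3.0, size=(100, 2))    # synthetic "tumorous" cases
X = np.vstack([healthy, tumour])
y = np.array([0] * 100 + [1] * 100)                      # 0 = healthy, 1 = tumorous

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```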

Reinforcement learning is, mathematically, an optimal sequential decision-making algorithm [5]. The basic concept is to learn by trial and subsequent error, based on interaction with the environment. After each decision, the environment changes, which makes this method appropriate for dynamic environments. A reinforcement learning method does not know what the right output is, as it has unlabelled data. Thus, it cannot adapt itself to a known right output and cannot know how accurate it is. There is no binary feedback after each decision; the algorithm knows the goal and chooses the decision path that gives the highest accumulated reward over time. Reinforcement learning uses reward and penalty to compute the loss value as feedback information, and the algorithm learns by maximizing the reward while minimizing the penalty [6]. In medical imaging, a reinforcement learning framework can be used for image segmentation: by controlling local thresholds and post-processing parameters with a reinforcement learning agent, segmentation of the prostate in ultrasound images can be achieved [7].
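
As a toy illustration of this reward-driven loop (a hypothetical setup, not the actual method of [7]), a tabular Q-learning agent can learn to nudge an image threshold toward a desired foreground fraction:

```python
# Toy Q-learning sketch: the agent raises or lowers a threshold and is
# rewarded when the resulting foreground fraction approaches a target.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # synthetic stand-in for an ultrasound image
target_fraction = 0.30                  # desired foreground proportion

n_states = 11                           # thresholds 0.0, 0.1, ..., 1.0
actions = [-1, +1]                      # lower or raise the threshold index
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def reward(state):
    threshold = state / (n_states - 1)
    fraction = (image > threshold).mean()
    return -abs(fraction - target_fraction)   # higher is better

state = n_states // 2
for episode in range(500):
    if rng.random() < epsilon:                # explore a random action
        a = int(rng.integers(len(actions)))
    else:                                     # exploit the best known action
        a = int(Q[state].argmax())
    next_state = int(np.clip(state + actions[a], 0, n_states - 1))
    r = reward(next_state)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

best = Q.max(axis=1).argmax()
print(f"learned threshold ~ {best / (n_states - 1):.1f}")
```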

Neural networks

Unsupervised learning aims to find hidden patterns or to cluster data. In the case of unsupervised learning of neural networks, the input data is unlabelled and uncategorized, and the algorithm learns by itself and uses more complex processing methods and operations compared to supervised learning. That is why it requires orders of magnitude more data [8].

The latest and most complex form of supervised learning is the convolutional neural network, which learns from the accuracy of its predicted output: if the actual output is wrong, it improves its efficiency by minimizing the so-called loss value, which serves as feedback information.

Deep reinforcement learning stands for cases when a deep neural network is used to learn representations to solve reinforcement learning problems. It performs sequential decisions, and the success or failure is determined after multiple steps. Deep learning methods for reinforcement learning problems make it possible to scale up to previously unmanageable problems [6].

How does AI work?

Conventional machine learning algorithms

Support vector machines (SVMs) are one of the most widespread examples of classical ML algorithms. SVMs are designed for both classification and regression problems [8]. In the case of a support vector classifier (SVC), the algorithm finds the optimal hyperplane that separates the predefined groups of the training cases [9]. In the case of a two-dimensional dataset with two variables, the algorithm defines the x and y axes along the first and second variables and aims to find a line with the largest possible margin from the support vector data points, separating the cases into two segments and thereby classifying the data into two groups (Fig. 2). The algorithm can also work with a larger number of variables: in 3D it computes a plane instead of a line, and by further increasing the number of variables it is capable of computing the optimal hyperplane in higher dimensions that can no longer be visualized. Although the algorithm was originally designed to separate linearly separable classes, one of its biggest advantages, which makes it one of the most widely used ML algorithms, is its flexibility. With the so-called “kernel trick”, the algorithm can be extended to classify linearly non-separable data. This is achieved by mapping the original data to a higher-dimensional space and finding a linear separating hyperplane there [10].
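
A compact sketch of the kernel trick (scikit-learn, with a synthetic two-ring dataset): a linear kernel fails on data that no straight line can separate, while an RBF kernel separates it almost perfectly:

```python
# Kernel-trick sketch: two concentric rings are not linearly separable,
# but an RBF kernel implicitly maps them to a space where they are.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)
print(f"linear kernel accuracy: {linear.score(X, y):.2f}")  # near chance level
print(f"RBF kernel accuracy:    {rbf.score(X, y):.2f}")     # near 1.00
```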

Fig. 2.

Support vector machines. Support vector machines (SVMs) are one of the most widespread examples of classical ML algorithms. The algorithm finds an optimal hyperplane (in 2D, a hyperplane is a straight line) that separates the data points into classes in such a way that the margin to the nearest data points is maximal. The data points that support the hyperplane are called support vectors


Artificial neural networks and deep learning

Artificial neural networks (ANNs) are networks of artificial neurons that imitate the human nervous system. In the brain, neurons fire depending on the sum of the excitatory and inhibitory input signals in many consecutive layers of the grey matter. Similarly, an ANN has nodes that sum their inputs (each multiplied by the corresponding weight) and turn on or off depending on the result of the activation function (Fig. 3). The weight of an input can be interpreted as the strength of the corresponding incoming connection. These weights are the parameters that have to be trained to configure the network. An ANN can be divided into multiple layers: an input layer, at least one hidden layer, and an output layer. The output of each layer serves as the input of the next, and the output layer linearly combines the outputs of the hidden units, so each patch is mapped to any desired output value. ANNs that have only one hidden layer are also known as shallow neural networks, while an ANN is called a deep neural network if it has more than one hidden layer. The advantage of deep architectures is that each consecutive hidden layer can re-use the features computed in the previous hidden layer, which can improve generalization and precision [11].
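
A minimal sketch of one forward pass (plain NumPy, with arbitrary weights): each node computes a weighted sum of its inputs plus a bias and passes the result through an activation function:

```python
# Forward pass through a shallow network: weighted sum, then activation.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(4)                              # input layer: 4 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # hidden layer: 3 nodes
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # output layer: 1 node

hidden = relu(W1 @ x + b1)            # each hidden node sums weighted inputs
output = sigmoid(W2 @ hidden + b2)    # e.g. the probability of a class
print(output)
```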

Fig. 3.

The nodes of artificial neural networks are similar to the neurons of the brain. A neuron receives inputs from multiple sources and activates depending on the sum of all input signals. Mathematically, activation is decided by the result of the activation function


Along with the development of information technology, the number of types of neural networks is constantly increasing. In the field of medical imaging, convolutional neural networks (CNNs) are considered the most popular type of ANN, but recurrent neural networks (RNNs) and generative adversarial networks (GANs) are becoming more and more widespread.

Convolutional neural networks can perform convolutions, also known as template matching. Convolution is a mathematical operation: $(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$. In other words, it measures the amount of overlap as one function is shifted over the other. The structure of a CNN includes multiple neurons that share weights, which makes it possible to apply filters (or convolutional kernels) to the input images. This means that a subset of pixels of the input image is multiplied element-wise by the filter matrix and the resulting matrix is summed, so that, for example, a 2 × 2 patch yields a single pixel value (Figs 4 and 5). The CNN layer that performs this operation multiple times at different locations of the image is called a convolutional layer. The advantage of using convolutional layers is that they extract different sharp and smooth image features while also compressing the data, which saves computation capacity and reduces memory usage. To further compress the data during processing while also reducing overfitting, convolutional layers are usually combined with max-pooling layers, which execute a similar filtering process but take the maximum value of the input pixels and add this maximal value to the result matrix [12].
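
A small sketch of these two operations (plain NumPy; the toy image and the edge-detecting kernel are illustrative): slide the kernel over the image, multiply element-wise, sum, then pool:

```python
# Single convolution and max-pooling step on a toy image, mirroring Figs 4-5.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise multiplication of the patch by the kernel, then a sum
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(feature_map, size=2):
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    # keep only the maximum of each size x size block (compression step)
    return feature_map[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # simple vertical-edge filter
features = conv2d(image, edge_kernel)
print(max_pool(features))
```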

Fig. 4.

Application of the kernel matrix. The pixel values are multiplied by the kernel matrix. The kernel filter is applied to consecutive parts of the original pixel data


Fig. 5.

Results of the filters. The red line shows the chosen part of the image. The representation of this particular filter is a curved line (see the values of 30 surrounded by 0 values); the matrix representing the selected part of the analysed image is shown on the left side of the figure. When the kernel is applied, each value of the image matrix is multiplied by the kernel value at the same location, and then the entire matrix is summed. If we apply the filter to the given matrix, the result of the multiplication and summation will be a large number, as kernel units with large values overlap with large pixel values at the same locations


In recurrent neural networks, the connections are not simply feedforward but can form cycles. Consequently, this kind of network can execute more complex computations and performs best in tasks with dependent observations. RNNs can preserve and maintain an internal memory over time, so they are able to recognize or generate temporal patterns. Therefore, RNNs can be used for the perception of video or speech, linguistic tasks, translation, and motor planning and control [11]. However, there have been examples in medical imaging as well [13].
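
A minimal sketch of the recurrence (plain NumPy, arbitrary weights): the hidden state is recomputed at every time step from the current input and the previous state, so it carries a memory of the whole sequence:

```python
# Minimal recurrent cell: the hidden state h depends on all past inputs.
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(8, 3))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(8, 8))   # hidden -> hidden (the recurrent cycle)
b_h = np.zeros(8)

sequence = rng.random((5, 3))               # 5 time steps, 3 features each
h = np.zeros(8)                             # initial hidden state (empty memory)
for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)  # new state mixes input and memory
print(h)
```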

There are some additional methods to fine-tune and improve the performance of the algorithms. Generative adversarial networks (GANs) have gained attention recently as the latest breakthrough in the field of deep learning. The basic idea behind this technique is that two neural networks are trained simultaneously: one for image generation and one for discrimination. The first network transforms random patterns into images, while the second network separates the synthetic images from authentic examples of the desired category. That way, it is possible to integrate unlabelled samples into training and to generate synthetic data. In medical imaging, GANs are used either to generate new images as a form of data augmentation (after exploring the underlying structure of the training data) or to discriminate between normal and abnormal images [14].
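
A compact sketch of the adversarial setup (PyTorch, on one-dimensional toy data rather than medical images): the generator maps noise to samples, the discriminator scores real versus synthetic, and each is trained against the other:

```python
# Minimal GAN sketch on toy 1-D data (illustrative, not a medical imaging GAN).
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 4.0   # the "authentic" distribution
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_data(64)
    fake = G(torch.randn(64, 8))
    # discriminator: learn to label real samples as 1 and synthetic ones as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: learn to fool the discriminator into labelling fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# the generated mean should drift toward the real distribution's mean of 4.0
print(G(torch.randn(1000, 8)).mean().item())
```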

Feature extraction from images

The original way of radiological image assessment is subjective visual evaluation of medical images by radiologists. The traditional way of quantitative feature extraction is the strategy of radiomics: a region of interest (ROI) is manually selected, and different quantitative features (shape, size, texture, etc.) are extracted, selected, and analysed. The latest approach, deep learning, does not require human intelligence for information extraction, because images can be fed to ANNs, which automatically extract, select, and learn the important features [15].
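
As a sketch of the radiomics side of this contrast (synthetic ROI values; real pipelines typically use dedicated toolkits such as pyradiomics), a few first-order features can be computed directly from the pixel intensities:

```python
# Hand-crafted, radiomics-style first-order features from a region of interest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
roi = rng.normal(loc=60.0, scale=10.0, size=(32, 32))   # e.g. HU values in a lesion

values = roi.ravel()
hist, _ = np.histogram(values, bins=32, density=True)
p = hist[hist > 0] / hist[hist > 0].sum()               # normalized histogram

features = {
    "mean": values.mean(),                 # first-order intensity statistics
    "std": values.std(),
    "skewness": stats.skew(values),
    "kurtosis": stats.kurtosis(values),
    "entropy": -(p * np.log2(p)).sum(),    # histogram-based texture measure
}
print(features)
```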

Application of neural networks in radiology

Data collection and ANN training

As ANNs have to be trained, validated, and tested to perform a given task efficiently, high-quality, accurately labelled large datasets are required. Large public datasets are currently available, for example ChestXray14 (CXR14) with 112,000 chest radiographs or the Musculoskeletal Radiology (MURA) dataset with 40,000 upper limb radiographs. However, in the case of the CXR14 dataset, the labels did not precisely reflect the visual content, with positive predictive values between 10% and 30%. In the MURA dataset, the original labels were inaccurate, with a sensitivity of 60% and a specificity of 82%, which underlines the importance of correct documentation of the development process [16].

Defining ground truth

The evaluation of the data from clinical studies has many crucial steps. Firstly, suitable cases must be selected, and the data is needed to be anonymized. Secondly, the proper labelling of the images (or defining the ground truth) should be done by experts. For some diagnoses, the image itself contains enough information (e.g., intracranial haemorrhage), but in other cases, clinical information (e.g., pathology) or additional imaging (e.g., lung cancer on X-ray and subsequent CT) examinations may be required [17].

Other important potential sources of bias are population variability, local disease prevalence, and differences in imaging protocols [17]. To avoid overfitting, the development of an AI algorithm needs diverse but balanced training data, which should come from different countries and various geographic regions. Unfortunately, limited access to medical data makes it difficult to obtain a sufficiently diverse collection of images.

Curation, augmentation

Data from the real world are usually diverse. If we want to analyse them with neural networks, some parameters have to be the same across all the data. For example, analysing volumetric CT or MRI images is challenging due to the huge variety in voxel size, number of slices, slice thickness, inter-slice gap, etc. The curation of medical data consists of quality checking, noise filtering, metadata creation, and annotation. To create more data and make the algorithms more generalizable, some form of data augmentation can also be applied, such as flipping or rotating the image, zooming, changing the contrast or resolution, or altering the location of the lesion [2].
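
A minimal sketch of such augmentations (plain NumPy; the transformations are the simple geometric and intensity changes named above):

```python
# Simple data augmentation: each call produces a new training variant.
import numpy as np

def augment(image, rng):
    if rng.random() < 0.5:
        image = np.fliplr(image)                     # random horizontal flip
    image = np.rot90(image, k=int(rng.integers(4)))  # random 90-degree rotation
    image = image * rng.uniform(0.9, 1.1)            # mild contrast scaling
    return image

rng = np.random.default_rng(0)
image = rng.random((64, 64))                         # placeholder image
augmented = [augment(image, rng) for _ in range(8)]  # 8 new variants
print(len(augmented), augmented[0].shape)
```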

Training, validation, testing

With the help of the training data, the algorithm is trained and its parameters are optimized. Validation during training is performed on the validation set to monitor the performance of the model and find the best-fitting one. Finally, the performance of the trained model is evaluated on the independent test dataset. All these steps require high-quality, mutually independent datasets.
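
A minimal sketch of the three-way split (scikit-learn, with placeholder data): the model is fitted on the training set, model selection uses the validation set, and the held-out test set is touched only once, for the final estimate:

```python
# Train / validation / test split, e.g. 70% / 15% / 15%.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.default_rng(0).random((1000, 16))   # placeholder feature matrix
y = (X[:, 0] > 0.5).astype(int)                   # placeholder labels

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)
print(len(X_train), len(X_val), len(X_test))      # 700 / 150 / 150
```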

Image classification, object detection, and segmentation

Image classification

Classification means assigning data points or images to a specified number of discrete categories. Image classification is one of the first problems for which AI algorithms were used in radiology. A typical image classification problem can be described as follows: one or multiple images are the input, and the algorithm decides whether a specified disease is present or not, so the output is a single decision (Fig. 6A). One strategy is to use our own data to train the algorithm. The other strategy is to use pre-trained networks via transfer learning, which is preferable when only a limited number of training cases are available. In that case, we can either use the original weights of a previously published, trained neural network, or we can fine-tune its weights by continuing its training with cases from our own training dataset [18]. Convolutional neural networks have shown excellent results in image classification. The first layer takes its input at the level of pixels, and each operation is a small convolution, which is then propagated forward through the network. In that way, CNNs can reduce the number of trainable parameters needed while also effectively extracting the features.
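
A brief sketch of the transfer-learning strategy (tf.keras is an assumed framework choice here, and the dataset variables in the commented line are hypothetical): a network pre-trained on ImageNet is frozen and only a small new classification head is trained:

```python
# Transfer learning sketch: reuse pre-trained ImageNet weights, train a new head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                    # keep the pre-trained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # disease present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```

Fine-tuning, as mentioned above, would correspond to later unfreezing some of the base layers and continuing training with a small learning rate.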

Fig. 6.

Comparison of image classification, object detection, instance and semantic segmentation. The classification decides if a dog is present in the image or not. Object detection localizes the dogs. The instance segmentation draws the outlines of the dogs (each of the dogs separately). The semantic segmentation categorizes each pixel: dog, floor, or rug


Object detection

The main goal of object detection is to decide whether objects of a specific category are present in the image and, if so, where (Fig. 6B). There are two types of object detection methods. One is the proposal-based framework, in which the algorithm selects regions (region proposals) and then classifies them into categories. The widely used networks R-CNN [19], Fast R-CNN [20], and Mask R-CNN [21] use this approach. The other method is regression-based: it uses a unified framework and directly predicts categories and localizations. MultiBox [22], You Only Look Once (YOLO) [23], and RetinaNet [24] belong to this category [25].
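
Regardless of the framework, predicted boxes are commonly scored against ground truth with intersection-over-union (IoU); a minimal sketch:

```python
# Intersection-over-union: the standard overlap measure for bounding boxes.
def iou(box_a, box_b):
    """Boxes given as (x_min, y_min, x_max, y_max)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # partially overlapping boxes
```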

Segmentation

Segmentation stands for the partition of the image pixels/voxels into subgroups called image segments. Practically, it means delineating the boundaries of the desired area or volume (i.e., the boundaries of an organ, tumour, or lesion). Currently, the gold standard is manual segmentation. Several segmentation programs, such as 3D Slicer [26], make it possible to draw the boundary of the object manually on each slice. In the case of instance segmentation, each object is delineated independently (Fig. 6C). During semantic segmentation, the AI algorithm segments the image at the pixel level, so a semantic label is assigned to each pixel (Fig. 6D). Thus, this method does not only identify the object itself but marks its boundaries as well. A potential application area is the delineation of tumour boundaries (Fig. 7), where precise delineation is crucial for clinical decision-making or surgical resection [27].
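
A small sketch of the post-processing described in Fig. 7 (synthetic arrays stand in for a real CNN heatmap and a manual reference): threshold the per-pixel probabilities into a binary mask, then score the overlap with the Dice coefficient:

```python
# Turn a probability heatmap into a mask and compare it with a reference.
import numpy as np

rng = np.random.default_rng(0)
heatmap = rng.random((128, 128))     # stand-in for CNN-proposed probabilities
manual = heatmap > 0.45              # stand-in for the expert's segmentation

mask = heatmap > 0.5                 # keep only sufficiently confident pixels

def dice(a, b):
    # Dice coefficient: 2 * |A intersect B| / (|A| + |B|)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print(f"Dice overlap: {dice(mask, manual):.2f}")
```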

Fig. 7.

* Segmentation of the liver. Based on the original CT scan (A), a trained CNN can propose a segmentation heatmap (B). A probability value is assigned to each pixel; a warmer colour means a higher probability that the pixel belongs to the liver. A threshold can be set so that only pixels with a probability value above this threshold are assigned to the segmentation. The gold standard is manual segmentation (C). The segmented area can also be reconstructed in three dimensions (D). * Central illustration


Pitfalls and limitations

Data availability

In the area of digital healthcare, it is still difficult to collect enough data, as medical data are not as clean, structured, and complete as in other fields. Unfortunately, medical images are often noisy and indistinct. Neural networks usually have hundreds of thousands or millions of trainable parameters, so the effective training of machine learning algorithms requires a large amount of data; such algorithms are therefore called “data-hungry”. In clinical practice, collecting this amount of information is often not feasible, as patients may not be willing to allow the use of their private data for research purposes [8]. Even the distribution and prevalence of a given disease can make it challenging to assemble such a dataset. The dataset should be representative of the population and must be precisely annotated. It should be balanced in patient age, ethnicity, and imaging protocols, among other properties. In the case of more expensive examinations, such as MRI, a relatively small number of patients are screened, which may also depend on the examination protocols of different health systems [28].

Overfitting, overtraining

Overfitting can be a major problem in the application of machine learning algorithms. When the proposed model has near-zero error – i.e., the constructed hyperplane of the support vector classifier fits all the data without any error – it is possible that the model captures the noise in the data rather than the real pattern we are looking for. In such a case, after a random train/test split of the dataset, the shape of the hyperplane, or its mathematical equation, can vary widely depending on the given training cases. In the case of neural networks, the phenomenon of overfitting is often caused by excessively maximized training accuracy (Fig. 8) [29].
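
A small sketch of the phenomenon (NumPy polynomial fits on noisy synthetic data): a needlessly flexible model reaches near-zero training error but typically does worse than a simple one on unseen data:

```python
# Overfitting demo: a high-degree polynomial fits the training noise.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(0, 1, 20), rng.uniform(0, 1, 200)
noise = lambda n: rng.normal(0, 0.1, n)
y_train = np.sin(2 * np.pi * x_train) + noise(20)
y_test = np.sin(2 * np.pi * x_test) + noise(200)

for degree in (3, 15):
    coefs = P.polyfit(x_train, y_train, degree)
    train_err = np.mean((P.polyval(x_train, coefs) - y_train) ** 2)
    test_err = np.mean((P.polyval(x_test, coefs) - y_test) ** 2)
    # degree 15 typically shows a tiny train error but a larger test error
    print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```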

Fig. 8.

Overfitting. In case of overfitting, the model may fit spurious patterns – noise – in the dataset, so the proposed curve has zero error on the training dataset (A). The overfitted model cannot be generalized, because it produces large errors on external datasets. The optimal classifier works with relatively small error on both the training and the external testing datasets, which makes it generalizable (B)


Legal and ethical challenges, patient acceptance

AI may be used in clinical practice in the future, but its acceptance by patients and by medical professionals may be controversial. The situation is similar to that of many new technologies (e.g., self-driving cars), where legal regulation lags far behind the technology. The legal system does not yet define who would be accountable for possible mistakes made by AI. Another question is whether patients will accept and trust a clinical workflow without human involvement [12].

Most relevant results in the field of radiology. Where are we now?

There are several promising results in the field of artificial intelligence in radiology. Reflecting the increasing interest, the journal “Radiology: Artificial Intelligence” was established in 2019 and has published a variety of artificial intelligence studies. Different deep learning methods performed similarly to radiologists in liver and spleen segmentation and the prediction of cirrhosis or fibrosis [30], the detection of acute or subacute haemorrhage on non-contrast CT scans [31], the prediction of thoracic aortic aneurysms [32], the diagnosis of mitral regurgitation on chest radiographs [33], the segmentation of breast cancer on MRI [34], and the assessment of the severity of pulmonary oedema on chest radiographs [35]. Convolutional neural networks could measure thoracic aortic diameters with good accuracy [36] and grade the severity of knee lesions on MRI [37]. Deep learning algorithms can also improve the quality of CT pulmonary angiography [38]. Different radiomics strategies, such as CT texture analysis of abdominal lesions, can help identify malignancies [39, 40].

In the future, the number of clinicians using some kind of AI method – predominantly deep learning – will increase over time. The applications of AI in medicine are not limited to the field of radiology but include pathology, dermatology, ophthalmology, cardiology, gastroenterology, and mental health as well. As of 5 October 2022, a total of 521 different Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices had been approved by the FDA (https://www.fda.gov/), many of which are currently in clinical use, such as syngo.CT CaScoring (coronary calcium scoring), Oxypit (chest X-ray analysis), Aidoc (detection of intracranial haemorrhage), and Quantib (brain atrophy quantification).

Conclusions

The area of AI is developing at a high rate. In the medical field, an enormous amount of data is created every day. As the images and the reports are quantifiable, the field of radiology aspires to deliver better, more efficient clinical care. AI is capable of recognizing complex patterns and providing quantitative evaluation automatically. As the volume of images has grown, the workload of radiologists has risen to such a level that a radiologist may have to interpret an image every 3–4 s of an 8-h workday to meet the increasing demand [41]. More and more AI methods can help radiologists with pre-screening images and identifying features [42].

Authors' contribution

SzF – Conceptualization, Drafting of the manuscript, Preparing the Figures.

BKB – Conceptualization, Drafting of the manuscript, Proofreading, Critical revision.

RS – Drafting of the manuscript, Critical revision.

PNK – Drafting of the manuscript, Critical revision.

VB – Conceptualization, Drafting of the manuscript, Critical revision.

All authors reviewed the final version of the manuscript and agreed to submit it to IMAGING for publication.

Funding sources

SzF received financial support from the Gedeon Richter Talentum Foundation in the framework of the Gedeon Richter Excellence PhD Scholarship. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Conflict of interests

Dr. Viktor Bérczi is the Deputy Editor-in-Chief of IMAGING, therefore the submission was handled by a different member of the editorial team.

Acknowledgements

Not applicable.

References

[1] Xu Y, Liu X, Cao X, Huang C, Liu E, Qian S, et al.: Artificial intelligence: a powerful paradigm for scientific research. Innovation (Cambridge (Mass)) 2021; 2(4): 100179.

[2] Do S, Song KD, Chung JW: Basics of deep learning: a radiologist's guide to understanding published radiology articles on deep learning. Korean J Radiol 2020; 21(1): 33–41.

[3] Chauhan NK, Singh K: A review on conventional machine learning vs deep learning. 2018 International Conference on Computing, Power and Communication Technologies (GUCON) 2018: 347–352.

[4] Ray S: A quick review of machine learning algorithms. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon) 2019: 35–39.

[5] Nian R, Liu J, Huang B: A review on reinforcement learning: introduction and applications in industrial process control. Comput Chem Eng 2020; 139: 106886.

[6] Zhou S, Le H, Luu KHV, Ayache N: Deep reinforcement learning in medical imaging: a literature review. Med Image Anal 2021; 73: 102193.

[7] Sahba F, Tizhoosh HR, Salama MMA: A reinforcement learning framework for medical image segmentation. The 2006 IEEE International Joint Conference on Neural Network Proceedings; 16–21 July 2006.

[8] Manne R, Kantheti S: Application of artificial intelligence in healthcare: chances and challenges. Curr J Appl Sci Technol 2021; 40: 78–89.

[9] Battineni G, Chintalapudi N, Amenta F: Machine learning in medicine: performance calculation of dementia prediction by support vector machines (SVM). Inform Med Unlocked 2019; 16: 100200.

[10] Hofmann M: Support vector machines — kernels and the kernel trick. An elaboration for the Hauptseminar “Reading Club: Support Vector Machines” 2006.

[11] Kriegeskorte N, Golan T: Neural network models and deep learning. Curr Biol 2019; 29(7): R231–R236.

[12] Mazurowski MA, Buda M, Saha A, Bashir MR: Deep learning in radiology: an overview of the concepts and a survey of the state of the art with focus on MRI. J Magn Reson Imaging 2019; 49(4): 939–954.

[13] Dvornek N, Yang D, Ventola P, Duncan J: Learning generalizable recurrent neural networks from small task-fMRI datasets. MICCAI International Conference on Medical Image Computing and Computer-Assisted Intervention 2018; 11072: 329–337.

[14] Yi X, Walia E, Babyn P: Generative adversarial network in medical imaging: a review. Med Image Anal 2019; 58: 101552.

[15] Yasaka K, Akai H, Kunimatsu A, Kiryu S, Abe O: Deep learning with convolutional neural network in radiology. Jpn J Radiol 2018; 36(4): 257–272.

[16] Oakden-Rayner L: Exploring large-scale public medical image datasets. Acad Radiol 2020; 27(1): 106–112.

[17] Wichmann JL, Willemink MJ, De Cecco CN: Artificial intelligence and machine learning in radiology: current state and considerations for routine clinical implementation. Invest Radiol 2020; 55(9): 619–627.

[18] Litjens G, Kooi T, Bejnordi B, Setio A, Ciompi F, Ghafoorian M: A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60–88.

[19] Girshick R, Donahue J, Darrell T, Malik J: Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 23–28 June 2014.

[20] Girshick R: Fast R-CNN. IEEE International Conference on Computer Vision (ICCV); 7–13 December 2015.

[21] He K, Gkioxari G, Dollár P, Girshick R: Mask R-CNN. IEEE Trans Pattern Anal Mach Intell 2020; 42(2): 386–397.

[22] Erhan D, Szegedy C, Toshev A, Anguelov D: Scalable object detection using deep neural networks. CoRR 2013. Available from: http://arxiv.org/abs/1312.2249.

[23] Redmon J, Divvala S, Girshick R, Farhadi A: You only look once: unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 27–30 June 2016.

[24] Lin T-Y, Goyal P, Girshick R, He K, Dollár P: Focal loss for dense object detection. IEEE International Conference on Computer Vision (ICCV) 2017.

[25] Ueda D, Shimazaki A, Miki Y: Technical and clinical overview of deep learning in radiology. Jpn J Radiol 2019; 37(1): 15–33.

[26] Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin J, Pujol S: 3D Slicer as an image computing platform for the quantitative imaging network. Magn Reson Imaging 2012; 30(9): 1323–1341.

[27] Yang R, Yu Y: Artificial convolutional neural network in object detection and semantic segmentation for medical imaging analysis. Front Oncol 2021; 11: 638182.

[28] Chan HP, Samala RK, Hadjiiski LM, Zhou C: Deep learning in medical image analysis. Adv Exp Med Biol 2020; 1213: 3–21.

[29] Bilbao I, Bilbao J: Overfitting problem and the overtraining in the era of data: particularly for artificial neural networks. 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS); 5–7 December 2017.

[30] Lee S, Elton DC, Yang AH, Koh C, Kleiner DE, Lubner MG: Fully automated and explainable liver segmental volume ratio and spleen segmentation at CT for diagnosing cirrhosis. Radiol Artif Intell 2022; 4(5): e210268.

[31] Seyam M, Weikert T, Sauter A, Brehm A, Psychogios MN, Blackham KA: Utilization of artificial intelligence-based intracranial hemorrhage detection on emergent noncontrast CT images in clinical workflow. Radiol Artif Intell 2022; 4(2): e210168.

[32] Macruz FBC, Lu C, Strout J, Takigami A, Brooks R, Doyle S, et al.: Quantification of the thoracic aorta and detection of aneurysm at CT: development and validation of a fully automatic methodology. Radiol Artif Intell 2022; 4(2): e210076.

[33] Ueda D, Ehara S, Yamamoto A, Iwata S, Abo K, Walston SL, et al.: Development and validation of artificial intelligence-based method for diagnosis of mitral regurgitation from chest radiographs. Radiol Artif Intell 2022; 4(2): e210221.

[34] Hirsch L, Huang Y, Luo S, Rossi Saccarelli C, Lo Gullo R, Daimiel Naranjo I, et al.: Radiologist-level performance by using deep learning for segmentation of breast cancers on MRI scans. Radiol Artif Intell 2022; 4(1): e200231.

[35] Horng S, Liao R, Wang X, Dalal S, Golland P, Berkowitz SJ: Deep learning to quantify pulmonary edema in chest radiographs. Radiol Artif Intell 2021; 3(2): e190228.

[36] Monti CB, van Assen M, Stillman AE, Lee SJ, Hoelzer P, Fung GSK, et al.: Evaluating the performance of a convolutional neural network algorithm for measuring thoracic aortic diameters in a heterogeneous population. Radiol Artif Intell 2022; 4(2): e210196.

[37] Astuto B, Flament I, N KN, Shah R, Bharadwaj UTML, et al.: Automatic deep learning-assisted detection and grading of abnormalities in knee MRI studies. Radiol Artif Intell 2021; 3(3): e200165.

[38] Hahn LD, Hall K, Alebdi T, Kligerman SJ, Hsiao A: Automated deep learning analysis for quality improvement of CT pulmonary angiography. Radiol Artif Intell 2022; 4(2): e210162.

[39] Budai BK, Frank V, Shariati S, Fejér B, Tóth A, Orbán V: CT texture analysis of abdominal lesions – Part I.: liver lesions. IMAGING 2021; 13(1): 13–24.

[40] Frank V, Shariati S, Budai B, Fejér B, Tóth A, Orbán V: CT texture analysis of abdominal lesions – Part II.: tumors of the kidney and pancreas. IMAGING 2021; 13(1): 25–36.

[41] McDonald RJ, Schwartz KM, Eckel LJ, Diehn FE, Hunt CH, Bartholmai BJ, et al.: The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad Radiol 2015; 22(9): 1191–1198.

[42] Wu H, Liu Q, Liu X: A review on deep learning approaches to image classification and object segmentation. Comput Mater Continua 2019; 58: 575–597.