Authors:
Mariya Kondova¹ (https://orcid.org/0009-0001-3440-3145), Oliver Korczynski¹, Matthias Müller-Eschner¹, Winson Chan², Antoine Sanner¹, Ahmed Othman¹, Marc A. Brockmann¹, and Sebastian Altmann¹

¹ Department of Neuroradiology, University Medical Center Mainz, Johannes Gutenberg University, Langenbeckstr. 1, 55131 Mainz, Germany
² BioMind, Zhongguancun Medical Engineering Center, 10 Anxiang Road, 8th floor, Zhongguan, China

Open access

Abstract

Background

We evaluated the capability of an AI application to independently detect, segment and classify intracranial tumors in MRI.

Methods

In this retrospective single-centre study, 138 patients (65 female, 73 male) with a mean age of 35 ± 26 years were included; 97 were diagnosed with an intracranial neoplasm, while 41 exhibited no intracranial pathology. Inclusion criteria were a 1.5 T or 3.0 T MRI dataset with the following sequences: axial T2-weighted and axial T1-weighted pre- and post-contrast images with a slice thickness of 3–6 mm, and no previous brain surgery. Image analysis was performed by two human readers (R1 = 5 years and R2 = 10 years of experience in brain MRI) and a deep learning (DL)-based AI model. The sensitivity, specificity and accuracy of the AI model and the human readers in detecting and correctly classifying brain tumors were measured. Histological results served as the gold standard.

Results

The AI model reached a sensitivity of 93.81% [87.02–97.70] and a specificity of 63.41% [46.94–77.88], while the human readers reached 100% [96.27–100.00] and 100% [91.40–100.00], respectively. The human readers provided a significantly higher accuracy rate, with 0.93 (95% CI: 0.88–0.97) for R1 and 0.98 (95% CI: 0.94–0.99) for R2 vs. 0.74 (95% CI: 0.66–0.81) for the AI model (P < 0.001).

Conclusion

The underlying DL-based AI algorithm can independently identify and segment intracranial tumors while providing satisfactory results for establishing the correct diagnosis. Although it currently remains inferior to experienced radiologists, it is under ongoing development and represents a step towards an artificial intelligence-augmented radiology workflow.

Introduction

In recent years, an increase in the incidence of brain tumors has been reported globally across all age groups [1, 2]. Imaging of intracranial tumors is therefore crucial for establishing the correct diagnosis and for therapy planning and disease monitoring. Differentiation between malignant and benign tumors, and between primary and metastatic brain tumors, is particularly essential for determining further therapeutic steps: for example, if CNS lymphoma is suspected, corticosteroid therapy should be avoided prior to biopsy because it may impede the histopathological diagnosis [3].

MRI is the method of choice for evaluating intracranial pathologies, offering soft tissue contrast superior to CT without ionizing radiation and, therefore, without the risk of radiation-induced cancer [4, 5]. There are various sequences, techniques, and suggested protocols for tumor detection. Currently recommended brain tumor MRI protocols include the following sequences: axial 2D fluid-attenuated inversion recovery (FLAIR), susceptibility-weighted imaging (SWI), axial 2D diffusion-weighted imaging (DWI), 2D T1-weighted imaging (T1WI) and 3D T1 post-contrast [6]. Nevertheless, the correct classification of brain tumors remains challenging, especially for residents and physicians with limited experience in neuroradiology, not only because of the phenotypic overlap of different tumor entities but also because of the rare occurrence of some tumor types.

In recent years, deep learning (DL)-based methods have been introduced in radiology for a wide range of tasks aimed at improving the clinical workflow, e.g., image reconstruction, automated segmentation and classification, identification of abnormal chest radiographs [7, 8], detection of intracranial hemorrhage and stroke on non-contrast head CT [8, 9], and detection of acute stroke on diffusion-weighted MRI [8, 10].

We hypothesized that we could generate quick and readily available, high-quality reports with quantitative image information by applying a novel AI application for tumor detection, segmentation, quantitative volumetry, automated image interpretation and reporting. We also proposed that this deep learning-based algorithm may prove to be a valuable tool in our daily routine, potentially speeding up workflow while simultaneously improving the quality of medical reports.

Materials and methods

This retrospective study was approved by the Ethics Committee of the Rhineland-Palatinate Chamber of Physicians, and the requirement for written informed consent was waived.

Patient cohort

During the inclusion period of this retrospective single-centre study, between October 2022 and July 2023, 361 patients underwent a clinically indicated brain MRI because of a suspected intracranial tumor or for follow-up of an already confirmed neoplasm. Of these, 138 patients were included: 97 were diagnosed with an intracranial tumor and 41 exhibited no intracranial pathology. The inclusion criteria were as follows: (i) an MRI study obtained at either 1.5 T or 3 T, (ii) available T2-weighted images (T2WI) and T1-weighted pre-contrast (T1W) and post-contrast images (T1CWI) in the axial plane with a slice thickness of 3–6 mm, (iii) no previous brain surgery and (iv) a tumor type included in the training data of the AI (artificial intelligence) algorithm. 223 patients were excluded because of (i) missing T2WI (n = 57), (ii) missing T1CWI (n = 86), (iii) missing both T2WI and T1W pre- and post-contrast images (n = 37) or (iv) previous brain surgery with resulting anatomic distortion and no preoperative imaging available (n = 43). The inclusion/exclusion process is presented in Fig. 1.

Fig. 1. Flow chart of the inclusion/exclusion process.

BioMind setup and software architecture

A deep learning-based tool approved by the NMPA China (BioMind, Beijing, China) was used for automated tumor segmentation and classification, comprising three deep convolutional neural network (CNN) models: (i) a lesion segmentation model, (ii) an atlas-based segmentation model and (iii) a classification model. The lesion segmentation model consists of two components: one trained to detect brain anomalies on T2-weighted images (T2WI) and the other trained on T1-weighted post-contrast images (T1CWI). A final heat map is generated by overlaying the segmentation results obtained from T2WI and T1CWI. The heat map is a probability distribution over image pixels indicating whether a pixel is tumor-positive; the higher the probability, the "hotter" the pixel appears in the visualization. The region of interest (ROI) is loosely cropped across all MRI sequences and concatenated with the heat map as multiple input channels. Next, the classification model analyzes the selected region to distinguish different tumor types with a confidence score, suggesting the two most probable diagnoses. The atlas-based algorithm, in turn, provides information about the spatial position of different structures [11] by mapping at least 12 major anatomic brain structures, such as the prefrontal lobes, temporal lobes, cerebellum, brainstem and pituitary region, using T2WI as input. The detected abnormality is then fused with the segmented brain atlas to eliminate unlikely tumor types based on their typical distribution pattern. This filtering via brain atlas segmentation helps improve tumor differentiation accuracy. For example, if the region of interest coincides with the segmentation mask of the pituitary region, the differential diagnosis is unlikely to include medulloblastoma, vestibular schwannoma or hemangioblastoma, so their likelihood scores are set to zero by default. Eventually, the tumor type with the highest likelihood score is displayed as output, together with the two most likely diagnoses. For a visual representation of the AI algorithm's analysis process, refer to Fig. 2.
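
To make the atlas-based filtering step concrete, the following minimal sketch illustrates the idea; this is our own illustration, not BioMind's code, and the region-to-tumor-type mapping and all names are hypothetical.

```python
# Illustrative sketch of atlas-based likelihood filtering (hypothetical names).
import numpy as np

# Assumed mapping from atlas regions to anatomically plausible tumor types.
PLAUSIBLE_BY_REGION = {
    "pituitary_region": {"pituitary_adenoma", "craniopharyngioma"},
    "posterior_fossa": {"medulloblastoma", "hemangioblastoma", "metastasis"},
}

def filter_by_atlas(likelihoods, lesion_mask, atlas_masks):
    """Zero the likelihood of tumor types implausible for the lesion's region.

    likelihoods: dict tumor_type -> score from the classification model
    lesion_mask: boolean array derived from the segmentation heat map
    atlas_masks: dict region_name -> boolean mask of the same shape
    """
    # Pick the atlas region that overlaps the lesion the most.
    region = max(atlas_masks, key=lambda r: (lesion_mask & atlas_masks[r]).sum())
    plausible = PLAUSIBLE_BY_REGION.get(region, set(likelihoods))
    filtered = {t: (s if t in plausible else 0.0) for t, s in likelihoods.items()}
    # Report the two most likely remaining diagnoses.
    return sorted(filtered, key=filtered.get, reverse=True)[:2], filtered
```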

Fig. 2. Diagram illustrating the DL-based algorithm's analysis process.

Neuronal network

U-Net [12] forms the basis of the network architecture for most medical image segmentation models, including the tumor segmentation and brain-atlas models used in our study, while the tumor classification model is an ensemble of multiple 3D CNNs with an attention mechanism.

In the applied algorithm, SE-blocks [13, 14] were incorporated for their ability to enhance feature representation by modeling channel interdependencies, and residual blocks [15, 16] were added to tackle the vanishing gradient problem in the training of very deep neural networks.
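
For illustration, a minimal PyTorch sketch of these two building blocks follows; channel sizes and the reduction ratio are our assumptions, not the vendor's actual values.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel recalibration: squeeze (global pool), then excite (gating) [13]."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight channels

class ResidualConv(nn.Module):
    """Conv block with an identity shortcut to ease gradient flow [15]."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))           # skip connection
```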

Furthermore, the segmentation model uses stacks of three 2D slices as input (2.5D): the two neighbouring slices sandwich the main slice in the middle, and the network's task is to produce a segmentation mask for that middle slice. Compared to pure 2D input, the neighbouring slices provide extra information on the structure and connectivity of brain tissue and lesions. The advantage of 2.5D over complete 3D datasets is that it can accommodate MRI scans with different slice thicknesses and is relatively lightweight in model parameters and computation. Each convolutional layer has a residual block or SE-block added in parallel to the existing U-Net convolutional block.
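
A short sketch of how such a 2.5D training sample could be assembled (our illustration; function and variable names are hypothetical):

```python
import numpy as np

def make_25d_sample(volume: np.ndarray, masks: np.ndarray, i: int):
    """Build one 2.5D sample: slice i with its axial neighbours as channels.

    volume, masks: (D, H, W) arrays; i: index of the middle slice.
    """
    lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)    # clamp at borders
    x = np.stack([volume[lo], volume[i], volume[hi]], axis=0)  # (3, H, W) input
    y = masks[i]                                               # target: middle slice
    return x, y
```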

The classification model is an ensemble [17] of multiple 3D CNNs with an attention mechanism [18]. T2WI, T1CWI and the segmentation heat map are resampled before being fed into the model as multichannel input. Each image is resized to 256 × 256 × D (where D is the depth of the image), and the cropped region is fixed at 128 × 128 × 8. The output block consists of several fully connected layers, with the final output being an array of 24 elements representing the 24 tumor types found in the training dataset.
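
The input/output shaping can be sketched as one hypothetical ensemble member in PyTorch; the layer sizes are our assumptions, and the attention mechanism [18] is omitted for brevity. In an ensemble [17], several such models would be trained and their outputs averaged.

```python
import torch
import torch.nn as nn

class TumorClassifier(nn.Module):
    """Toy 3D CNN over a 128 x 128 x 8 crop of [T2WI, T1CWI, heat map]."""
    def __init__(self, in_channels: int = 3, n_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                     # 8x128x128 -> 4x64x64
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),             # global pooling
        )
        self.head = nn.Sequential(               # fully connected output block
            nn.Flatten(), nn.Linear(64, 64), nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),            # one score per tumor type
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, 8, 128, 128) multichannel crop around the ROI
        return self.head(self.features(x))
```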

Image analysis

Image analysis was performed by the AI and two human readers: one board-certified radiologist (R1) with five years of experience in reading brain MRI, and one board-certified radiologist and neuroradiologist (R2) with ten years of experience in reading brain MRI scans. Both readers were blinded to the patient's history but had access to the patient's age. For the human readers, an arrangement of the most pertinent MRI sequences was saved in the PACS workstation (SECTRA, Linköping, Sweden). In contrast to the AI, however, the readers had access to all acquired sequences (e.g., 3D datasets or diffusion-weighted imaging) for image interpretation and were required to name the two most likely diagnoses. The reading time was not limited. Accuracy, sensitivity and specificity were measured for the human readers; the same parameters were assessed for the DL algorithm, and the results were compared. The ground truth was the histopathology obtained through biopsy or tumor resection.

Statistical analysis

Statistical analysis was performed using R (version 4.1.3). Continuous variables were reported as mean ± standard deviation if normally distributed and as median/interquartile range otherwise. Model performance was quantified by assessing the accuracy using Pearson's chi-squared test. Furthermore, we reported sensitivity and specificity, including their 95% confidence intervals (95% CI), for each reader and the AI model and compared the results using McNemar's test. The multi-rater Fleiss' kappa (κ) was assessed for inter-rater agreement. The level of agreement was defined as follows: poor, κ < 0.21; fair, κ = 0.21–0.40; moderate, κ = 0.41–0.60; substantial, κ = 0.61–0.80; almost perfect, κ = 0.81–1.0. P-values less than 0.05 were considered statistically significant [12].
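
The analysis was run in R; purely as an illustration of the reported metrics, the same quantities can be computed in Python with SciPy and statsmodels. The detection counts below are those reported in the Results; the McNemar and Fleiss tables are placeholders, not study data.

```python
import numpy as np
from scipy.stats import beta
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.inter_rater import fleiss_kappa

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) 95% CI for a proportion k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Detection counts from the Results: 91/97 tumors found, 26/41 true negatives.
print(91 / 97, clopper_pearson(91, 97))  # sensitivity ~0.938 [0.870, 0.977]
print(26 / 41, clopper_pearson(26, 41))  # specificity ~0.634 [0.469, 0.779]

# McNemar's test on a 2x2 table of paired correct/incorrect calls
# (placeholder counts, not study data).
table = np.array([[90, 3], [30, 15]])
print(mcnemar(table, exact=True).pvalue)

# Fleiss' kappa: rows = cases, columns = diagnosis categories, entries =
# number of raters choosing that category (placeholder data, 3 raters).
ratings = np.array([[3, 0], [2, 1], [0, 3]])
print(fleiss_kappa(ratings, method='fleiss'))
```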

Results

Patient cohort

138 patients (65 female and 73 male) with a mean age of 35 ± 26 years (range, 0–83 years) were included; 97 were diagnosed with a benign or malignant intracranial tumor, while 41 exhibited no intracranial pathology. The tumor entities were glioma (n = 52), metastasis (n = 13), meningioma (n = 11), medulloblastoma (n = 11) and other less common tumors (n = 10), namely craniopharyngioma (n = 4), germinoma (n = 2), dysembryoplastic neuroepithelial tumor (n = 1), rosette-forming glioneuronal tumor (n = 1), dermoid cyst (n = 1) and primitive neuroectodermal tumor of the CNS (n = 1). Patient characteristics are listed in Table 1.

Table 1.

Patient characteristics

Characteristic                        Value (n = 138)
Age (years)*                          35 ± 26 (0–83)
Sex
  Male                                73 (52.9%)
  Female                              65 (47.1%)
Intracranial neoplasm                 97 (70.3%)
  Glioma                              52 (53.6%)
  Metastasis                          13 (13.4%)
  Meningioma                          11 (11.3%)
  Medulloblastoma                     11 (11.3%)
  Others                              10 (10.3%)
No clinically relevant findings       41 (29.7%)

*Data are mean ± 1 SD, with ranges in parentheses. Percentages for tumor subtypes refer to the 97 patients with an intracranial neoplasm.

AI model and human reader performance

Of the 97 patients with an intracranial tumor on imaging, 91 were detected by the DL-based algorithm, resulting in a sensitivity of 93.81% [87.02–97.70]. An example is illustrated in Fig. 3. However, 15 of the 41 patients with no intracranial tumor on imaging were falsely diagnosed with an intracranial pathology by the AI algorithm, resulting in a specificity of 63.41% [46.94–77.88] (Fig. 4). The human readers demonstrated a higher sensitivity and specificity of 100% [96.27–100.00] and 100% [91.40–100.00], respectively.

Fig. 3. Medulloblastoma in the T1-weighted (A) and T2-weighted (B) sequences, and with the region of interest (ROI) in the T1-weighted post-contrast sequence (C).

Fig. 4. Normal findings in (A) the T1-weighted pre-contrast sequence, (B) the T1-weighted post-contrast sequence and (C) the T2-weighted image, with a region of interest (ROI) falsely segmented and incorrectly classified as a pituitary tumor.

Furthermore, we investigated the accuracy of the AI model and the human readers. When only the principal diagnosis provided by the AI model was considered, the accuracy was 0.63 (95% CI: 0.54–0.71); when both differential diagnoses were considered, the accuracy improved to 0.74 (95% CI: 0.66–0.81). The human readers demonstrated an accuracy of 0.89 (95% CI: 0.82–0.94) (R1) and 0.92 (95% CI: 0.86–0.96) (R2) for the most likely diagnosis alone. When the differential diagnosis was also considered, the accuracy rates improved to 0.93 (95% CI: 0.88–0.97) and 0.98 (95% CI: 0.94–0.99), respectively, significantly higher than that of the AI model (R1, P < 0.001; R2, P < 0.001) (Fig. 5).

Fig. 5. Testing performance of the AI model and the human readers in our study. Column A shows the accuracy when only the proposed principal diagnosis was considered; column B, when the two most likely differential diagnoses were considered.

Interrater reliability

The inter-rater agreement between the AI algorithm and the human readers was moderate to substantial, with Fleiss' kappa (κ) = 0.59 (95% CI: 0.50–0.68) for R1 and κ = 0.65 (95% CI: 0.56–0.73) for R2. Agreement between the two human readers was almost perfect, with κ = 0.90 (95% CI: 0.81–0.99).

Discussion

Our study aimed to evaluate the performance of a DL-based algorithm for detecting and classifying intracranial tumors on MRI. Our analysis revealed that the tested CNN model, although currently inferior to experienced radiologists, showed good overall accuracy. These results highlight the potential of such CNN-based tools to improve the radiology workflow. Additionally, such a model can be a valuable aid, especially for young, less experienced radiologists.

Due to the increasing number of medical examinations, an improved radiology workflow is needed [19, 20]. Implementing such a model can help create prioritized worklists, enabling radiologists and neuroradiologists to focus on urgent and complex studies, and can thereby expedite the process from diagnosis to treatment [21, 22]. However, high sensitivity and specificity are essential to prevent overloading radiologists with false-positive cases, which could lead to alarm fatigue. Additionally, an overall improvement in workflow efficiency allows for valuable learning opportunities for trainees and junior physicians.

Another benefit of the deep learning-based model in our study is the automated pathology segmentation and volumetry. Manual segmentation and volumetry are very time-consuming and, in clinical routine, often restricted to qualitative visual assessment. By implementing such an AI algorithm, the physicians' workflow remains unaltered, as images are automatically transferred from the PACS to the workstation, and a suggested report is generated and supplied to physicians automatically.

While 3D imaging is increasingly applied in clinical routine, the software in our study processes only two-dimensional (2D) images. Over the past years, the use of three-dimensional (3D) image datasets has increased significantly, making optimization of 3D CNN architectures necessary. However, a significant obstacle to implementing 3D models is the excessive computational load they require. Alternative methods to 3D CNNs are emerging, including combining 2D CNNs with a neural network designed for sequence data to analyze the sequential 2D images within a 3D volume [23], as sketched below.
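
A minimal sketch of that alternative approach, as described in [23]; this is our illustration under assumed layer sizes, not code from the study or from BioMind.

```python
import torch
import torch.nn as nn

class SliceSequenceModel(nn.Module):
    """2D CNN encodes each slice; an LSTM aggregates features across slices."""
    def __init__(self, feat_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(          # per-slice 2D feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, D, H, W) stack of axial slices
        b, d, h, w = volume.shape
        feats = self.encoder(volume.reshape(b * d, 1, h, w)).view(b, d, -1)
        _, (hidden, _) = self.rnn(feats)       # aggregate along the slice axis
        return self.head(hidden[-1])           # classify the whole volume
```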

A limitation of our study lies in its retrospective single-centre design. The model has only been tested in an experimental setting and not yet on a broad clinical dataset; ensuring its robustness therefore requires further validation in a prospective setting. In addition, it was trained on a Chinese population, and some studies have observed reduced model performance when test sets were collected from institutions other than those providing the training data [19]. Another critical aspect to consider is the quality of the training data, which directly affects the accuracy of an AI model.

Furthermore, the study dataset was limited mainly to the most common intracranial tumor types, and specific tumor types were underrepresented owing to their low incidence; therefore, we were unable to effectively assess the model's generalizability. Moreover, the AI algorithm was trained on patients older than 10 years, whereas our dataset included both adults and pediatric patients younger than 10 years. Future studies could investigate whether separate pediatric and adult models could enhance performance.

The AI model in our study uses axial images as input, which could complicate the detection of specific tumor types depending on the region; for example, tumors in the pituitary region are more easily detected on sagittal or coronal planes. Additionally, a FLAIR sequence appears to be the most suitable for detecting tumors [24], whereas the evaluated CNN algorithm detects tumors using a T2 sequence. Moreover, it does not consider additional sequences, such as diffusion-weighted imaging or SWI, which could aid the diagnosis. Nevertheless, AI algorithms are expected to continue developing in the near future, enabling the simultaneous analysis of more sequences and the processing of 3D images.

Furthermore, there was considerable variation among the analyzed datasets. Some of the MRI images were acquired over a decade ago, and not all images had the same spatial resolution owing to evolving MRI scanner technology. Variation among patients, scanners (from different manufacturers), scanner operators, spatial resolutions and disease incidence in diverse populations is a crucial aspect to consider [19, 25].

Conclusion

The AI algorithm evaluated in this study can independently identify and segment intracranial tumors and provides satisfactory results in establishing the correct diagnosis. Although it currently remains inferior to experienced neuroradiologists, it is under ongoing development and represents a meaningful step towards an artificial intelligence-augmented radiology workflow.

Author contributions

Conceptualization, M.K. and M.A.B.; methodology, M.K.; software, W.Ch.; validation, M.K. and A.S.; formal analysis, M.K. and A.S.; investigation, M.K. and M.M.-E.; resources, M.A.B., M.K. and O.K.; data curation, M.K., S.A. and O.K.; writing—original draft preparation, M.K., A.S., W.Ch., A.E.O., M.A.B. and S.A.; writing—review and editing, M.K., A.S., W.Ch., A.E.O., M.A.B., S.A., M.M.-E. and O.K.; visualization, M.K., S.A., A.S. and W.Ch.; supervision, M.A.B., A.E.O. and S.A.; project administration, M.A.B., M.K. and M.M.-E.; funding acquisition, M.M.-E. All authors reviewed the final version of the manuscript and agreed to submit it to IMAGING for publication.

Funding sources

This research was funded by BioMind, HANALYTICS PTE. LTD., 151 Lorong Chuan, New Tech Park, Singapore 556741.

Conflicts of interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; or in the decision to publish the results. The rest of the authors have no conflict of interest to disclose.

Ethical statement

The study was conducted in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

References

[1] Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, et al.: Brain tumor segmentation with deep neural networks. Med Image Anal 2017; 35: 18–31.
[2] Magadza T, Viriri S: Deep learning for brain tumor segmentation: A survey of state-of-the-art. J Imaging 2021; 7(2): 19.
[3] Scheichel F, Marhold F, Pinggera D, Kiesel B, Rossmann T, Popadic B, et al.: Influence of preoperative corticosteroid treatment on rate of diagnostic surgeries in primary central nervous system lymphoma: A multicenter retrospective study. BMC Cancer 2021; 21: 110.
[4] Rudie JD, Rauschecker AM, Bryan RN, Davatzikos C, Mohan S: Emerging applications of artificial intelligence in neuro-oncology. Radiology 2019; 290(3): 607–618.
[5] Srinivas C, K SN, Zakariah M, Alothaibi YA, Shaukat K, Partibane B, Awal H: Deep transfer learning approaches in performance analysis of brain tumor classification using MRI images. J Healthc Eng 2022; 2022: 3264367.
[6] Kamepalli H, Kalaparti V, Kesavadas C: Imaging recommendations for the diagnosis, staging, and management of adult brain tumors. Indian J Med Paediatr Oncol 2023; 44(1): 26–38.
[7] Yates E, Yates L, Harvey H: Machine learning "red dot": Open-source, cloud, deep convolutional neural networks in chest radiograph binary normality classification. Clin Radiol 2018; 73(9): 827–831.
[8] Syed AB, Zoga AC, editors: Artificial intelligence in radiology: Current technology and future directions. Seminars in Musculoskeletal Radiology; Thieme Medical Publishers; 2018.
[9] Prevedello LM, Erdal BS, Ryu JL, Little KJ, Demirer M, Qian S, White RD: Automated critical test findings identification and online notification system using artificial intelligence in imaging. Radiology 2017; 285(3): 923–931.
[10] Lee E-J, Kim Y-H, Kim N, Kang D-W: Deep into the brain: Artificial intelligence in stroke imaging. J Stroke 2017; 19(3): 277.
[11] Shiee N, Bazin P-L, Cuzzocreo JL, Blitz A, Pham DL: Segmentation of brain images using adaptive atlases with application to ventriculomegaly. In: Information Processing in Medical Imaging: 22nd International Conference, IPMI 2011, Kloster Irsee, Germany, July 3–8, 2011, Proceedings. Springer; 2011.
[12] Siddique N, Paheding S, Elkin CP, Devabhaktuni V: U-Net and its variants for medical image segmentation: A review of theory and applications. IEEE Access 2021; 9: 82031–82057.
[13] Hu J, Shen L, Sun G: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018.
[14] Rundo L, Han C, Nagano Y, Zhang J, Hataya R, Militello C, et al.: USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019; 365: 31–43.
[15] He K, Zhang X, Ren S, Sun J: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016.
[16] Diakogiannis FI, Waldner F, Caccetta P, Wu C: ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J Photogramm Remote Sens 2020; 162: 94–114.
[17] Hansen LK, Salamon P: Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 1990; 12(10): 993–1001.
[18] Luong M-T, Pham H, Manning CD: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025; 2015.
[19] Gauriau R, Bizzo BC, Kitamura FC, Landi Junior O, Ferraciolli SF, Macruz FB, et al.: A deep learning-based model for detecting abnormalities on brain MR images for triaging: Preliminary results from a multisite experience. Radiol Artif Intell 2021; 3(4): e200184.
[20] Andriole KP: Addressing the coming radiology crisis: The Society for Computer Applications in Radiology Transforming the Radiological Interpretation Process (TRIP™) initiative; 2003.
[21] Soffer S, Klang E, Shimon O, Barash Y, Cahan N, Greenspana H, Konen E: Deep learning for pulmonary embolism detection on computed tomography pulmonary angiogram: A systematic review and meta-analysis. Sci Rep 2021; 11(1): 15814.
[22] Mitka M: Joint Commission warns of alarm fatigue: Multitude of alarms from monitoring devices problematic. JAMA 2013; 309(22): 2315–2316.
[23] Cheng PM, Montagnon E, Yamashita R, Pan I, Cadrin-Chênevert A, Perdigón Romero F, et al.: Deep learning: An update for radiologists. Radiographics 2021; 41(5): 1427–1445.
[24] Anaya-Isaza A, Mera-Jiménez L, Verdugo-Alejo L, Sarasti L: Optimizing MRI-based brain tumor classification and detection using AI: A comparative analysis of neural networks, transfer learning, data augmentation, and the cross-transformer network. Eur J Radiol Open 2023; 10: 100484.
[25] Hyde RJ, Ellis JH, Gardner EA, Zhang Y, Carson PL: MRI scanner variability studies using a semi-automated analysis system. Magn Reson Imaging 1994; 12(7): 1089–1097.