Verma G. K, Kumar S, Singh M, Singh H. P, Mewada A, Ansari M. A. Hybrid Approach for Classification and Segmentation of Brain Tumour. Biomed Pharmacol J 2025;18(October Spl Edition).
Manuscript received on :01-05-2025
Manuscript accepted on :14-10-2025
Published online on: 21-10-2025
Plagiarism Check: Yes
Reviewed by: Dr. Indira Priyadarsini
Second Review by: Dr. Moumita Hazra
Final Approval by: Dr. Kapil Joshi


Gaurav Kumar Verma1, Shailendra Kumar1, Maninder Singh2, Harsh Pratap Singh3, Arvind Mewada4* and Mohd. Aquib Ansari5

1Department of Electronics and Communication Engineering, Integral University, Lucknow, India

2Symbiosis Centre for Medical Image Analysis, Symbiosis International (Deemed University), Pune, India

3Department of Computer Science Engineering, Medi-Caps University, Indore, India

4Department of Computer Science Engineering, Bennett University, Noida, India

5SCSE, Galgotias University, Greater Noida, India

Corresponding Author Email: mewadabpl@gmail.com

Abstract

Brain tumours pose a significant health challenge, necessitating precise classification and segmentation for effective diagnosis and treatment planning. Manual MRI interpretation is time-consuming and prone to subjectivity, highlighting the need for automated, reliable solutions. This paper presents a hybrid model that combines EfficientNetB7 and UNet to perform tumour classification and segmentation within a single framework, thereby improving diagnostic accuracy and processing efficiency. EfficientNetB7 extracts high-level features from 600×600×3 MRI scans, facilitating precise classification into glioma, meningioma, pituitary, and no-tumour classes. The UNet decoder utilises these shared features to generate pixel-wise segmentation maps of the tumour core, edema, and enhancing tumour regions, with the latter being clinically relevant primarily for gliomas. When tested on the BraTS 2020 dataset, the model achieved a classification accuracy of 98.5%, a Dice coefficient of 94.5%, and an Intersection over Union (IoU) of 91.3%, outperforming standalone EfficientNet, UNet, and contemporary hybrid architectures. With a 42 ms inference time per image, the model enables real-time clinical applications. By combining segmentation and classification into a single framework, the EfficientNetB7-UNet model offers a reliable, efficient, and practically useful method for automated brain tumour diagnosis.

Keywords

Brain tumour; BraTS 2020; EfficientNet; Hybrid deep learning; IoU; MRI; U-Net


Introduction

Brain tumours are among the most lethal malignancies, owing to their elevated mortality rates.1-4 These tumours vary widely, ranging from slow-growing benign lesions to aggressive cancers. According to a World Health Organisation (WHO) global cancer report,5 brain cancer is more common in males than females, and rates are generally higher in developed countries. In light of this, accurate categorisation and precise localisation of brain tumours are of the utmost importance for establishing effective treatment and improving patient outcomes. There are various types of brain tumours, each distinguishable by its unique characteristics.6 The most prevalent kinds are pituitary adenomas, gliomas, meningiomas, and metastatic tumours.7-9 Magnetic resonance imaging (MRI) is the preferred diagnostic method due to its exceptional soft-tissue contrast, which is crucial for observing the brain's anatomy.10,11 Nonetheless, manual MRI interpretation is laborious and prone to inter-observer variability, particularly given the complex structures and varied presentations of malignancies. Consequently, there is a need for automated solutions that deliver reliable, accurate, and efficient brain tumour classification and segmentation.

Artificial intelligence (AI) advancements, specifically in deep learning (DL), have opened new possibilities in medical image analysis. Deep learning-based techniques have made significant strides in brain tumour analysis in recent years,12-18 and such models can handle segmentation or classification tasks autonomously. Sultan et al.19 proposed a CNN-based deep learning model trained via transfer learning on two publicly accessible MRI datasets of 3,064 images (meningioma, glioma, and pituitary tumours) and 516 images (Grade II, Grade III, and Grade IV), obtaining accuracies of 96.13% and 98.7%, respectively. Xie et al.20 classified brain tumours using a CNN and achieved varied accuracy across meningiomas, gliomas, and pituitary tumours. Models such as EfficientNet have become effective tools for categorisation. Tan and Le21 created EfficientNet, which balances depth, width, and resolution using a compound scaling technique to achieve high accuracy. However, EfficientNet models cannot spatially pinpoint tumours, which limits their applicability in treatment planning. For segmentation, the UNet architecture, introduced by Ronneberger et al.,22 remains the benchmark in medical imaging. UNet's encoder-decoder design, enhanced by skip connections, preserves spatial details and captures fine-grained anatomical structures, making it particularly effective for segmenting tumour regions.

Nevertheless, UNet is not equipped to classify tumour types, which limits its use as a standalone diagnostic tool. Díaz-Pernas et al.23 developed a multiscale CNN approach that processes different spatial resolutions to enhance feature extraction, achieving high classification accuracy but at a high computational cost. Vimala et al.24 proposed an EfficientNet-based approach for brain tumour classification, achieving high accuracy through transfer learning on MRI images. Their fine-tuned EfficientNetB2 model achieved a test accuracy of 99.06%, with precision, recall, and F1-score values exceeding 98.7%, outperforming other state-of-the-art methods. However, its high computational cost may restrict use in resource-constrained scenarios, and the model also showed a greater false-negative rate for meningioma, suggesting difficulty in recognising this tumour type.

Recently, researchers have explored hybrid models to provide a comprehensive diagnostic tool. Kumar et al.25 introduced a hybrid technique for identifying and categorising brain tumours, employing a discrete wavelet transform to retrieve features from MRI images and principal component analysis for feature reduction. In another study, Senan et al.26 suggested a method for classifying brain tumours from MRI images that incorporates SVM, AlexNet, and ResNet-18. Their AlexNet+SVM model achieved 95.1% accuracy, 95.25% sensitivity, and 98.5% specificity, outperforming standalone CNNs. However, meningioma detection was less accurate (93.6%), indicating potential challenges with this tumour type, and the high computational cost of ResNet-18 training (349 minutes) may limit practical deployment. Saranya et al.27 utilised EfficientNet and DenseNet with optimisers such as Adam for brain tumour classification on MRI images. Their EfficientNet-B0 model achieved 97.62% accuracy, surpassing DenseNet's 90.94% with the Adam optimiser. However, the small dataset size and the computational demands may impact generalisability across various MRI scans. To effectively diagnose brain tumours, this paper introduces a hybrid EfficientNetB7-UNet model that combines the strengths of both architectures, addressing the need for comprehensive brain tumour diagnostics. The proposed architecture uses EfficientNetB7 for classification and UNet for segmentation to categorise tumour types and delineate their precise boundaries in MRI data. The EfficientNetB7 encoder, pre-trained on ImageNet and enhanced with squeeze-and-excitation (SE) blocks, is shared between both tasks, reducing computation and improving feature consistency. The EfficientNetB7 component captures rich, hierarchical tumour characteristics for classification, which the UNet branch uses to build pixel-wise segmentation maps. This hybrid method enhances diagnostic accuracy and reduces the computational demands of real-time clinical applications.

Contributions to the proposed work are as follows:

An end-to-end hybrid EfficientNetB7-UNet architecture is proposed, integrating EfficientNetB7's advanced feature extraction with UNet's segmentation precision into a unified framework for simultaneous brain tumour classification and segmentation within a single, cohesive model.

Computational redundancy is reduced for real-time clinical deployment by sharing the EfficientNetB7 encoder across the classification and segmentation tasks.

Leveraging preprocessing and augmentation, the model adapts to diverse tumour morphologies and MRI protocols, thereby enhancing its robustness across varied clinical settings.

Materials and Methods

This section provides a detailed discussion of the EfficientNetB7-UNet hybrid model for the automated classification and segmentation of brain tumours in MRI data. Two datasets are employed. The classification dataset, obtained from Kaggle (https://www.kaggle.com/datasets/sartajbhuvaji/brain-tumor-classification-mri), consists of four image classes: glioma, meningioma, pituitary, and no tumour. The segmentation dataset is the Brain Tumor Segmentation (BraTS) 2020 dataset, in which every case provides multi-modal sequences (T1, T1-contrast-enhanced, T2, and FLAIR) together with ground-truth segmentation masks that define the tumour core, oedema, and enhancing regions. Fig. 1 shows the structure of the proposed hybrid approach.

Data Preprocessing and Augmentation

MRI scans can vary in intensity and format depending on the scanner and acquisition parameters used. Preprocessing28 was therefore performed on the available dataset, standardising the data with skull stripping and intensity normalisation to focus the model on tumour-relevant brain areas. The preprocessing procedures implemented to maintain uniformity are as follows:

Intensity Normalisation: The values of the pixels are standardised to a range of [0, 1] by applying min-max scaling, which helps to reduce changes in contrast.

Skull Stripping: The Brain Extraction Tool (BET) algorithm isolates the brain by removing non-brain tissue.

Resizing: The images are resized to 600×600 pixels, the default input resolution of EfficientNetB7, while maintaining the detail necessary for feature extraction.

Data augmentation techniques, such as rotation (±30° to replicate patient orientation heterogeneity), scaling (0.8-1.2× to account for differences in tumour size), and horizontal flipping (to increase diversity), are applied to mitigate class imbalance. Augmentation is applied during training only, reducing overfitting and enhancing robustness across various tumour morphologies; a minimal implementation sketch follows. Fig. 2 visualises the preprocessed training samples, showing skull-stripped MRI images alongside their corresponding segmentation masks after data augmentation and resizing.
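A minimal sketch of these steps in TensorFlow/Keras is given below, assuming skull stripping has already been applied; the specific augmentation API is our choice for illustration, not the paper's code. For segmentation samples, the same geometric transforms must be applied jointly to image and mask (for example, by concatenating them channel-wise before augmentation).

import tensorflow as tf

def preprocess(image):
    image = tf.cast(image, tf.float32)
    lo = tf.reduce_min(image)
    hi = tf.reduce_max(image)
    image = (image - lo) / (hi - lo + 1e-8)      # min-max scaling to [0, 1]
    return tf.image.resize(image, (600, 600))    # EfficientNetB7 default input size

augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(30.0 / 360.0),  # approx. +/-30 degrees
    tf.keras.layers.RandomZoom((-0.2, 0.2)),       # roughly 0.8-1.2x scale
    tf.keras.layers.RandomFlip("horizontal"),
])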

Figure 1: Structure of the proposed hybrid approach


Figure 2: Three preprocessed MRI samples, skull-stripped and shown alongside their corresponding segmentation masks after data augmentation and resizing


Hybrid Model Architecture: EfficientNetB7-UNet Integration

The hybrid model combines EfficientNetB7 and UNet into a unified architecture designed to perform simultaneous brain tumour classification and segmentation within a single framework. This method optimises both diagnostic accuracy and computational economy for MRI-based brain tumour analysis by combining the robust feature extraction capabilities of EfficientNetB7 with the precise spatial localisation capabilities of UNet. EfficientNetB7 serves as the encoder, pre-trained on ImageNet and fine-tuned on the specific classification and segmentation datasets used in this study: 2,870 MRI images for classification across four classes (glioma, meningioma, pituitary adenoma, and no tumour), and 343 cases from the BraTS 2020 dataset for segmentation. Its compound scaling methodology balances depth, width, and resolution, achieving superior performance while reducing the number of parameters.

To classify the four tumour categories, the network processes 600×600×3 MRI inputs. Hierarchical features are derived during training using Mobile Inverted Bottleneck Convolution (MBConv) blocks with squeeze-and-excitation processes, across stages referred to as MBConv1-MBConv7; these layers capture both global and local tumour characteristics. For segmentation, the decoder upsamples the MBConv7 output with transpose convolutions, restoring the spatial detail lost during downsampling, and generates a 600×600×4 segmentation mask that delineates the background, tumour core, oedema, and enhancing tumour regions. For classification, a global average pooling layer follows MBConv7, feeding into a dense layer with SoftMax activation to produce probabilities across the four tumour classes. The detailed layer configuration is presented in Table 1. An algorithm outlining the preprocessing, model construction, training, and inference phases is provided to elucidate the operational workflow. This hybrid design minimises computational redundancy by sharing the encoder across tasks, enhances convergence through pre-trained weights, and supports real-time clinical applications.

Table 1: Proposed architecture for classification and segmentation of brain tumor

Layer Type Input Shape Output Shape Additional Information Task
Input Layer (600, 600, 3) (600, 600, 3) RGB MRI Input Shared
Stem Conv + MBConv1 (EfficientNet) (600, 600, 3) (300, 300, 32) Initial 3×3 Conv (stride 2) followed by MBConv1, SE Ratio: 0.25 Shared
MBConv2 (300, 300, 32) (150, 150, 48) Kernel: 3×3, Stride: 2 Shared
MBConv3 (150, 150, 48) (75, 75, 80) Kernel: 5×5, Stride: 2 Shared
MBConv4 (75, 75, 80) (38, 38, 160) Kernel: 3×3, Stride: 2 Shared
MBConv5 (38, 38, 160) (19, 19, 272) Kernel: 5×5, Stride: 2 Shared
MBConv6 (19, 19, 272) (19, 19, 448) Kernel: 5×5, Expansion: 6 Shared
MBConv7 (19, 19, 448) (19, 19, 640) Kernel: 3×3 Shared
GlobalAvgPooling2D (19, 19, 640) (640,) Feature aggregation for classification Classification
Dense Layer (640,) (128,) Activation: ReLU Classification
Dense Output (128,) (4,) Softmax with 4 classes (glioma, meningioma, pituitary, no tumor) Classification
Upsample Block 1 (19, 19, 640) (38, 38, 272) Transpose Conv, Skip from MBConv5 output Segmentation
Upsample Block 2 (38, 38, 272) (75, 75, 160) Transpose Conv, Skip from MBConv4 output Segmentation
Upsample Block 3 (75, 75, 160) (150, 150, 80) Transpose Conv, Skip from MBConv3 output Segmentation
Upsample Block 4 (150, 150, 80) (300, 300, 48) Transpose Conv, Skip from MBConv2 output Segmentation
Upsample Block 5 (300, 300, 48) (600, 600, 4) Transpose Conv to match the input resolution,  Activation: Softmax Segmentation
Segmentation Output (600, 600, 4) (600, 600, 4) Pixel-wise mask (background, core, edema, enhancing) Segmentation
#SE: Squeeze-and-Excitation
Algorithm 1: Hybrid EfficientNetB7-UNet training and inference
// Input: MRI dataset D with images X, class labels Y_class (glioma, meningioma, pituitary adenoma, no tumour),
//        and segmentation masks Y_seg (background, tumour core, oedema, enhancing tumour)
// Output: Classification predictions C and segmentation masks S
// Hyperparameters
Input_Size: 600×600×3
Num_Classes: 4
Seg_Channels: 4
Learning_Rate: 1e-4
Batch_Size: 16
Loss_Weights: λ1 = 1.0 (classification), λ2 = 1.0 (segmentation)
// Step 1: Prepare data and model
for each image x in D do:
    preprocess x: normalise to [0, 1], resize to Input_Size
end for
split the classification dataset into train (70%), validation (15%), and test (15%)
split the segmentation dataset (BraTS 2020) into train (70%), validation (15%), and test (15%) at the patient level to avoid leakage
for segmentation input, select modalities T1ce, T2, and FLAIR and stack them as the three input channels
initialise model:
    encoder ← EfficientNetB7(pretrained = ImageNet)
    classification head: GlobalAveragePooling(encoder) → Dense(Num_Classes, softmax) → C
    segmentation head: UNet_decoder(encoder) → Conv2D(Seg_Channels) → S (600×600×4)
train model:
    optimiser ← Adam(Learning_Rate)
    loss ← λ1 · cross_entropy(C, Y_class) + λ2 · dice_loss(S, Y_seg)
    for epoch = 1 to Max_Epochs do:
        update weights using the train set
        validate on the validation set
    end for
// Step 2: Inference and evaluation
for each test image x in D_test do:
    preprocess x
    features ← encoder(x)
    C ← classification_head(features)
    S ← segmentation_head(features)
end for
evaluate: compute accuracy(C, Y_class), macro_F1(C, Y_class), Dice(S, Y_seg), IoU(S, Y_seg)
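To complement the pseudocode, the following is a minimal TensorFlow/Keras sketch of how such a shared-encoder model could be assembled. The skip-connection layer names are assumptions based on the Keras EfficientNet implementation, and the Keras backbone ends in a 1×1 convolution with more channels than the 640 listed in Table 1; the sketch illustrates the structure rather than reproducing the exact published configuration.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_hybrid_model(input_shape=(600, 600, 3), num_classes=4, seg_channels=4):
    backbone = tf.keras.applications.EfficientNetB7(
        include_top=False, weights="imagenet", input_shape=input_shape)

    # Classification head: GAP -> Dense(128, ReLU) -> Dense(4, softmax).
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(128, activation="relu")(x)
    cls_out = layers.Dense(num_classes, activation="softmax", name="cls")(x)

    # UNet-style decoder with skips from intermediate encoder stages
    # (assumed layer names; verify against backbone.summary()).
    skip_names = ["block6a_expand_activation", "block4a_expand_activation",
                  "block3a_expand_activation", "block2a_expand_activation"]
    d = backbone.output                          # (19, 19, C) at 600x600 input
    for name, filters in zip(skip_names, [272, 160, 80, 48]):
        skip = backbone.get_layer(name).output
        d = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                   activation="relu")(d)
        d = layers.Resizing(skip.shape[1], skip.shape[2])(d)  # align odd sizes
        d = layers.Concatenate()([d, skip])
        d = layers.Conv2D(filters, 3, padding="same", activation="relu")(d)
    d = layers.Conv2DTranspose(seg_channels, 3, strides=2, padding="same")(d)
    seg_out = layers.Softmax(name="seg")(layers.Resizing(600, 600)(d))
    return Model(backbone.input, [cls_out, seg_out])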

Results

This section evaluates the proposed model's performance across several metrics, comparing it to baseline models and recent literature, and analyses its efficiency and interpretability for clinical applicability. The effectiveness of the proposed hybrid model for brain tumour classification and segmentation was evaluated using an MRI dataset. Classification covers four tumour types: glioma, meningioma, pituitary adenoma, and no tumour, comprising a total of 2,870 images. The classification dataset was split into 70% for training, 15% for validation, and 15% for testing, ensuring that the test set remained strictly unseen during model training and hyperparameter tuning. Fig. 3(a) shows the distribution of the training dataset for the four classes, 3(b) shows the distribution of the testing dataset, 3(c) shows a sample used in training the model, and 3(d) gives a sample used for testing. For segmentation, the BraTS 2020 dataset was used, consisting of 343 images and their corresponding masks. Data splitting for segmentation was performed at the patient level into 70% training, 15% validation, and 15% testing to prevent data leakage between slices of the same patient; a sketch of such a split is shown below. Each BraTS 2020 image is available in different modalities: 'T1', 'T1ce', 'T2', and 'FLAIR'. Fig. 4 shows a sample of the segmentation dataset used in the proposed hybrid approach. Among these modalities, our proposed approach considered only 'FLAIR', 'T1ce', and 'T2', stacked to form a 3-channel input (600 × 600 × 3) compatible with the model architecture. The proposed framework uses a multi-task learning strategy with a shared EfficientNetB7 encoder, from which separate classification and segmentation heads are trained in parallel.
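A hedged sketch of a patient-level split with scikit-learn is shown below; x and patient_ids are hypothetical arrays (per-slice modality stacks and their patient identifiers), not names from the paper.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# x: per-slice inputs stacked from the three modalities, e.g.
# x = np.stack([flair, t1ce, t2], axis=-1)        # shape (N, H, W, 3)
gss = GroupShuffleSplit(n_splits=1, train_size=0.70, random_state=42)
train_idx, rest_idx = next(gss.split(x, groups=patient_ids))
# Split the remaining 30% evenly into validation and test, again by patient.
gss2 = GroupShuffleSplit(n_splits=1, train_size=0.50, random_state=42)
val_rel, test_rel = next(gss2.split(x[rest_idx], groups=patient_ids[rest_idx]))
val_idx, test_idx = rest_idx[val_rel], rest_idx[test_rel]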

Figure 3: (a) Distribution of the training dataset for the four classes, (b) distribution of the testing dataset, (c) sample used in training the model and (d) sample used for testing purposes


Figure 4: Sample of the segmentation dataset used in the proposed hybrid approach.


To ensure balanced training of both classification and segmentation tasks, a combined loss function is utilised:

Cross-Entropy Loss for Classification

The cross-entropy loss measures classification accuracy by penalising incorrect predictions of tumour type, optimising EfficientNetB7 to differentiate accurately between glioma, meningioma, pituitary, and no-tumour classes.
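Written out in its standard categorical form (a notation choice added here for clarity, with $y_c$ the one-hot label and $\hat{y}_c$ the predicted probability for class $c$):

$$\mathcal{L}_{\mathrm{CE}} = -\sum_{c=1}^{4} y_c \log \hat{y}_c$$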

Dice Loss for Segmentation

The Dice loss, specifically suited for imbalanced segmentation tasks, is used to evaluate the overlap between predicted and ground-truth segmentation maps. This loss emphasises accurate tumor localisation, encouraging the UNet branch to capture precise boundaries.
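In its common soft form over pixels $i$, with $p_i$ the predicted probability, $g_i$ the ground-truth label, and $\epsilon$ a small smoothing constant (again a standard formulation, not quoted from the paper):

$$\mathcal{L}_{\mathrm{Dice}} = 1 - \frac{2\sum_i p_i g_i + \epsilon}{\sum_i p_i + \sum_i g_i + \epsilon}$$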

The total loss is computed as $\mathcal{L}_{\mathrm{total}} = \lambda_1 \mathcal{L}_{\mathrm{CE}} + \lambda_2 \mathcal{L}_{\mathrm{Dice}}$, with $\lambda_1 = \lambda_2 = 1.0$ as specified in Algorithm 1.

This builds upon the multi-task learning setup described above, which simultaneously adjusts the weights of the EfficientNetB7 encoder and the UNet decoder, enabling the model to learn features for both tasks. Hyperparameters such as batch size, dropout rate, and learning rate are tuned using a grid search to determine the best model, and early stopping is included to guard against overfitting during training. The model is trained for 100 epochs and tested on all four tumour classes; the classification results are summarised below. The model's performance in identifying each tumour type was assessed through precision, recall, and F1-score, with an average F1-score of around 0.975 showing the balanced performance of the proposed EfficientNetB7-UNet model.
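A hedged sketch of this training setup follows. The patience value and the tf.data pipelines train_ds and val_ds (yielding image batches paired with {"cls": label, "seg": mask} targets) are assumptions for illustration; only the optimiser, learning rate, loss weighting, epoch count, and early stopping come from the text.

import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # Soft Dice loss over all pixels and channels.
    inter = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

model = build_hybrid_model()   # from the earlier architecture sketch
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss={"cls": "categorical_crossentropy", "seg": dice_loss},
    loss_weights={"cls": 1.0, "seg": 1.0},      # lambda1 = lambda2 = 1.0
    metrics={"cls": ["accuracy"]})

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])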

Table 2: Performance parameters for three classes of tumor

Tumor Type Recall (%) Precision (%) F1-Score (%)
Glioma 98 97.4 97.6
Meningioma 97 97.5 97.2
Pituitary Adenoma 97.8 97.8 97.7
Average 97.6 97.5 97.5
Figure 5: (a) Confusion matrix for the classification of the three tumour types; (b) sensitivity and specificity values for each tumour type


Table 2 presents the model's performance in the classification task, and the confusion matrix (Fig. 5(a)) is plotted for each type of tumour. Moreover, the sensitivity and specificity were determined for each type of tumour and are highlighted in Fig. 5(b). Sensitivity reflects the accurate detection of tumour cases, while specificity reflects the avoidance of false positives. The sensitivities for glioma, meningioma, and pituitary adenoma were found to be 0.98, 0.97, and 0.97, respectively, while the specificities were 0.992, 0.987, and 0.988, respectively.
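These per-class values can be derived from the confusion matrix in the usual one-vs-rest fashion; a small sketch of that computation (ours, not the paper's code) is:

import numpy as np

def sensitivity_specificity(cm):
    # cm[i, j] counts samples of true class i predicted as class j.
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)   # per-class sensitivity, specificity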

Figure 6: ROC-AUC Curve for three different classes of tumor


The "no tumour" class was excluded from sensitivity and specificity reporting because these metrics are most clinically relevant for positive tumour cases. The results demonstrate that the EfficientNetB7 backbone effectively identifies the unique features of each tumour type, thereby reducing bias and ensuring reliable classification across all tumour categories. A ROC curve was plotted, and the Area Under the Curve (AUC) was computed for each type of tumour at various thresholds. For glioma, meningioma, and pituitary adenoma, AUCs of 0.98, 0.97, and 0.97, respectively, were obtained. The "no tumour" class was not included in the ROC-AUC analysis for the same reason. The results are depicted in Fig. 6, which tests classification performance across thresholds.
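One-vs-rest AUCs of this kind can be computed with scikit-learn; in the sketch below, y_true and probs are hypothetical arrays of integer labels and softmax outputs, and the class encoding is our assumption:

from sklearn.metrics import roc_auc_score

# y_true: (N,) integer labels; probs: (N, 4) softmax outputs.
for cls_idx, name in enumerate(["glioma", "meningioma", "pituitary"]):
    auc = roc_auc_score((y_true == cls_idx).astype(int), probs[:, cls_idx])
    print(name, round(auc, 3))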

Table 3: Comparison of different region-based tumor segmentation

Region Type Dice Coefficient (%) IoU (%) Boundary Precision (%) Boundary Recall (%) Boundary F1-Score (%)
Tumor Core 95.2 92.1 94.7 93.9 94.3
Edema Region 93.8 90.5 92.3 91.7 92.0
Enhancing Tumor 94.7 91.6 93.9 93.1 93.5
Average 94.5 91.4 93.6 92.9 93.2

These classification outcomes confirm that the EfficientNetB7 backbone exhibits a strong discriminative capability across various tumour types, with minimal performance degradation across different thresholds. This high stability is essential for ensuring reliability in clinical decision-making. Following classification, the segmentation task was evaluated on tumour cores, oedema, and enhancing tumour regions. Table 3 presents the segmentation results in terms of Dice coefficient, IoU, and boundary-based metrics. High Dice and IoU values across regions indicate that the UNet branch effectively captures the overall tumour structure and preserves the fine details of tumour boundaries.
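For reference, the overlap metrics in Table 3 correspond to the standard per-region Dice and IoU computed on binary masks; a minimal sketch is:

import numpy as np

def dice_iou(pred, gt):
    # pred, gt: binary masks for one region (e.g. tumour core).
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return dice, iou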

Discussion

The proposed model integrates both classification and segmentation within a single framework, enabling tumour type identification and anatomical localisation in one inference pass. This dual capability provides richer clinical insights than using separate models. Segmentation performance for the tumour core, oedema region, and enhancing tumour is evaluated using the Dice coefficient, Intersection over Union (IoU), and boundary metrics, which determine the alignment and precision of tumour boundary detection. These metrics were calculated for each region type to assess the model's ability to segment the tumour's anatomical features accurately. Table 3 illustrates the segmentation performance of each tumour region. The average Dice coefficient of 0.945 and IoU of 0.914 demonstrate the model's robustness in segmenting the different regions alongside the classification task.
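The single-pass behaviour can be illustrated with a short sketch (mri_slice is a hypothetical input; preprocess and model refer to the earlier sketches):

import time

x = preprocess(mri_slice)[None, ...]            # add batch axis: (1, 600, 600, 3)
start = time.perf_counter()
cls_probs, seg_mask = model.predict(x, verbose=0)  # both outputs in one pass
elapsed_ms = (time.perf_counter() - start) * 1e3
predicted_class = cls_probs.argmax(axis=-1)     # tumour type index
region_map = seg_mask.argmax(axis=-1)           # per-pixel region labels (600 x 600)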

Furthermore, the model's effectiveness was evaluated using boundary metrics for the critical tumour boundaries and structures: boundary precision, boundary recall, and boundary F1-score. The high F1-scores across tumour regions indicate that the model delineates tumour edges precisely, which is essential for clinical applications.

However, region-overlap metrics alone cannot guarantee accurate estimation of boundary alignment. Parameters such as the Hausdorff Distance (HD) and Average Surface Distance (ASD) are necessary for precise estimation of tumour extent: ASD provides an average measure of boundary correctness, whereas HD measures the largest deviation between the predicted and ground-truth boundaries. Fig. 7 presents these two metrics for each tumour region. Further, the performance is compared with the existing literature and baseline models (standalone EfficientNetB7 for classification and UNet for segmentation). Table 4 compares the models and shows that the proposed model achieved a classification accuracy of 98.5%, indicating its robust ability to identify tumour types correctly. With a Dice coefficient of 94.5% and an IoU of 91.3%, the proposed model outperformed the segmentation performance of both standalone UNet and hybrid literature models. Furthermore, the measured inference time of 42 ms per image indicates suitability for real-time applications and for clinical deployment where rapid decision-making is crucial.
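Both distances are available in common medical-imaging toolkits; for instance, using medpy (our choice of library, with pred_core and gt_core as hypothetical binary masks):

from medpy.metric.binary import hd, asd

# pred_core, gt_core: binary masks for one region (e.g. tumour core).
hausdorff = hd(pred_core, gt_core)     # largest boundary deviation
avg_surface = asd(pred_core, gt_core)  # mean boundary deviation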

Figure 7: Hausdorff Distance and Average Surface Distance measurements for each tumour region


Table 4: Comparison between baseline and existing models

Model Classification Accuracy (%) Dice Coefficient (Segmentation) IoU (Segmentation) Boundary F1-Score (%) Inference Time (ms)
Díaz-Pernas et al.23 97.3 91.8 89.1 90.4 55
Sultan et al.19 96.1 90.3 87.5 88.9 52
EfficientNetB7 95.6 NA NA NA 32
UNet NA 92.1 88.7 90.8 37
Proposed EfficientNetB7-UNet 98.5 94.5 91.3 93.2 42
#NA: Not Applicable

These comparisons highlight the model's improvements in classification and segmentation accuracy, as well as its computational efficiency, making it a comprehensive and reliable diagnostic tool. Table 5 presents an ablation study across four metrics: classification accuracy, Dice coefficient, IoU, and inference time.

Table 5: Ablation Study of Proposed Model

Configuration Classification Accuracy (%) Dice Coefficient (%) IoU (%) Inference Time (ms)
EfficientNetB7-UNet (Proposed) 98.5 94.5 91.3 42
No Skip Connections 97.2 90.2 87.1 43
No Pre-trained Weights 95.1 92.8 89.5 44
EfficientNetB0 instead of B7 96.8 93.1 90.0 38

Initially, performance was evaluated with an EfficientNetB0 backbone, followed by the EfficientNetB7 configuration; integrating the latter with the UNet decoder yields a marked performance improvement. The analysis was extended by removing or modifying several layers and by hyperparameter tuning, and the proposed architecture performed well under all conditions. Hence, it is applicable to clinical settings given its computational demands and model efficiency. The joint evaluation of classification and segmentation confirms that both tasks reinforce each other in the proposed architecture, leading to high accuracy, strong boundary alignment, and low inference time. This makes the approach suitable for integration into neuro-oncology diagnostic workflows, where both tumour type and precise spatial extent are critical.

Conclusion

The hybrid framework combines EfficientNetB7 and UNet for end-to-end brain tumour classification and segmentation, effectively addressing the diagnostic challenges with a unified architecture. Evaluated on two different datasets, the model leverages EfficientNetB7's robust feature extraction and UNet's precise spatial reconstruction to achieve strong performance. The classification of brain tumours achieves an accuracy of 98.5%, with high precision, recall, and F1-scores across glioma, meningioma, and pituitary adenoma, indicating reliable differentiation of tumour types. Concurrently, segmentation of the tumour regions (tumour core, oedema, and enhancing areas) achieved an average Dice coefficient of 94.5% and an Intersection over Union (IoU) of 91.3%. With an inference time of 42 ms per image, the model outperforms prior hybrid approaches in computational efficiency, making it viable for real-time clinical deployment. This integration of classification and segmentation into an end-to-end trainable framework provides a comprehensive diagnostic tool: classification informs tumour type for clinical decision-making, while segmentation defines the precise boundaries critical for surgical planning and radiotherapy. Future work will focus on extending the model to multi-centre datasets, incorporating other imaging modalities such as CT scans, optimising for lower-resource hardware, and integrating the model into clinical decision support systems to enhance usability in diverse healthcare environments.

Acknowledgement

All authors express their heartfelt gratitude to Integral University, Lucknow, for graciously assigning manuscript number IU/R&D/2025-MCN0003315, thus enabling the progression of our current research endeavours.

Funding Sources

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Conflict of Interest

The author(s) do not have any conflict of interest.

Data Availability Statement

The associated data are included in the article.

Ethics Statement

This research did not involve human participants, animal subjects, or any materials that require ethical approval.

Informed Consent Statement

This study did not involve human participants, and therefore, informed consent was not required.

Clinical Trial Registration

This research does not involve any clinical trials.

Permission to reproduce material from other sources

Not Applicable

Author Contributions

  • Gaurav Kumar Verma and Shailendra Kumar: Conceptualisation, Methodology, Data Collection, Analysis, Writing – Original Draft;
  • Shailendra Kumar, Maninder Singh and Arvind Mewada: Supervision, Writing – Review & Editing.

References

  1. Louis DN, Perry A, Reifenberger G, et al. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a Summary. Acta Neuropathologica. 2016;131(6):803-820.
  2. Lapointe S, Perry A, Butowski NA. Primary brain tumours in adults. The Lancet. 2018;392(10145):432-446.
  3. Choi JH, Ro JY. The 2020 WHO Classification of Tumors of Soft Tissue: Selected Changes and New Entities. Advances in Anatomic Pathology. 2020;28(1):44-58.
  4. Miller KD, Ostrom QT, Kruchko C, et al. Brain and other central nervous system tumor statistics, 2021. CA: A Cancer Journal for Clinicians. 2021;71(5):381-406.
  5. Ostrom QT, Francis SS, Barnholtz-Sloan JS. Epidemiology of Brain and Other CNS Tumors. Current Neurology and Neuroscience Reports. 2021;21(12):68.
  6. Mehnatkesh H, Jalali SA, Khosravi A, Nahavandi S. An intelligent driven deep residual learning framework for brain tumor classification using MRI images. Expert Systems with Applications. 2023;213:119087-119097.
  7. Haq EU, Jianjun H, Li K, Haq HU, Zhang T. An MRI-based deep learning approach for efficient classification of brain tumors. Journal of Ambient Intelligence and Humanized Computing. 2023;14(6):6697-6718.
  8. Dang K, Vo T, Ngo L, Ha H. A deep learning framework integrating MRI image preprocessing methods for brain tumor segmentation and classification. IBRO Neuroscience Reports. 2022;13:523-532.
  9. Goodenberger ML, Jenkins RB. Genetics of adult glioma. Cancer Genetics. 2012;205(12):613-621.
  10. Ullah Z, Farooq MU, Lee SH, An D. A hybrid image enhancement-based brain MRI images classification technique. Medical Hypotheses. 2020;143:109922.
  11. Litjens G, Kooi T, Bejnordi BE, et al. A Survey on Deep Learning in Medical Image Analysis. Medical Image Analysis. 2017;42:60-88.
  12. Paul JS, Plassard AJ, Landman BA, Fabbri D. Deep learning for brain tumor classification. SPIE Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging. 2017;10137:253-268.
  13. Zacharaki EI, Wang S, Chawla S, et al. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magnetic Resonance in Medicine. 2009;62(6):1609-1618.
  14. Machhale K, Nandpuru HB, Kapur V, Kosta L. MRI brain cancer classification using a hybrid classifier (SVM-KNN). International Conference on Industrial Instrumentation and Control (ICIC). 2015:60-65.
  15. Sharif MI, Li JP, Amin J, Sharif A. An improved framework for brain tumor analysis using MRI based on YOLOv2 and a convolutional neural network. Complex and Intelligent Systems. 2021;7(4):2023-2036.
  16. Nazir M, Shakil S, Khurshid K. Role of Deep Learning in Brain Tumor Detection and Classification: A Review. Computerized Medical Imaging and Graphics. 2021;91:101940.
  17. Tandel GS, Biswas M, Kakde OG, et al. A review on a deep learning perspective in brain cancer classification. Cancers. 2019;11(1):111.
  18. Khan SM, Nasim F, Ahmad J, Masood S. Deep Learning-Based Brain Tumor Detection. Journal of Computing and Biomedical Informatics. 2024;7(2):1-14.
  19. Sultan HH, Salem NM, Al-Atabany W. Multi-Classification of Brain Tumor Images Using Deep Neural Network. IEEE Access. 2019;7:69215-69225.
  20. Xie Y, Zaccagna F, Rundo L, et al. Convolutional neural network techniques for brain tumor classification (from 2015 to 2022): Review, challenges, and future perspectives. Diagnostics. 2022;12(8):1-46.
  21. Tan M, Le Q. EfficientNetV2: Smaller models and faster training. International Conference on Machine Learning (PMLR). 2021;139:10096-10106.
  22. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lecture Notes in Computer Science. 2015;9351:234-241.
  23. Díaz-Pernas FJ, Martínez-Zarzuela M, Antón-Rodríguez M, González-Ortega D. A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare. 2021;9(2):153:1-14.
  24. Babu Vimala B, Srinivasan S, Mathivanan SK, Mahalakshmi, Jayagopal P, Dalu GT. Detection and classification of brain tumor using hybrid deep learning models. Scientific Reports. 2023;13(1):23029:1-17.
  25. Kumar S, Mankame DP. Optimization-driven Deep Convolution Neural Network for brain tumor classification. Biocybernetics and Biomedical Engineering. 2020;40(3):1190-1204.
  26. Senan EM, Jadhav ME, Rassem TH, Aljaloud AS, Mohammed BA, Al-Mekhlafi ZG. Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning. Computational and Mathematical Methods in Medicine. 2022;2022:1-17.
  27. Saranya M, Komarasamy D, Dharshini R, Gurudeepa R, Mohanapriya S, Dharani R. Enhancing Brain Tumor Classification with Optimised Convolutional Neural Networks. 13th International Conference on Computing, Communication and Networking Technologies (ICCCNT). 2023:1-6.
  28. Kumar A, Agarwal M, Aquib M. A Genetic Algorithm-Enhanced Deep Neural Network for Efficient and Optimised Brain Tumour Detection. Communications in Computer and Information Science. 2024;2054:311-321.
This work is licensed under a Creative Commons Attribution 4.0 International License.