<?xml version="1.0" encoding="UTF-8"?>
<records>

  <record>
    <language>eng</language>
    <publisher>Oriental Scientific Publishing Company</publisher>
    <journalTitle>Biomedical and Pharmacology Journal</journalTitle>
    <issn>0974-6242</issn>
    <publicationDate>2025-12-30</publicationDate>
    <volume>18</volume>
    <issue>4</issue>
    <startPage>2766</startPage>
    <endPage>2782</endPage>
    <doi>10.13005/bpj/3292</doi>
    <publisherRecordId>68973</publisherRecordId>
    <documentType>article</documentType>
    <title language="eng">ESAM: Efficient Spatial Attention based Deep Learning Approach for Fundus Images Classification</title>

    <authors>
      <author>
        <name>Richa Gupta</name>
        <affiliationId>1</affiliationId>
      </author>
      <author>
        <name>Vidit Kumar</name>
        <affiliationId>1</affiliationId>
      </author>
      <author>
        <name>Vikas Tripathi</name>
        <affiliationId>1</affiliationId>
      </author>
    </authors>
    
    <affiliationsList>
      <affiliationName affiliationId="1">Department of CSE, Graphic Era Deemed to be University, Dehradun, India</affiliationName>
      <affiliationName affiliationId="2">Department of CSE, Graphic Era Hill University, Dehradun, India</affiliationName>
    </affiliationsList>

    <abstract language="eng">Colour fundus photography is a valuable tool for detecting key biomarkers and early-stage lesions associated with various illnesses, including diabetic retinopathy (DR). DR remains a leading cause of vision impairment and blindness worldwide, resulting from capillary endothelial damage and increased vascular permeability. In recent years, substantial advances have been made in automated DR classification using retinal fundus imaging. However, the detection of multiple DR stages remains underexplored. Furthermore, challenges such as high inter-class similarity, subtle variations in lesion size, and redundant features within fundus images significantly complicate the classification process. Existing methods either extract lesion-specific features inefficiently or rely heavily on manual lesion annotations, leading to suboptimal grading performance or an increased annotation burden. To mitigate these challenges, this study introduces a novel attention-driven technique, termed the Efficient Spatial Attentional Deep Learning approach (ESADL), which enhances multistage DR classification by improving feature selection and lesion localization. Given a feature map extracted from an intermediate layer of a CNN, the proposed Efficient Spatial Attention Module (ESAM) infers an attention map in parallel along four separate pathways; the attention map is then multiplied with the input feature map for feature refinement. By leveraging multi-branch convolutions, diverse receptive fields, and efficient attention mechanisms, ESAM enhances the model’s ability to distinguish subtle retinal abnormalities, leading to more accurate and reliable DR classification. The ESADL model is evaluated on multiple retinal fundus image datasets, including IDRID_Rgrade, Messidor, IDRID_Edema, and Mendeley. The proposed model achieves its best performance on the IDRID_Edema dataset, attaining an accuracy of 84% with EfficientNet-B0 and 80% with MobileNetV2. On IDRID_Rgrade, it achieves 65% with the EfficientNet-B0 base model and 57% with the MobileNetV2 base model, and on Messidor Fold 1 it reaches 61% accuracy with MobileNetV2 and 60% with EfficientNet-B0.</abstract>

    <fullTextUrl format="html">https://biomedpharmajournal.org/vol18no4/esam-efficient-spatial-attention-based-deep-learning-approach-for-fundus-images-classification/</fullTextUrl>

    <keywords language="eng">
      <keyword>Attention based model</keyword>
      <keyword>CBAM</keyword>
      <keyword>Deep learning</keyword>
      <keyword>EfficientNetB0</keyword>
      <keyword>MobileNetV2</keyword>
    </keywords>
  </record>
</records>