Divakaran S, Gurumurthy H. K, Ganesan M, Tukaram S, John B. J. AI-Integrated Digital Auscultation System for Early Detection and Monitoring of Chronic Obstructive Pulmonary Disease in Resource-Limited Settings. Biomed Pharmacol J 2025;18(3).
Manuscript received on :21-04-2025
Manuscript accepted on :12-09-2025
Published online on: 30-09-2025
Plagiarism Check: Yes
Reviewed by: Dr. Nicolas Padilla
Second Review by: Dr. Daniel Oyeka
Final Approval by: Dr. Anton R Kiselev

Sindu Divakaran1, Hari Krishnan Gurumurthy2*, Mohandass Ganapathy1, Sudhakar Tukkaram1 and Bethanney Janney John3

1Department of Biomedical Engineering, Sathyabama Institute of Science and Technology, Chennai, India.

2Department of Electrical and Electronics Engineering, School of Engineering, Mohan Babu University, Tirupati, India.

3Department of Biomedical Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India.

Corresponding Author E-mail: haris_eee@yahoo.com

DOI : https://dx.doi.org/10.13005/bpj/3237

Abstract

Auscultation remains a fundamental component of respiratory examination in clinical practice; however, its effectiveness is constrained by inter-observer variability and diagnostic subjectivity, particularly in the early detection of disease. This study presents a portable digital auscultation system, embedded within a smartphone-based platform, designed to enhance the diagnosis and monitoring of Chronic Obstructive Pulmonary Disease (COPD) through artificial intelligence (AI). The system employs a custom-built digital stethoscope equipped with an electret microphone, interfaced with an Android device, to acquire respiratory sounds. Signal processing techniques, including Mel-Frequency Cepstral Coefficients (MFCC), are applied to extract discriminative features from auscultated audio. Multiple deep learning classifiers—ANN, CNN, GRU, and LSTM—were evaluated for respiratory sound classification, with the ANN model achieving the highest diagnostic accuracy of 96.36%, along with precision, recall, and F1‑score of 96.23%, 96.10%, and 96.16%, respectively, outperforming CNN (93.45%), GRU (90.8%), and LSTM (88.25%). The diagnostic interface, developed using Streamlit, offers real-time feedback and supports remote respiratory health assessment. This AI-enhanced diagnostic tool has the potential to support biomedical practitioners in the early detection of COPD, monitoring disease progression, and assessing treatment responses, particularly in pharmacologically managed patients within low-resource healthcare environments.

Keywords

AI-Based Respiratory; Biomedical Signal Processing; Chronic Obstructive Pulmonary Disease (COPD); Diagnosis; Deep Learning in Healthcare; Digital Auscultation


Introduction

Chronic Obstructive Pulmonary Disease (COPD) is a progressive and life-threatening respiratory disorder that significantly affects quality of life and contributes to high global mortality, particularly in low- and middle-income countries.1,2 Despite advancements in pharmacological interventions and clinical management, timely diagnosis remains a persistent challenge, especially in primary care and rural settings.3 Auscultation, a fundamental component of respiratory examination, continues to serve as the frontline tool in diagnosing pulmonary diseases due to its non-invasive and real-time nature.4,5 However, the efficacy of auscultation is highly dependent on the clinician’s auditory skills and interpretive experience, which introduces substantial variability and can lead to misdiagnosis, particularly among junior medical staff.6,7 To overcome these limitations, digital auscultation devices and AI-based classification systems are gaining attention for their potential to standardize respiratory diagnosis and support clinical decision-making.

Digital stethoscopes, when combined with biomedical signal processing techniques, have shown promise in capturing and analyzing lung sounds with higher fidelity.10,11 Early systems applied signal features such as Fast Fourier Transform (FFT), spectral energy, and zero-crossing rates to classify abnormal sounds like wheezes and crackles.12,13 However, these methods proved inadequate for capturing the dynamic characteristics of respiratory signals.14,15 To address this, signal processing techniques such as Mel-Frequency Cepstral Coefficients (MFCC) and wavelet-based transforms were introduced to extract time-frequency features that better represent the complexities of respiratory audio signals.16,17 MFCC features, modeled on the human auditory system, are particularly effective in differentiating subtle acoustic variations across respiratory conditions and have become a standard in respiratory sound classification studies.18,19 These features are often used in conjunction with machine learning classifiers such as Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Trees for pattern recognition in respiratory diagnostics.20,21

The emergence of deep learning has further elevated the diagnostic potential of automated auscultation systems. Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are increasingly being employed to learn hierarchical representations of lung sounds from raw or preprocessed audio input.22,23 CNNs, in particular, are adept at learning spatial features from spectrograms and MFCC maps, while LSTM networks excel at modeling the temporal dependencies of respiratory cycles.24,25 Although these models report high classification accuracy, their implementations are often restricted to research environments and lack real-world integration.26,27 Many existing systems focus solely on algorithmic performance while neglecting hardware portability, environmental noise handling, user interface design, and overall accessibility for clinicians and patients.28,29 Furthermore, very few solutions address the end-to-end pipeline from audio acquisition to AI-based diagnosis and user interaction, leaving a gap between research prototypes and deployable clinical tools.

Several digital stethoscope systems, such as Eko Core, Thinklabs One, and Littmann Electronic, have demonstrated the feasibility of high-fidelity cardiopulmonary sound recording and AI-enabled analysis for disease detection.30,31 These platforms leverage cloud connectivity and machine learning for automated abnormal sound detection, but are often limited by cost, accessibility, and region-specific validation.32,33 Recently published reviews have evaluated these technologies and their integration of AI algorithms, establishing the state of the art in digital auscultation tools.

To address these limitations, this study proposes an integrated digital auscultation system that combines a compact stethoscope embedded with an electret microphone, a smartphone-based audio interface, and a cloud-enabled AI diagnostic platform. Respiratory sounds captured through the embedded hardware are processed using MFCC feature extraction, followed by classification using multiple deep learning models—Artificial Neural Network (ANN), CNN, Gated Recurrent Unit (GRU), and LSTM. Experimental evaluation reveals that the ANN model outperforms others, achieving a classification accuracy of 96.36%. The diagnostic system is further equipped with a web-based graphical interface built using Streamlit, enabling real-time prediction, visualization, and feedback.21 Designed for deployment in resource-constrained environments, this work provides a cost-effective, scalable, and intelligent solution for early COPD diagnosis, treatment assessment, and ongoing respiratory monitoring.

Materials and Methods

The development of a portable digital auscultation system integrated with deep learning-based diagnostic support was structured across four key domains: signal acquisition hardware, signal conditioning and feature extraction, implementation of deep learning architectures, and software deployment for user interaction.34,35 The methodology was designed to ensure low-cost implementation, clinical relevance, and compatibility with mobile platforms, particularly targeting use in resource-constrained environments.36,37

Design of the Acquisition Hardware

A conventional stethoscope chest piece was adapted to enable electronic respiratory sound acquisition.38,39 An electret condenser microphone was embedded within the acoustic channel of the chest piece tubing to detect respiratory sounds with sufficient sensitivity in the 100–2000 Hz range.40,41 The microphone output was interfaced with an Android-based smartphone using a 3.5 mm TRRS connector to facilitate audio capture and power delivery.42,43 To enhance portability, the entire hardware assembly was embedded into a custom-designed smartphone case, eliminating the need for external modules or additional interfaces, as shown in Figure 1. The integration is illustrated through component-level visuals, including the embedded microphone configuration and the smartphone casing with internal mounts.44 This compact arrangement allows clinicians to perform auscultation and access diagnostic results using a single handheld device.

Figure 1: Block diagram of the proposed AI-integrated digital auscultation system.

Signal Processing and Feature Engineering

Respiratory sound recordings obtained from the microphone undergo signal conditioning before feature extraction. Preprocessing steps included amplitude normalization, de-noising using a high-pass filter at 100 Hz to remove ambient interference, and segmentation into overlapping time frames of 25 ms duration with a 10 ms hop size. Feature extraction was carried out using Mel-Frequency Cepstral Coefficients (MFCC), which effectively map the spectral characteristics of human breath sounds into a low-dimensional feature space. MFCC computation followed a standard pipeline comprising framing, windowing, application of the Fast Fourier Transform (FFT), mel filterbank analysis, logarithmic scaling, and the Discrete Cosine Transform (DCT). The MFCC-based representation enables enhanced discrimination between normal and abnormal respiratory patterns and has been widely validated in biomedical sound classification.
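The preprocessing and MFCC pipeline described above can be sketched in Python with NumPy and SciPy. The 25 ms frame length, 10 ms hop, 100 Hz high-pass cutoff, and 44.1 kHz sampling rate follow the text; the filter order, FFT size, and filterbank configuration (26 mel filters, 13 retained coefficients) are illustrative assumptions rather than the authors’ exact settings.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.signal import butter, filtfilt

def highpass(x, sr, cutoff=100.0, order=4):
    """100 Hz high-pass de-noising step; the filter order is an assumption."""
    b, a = butter(order, cutoff / (sr / 2), btype="highpass")
    return filtfilt(b, a, x)

def mel_filterbank(n_filters, n_fft, sr):
    """Standard triangular mel filterbank construction."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc(signal, sr=44100, n_mfcc=13, n_filters=26, n_fft=2048):
    """Normalise -> high-pass -> frame -> window -> FFT -> mel -> log -> DCT."""
    frame, hop = int(0.025 * sr), int(0.010 * sr)    # 25 ms frames, 10 ms hop
    x = signal / (np.max(np.abs(signal)) + 1e-9)     # amplitude normalisation
    x = highpass(x, sr)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] for i in range(n_frames)])
    frames = frames * np.hamming(frame)              # windowing
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2  # FFT -> power spectrum
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-9)
    return dct(log_mel, type=2, axis=1, norm="ortho")[:, :n_mfcc]
```

With these defaults, one second of audio at 44.1 kHz yields 98 frames of 13 coefficients each, which then serve as the feature matrix for classification.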

Model Architecture and Training

The extracted MFCC features were utilized to train and evaluate multiple supervised deep learning architectures for respiratory sound classification. Four model types were considered: Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) networks. Each model was implemented in Python using the TensorFlow framework and trained using a labeled dataset of respiratory sounds. The training data was split into training and testing sets with an 80:20 ratio, and categorical cross-entropy was used as the loss function. Optimization was performed using the Adam optimizer with a learning rate of 0.001, and early stopping was employed to prevent overfitting.
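The 80:20 split mentioned above can be sketched as a stratified partition that preserves the COPD/healthy class ratio in both subsets. This is a minimal NumPy stand-in for what is typically done with standard tooling; the function name and seed are hypothetical.

```python
import numpy as np

def stratified_split(features, labels, test_frac=0.2, seed=0):
    """80:20 train/test split that keeps the per-class ratio intact."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)   # all samples of this class
        rng.shuffle(idx)
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])
```

For example, with 60 COPD and 40 healthy samples, the test set would contain 12 and 8 samples respectively, mirroring the overall class balance.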

Figure 2: Software network flow diagram of the proposed MFCC-based deep learning classification system for respiratory sound diagnosis.

The ANN model consisted of two hidden layers with ReLU activation as shown in Figure 2, followed by a softmax output layer for multi-class classification. CNN and recurrent models were configured with standard convolutional kernels and memory units, respectively, to leverage temporal and spatial signal characteristics.
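The ANN architecture described above, two ReLU hidden layers followed by a softmax output, can be illustrated with a minimal NumPy forward pass. The layer widths and He-style initialisation are illustrative assumptions; training (categorical cross-entropy with Adam, as stated in the text) is omitted for brevity.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

class TwoHiddenLayerANN:
    """Forward pass of a 2-hidden-layer ReLU network with softmax output."""
    def __init__(self, n_in, h1, h2, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # He-style initialisation (assumed; the paper does not specify one)
        self.W1 = rng.normal(0, np.sqrt(2 / n_in), (n_in, h1)); self.b1 = np.zeros(h1)
        self.W2 = rng.normal(0, np.sqrt(2 / h1), (h1, h2));     self.b2 = np.zeros(h2)
        self.W3 = rng.normal(0, np.sqrt(2 / h2), (h2, n_classes)); self.b3 = np.zeros(n_classes)

    def predict_proba(self, x):
        h = relu(x @ self.W1 + self.b1)   # hidden layer 1, ReLU
        h = relu(h @ self.W2 + self.b2)   # hidden layer 2, ReLU
        return softmax(h @ self.W3 + self.b3)  # class probabilities
```

Each row of the output sums to one, so the most probable class is simply the argmax over the softmax probabilities.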

Interface Development and Model Deployment

To translate the trained classification models into a clinically usable format, a lightweight diagnostic interface was developed using the Streamlit library. The interface supports real-time audio input from the smartphone hardware or file upload from local memory. The application performs feature extraction on the fly and invokes the ANN model to provide diagnostic output. Visual representations of MFCC spectrograms and classification probabilities are rendered to aid interpretability. The application is platform-independent and deployable on cloud or local devices, ensuring compatibility across healthcare environments.

System Workflow

The complete signal and diagnostic pipeline comprises sound acquisition via embedded hardware, preprocessing, and MFCC feature extraction, deep learning-based classification, and user-oriented diagnostic feedback. The modular workflow supports both real-time analysis and offline processing.

Results

The performance of the proposed AI-integrated auscultation system was evaluated through hardware validation, model accuracy benchmarking, and user interface implementation. Each component was tested independently and in an integrated manner to ensure clinical reliability, computational efficiency, and user accessibility. The results highlight the feasibility of deploying the system for automated respiratory sound classification in real-time settings.

Hardware Prototype Validation

Figure 3: Orthogonal views of the final hardware unit.

The fabricated prototype, consisting of the electret microphone and modified stethoscope, was embedded seamlessly into a mobile-compatible casing. Mechanical integration, noise insulation, and signal fidelity were verified through multiple iterations of auscultation on volunteers under controlled environmental conditions. The resulting recordings demonstrated clear acquisition of both normal and adventitious respiratory sounds, such as wheezes and crackles. Figure 3 presents orthogonal views of the final hardware unit, revealing the ergonomic form factor and integration of diagnostic components within the smartphone case. This validates the physical usability of the device in point-of-care environments.

Comparative Performance Analysis of Classification Models

A total of 600 respiratory sound recordings were collected from 220 individuals, including 120 clinically diagnosed COPD patients and 100 healthy controls, as given in Table 1. Each recording was acquired using the proposed AI-integrated digital stethoscope system, sampled at 44.1 kHz, and had a duration of approximately 30 to 60 seconds. Recordings covered multiple lung auscultation points (anterior and posterior chest, upper and lower lobes) to capture comprehensive respiratory profiles. All samples were independently annotated and validated by experienced pulmonologists to confirm diagnostic labels. This stratified dataset formed the basis for training and evaluation of all deep learning models in this study.

Table 1: Summary of Respiratory Sound Recordings Used in the Study

Parameter COPD Group Healthy Control Total
Number of subjects 120 100 220
Number of recordings 360 240 600
Recording locations Multiple lung auscultation sites (anterior & posterior, upper & lower lobes) Same as COPD group –
Recording duration 30–60 seconds per file 30–60 seconds per file –
Sampling frequency 44.1 kHz 44.1 kHz –
Annotation Expert pulmonologists Expert pulmonologists –

The extracted MFCC features were evaluated using four deep learning models: ANN, CNN, GRU, and LSTM. Each model was trained on a stratified dataset of respiratory sound samples and evaluated based on four performance metrics: accuracy, precision, recall, and F1-score. Table 2 presents a comparative analysis of the classification results across these models. The ANN classifier outperformed the others with an overall accuracy of 96.36%, indicating high generalization to unseen respiratory patterns. The CNN model achieved slightly lower accuracy due to limited sample diversity affecting spatial feature extraction. GRU and LSTM models exhibited competitive performance but showed increased training time and resource demands due to their recurrent architecture complexity.

Table 2: Comparative analysis of classification results across the evaluated models. Source: Authors’ work

Model Accuracy (%) Precision (%) Recall (%) F1-Score (%)
ANN 96.36 96.23 96.10 96.16
CNN 93.45 93.60 93.10 93.35
GRU 90.80 91.20 90.60 90.90
LSTM 88.25 88.50 88.00 88.24
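The four metrics reported in Table 2 can be computed from predicted and true labels via a confusion matrix. The sketch below uses macro averaging across classes; this is an assumption, since the paper does not state its averaging scheme.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                     # rows: true class, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # per true class
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / (precision + recall + 1e-12), 0.0)
    return {"accuracy": tp.sum() / cm.sum(),
            "precision": precision.mean(),
            "recall": recall.mean(),
            "f1": f1.mean()}
```

For instance, with true labels [0, 0, 1, 1] and predictions [0, 1, 1, 1], three of four samples are correct, giving an accuracy of 0.75.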

The ANN model’s simplicity, fast convergence, and superior classification performance make it the most suitable candidate for deployment in mobile-based real-time applications. Its structure, illustrated in Figure 2, includes two hidden layers with ReLU activation, optimized using categorical cross-entropy loss and the Adam optimizer. The network architecture balances complexity and speed, making it ideal for low-power embedded deployments.

User Interface Integration and Interpretation

To ensure interpretability and ease of use, the trained ANN model was embedded into a web-based graphical user interface using the Streamlit framework. This interface enables clinicians or community health workers to upload respiratory audio files, visualize MFCC representations, and receive diagnostic predictions in a structured layout. Figure 4 displays the GUI window where users can observe classification probabilities and spectrogram overlays. The design supports input validation, real-time feedback, and classification history logging for longitudinal patient monitoring.

The user interface not only facilitates automated diagnosis but also enhances the system’s clinical relevance by allowing repeated auscultation and tracking of pharmacological intervention outcomes. For instance, COPD patients undergoing bronchodilator therapy can be monitored periodically to observe changes in auscultatory profiles over time. This integration of AI classification with visual analytics empowers clinicians with decision-support tools in primary care and remote diagnostics.

Figure 4: User interface application window. Source: Authors’ work

Discussion

The results of this study demonstrate that the proposed AI‑integrated digital auscultation system can effectively classify respiratory sounds for the detection and monitoring of COPD, with the ANN model achieving the highest accuracy, precision, recall, and F1‑score among the evaluated classifiers. The superior performance of ANN in our study aligns with prior findings that simpler feed‑forward architectures can sometimes outperform deeper or recurrent architectures when the extracted features, such as MFCC coefficients, are highly discriminative for the task at hand.1

Recent studies have also explored the use of AI in pulmonary sound analysis. For example, convolutional neural networks have been applied for abnormal respiratory sound detection, achieving classification accuracies in the low 90% range.5 Hybrid CNN–LSTM approaches for chronic pulmonary disease classification have reported accuracies of around 92–94%.6 In comparison, our ANN model achieved an accuracy of 96.36%, outperforming these reported figures and suggesting that combining MFCC feature extraction with an ANN architecture is highly effective for COPD-focused classification.3,12

These findings suggest that the selection of model architecture and feature representation significantly impacts AI-based auscultation performance. Additionally, integrating our model into a portable, smartphone-linked stethoscope system addresses a major barrier in rural healthcare delivery by making advanced diagnostics accessible at the point of care. Similar portable AI‑assisted stethoscope systems have been trialed for pneumonia and asthma diagnosis, supporting the feasibility of deploying such technologies in broader respiratory disease screening.

However, it is important to acknowledge that, like other studies in this domain, our work is limited by dataset size, environmental noise conditions, and patient diversity. Future research should focus on validating the system in large‑scale clinical trials, incorporating multi‑site auscultation data, and exploring advanced architectures such as attention‑based networks or transformer models to further enhance classification performance.

Future Scope

While the proposed system achieved 96.36% accuracy for COPD detection, these findings are based on a limited, stratified dataset and should be validated with larger and more diverse patient cohorts. Future work will involve multi-centre data collection, comparative evaluation with existing commercial auscultation systems, and clinical trials to ensure robustness. We also plan to integrate cloud connectivity for remote monitoring applications.

Conclusion

This research presents the design and implementation of a portable AI-assisted digital auscultation system for the early diagnosis and monitoring of respiratory conditions, specifically Chronic Obstructive Pulmonary Disease (COPD). The system integrates a modified stethoscope with an embedded electret microphone and a smartphone-based diagnostic interface, offering an end-to-end solution from sound acquisition to automated classification. Utilizing Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction and deep learning models for prediction, the system achieved a peak classification accuracy of 96.36% using an Artificial Neural Network (ANN). The developed Streamlit-based interface provides a user-friendly platform for real-time diagnosis and monitoring, making it suitable for deployment in resource-constrained settings. This work demonstrates the potential of combining embedded hardware and machine learning for scalable, intelligent respiratory healthcare. Future enhancements will focus on clinical deployment, broader dataset collection, and cloud integration for remote monitoring applications.

Acknowledgment

The authors express their sincere gratitude to the management, faculty, and technical staff of the Department of Biomedical Engineering, Sathyabama Institute of Science and Technology, Chennai; the Department of Electrical and Electronics Engineering, School of Engineering, Mohan Babu University, Tirupati; and the Department of Biomedical Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai.

Funding Sources

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Conflict of Interest

The author(s) do not have any conflict of interest.

Data Availability Statement

This statement does not apply to this article.

Ethics Statement

This research did not involve human participants, animal subjects, or any material that requires ethical approval.

Informed Consent Statement

This study did not involve human participants, and therefore, informed consent was not required.

Clinical Trial Registration

This research does not involve any clinical trials.

Permission to reproduce material from other sources

Not Applicable

Author Contributions

  • Sindu Divakaran was responsible for the conceptualization, hardware design, methodology, data collection, and writing of the original draft.
  • Hari Krishnan G contributed to deep learning model development, signal processing, supervision, writing, review, and editing.
  • Mohandass G handled software interface development, visualization, validation, and resource management.
  • Sudhakar T was involved in project administration, literature review, technical support, writing, review, and editing.
  • Bethanney Janney J contributed to data analysis, user interface testing, and documentation, and provided final approval of the manuscript.

References

  1. Guntupalli AA, Dalmasso R, Muppalla MK. Classification of lung sounds using MFCC and machine learning models. Biomed Signal Process Control. 2021;65:102365.
  2. Mukherjee H, Obaidullah SM, Santosh KC, et al. A lazy learning-based language identification from speech using MFCC-2 features. Int J Mach Learn Cybern. 2020;11:1–14. doi:10.1007/s13042-019-00928-3.
  3. Pramono J, Bowyer E, Rodriguez-Villegas F. Automatic adventitious respiratory sound analysis: a systematic review. PLoS One. 2017;12(5):1–30.
  4. Hafke-Dys M, Czarnecka A, Zieliński A. Machine learning algorithms for respiratory sound classification: comparison of effectiveness and interpretability. Expert Syst Appl. 2022;198:116821.
  5. Alkhodari N, Khandoker A. Detection of abnormal respiratory sounds using convolutional neural networks. In: Proceedings of the IEEE Engineering in Medicine and Biology Society (EMBC); July 2020:2126–2129.
  6. Acharya S, Sree R, Sahu D. Lung sound classification using CNN and LSTM for chronic pulmonary diseases. Comput Biol Med. 2022;140:105087.
  7. Oliveira LB, Lima RF. Deployment of AI-based stethoscopes in rural clinics: a feasibility study. J Biomed Inform. 2021;122:103889.
  8. Reddy NM, Krishnan GH, Prabhu S. Advanced brain tumor detection through multimodal image fusion and segmentation techniques. In: Proceedings of the 2024 International Conference on Distributed Systems, Computer Networks and Cybersecurity (ICDSCNC); 2024:1–6.
  9. Babu CB, Krishnan GH, Prabhu S. Enhancing accuracy in spatial satellite image classification through transfer learning and model ensemble. In: Proceedings of the 2024 International Conference on Distributed Systems, Computer Networks and Cybersecurity (ICDSCNC); 2024:1–5.
  10. Krishnan GH, Prabhu S, Himabindu M, Reddy NK, Mounika S, Santhosh S. Comparative analysis fault detection in HT electrical insulators using modified transfer learning-based ResNet50, EfficientNetB0, and VGG16 through image processing. In: Proceedings of the 2024 10th International Conference on Electrical Energy Systems (ICEES); August 2024:1–5.
  11. Rocha BM, Filos D, Mendes L, Vogiatzis I, Perantoni E, Natsiavas P. A respiratory sound database for the development of automated classification. Physiol Meas. 2019;40(3):035001.
  12. Kim J, Jung H, Choi B, Lee T. Development of a digital stethoscope for chronic obstructive pulmonary disease patients. Sensors (Basel). 2020;20(3):730.
  13. Krishnan GH, Santhosh S, Mohandass G, Sudhakar T. Non-invasive bio-impedance diagnostics: Delving into signal frequency and electrode placement effects. Biomed Pharmacol J. 2024;17(2):769–778.
  14. Krishnan GH, Sudhakar T, Santhosh S, Mohandass G. Development of a non-invasive jaundice meter using transcutaneous bilirubinometry. Biomed Pharmacol J. 2024;17(1):97–103.
  15. Krishnan GH, Sukanya K, Sainath Reddy J, Niharika N, Siva Sai Kiran M. CNN based image processing for crack detection on HT insulator’s surface. In: Proceedings of the International Conference on Sustainable Computing and Smart Systems (ICSCSS); 2023:618–621.
  16. Krishnan GH, Umashankar G, Sudhakar T, Devika V, Shaina Banu S. Eye blink based biometric authentication system. In: Proceedings of the 9th International Conference on Biosignals, Images and Instrumentation (ICBSII);
  17. Umashankar G, Krishnan GH, Vimala JA. Elbow joints for upper-limb prosthesis: Analysis of biomedical EEG signals using discrete wavelet transform. Int J Eng Trends Technol. 2022;70(7):190–197.
  18. Ganesan U, Krishnan GH, Paul NEE, Aarthi S, Swamy IK. Detecting diabetes mellitus from tongue image hybrid features and neural network classifier. In: Proceedings of the 4th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA); 2022:425–427.
  19. Sudhakar T, Krishnan GH, Prem Kumar J, Devanesan PS, Shalini S. Inducement of artificial sleep using low strength magnetic waves. J Phys Conf Ser. 2022;2318(1).
  20. Sudhakar T, Krishnan GH, Krishnamoorthy NR, Pradeepa M, Raghavi JP. Sleep disorder diagnosis using EEG based deep learning techniques. In: Proceedings of the 2021 IEEE 7th International Conference on Bio Signals, Images and Instrumentation (ICBSII);
  21. Sudhakar T, Krishnan GH, Umashankar G, Bhurnima U, Shanchita B. Drug retrieving system in hospitals using robotics. Biomed Pharmacol J. 2020;13(3):1239–1244.
  22. Mohandass G, Krishnan GH, Hemalatha RJ. An approach to automated retinal layer segmentation in SDOCT images. Int J Eng Technol (UAE). 2018;7(2):56–63.
  23. Krishnan GH, Arun Kumar GS, Arun V, Saravana R. Comparative study of diabetes foot diagnosis using ABPI. Int J Eng Technol (UAE). 2018;7(2):40–42.
  24. Sudhakar T, Krishnan GH, Santosh S, Meenakshi S, Thomas L. Prosthetic arm control using processing device, a comparative approach. Biomed Res (India). 2018;29(13):2904–2907.
  25. Abinaya N, Krishnan GH, Hemalatha RJ, Mohandass G. Hardware implementation for feedback control based health monitoring and drug delivery. Biomedicine (India). 2017;37(1):123–126.
  26. Reddy N, Krishnan GH, Raghuram D. Real time patient health monitoring using raspberry PI. Res J Pharm Biol Chem Sci. 2016;7(6):570–575.
  27. Krishnan GH, Umashankar G, Abraham S. Cerebrovascular disorder diagnosis using MR angiography. Biomed Res (India). 2016;27(3):773–775.
  28. Umashankar G, Krishnan GH, Abraham S, Kirubika TR, Rajendran M. Proximity sensing system for retinal surgery patients. J Chem Pharm Sci. 2015;8(4):607–610.
  29. Krishnan GH, Natarajan RA, Nanda A. Microcontroller based non-invasive diagnosis of knee joint diseases. In: Proceedings of the International Conference on Information Communication and Embedded Systems (ICICES);
  30. Thinklabs Medical LLC. Thinklabs One digital stethoscope: User guide and technical overview. Thinklabs Documents; Centennial, CO, USA; 2020.
  31. Sociedad Espanola de Neumologia y Cirugia Toracica. Clinical validation of a digital tele-stethoscope system (NCT03596541). ClinicalTrials.gov; 2018. Available from: https://clinicaltrials.gov/study/NCT03596541.
  32. Hafke-Dys H, Kuźnar-Kamińska B, Grzywalski T, Maciaszek A, Szarzyński K, Kociński J. Artificial intelligence approach to the monitoring of respiratory sounds in asthmatic patients. Front Physiol. 2021;12:745635.
  33. Ogawa S, Namino F, Mori T, Sato G, Yamakawa T, Saito S. AI diagnosis of heart sounds differentiated with super StethoScope. J Cardiol. 2024;83(4):265–271.
  34. Sabarivani A, Krishnan GH. Home health assistive system for critical care patients. Res J Pharm Biol Chem Sci. 2015;6(2):629–633.
  35. Margreat L, Krishnan GH. Statistical approach for diagnosis of diseases using histopathology data. Int J Pharma Bio Sci. 2015;6(2):B199–B203.
  36. Ilangovan N, Krishnan GH. Wheel chair movement control using human input: Comparative study approach. Res J Pharm Biol Chem Sci. 2015;6(3):568–570.
  37. Radhakrishna Rao G, Krishnan GH. Comparative study of pacemaker energy harvesting techniques. Res J Pharm Biol Chem Sci. 2015;6(1):1545–1547.
  38. Mohandass G, Natarajan RA, Krishnan GH. Comparative analysis of optical coherence tomography retinal images using multidimensional and cluster methods. Biomed Res (India). 2015;26(2):273–285.
  39. Krishnan GH, Hemalatha RJ, Umashankar G, Ahmed N, Nayak SR. Development of magnetic control system for electric wheel chair using tongue. Adv Intell Syst Comput. 2015;308(1):635–641.
  40. Krishnan GH, Natarajan RA, Nanda A. Impact of upper limb joint fluid variation on inflammatory diseases diagnosis. J Electr Eng Technol. 2014;9(6):2114–2117.
  41. Krishnan GH, Nanda A, Natarajan A. Synovial fluid density measurement for diagnosis of arthritis. Biomed Pharmacol J. 2014;7(1):221–224.
  42. Krishnan GH, Natarajan RA, Nanda A. Comparative study of rheumatoid arthritis diagnosis using two methods. Biomed Pharmacol J. 2014;7(1):379–381.
  43. Hemalatha RJ, Krishnan GH, Umashankar G, Abraham S. Computerized breast cancer detection system. Biosci Biotechnol Res Asia. 2014;11(2):907–910.
  44. Krishna GH, Guru Anand V, Mohandass G, Hemalatha RJ, Sundaram S. Predicting grade of prostate cancer using image analysis software. In: Proceedings of the 2nd International Conference on Trendz in Information Sciences and Computing (TISC); 2010:122–124.

This work is licensed under a Creative Commons Attribution 4.0 International License.