Dogra A, Goyal B, Agrawal S. Current and Future Orientation of Anatomical and Functional Imaging Modality Fusion. Biomed Pharmacol J 2017;10(4).
Manuscript received on: December 05, 2017
Manuscript accepted on: December 15, 2017

Ayush Dogra, Bhawna Goyal and Sunil Agrawal

UIET, Department of Electronics and Communications, Panjab University, Chandigarh-160017, India.

Corresponding Author E-mail: ayush123456789@gmail.com

DOI : https://dx.doi.org/10.13005/bpj/1277

Abstract

The need for image fusion stems from the inherent inability of any single imaging modality to provide complete diagnostic information about the ailment under study, while radiographic scanning as a whole offers a wide range of divergent information. The evolving interface between signal analysis theory and technological advancement has made it possible to devise highly efficient image fusion techniques. In this manuscript, the fundamentals of multisensory image fusion are discussed briefly, and the key factors shaping the future direction of medical image fusion are also presented.

Keywords

Diagnostic; Divergent; Multisensory; Radiographic


Introduction

Biology, nuclear medicine and radiology are witnessing an enormous growth in data acquisition, driven by instrumentation of high precision. It is apparent in radiographic scanning that different imaging modalities provide a wide range of heterogeneous information.1 Because a single imaging modality is inherently unable to provide holistic information about the diseased tissue, the integration of different imaging modalities is requisite for a fuller comprehension of the true ailment in the human body.2 For instance, conventional MRI does not enable extended visualization of gliomatous tissue after therapeutic procedures. Anatomical and functional imaging modalities have served as a paradigm in planning surgical procedures for brain tumour treatment, and it is evident that the fusion of co-registered PET/MRI can significantly improve the specificity of the evaluation of recurrent tumour and its treatment. Likewise, for precise localization of abnormal vascularisation in ankylosing spondylitis patients, US and CT scans are fused to evaluate the severity of inflammation of the sacroiliac joints.3-8

Figure 1: T1 Weighted MRI and PET Fusion
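As a purely illustrative sketch (not the fusion scheme used in any of the cited studies), the snippet below blends two co-registered slices, standing in for a T1-weighted MRI slice and a PET slice, by simple weighted averaging. The function name, the synthetic input arrays and the weight alpha = 0.6 are assumptions made for the example; real data would already be registered and resampled to a common grid.

```python
import numpy as np

def fuse_weighted(mri_slice: np.ndarray, pet_slice: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Blend two co-registered, equally sized 2-D slices by weighted averaging.

    alpha weights the anatomical (MRI) image; (1 - alpha) weights the
    functional (PET) image.
    """
    # Normalise each modality to [0, 1] so that neither dominates purely
    # because of its acquisition intensity range.
    mri = (mri_slice - mri_slice.min()) / (np.ptp(mri_slice) + 1e-12)
    pet = (pet_slice - pet_slice.min()) / (np.ptp(pet_slice) + 1e-12)
    return alpha * mri + (1.0 - alpha) * pet

# Synthetic arrays standing in for co-registered slices.
rng = np.random.default_rng(0)
t1_mri = rng.random((256, 256))   # placeholder for a T1-weighted MRI slice
fdg_pet = rng.random((256, 256))  # placeholder for a PET slice
fused = fuse_weighted(t1_mri, fdg_pet, alpha=0.6)
print(fused.shape, fused.min(), fused.max())
```

A fixed global weight is the simplest possible fusion rule; the methods discussed in this article replace it with transform-domain or saliency-driven rules that preserve contrast better.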

The main objective of image fusion is to support the joint analysis of image data acquired by various sensors for the same patient. Image fusion generates a single fused image that provides more reliable and accurate information, in which intracranial features are more distinguishable. For example, T1-weighted and T2-weighted MRI images have been fused to direct neurosurgical resection of epileptogenic lesions or to segment cerebral iron deposits. Image fusion has also demonstrated its advantages in the detection and localization of lesions in patients with neuro-endocrine tumours. Fusion of images is, in fact, a process that radiologists practise subconsciously to compare and identify abnormalities, even when it is not performed explicitly using a CAD system.9 The interfacing of signal analysis theory with technological advancements in hardware implementation has made it possible to blend the pixel values of multi-modal images so that information is integrated while contrast is preserved. The central challenge of image fusion technology lies in the relentless effort of researchers to increase the rate of information transfer so as to approach the ideal case of image fusion.

Nonetheless, along with a higher information rate in 2-D image fusion, the focus of researchers is shifting towards triple-modality fusion.10-24 Hardware as well as software implementations of tri-modality fusion technology remain limited, and the development of a tri-modality image fusion method that can display all the image sets together in one operation would be a milestone in medical imaging technology. Recently, a tri-modality fusion scheme (MRI/PET/CT) has been proposed for better delineation of the gross tumour volume in patients with brain tumours.25 This technology holds enormous potential for radiotherapy treatment planning of various brain tumours. A minimal sketch of the base-plus-detail idea behind many such pixel-level fusion rules follows.
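The sketch below fuses three co-registered slices (synthetic stand-ins for MRI, PET and CT) with a generic two-scale rule: average the low-pass base layers to keep overall contrast, and at each pixel keep the strongest detail value across modalities. This is an assumed illustration, not the tri-modality method of reference 25; the function name, the Gaussian scale sigma and the fusion rules are choices made only for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_tri_modality(images, sigma: float = 2.0) -> np.ndarray:
    """Two-scale fusion of N co-registered slices (e.g. MRI, PET, CT).

    Each image is split into a smooth base layer (Gaussian low-pass) and a
    detail layer (residual). Base layers are averaged to preserve contrast;
    the detail value with the largest magnitude across modalities is kept
    at every pixel, so salient edges from any modality survive.
    """
    norm = [(im - im.min()) / (np.ptp(im) + 1e-12) for im in images]
    bases = [gaussian_filter(im, sigma) for im in norm]
    details = np.stack([im - b for im, b in zip(norm, bases)])  # (N, H, W)

    fused_base = np.mean(bases, axis=0)
    winner = np.argmax(np.abs(details), axis=0)                 # strongest detail index
    fused_detail = np.take_along_axis(details, winner[None, ...], axis=0)[0]
    return np.clip(fused_base + fused_detail, 0.0, 1.0)

# Synthetic stand-ins for co-registered MRI, PET and CT slices.
rng = np.random.default_rng(1)
mri, pet, ct = (rng.random((128, 128)) for _ in range(3))
fused = fuse_tri_modality([mri, pet, ct])
print(fused.shape)
```

The same averaging/maximum-selection pattern generalises naturally from two to three or more inputs, which is why "display all the image sets together in one operation" is largely a question of rule design and implementation rather than of new mathematics.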

References

1. Ayush D., Goyal B. and Agrawal S. From multi-scale decomposition to non-multi-scale decomposition methods: a comprehensive survey of image fusion techniques and its applications. IEEE Access. 2017;5:16040-16067.
2. Charles R. M., Jennifer L. B., Kim B., Peyton H. B., Kenneth R. Z., Paul V. K., Koral K., Kirk A. F. and Richard L. W. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Medical Image Analysis. 1997;1(3):195-206.
3. Zhenlong H., Zhu J., Liu F., Wang N. and Xue Q. Feasibility of US-CT image fusion to identify the sources of abnormal vascularization in posterior sacroiliac joints of ankylosing spondylitis patients. Scientific Reports. 2015;5.
4. Maintz J. B. A. and Viergever M. A. A survey of medical image registration. Medical Image Analysis. 1998;2(1):1-36.
5. Haiping R. Medical image fusion. Foreign Medical Sciences (Section of Radiation Medicine and Nuclear Medicine). 2001;25(3):107-111.
6. Alex J. P. and Belur V. D. Medical image fusion: a survey of the state of the art. Information Fusion. 2014;19:4-19.
7. Shangli C., He J. and Zhongwei L. Medical image of PET/CT weighted fusion based on wavelet transform. In: Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering (ICBBE 2008). IEEE. 2008;2523-2525.
8. Rui S., Cheng I. and Basu A. Cross-scale coefficient selection for volumetric medical image fusion. IEEE Transactions on Biomedical Engineering. 2013;60(4):1069-1079.
9. Van de Plas R., Yang J., Spraggins J. and Caprioli R. M. Image fusion of mass spectrometry and microscopy: a multimodality paradigm for molecular tissue mapping. Nature Methods. 2015;12(4):366-372.
10. Ayush D., Agrawal S., Goyal B., Khandelwal N. and Kamal C. A. Color and grey scale fusion of osseous and vascular information. Journal of Computational Science. 2016;17:103-114.
11. Jyotica Y., Dogra A., Goyal B. and Agrawal S. A review on image fusion methodologies and applications. Research Journal of Pharmacy and Technology. 2017;10(4):1239-1251.
12. Arora A., et al. Development, characterization and processing of quantum dots for imaging in UV-visible range. International Journal of Pharmacy and Technology. 2016;8(2):12811-12825.
13. Bhawna G., Agrawal S., Sohi B. S. and Dogra A. Noise reduction in MR brain image via various transform domain schemes. Research Journal of Pharmacy and Technology. 2016;9(7):919-924.
14. Ayush D. Performance comparison of different wavelet families based on bone vessel fusion. Asian Journal of Pharmaceutics. 2017;10(4).
15. Singh M. P. and Ayush D. CT and MRI brain images registration for clinical applications. Journal of Cancer Science and Therapy. 2014;6:018-026.
16. Bhalla P. and Ayush D. Image sharpening by Gaussian and Butterworth high pass filter. Biomedical and Pharmacology Journal. 2014;7(2):707-713.
17. Ayush D. and Bhalla P. CT and MRI brain images matching using ridgeness correlation. Biomedical and Pharmacology Journal. 2014;7(2):691-696.
18. Bhawna G., Dogra A., Agrawal S. and Sohi B. S. Dual way residue noise thresholding along with feature preservation. Pattern Recognition Letters. 2017;94:194-201.
19. Ayush D., Agrawal S. and Goyal B. Efficient representation of texture details in medical images by fusion of Ripplet and DDCT transformed images. Tropical Journal of Pharmaceutical Research. 2016;15(9):1983-1993.
20. Dogra A. and Agrawal S. Efficient image representation based on Ripplet transform and Pure-Let. International Journal of Pharmaceutical Sciences Review and Research. 2015;34(2):93-97.
21. Ayush D., Agrawal S., Khandelwal N. and Ahuja C. Osseous and vascular information fusion using various spatial domain filters. Research Journal of Pharmacy and Technology. 2016;9(7):937-941.
22. Dogra A., Goyal B., Agrawal S. and Ahuja C. K. Efficient fusion of osseous and vascular details in wavelet domain. Pattern Recognition Letters. 2017;17:103-114.
23. Dogra A., Goyal B. and Agrawal S. Bone vessel image fusion via generalized Riesz wavelet transform using averaging fusion rule. Journal of Computational Science. 2016;21:371-378.
24. Dogra A. and Agrawal S. 3-stage enhancement of medical images using Ripplet transform, high pass filters and histogram equalization techniques. International Journal of Pharmacy and Technology. 2015;7:9748-9763.
25. Lu G., Shen S., Harris E., Wang Z., Jiang W., Guo Y. and Feng Y. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy. PLoS One. 2014;9(11):e112187.

This work is licensed under a Creative Commons Attribution 4.0 International License.