Dogra A, Goyal B, Agrawal S. Current and Future Orientation of Anatomical and Functional Imaging Modality Fusion. Biomed Pharmacol J 2017;10(4).
Manuscript received on :December 05, 2017
Manuscript accepted on :December 15, 2017
Current and Future Orientation of Anatomical and Functional Imaging Modality Fusion

Ayush Dogra, Bhawna Goyal and Sunil Agrawal

UIET, Department of Electronics and Communications, Panjab University, Chandigarh-160017, India.

Corresponding Author E-mail:



The need for image fusion stems from the inherent inability of individual imaging modalities to provide complete diagnostic information about the ailment under study. Radiographic scanning provides a wide range of divergent information. The evolution in the interfacing of signal analysis theory and technological advancements has made it possible to devise highly efficient image fusion techniques. In this manuscript the fundamentals of multisensory image fusion are discussed briefly. The various key factors related to the future direction of medical image fusion are also presented.


Keywords: Radiographic Multisensory; Diagnostic; Divergent



Biology, nuclear medicine and radiology are witnessing an enormous amount of data acquisition enabled by instrumental technology of high precision. It is apparent in radiographic scanning that different imaging modalities provide a wide range of heterogeneous information [1]. Due to the inherent inability of a single imaging modality to provide holistic information about the diseased tissue, the integration of different imaging modalities is requisite for a higher comprehension of the true ailment in the human body [2]. For instance, conventional MRI does not enable extended visualization of gliomatous tissue after therapeutic procedures. Anatomical and functional imaging modalities have served as a paradigm in planning surgical procedures for brain tumour treatment. It is evident that the fusion of co-registered PET/MRI can significantly improve the specificity for the precise evaluation of recurrent tumour and its treatment. Also, for precise localization of the abnormal vascularisation in ankylosing spondylitis patients, US and CT scans are fused to evaluate the inflammation severity of the sacroiliac joints [3-8].

Figure 1: T1 Weighted MRI and PET Fusion

The main objective of image fusion is to support the joint analysis of imagery data acquired by various sensors for the same patient. Image fusion generates a single fused image that provides more reliable and accurate information, in which intracranial features are more distinguishable. For example, T1 weighted and T2 weighted MRI images have been fused together to direct neurosurgical resection of epileptogenic lesions or to segment cerebral iron deposits. Image fusion has also demonstrated its advantages in the detection and localization of lesions in patients with neuroendocrine tumours. The fusion of images is, in fact, a process subconsciously practised by radiologists to compare and identify abnormalities, even when not performed explicitly using a CAD system [9]. The interfacing of signal analysis theory with technological advancements in hardware implementation has made it possible to blend the pixel values of multi-modal images, integrating information while preserving contrast. The central challenge of image fusion technology lies in the relentless effort of researchers to increase the information transfer rate so as to approach a relatively ideal case of image fusion. Nonetheless, along with a higher information rate in 2-D image fusion, the focus of researchers is shifting towards triple-modality fusion [10-24]. Hardware as well as software implementations of tri-modality fusion are still limited. The development of a new tri-modality image fusion method that can display all the image sets together in one operation would be a milestone in medical imaging technology. Recently, a tri-modality fusion scheme (MRI/PET/CT) has been proposed for better localisation of gross tumour volume delineation in patients with brain tumours [25]. This technology holds colossal potential for the radiotherapy treatment planning of various brain tumours.
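To make the idea of blending pixel values of co-registered images concrete, two elementary pixel-level fusion rules can be sketched in a few lines of NumPy. This is a minimal illustration for intuition only, not the specific algorithms of the works cited above; the function names, the weighted-average rule, and the choose-max (maximum absolute coefficient) rule are illustrative assumptions, and real systems apply such rules to registered images or to their transform-domain (e.g. wavelet) coefficients.

```python
import numpy as np

def fuse_average(img_a, img_b, w=0.5):
    """Weighted-average fusion rule for two co-registered images.

    Illustrative sketch: each fused pixel is a convex combination of the
    corresponding pixels in the two source images.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return w * a + (1.0 - w) * b

def fuse_max_abs(coeff_a, coeff_b):
    """Choose-max fusion rule, typically applied to detail coefficients.

    At each position, keep the coefficient with the larger magnitude,
    on the assumption that larger coefficients carry salient detail.
    """
    return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)

# Toy usage on synthetic 2x2 "images":
fused = fuse_average(np.full((2, 2), 100.0), np.full((2, 2), 200.0))
selected = fuse_max_abs(np.array([1.0, -5.0]), np.array([-3.0, 2.0]))
```

With equal weights the averaged result is uniformly 150.0, and the choose-max rule returns [-3.0, -5.0]; averaging preserves overall intensity but can wash out contrast, which is why transform-domain schemes often combine an averaging rule for approximation coefficients with a choose-max rule for detail coefficients.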


  1. Dogra, Ayush, Bhawna Goyal, and Sunil Agrawal. "From Multi-Scale Decomposition to Non-Multi-Scale Decomposition Methods: A Comprehensive Survey of Image Fusion Techniques and Its Applications." IEEE Access 5 (2017): 16040-16067.
  2. Meyer, Charles R., Jennifer L. Boes, Boklye Kim, Peyton H. Bland, Kenneth R. Zasadny, Paul V. Kison, Kenneth Koral, Kirk A. Frey, and Richard L. Wahl. "Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations." Medical Image Analysis 1, no. 3 (1997): 195-206.
  3. Hu, Zhenlong, Jiaan Zhu, Fang Liu, Niansong Wang, and Qin Xue. "Feasibility of US-CT image fusion to identify the sources of abnormal vascularization in posterior sacroiliac joints of ankylosing spondylitis patients." Scientific Reports 5 (2015).
  4. Maintz, J. B. Antoine, and Max A. Viergever. "A survey of medical image registration." Medical Image Analysis 2, no. 1 (1998): 1-36.
  5. Ren, Haiping. "Medical image fusion." Foreign Medical Sciences. Section of Radiation Medicine and Nuclear Medicine 25, no. 3 (2001): 107-111.
  6. James, Alex Pappachen, and Belur V. Dasarathy. "Medical image fusion: A survey of the state of the art." Information Fusion 19 (2014): 4-19.
  7. Cheng, Shangli, Junmin He, and Zhongwei Lv. "Medical image of PET/CT weighted fusion based on wavelet transform." In Bioinformatics and Biomedical Engineering, 2008. ICBBE 2008. The 2nd International Conference on, pp. 2523-2525. IEEE, 2008.
  8. Shen, Rui, Irene Cheng, and Anup Basu. "Cross-scale coefficient selection for volumetric medical image fusion." IEEE Transactions on Biomedical Engineering 60, no. 4 (2013): 1069-1079.
  9. Van de Plas, Raf, Junhai Yang, Jeffrey Spraggins, and Richard M. Caprioli. "Image fusion of mass spectrometry and microscopy: a multimodality paradigm for molecular tissue mapping." Nature Methods 12, no. 4 (2015): 366-372.
  10. Dogra, Ayush, Sunil Agrawal, Bhawna Goyal, Niranjan Khandelwal, and Chirag Kamal Ahuja. "Color and grey scale fusion of osseous and vascular information." Journal of Computational Science 17 (2016): 103-114.
  11. Yadav, Jyotica, Ayush Dogra, Bhawna Goyal, and Sunil Agrawal. "A Review on Image Fusion Methodologies and Applications." Research Journal of Pharmacy and Technology 10, no. 4 (2017): 1239-1251.
  12. Arora, Ayushi, et al. "Development, characterization & processing of quantum dots for imaging in UV-visible range." International Journal of Pharmacy & Technology 8, no. 2 (June 2016): 12811-12825.
  13. Goyal, Bhawna, Sunil Agrawal, B. S. Sohi, and Ayush Dogra. "Noise Reduction in MR brain image via various transform domain schemes." Research Journal of Pharmacy and Technology 9, no. 7 (2016): 919-924.
  14. Dogra, Ayush. "Performance Comparison of Different Wavelet Families Based on Bone Vessel Fusion." Asian Journal of Pharmaceutics 10, no. 4 (2017).
  15. Patterh, Manjeet Singh, and Ayush Dogra. "CT and MRI brain images registration for clinical applications." J Cancer Sci Ther 6 (2014): 018-026.
  16. Bhalla, Parvinder, and Ayush Dogra. "Image Sharpening by Gaussian and Butterworth High Pass Filter." Biomedical and Pharmacology Journal 7, no. 2 (2014): 707-713.
  17. Dogra, Ayush, and Parvinder Bhalla. "CT and MRI Brain Images Matching Using Ridgeness Correlation." Biomedical & Pharmacology Journal 7, no. 2 (2014): 691-696.
  18. Goyal, Bhawna, Ayush Dogra, Sunil Agrawal, and B. S. Sohi. "Dual Way Residue Noise Thresholding along with feature preservation." Pattern Recognition Letters 94 (2017): 194-201.
  19. Dogra, Ayush, Sunil Agrawal, and Bhawna Goyal. "Efficient representation of texture details in medical images by fusion of Ripplet and DDCT transformed images." Tropical Journal of Pharmaceutical Research 15, no. 9 (2016): 1983-1993.
  20. Dogra, A., and Sunil Agrawal. "Efficient Image Representation Based on Ripplet Transform and Pure-Let." Int. J. Pharm. Sci. Rev. Res. 34, no. 2 (September-October 2015): 93-97.
  21. Dogra, Ayush, Sunil Agrawal, Niranjan Khandelwal, and Chirag Ahuja. "Osseous and vascular information fusion using various spatial domain filters." Research Journal of Pharmacy and Technology 9, no. 7 (2016): 937-941.
  22. Dogra, A., B. Goyal, S. Agrawal, and C. K. Ahuja. "Efficient fusion of osseous and vascular details in wavelet domain." Pattern Recognition Letters, vol. 17, pp. 103-114, Nov. 2016.
  23. Dogra, A., B. Goyal, and S. Agrawal. "Bone vessel image fusion via generalized Riesz wavelet transform using averaging fusion rule." Journal of Computational Science 21 (2017): 371-378.
  24. Dogra, A., and Sunil Agrawal. "3-Stage enhancement of medical images using ripplet transform, high pass filters and histogram equalization techniques." International Journal of Pharmacy and Technology 7 (2015): 9748-9763.
  25. Guo, Lu, Shuming Shen, Eleanor Harris, Zheng Wang, Wei Jiang, Yu Guo, and Yuanming Feng. "A tri-modality image fusion method for target delineation of brain tumors in radiotherapy." PLoS One 9, no. 11 (2014): e112187.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.