In medical imaging, the use of different imaging modalities has significantly enhanced the diagnostic information available to physicians. Each modality reveals unique information about the subject being imaged. In many cases, physicians need multimodal information about the same organ; because this information is complementary in nature, fusing it is often required. Numerous methods have been proposed to address the medical image fusion problem. However, a common shortcoming of these methods is the loss of key features in the transfer from the input images to the fused image. They also tend to introduce unwanted artifacts into the fused image. In this project, we present a method for multimodal medical image fusion based on the gradient compass in the spatial domain, which can effectively fuse a pair of multimodal medical images. Edge details are extracted from the source images in eight different directions, providing the data needed to construct an edge map for each source image. From these edge maps, two detailed medical images are generated. The statistical properties of the detailed images are used to construct weight matrices, which then drive adaptive pixel-level fusion. We benchmark our method on multiple Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) image pairs of the same subject. The proposed algorithm outperforms existing methods by transferring only relevant information from the source images to the fused image.
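The pipeline above can be sketched in code. The following is a minimal, hedged illustration only: the abstract does not name the eight-direction operator, so the Kirsch compass kernels are an assumed choice, and local variance is an assumed stand-in for the unspecified "statistical properties" used to build the weight matrices. Function names (`compass_kernels`, `edge_map`, `fuse`) are illustrative, not from the original work.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def compass_kernels():
    """Eight 3x3 compass kernels, one per 45-degree direction.
    Kirsch kernels are an assumption; the abstract only says
    'eight different directions'."""
    k = np.array([[5., 5., 5.],
                  [-3., 0., -3.],
                  [-3., -3., -3.]])
    # Outer-ring cells in clockwise order; shifting them by one
    # cell rotates the kernel by 45 degrees.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    kernels = []
    for _ in range(8):
        kernels.append(k.copy())
        vals = [k[p] for p in ring]
        vals = vals[-1:] + vals[:-1]
        for p, v in zip(ring, vals):
            k[p] = v
    return kernels

def edge_map(img):
    """Edge map: maximum absolute response over the eight directions."""
    responses = [np.abs(convolve(img, ker, mode='nearest'))
                 for ker in compass_kernels()]
    return np.max(responses, axis=0)

def fuse(img_a, img_b, win=7, eps=1e-8):
    """Adaptive pixel-level fusion. Weights come from the local
    variance of each edge (detail) map -- an assumed choice of
    statistical property."""
    det_a, det_b = edge_map(img_a), edge_map(img_b)
    local_var = lambda d: uniform_filter(d**2, win) - uniform_filter(d, win)**2
    w_a = (local_var(det_a) + eps) / (local_var(det_a) + local_var(det_b) + 2 * eps)
    # Per-pixel weighted average; weights sum to 1 at every pixel.
    return w_a * img_a + (1.0 - w_a) * img_b
```

Usage would be `fused = fuse(ct, mri)` on two co-registered grayscale float arrays of the same shape; since the weights lie in [0, 1] and sum to one per pixel, each fused pixel stays within the range spanned by the two source pixels.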
