Image fusion is the process of combining information from multiple images of the same scene into a single image that ideally contains all the important features from each input image. The resulting image is more suitable for human and machine perception or for further image-processing tasks. The various image fusion schemes available in the literature can be roughly classified into pixel-based and region-based methods. Among the former, the weighted average (WA) technique proposed by Burt and Kolczynski remains one of the most effective, yet simple and easy to implement. WA essentially consists of calculating a normalised correlation (match measure) between the input images' subband decompositions; the fused subband coefficient is then computed from this measure and the local variance (saliency measure) via a weighted average of the input image coefficients.
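The WA rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window size and match threshold are illustrative choices, and local energy stands in for the local variance, as is usual for zero-mean subband coefficients.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wa_fuse(a, b, window=3, threshold=0.75):
    """Weighted-average fusion of two subbands in the style of
    Burt and Kolczynski (window and threshold are illustrative)."""
    sa = uniform_filter(a * a, window)          # saliency of a (local energy)
    sb = uniform_filter(b * b, window)          # saliency of b
    sab = uniform_filter(a * b, window)         # local cross term
    match = 2.0 * sab / (sa + sb + 1e-12)       # normalised correlation in [-1, 1]

    # Weight for the more salient input: 1 (pure selection) when the
    # inputs disagree (match <= threshold), shading towards 0.5
    # (plain averaging) as the match approaches 1.
    w_max = np.where(match <= threshold,
                     1.0,
                     0.5 + 0.5 * (1.0 - match) / (1.0 - threshold))
    w_min = 1.0 - w_max

    a_wins = sa >= sb
    return np.where(a_wins, w_max * a + w_min * b,
                            w_min * a + w_max * b)
```

In practice this rule is applied independently to each subband of a multiscale decomposition, and the fused image is obtained by inverting the transform.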

In this talk, we first describe a generalisation of the WA method to the case when the data to be fused exhibit impulsive characteristics. Specifically, our approach is based on modelling the wavelet decomposition coefficients of images with symmetric alpha-stable (SaS) distributions. Since in general no second- or higher-order moments can be defined for alpha-stable distributions, we need to introduce new match and saliency measures. Hence, we employ the dispersion of the alpha-stable distribution as a measure of saliency, while we propose the use of symmetrised and normalised covariation coefficients to define new match measures for alpha-stable random vectors. Being based on a more accurate statistical characterisation of the data, this approach outperforms the standard WA fusion algorithm.
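A sketch of the alpha-stable replacements for the second-order measures might look as follows. The fractional-moment estimators and the choice p = 1.2 are illustrative assumptions (any 1 <= p < alpha works for the covariation), and the function names are hypothetical:

```python
import numpy as np

def signed_power(x, p):
    """Signed power x^<p> = |x|^p * sign(x)."""
    return np.sign(x) * np.abs(x) ** p

def saliency(x, p=1.2):
    """Fractional lower-order moment E|X|^p: a monotone proxy for the
    SaS dispersion (finite for p < alpha), replacing local variance."""
    return np.mean(np.abs(x) ** p)

def covariation_coeff(x, y, p=1.2):
    """Fractional-moment estimate of the normalised covariation
    lambda_{xy} for jointly SaS samples, valid for 1 <= p < alpha."""
    return np.mean(x * signed_power(y, p - 1)) / np.mean(np.abs(y) ** p)

def symmetric_match(x, y, p=1.2):
    """Symmetrised covariation coefficient lambda_{xy} * lambda_{yx}:
    an alpha-stable analogue of the normalised correlation in WA."""
    return covariation_coeff(x, y, p) * covariation_coeff(y, x, p)
```

Note that the symmetrised coefficient equals 1 when the two inputs are identical and is close to 0 for independent data, mirroring the behaviour of the normalised correlation it replaces.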

In the second part of the talk, we extend the above pixel-based approaches by introducing a novel region-based image fusion framework based on multiscale image segmentation and statistical feature extraction. A dual-tree complex wavelet transform and a statistical region merging algorithm are used to produce a region map of the source images: the input images are partitioned into meaningful regions containing salient information, again by employing SaS distributions. The region features are then modelled using bivariate alpha-stable distributions, and the statistical similarity between corresponding regions of the source images is calculated as the Kullback-Leibler distance between the estimated stable models. Finally, a segmentation-driven approach is used to fuse the images, region by region, in the complex wavelet domain.
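The segmentation-driven step can be sketched schematically as below. This is only an assumed skeleton: in the talk the per-region scores come from fitted bivariate SaS models (similarity derived from the Kullback-Leibler distance, saliency from the dispersion), whereas here they are passed in as precomputed dictionaries with illustrative names.

```python
import numpy as np

def region_fuse(ca, cb, labels, similarity, saliency_a, saliency_b,
                thresh=0.5):
    """Region-by-region fusion over a shared label map.

    For each region: average the two inputs' coefficients when their
    statistics are similar, otherwise keep the more salient input.
    `similarity`, `saliency_a`, `saliency_b` are dicts keyed by region
    label (hypothetical interface; threshold is illustrative).
    """
    fused = np.empty_like(ca)
    for r in np.unique(labels):
        m = labels == r
        if similarity[r] >= thresh:           # statistically similar: average
            fused[m] = 0.5 * (ca[m] + cb[m])
        elif saliency_a[r] >= saliency_b[r]:  # dissimilar: select by saliency
            fused[m] = ca[m]
        else:
            fused[m] = cb[m]
    return fused
```

Operating on regions rather than individual pixels makes the fusion decision more robust to noise and misregistration, since each decision is supported by all the coefficients in a region.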

Finally, we describe a novel approach to image fusion which is based on the newly introduced compressive sampling theory. Specifically, recent theoretical studies demonstrated that if a signal is sparse or nearly sparse in some basis, then it can be recovered from a small number of linear projections onto a second basis, called the measurement basis, which is incoherent with the first basis. In this context, we show how some simple standard image fusion algorithms can be adapted and used for fusing compressive measurements rather than the actual image pixels.
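The key observation enabling measurement-domain fusion is linearity: a linear combination of the measurements equals the measurement of the same linear combination of the images, so simple fusion rules transfer directly. A minimal sketch, assuming a random Gaussian measurement matrix as the incoherent measurement basis (the sparse-recovery step itself, e.g. l1 minimisation or a greedy pursuit, is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                      # signal length, number of measurements

# Two sparse "images" (vectorised), sparse in the identity basis
# for simplicity; values and supports are arbitrary illustrations.
x1 = np.zeros(n); x1[[5, 40, 100]] = [1.0, -2.0, 0.5]
x2 = np.zeros(n); x2[[5, 80, 200]] = [1.5, 1.0, -0.7]

# Random Gaussian measurement matrix: incoherent with any fixed
# sparsity basis with high probability.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y1, y2 = Phi @ x1, Phi @ x2

# Fuse in the measurement domain, e.g. by averaging. By linearity this
# equals measuring the averaged image, so running a sparse-recovery
# solver on y_f reconstructs the fused image directly, without ever
# forming the full-resolution inputs.
y_f = 0.5 * (y1 + y2)
assert np.allclose(y_f, Phi @ (0.5 * (x1 + x2)))
```

Nonlinear rules such as coefficient-wise maximum do not commute with the measurement operator in the same way, which is why only some standard fusion algorithms adapt cleanly to this setting.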