A Review of Digital Image Fusion and its Application

Image fusion extracts data from two or more input images and combines it into a single image that carries richer and more useful information than any individual input, improving the features and quality of the information. The quality of the resultant data depends on how the process is implemented. Image fusion is widely used in stereo camera fusion, medical applications, manufacturing process monitoring, electronic circuit design and inspection, complex machine diagnostics, and intelligent robots on assembly lines. This study presents a literature review of the different algorithms and theories applied to images. Several quality criteria are discussed to provide a brief comparison of these methods, and the applications of image fusion are also presented.

1. Pixel level fusion: this kind of image fusion operates directly on the data held at a set of pixels in the input signals, to increase the accuracy of the implementation [46][47][48].
2. Block level fusion: based on the neighbourhood of the points of the information provided.
3. Feature level fusion: operates on distinguishing features of the data, such as pixel intensities, size, and edges; these features of the input images are combined.
4. Decision level fusion: the results from multiple algorithms are joined to obtain a final decision image. The input images are first processed, then the extracted information is combined by applying decision rules. Decision level methods deal with a symbolic representation of the images [39].
The levels of image fusion are illustrated in the corresponding figure.

Averaging Technique:
This is the simplest spatial-domain method. Averaging reduces the quality of the resultant image and can introduce noise into the fused image, leading to unwanted side effects such as reduced contrast [43]. The fused image is obtained by equation (1) [32]:

F(i,j) = (A(i,j) + B(i,j)) / 2   (1)

where A(i,j) and B(i,j) are the input matrices and F(i,j) is the output matrix. This technique does not guarantee that fine detail from the set of input images is preserved.
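As a sketch, equation (1) is a one-line operation in NumPy (the function and variable names here are illustrative, not from the reviewed papers):

```python
import numpy as np

def average_fusion(a, b):
    """Pixel-wise averaging of two registered, same-size images (Eq. 1)."""
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

A = np.array([[10, 200], [50, 100]], dtype=np.uint8)
B = np.array([[30, 100], [150, 100]], dtype=np.uint8)
F = average_fusion(A, B)  # [[20., 150.], [100., 100.]]
```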

Greatest Pixel Value Technique:
This technique selects the greatest value from the corresponding pixels. The fused image is obtained as [41]:

F(i,j) = max(A(i,j), B(i,j))   (2)

where A(i,j) and B(i,j) are the input matrices and F(i,j) is the output matrix.
The fused image concentrates the brightest content of the inputs. This technique suffers from a blurring effect that degrades the contrast of the image [43].

Minimum Pixel Value Technique:
The minimum value of intensity is chosen and is inserted the resultant data in the fused image. The fused image can be calculated by the following equation [32].
(3) Where A(i,j) ,B(I,j) are input data and F(i,j) is the output data.

Max-Min Technique:
The minimum and maximum values of the corresponding pixels in the input images are selected and their average is computed. The result is stored as the pixel value in the output data [32].
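A minimal NumPy sketch of the three selection rules above (equations 2 and 3 plus the max-min average); note that for exactly two inputs the max-min average reduces algebraically to plain averaging, since max(a,b) + min(a,b) = a + b:

```python
import numpy as np

def max_fusion(a, b):
    """Greatest pixel value rule (Eq. 2)."""
    return np.maximum(a, b)

def min_fusion(a, b):
    """Minimum pixel value rule (Eq. 3)."""
    return np.minimum(a, b)

def max_min_fusion(a, b):
    """Average of the per-pixel maximum and minimum."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return (np.maximum(a, b) + np.minimum(a, b)) / 2.0
```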

Weighted average Technique:
This method was proposed by Song et al. [43]. Different weights are allocated to the input images, and each pixel (i,j) of the fused image is computed as the weighted sum of the corresponding pixels in the input images. It improves detection reliability and increases the signal-to-noise ratio (SNR) of the fused image [47]:

F(i,j) = W·A(i,j) + (1 − W)·B(i,j)   (4)

where A(i,j) and B(i,j) are the input data, F(i,j) is the output data, and W is the weight factor.
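Assuming the common two-image form of equation (4), with a single weight factor W applied to the first image and its complement to the second (this normalization is an assumption of the sketch, not stated explicitly in the source):

```python
import numpy as np

def weighted_average_fusion(a, b, w=0.7):
    """F(i,j) = W*A(i,j) + (1-W)*B(i,j); W is the weight factor (Eq. 4)."""
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)
```

With w = 0.5 this degenerates to the plain averaging technique.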

The Principal Component Analysis Technique (PCA):
Principal component analysis (PCA) is a vector-space transformation used to reduce multi-dimensional data sets to a lower dimension. PCA is a simple and useful eigenvector-based multivariate analysis that reveals the internal structure of the data in an unbiased way [32].
PCA converts correlated variables into uncorrelated variables. The first principal component captures the direction of maximum variance in the data. The second principal component is constrained to lie in the subspace orthogonal to the first; within that subspace it again points in the direction of maximum variance. The third principal component is taken in the direction of maximum variance in the subspace orthogonal to the first two, and so on. This transform is also known as the Karhunen-Loève or Hotelling transform [45]. Unlike the FFT, DCT, and wavelet transforms, PCA does not have a fixed set of basis vectors; its basis vectors depend on the data set [1,13]. PCA is very easy to apply to image fusion and the output fused images have high spatial quality, but it results in spectral degradation. The flow diagram of the PCA image fusion algorithm is shown in Fig. 3, and the algorithm is characterized in Fig. 4 [45].
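A common way to realize PCA fusion of two images, sketched below, is to treat the flattened images as two variables, take the leading eigenvector of their 2×2 covariance matrix, and use its normalized components as fusion weights; details such as the weight normalization are assumptions of this sketch:

```python
import numpy as np

def pca_fusion(a, b):
    """PCA-based fusion: weights come from the principal eigenvector of the
    covariance matrix of the two flattened input images."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    data = np.stack([a.ravel(), b.ravel()])   # 2 x N observation matrix
    cov = np.cov(data)                        # 2 x 2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    v = eigvecs[:, -1]                        # eigenvector of largest eigenvalue
    w = v / v.sum()                           # normalize weights to sum to 1
    return w[0] * a + w[1] * b
```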

High Pass Filter Technique (HPF):
The high-resolution multispectral image (HRMI) is produced by high-pass filtering: the high-frequency detail extracted from the high-resolution panchromatic image (HRPI) is combined with the low-resolution multispectral image to obtain the fused 2-D signal. This is implemented either by applying a high-pass filter to the HRPI or by subtracting the low-resolution panchromatic image (LRPI) from the original HRPI [40].
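A minimal sketch of the subtraction variant: the LRPI is approximated here by a 3×3 box blur of the HRPI (the choice of blur kernel is an assumption), and the resulting high-frequency detail is added to a co-registered multispectral band:

```python
import numpy as np

def box_blur3(img):
    """3x3 mean filter with edge padding; stands in for the low-pass step."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode='edge')
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def hpf_fusion(hrpi, ms_band):
    """Add the high-frequency detail of the HRPI (HRPI - LRPI) to the
    multispectral band."""
    hrpi = hrpi.astype(np.float64)
    return ms_band.astype(np.float64) + (hrpi - box_blur3(hrpi))
```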

Brovey Transform Technique:
It is also known as the colour normalization transform [46]. The main advantages of the Brovey transform are its simplicity and the visually attractive, high-contrast RGB images it produces [43]. This method does not preserve the original scene radiometry and is limited to three bands [48]. The Brovey transform is applied by the following equation:

Fusion_i = MS_i × PAN / (MS_1 + MS_2 + MS_3)

where Fusion_i is the i-th band of the output image, i = 1, 2, 3.
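Assuming the standard statement of the Brovey transform (each multispectral band is scaled by the ratio of the panchromatic image to the sum of the three bands), a sketch:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """ms: H x W x 3 multispectral cube, pan: H x W panchromatic image.
    Fusion_i = MS_i * PAN / (MS_1 + MS_2 + MS_3), i = 1, 2, 3."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    total = ms.sum(axis=2) + 1e-12            # guard against division by zero
    return ms * (pan / total)[..., None]
```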

PYRAMID TECHNIQUE
A pyramid method can operate on data at various scales which together represent the original image. In general, the method consists of three major stages [49][50]:

The Decomposition Stage:
This stage is performed successively at every level of the fusion; the number of levels is determined in advance. It consists of the following steps:
1. The input images are filtered with a low-pass filter.
2. The filtered signals are decimated to half their size [47].
These steps are repeated the chosen number of times.
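The two steps above can be sketched as one function (the 3×3 mean filter is an illustrative choice of low-pass filter; a Gaussian is also common):

```python
import numpy as np

def decompose_step(img):
    """One pyramid decomposition step: low-pass filter, then decimate by 2."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode='edge')
    low = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0   # 3x3 mean filter
    return low[::2, ::2]                                   # keep every 2nd sample

levels = [np.ones((8, 8))]
for _ in range(2):                 # repeat for the chosen number of levels
    levels.append(decompose_step(levels[-1]))
```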

The Maximum-Minimum/Average Stage:
This second stage of the pyramid technique uses the final decimated input matrices; a new image matrix is calculated by selecting the minimum or the maximum, or by taking the average.

Re-composition Stage:
The resultant image created at each level of decomposition is used in the re-composition stage.
The disadvantage of spatial-domain methods is that they introduce spatial distortion into the fused image, which has a negative effect when the result is used for further processing. Spatial distortion can be handled very well by frequency-domain theories of image fusion; the discrete transform and multi-resolution analysis have become very efficient tools for analysis and computation in image fusion [43].

Discrete Transform Based Fusion Technique
The discrete transform based fusion can be described by the following scheme, shown in Fig. 6 [51]:
Step 1: The input image is un-decimated up to the level of re-composition.
Step 2: The un-decimated matrix is filtered by applying the transpose of the filter vector used in the decomposition stage.
Step 3: The filtered matrix is combined, by pixel-wise addition of intensity values, with the pyramid formed at the respective level of decomposition.
Step 4: The resulting image matrix acts as the input to the next level of re-composition.
Step 5: The final image produced is the fused image.

Discrete Cosine Transform:
The DCT represents a finite sequence of data points as a sum of cosine functions at different frequencies. It is widely used in image processing, and the significant DCT coefficients are concentrated in the low-frequency region. The DCT coefficients are calculated for every block of the input image [46]. The variants used for image fusion are DCTav, DCTma, DCTah, DCTch, DCTe and DCTcm; all apply the main steps of DCT-based image fusion with small differences. The steps of DCT image fusion are as follows:
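A sketch of the DCTav variant (coefficient averaging) on 8×8 blocks; the orthonormal DCT-II matrix is built directly so the example stays self-contained. By linearity, averaging DCT coefficients is equivalent to averaging pixels; the other variants differ only in the fusion rule applied to the coefficient blocks:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct_fusion_avg(a, b, block=8):
    """DCTav: block DCT of each input, average the coefficients, inverse DCT.
    Image dimensions are assumed to be multiples of the block size."""
    C = dct_matrix(block)
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    out = np.zeros_like(a)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            Da = C @ a[i:i + block, j:j + block] @ C.T   # forward 2-D DCT
            Db = C @ b[i:i + block, j:j + block] @ C.T
            out[i:i + block, j:j + block] = C.T @ ((Da + Db) / 2) @ C
    return out
```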

Discrete Wavelet Transform:
The discrete wavelet transform is a multi-resolution decomposition that represents image features through frequency sub-bands at multiple scales [49]. When the forward DWT is applied, the approximation and detail coefficients are calculated separately, and the output can be displayed as four sub-band images, LL, LH, HL and HH, as shown in Fig. 7 [50].
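A self-contained one-level sketch with the Haar wavelet (chosen here only for brevity): the LL sub-bands are averaged and each detail coefficient is taken from whichever input has the larger magnitude, which is one common fusion rule, then the inverse transform recomposes the fused image:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT -> (LL, LH, HL, HH); size must be even."""
    img = img.astype(np.float64)
    a = (img[0::2] + img[1::2]) / 2           # row-wise average
    d = (img[0::2] - img[1::2]) / 2           # row-wise difference
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def dwt_fusion(x, y):
    """Average the approximations, keep the larger-magnitude details."""
    sx, sy = haar_dwt2(x), haar_dwt2(y)
    fused = [(sx[0] + sy[0]) / 2]
    fused += [np.where(np.abs(cx) >= np.abs(cy), cx, cy)
              for cx, cy in zip(sx[1:], sy[1:])]
    return haar_idwt2(*fused)
```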

Stationary wavelet transforms:
The stationary wavelet transform (SWT) is derived from the discrete wavelet transform by removing the down-sampling operation, which makes the SWT translation-invariant. The two-dimensional SWT is illustrated in Fig. 8 [51-53].
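Removing the down-sampling means every sub-band keeps the full image size. A one-level undecimated Haar sketch (circular shifts stand in for the translated filter taps, which is a boundary-handling assumption):

```python
import numpy as np

def swt_haar_level1(img):
    """One level of a stationary (undecimated) Haar transform: the same
    averages/differences as the DWT, but computed at every pixel."""
    img = img.astype(np.float64)
    right = np.roll(img, -1, axis=1)
    down = np.roll(img, -1, axis=0)
    diag = np.roll(np.roll(img, -1, axis=0), -1, axis=1)
    LL = (img + right + down + diag) / 4
    LH = (img - right + down - diag) / 4
    HL = (img + right - down - diag) / 4
    HH = (img - right - down + diag) / 4
    return LL, LH, HL, HH   # each sub-band has the same shape as img
```

Note that the four sub-bands sum back to the original image, a simple check of invertibility.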

KEKRE'S TRANSFORM :
The Kekre matrix is derived from Kekre's LUV colour space matrix. Unlike most other transform matrices, its size is not required to be a power of 2. The Kekre matrix can be calculated from the following equation [54,55], represented in Fig. 9: the upper triangle and diagonal values of Kekre's transform are 1, while the lower triangle, except for the elements just below the diagonal, is zero [51].
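Following the structure described above (ones on and above the diagonal, zeros in the lower triangle except just below the diagonal, where the value is chosen to make the rows orthogonal), a generator sketch:

```python
import numpy as np

def kekre_matrix(n):
    """N x N Kekre transform matrix; N need not be a power of 2.
    K[i, j] = 1 for j >= i, -(n - i) for j == i - 1, 0 otherwise (0-based),
    which makes the rows mutually orthogonal."""
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i:] = 1.0
        if i > 0:
            K[i, i - 1] = -(n - i)
    return K
```

For n = 4 the rows are [1, 1, 1, 1], [-3, 1, 1, 1], [0, -2, 1, 1] and [0, 0, -1, 1].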

KEKRE'S HYBRID WAVELET MATRIX Technique:
This transformation combines the values of two different orthogonal wavelet transforms to exploit the strengths of both wavelets. The hybrid transform matrix, called T_AB, has size N×N and is generated from two matrices A and B of sizes p×p and q×q respectively, where N = p·q, as shown in figure 10 [28]. The first rows of the hybrid matrix are produced by multiplying every value of the first row of A with every value of the columns of B; subsequent rows use the second row of matrix A, rotated after being appended with zeros, as shown in fig. 9. The choice of image fusion technique depends on the application in which the fused image will be used, but knowing the advantages and disadvantages of each technique, as shown in Table 1, can help determine which one is most suitable for a specific application.

Image Quality Metrics
Image quality estimation methods can be organized into two types according to whether a reference image exists: full-reference (FR) methods and no-reference (NR) methods. In the first type, the metrics are computed against an original image that is assumed to be perfect in quality; the second type does not depend on an original image [56].

Full Reference Image quality assessment (FR-IQA)
The efficiency of fusion methods can be evaluated using the standard metrics listed in Table 2. Assume A is the input reference data and B is the output data; i and j are the row and column pixel indices, respectively, and the dimension of the image is M × N [57][58][59].

NO REFERENCE IMAGE QUALITY ASSESSMENT (NR-IQA)
The NR-IQA, or "blind", quality criteria approach is very useful when the reference image is not available. Some NR-IQA criteria are shown in Table 3 [63][64][65].

APPLICATIONS AND USES OF IMAGE FUSION
1) It is used in satellite and remote sensing applications to provide a suitable view from satellite imagery [66][67][68].
2) It is employed in medical imaging to analyse disease through imaging, depending on the frequency and spatial resolution [69][70][71][72].
3) It is used in military applications to discover threats and other obstacles [73,74].
4) In robotics applications, the fused image is mostly used to detect frequency divergences in the image [75][76][77][78].
5) It is employed in artificial neural networks, where a centric length is used according to the wavelength conversion [79][80][81][82][83].
6) Some applications of image fusion deal with enhanced road-map extraction from high-resolution images, which plays an important role especially in big cities, focusing on edges extracted using image fusion theory [84].

Working on Spatial domain:
- Maximum Pixel Value Technique [32], Minimum Pixel Value Technique [32] and Max-Min Pixel Value Technique [32]: these techniques produce a blurred output that affects the contrast of the image, so they are not applied in real-time applications.
- Weighted average technique [47]: improves detection reliability and increases the SNR of the resultant image.
- Principal Component Analysis algorithm [45]: very simple, fast processing time, high spatial quality and computationally efficient, but suffers from spectral degradation and colour distortion.
- Brovey technique [48]: very simple, fast processing time and computationally efficient; it produces an RGB image with a high degree of contrast, but suffers from colour distortion.
- Laplacian/Gaussian Pyramid [60]: good visual quality for multi-focus images.

Working on Frequency domain:
- Gradient Pyramid [60], Ratio of low-pass Pyramid [61] and Morphological Pyramid [62]: all the pyramid techniques produce more or less similar output images; the number of decomposition levels affects the result of the fused image.
- Discrete Cosine Transform (DCT) [46]: decreases complexity and analyses the image as a series of waveforms; it can be applied in real applications, but does not give good fusion quality if the block size is smaller than 8×8 or than the image itself.
- Discrete Wavelet Transform (DWT) [49]: produces a good-quality fused image with high SNR and reduced spectral noise, but the resultant image has lower spatial resolution.

Entropy [64]: assesses the quality of the image; a large entropy value indicates that the fusion performance has improved.
Spatial Frequency (SF) [63]: a large value of spatial frequency means higher clarity of the image, SF = sqrt(RF^2 + CF^2), where RF and CF are the row and column frequencies.
Standard Deviation (SD) [65]: measured on the low-frequency components of the input image using a 3×3 window; the coefficients with the higher mean values are selected as the fusion coefficients of the low-frequency components.
Peak Signal-to-Noise Ratio (PSNR) [59]: computed from the mean squared error as PSNR = 10 · log10(peak^2 / MSE).
Root Mean Squared Error (RMSE) [59]: applied to measure the difference between the input and resultant images; low values of RMSE indicate that the resultant data is similar to the input data.
Mutual Information (MI) [63]: measures the similarity of intensity between images A and B as I(A,B) = Σ_{a,b} P_AB(a,b) · log( P_AB(a,b) / (P_A(a) · P_B(b)) ), where P_AB(a,b) is the joint probability distribution and P_A(a) and P_B(b) are the distribution probabilities of A and B, respectively; a high value of I(A,B) means A and B are very similar.
Structural Similarity Index Metric (SSIM) [59]: used to estimate the similarity of two images A and B; a high value of this index indicates great similarity between the two images.
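Sketches of three of the metrics above (RMSE, PSNR and MI); the histogram bin count in the MI estimate is an implementation choice, not prescribed by the cited papers:

```python
import numpy as np

def rmse(ref, out):
    """Root mean squared error between reference and fused images."""
    diff = ref.astype(np.float64) - out.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, out, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE); infinite for identical images."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=32):
    """I(A,B) = sum p_ab * log(p_ab / (p_a * p_b)) over a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```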

CONCLUSION
Different spatial-domain and frequency-domain algorithms have been reviewed in this paper. The main purpose of this study is to provide a guide for any specialist who wants to work on image fusion. First, the application to which image fusion is to be applied must be determined; then the suitable technique (or techniques) can be chosen to give the best result and serve the application. Based on the properties of the image fusion techniques, a method can be selected using the simple comparison of advantages and disadvantages shown in Table 1.
The simple fusion algorithms, such as averaging, minimum, maximum, max-min and weighted average, produce noisy, blurred, low-contrast images and cannot be used for real-time applications. Principal component analysis (PCA), HPF and Brovey are simple, computationally cheap and fast algorithms, but they result in colour distortion. The fused image produced by PCA has high spatial quality but suffers from spectral degradation. All pyramid-based fusion methods produce more or less similar output and are typically used for multi-focus applications. The discrete cosine transform (DCT) method is used in real-time systems but cannot be applied with a block size smaller than 8×8. The DWT algorithm has a high signal-to-noise ratio and minimal spectral distortion, but the isotropic property of wavelets, which cannot capture the long edges and curves of the image, makes this technique unsuitable for some applications. The SWT algorithm can describe the detail components of images and decompose them more effectively than the ordinary wavelet transform. Every technique has some advantages and drawbacks; thus, we can deduce that no single fusion theory outperforms the others. Some applications need hybrids of these algorithms to obtain a good fused image; for example, a combination of PCA and the discrete wavelet transform (DWT) gives good results and serves some applications.