How Advanced Image Processing Helps For SAR Image Restoration and Analysis

By Florence Tupin

I. Introduction

The past few years have seen important advances in remote sensing imagery. New sensors offer improved resolution in all dimensions: spatial resolution with reduced pixel sizes, temporal resolution with shorter revisit times, and spectral resolution with an increased number of spectral bands. With these new specifications, new challenges have appeared. The huge amount of remote sensing data raises new computational issues [1] and calls for faster processing approaches. New applications become accessible or reach new levels of performance, such as change detection, natural disaster monitoring, urban and landscape planning, and biomass measurement. These advances are especially true for Synthetic Aperture Radar (SAR) sensors, with metric resolution available for civil satellite data, new spectral bands (L band with ALOS, X band for TerraSAR-X and COSMO-SkyMed), new interferometric potential thanks to TanDEM-X [2], and reduced revisit time with constellations like COSMO-SkyMed. In spite of these improvements, SAR images remain difficult to interpret. New difficulties arose with the increase of spatial resolution: previously unnoticeable targets are now visible, and bright scatterers are more numerous. Beyond the speckle noise intrinsic to coherent imagery, geometric distortions due to distance sampling limit our visual understanding of such images, and the direct interpretation of an urban area imaged by a SAR sensor is still reserved to expert photo-interpreters.

Together with the progress made by recent sensors, powerful new image processing methods have emerged in recent years. Among the major advances made in the last decade by the image processing and computer vision communities, we have chosen to emphasize three for their long-term potential and applicative interest for SAR imaging.

The first family of advances in signal and image processing is related to progress in the statistical modeling of multiplicative noise, which is particularly important when dealing with SAR imagery. The first point we would like to mention is therefore the Mellin framework proposed in [3] to deal with positive random variables and their products.

The second family of methods is based on the idea of “patches”. Patches are small image parts (typically 5 × 5 or 7 × 7 pixels). They capture fine scale information such as texture, bright dots or edges. Given their very local extent, they are highly redundant, i.e., many similar patches can be found in an image. These similar patches can then be combined to reduce noise [4]. But patch similarity can also be applied to stereovision or change detection.

The third family comprises the “graph-cut” approaches, where an image processing problem is converted into the search for a minimum cut in a graph [5]. Efficient minimum cut algorithms have been proposed for computer vision problems [6], and the focus is put on designing a graph to solve a given image processing task. These approaches have mainly been used to optimize functionals or energies derived from Markovian modeling or regularization approaches. A famous model is Total Variation minimization [7], which can be minimized exactly in one of its discrete forms using a multiple-layer graph [8], [9]. Graph-cut based approaches have also become very popular for many denoising and partitioning problems.

We will see in this letter how these three theories (among others) have contributed to the development of efficient tools for SAR image processing.

II. Statistical Modeling of SAR Data

One of the main difficulties of SAR imagery is the speckle phenomenon. Radar sensors are coherent imaging systems, leading to interferences between the electro-magnetic waves backscattered by the reflectors inside a pixel. These interferences cause a strong variability of the radiometric values, even for a physically homogeneous area. In his seminal work [10], Goodman derived the gray level distributions of radar images: the Rayleigh distribution for the amplitude image, the Nakagami distribution for multi-looked amplitude data (multi-looking meaning that some pixels have been averaged), and the Gamma distribution for the multi-looked intensity image. However, these models have shown some limits when dealing with high resolution images. Since the beginnings of SAR imaging, many distributions have been proposed to model radar data: the K distribution [11], the log-normal distribution, the Weibull distribution, etc. These distributions can be well adapted to some specific cases. They are usually defined by a few parameters that have to be learnt empirically on small local areas of the images. The tradeoff between the bias and the variance of the estimators requires large window sizes while keeping a homogeneous statistical population.
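As a toy illustration of Goodman's model, the following sketch (our own, using only NumPy; the function name and parameters are illustrative) simulates fully developed speckle over a homogeneous area and checks its two basic properties: the mean recovers the reflectivity, and the coefficient of variation equals 1/sqrt(L) for L looks.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_multilook_intensity(reflectivity, n_looks, size):
    """L-look SAR intensity over a homogeneous area follows a Gamma
    distribution with shape L and mean equal to the reflectivity
    (fully developed speckle model)."""
    return rng.gamma(shape=n_looks, scale=reflectivity / n_looks, size=size)

# Homogeneous area of reflectivity R = 100, imaged with L = 4 looks.
# The sample mean is close to 100 and the coefficient of variation
# (std/mean) is close to 1/sqrt(4) = 0.5.
I = simulate_multilook_intensity(100.0, 4, 1_000_000)
```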

In recent years, a powerful framework has been developed by J.-M. Nicolas to unify this set of distributions and to provide efficient tools to compute parameter estimators [3]. The whole theory is built on the observation that radar amplitude or intensity is intrinsically positive. Therefore, the Fourier transform, which is an integral over the set of all real values, should be replaced by a transform defined on positive values only. This is the case of the Mellin transform, which has the following form:

\[ \phi(s) = \int_0^{+\infty} u^{s-1}\, p(u)\, du \]
where s is a complex number, and p stands here for the distribution of the random variable. Mimicking the characteristic function and all the definitions that can be derived from it, like moments and cumulants, a second kind characteristic function based on the Mellin transform has been defined, leading to log-moments and log-cumulants. The Mellin convolution, which is the counterpart of the convolution in the positive value domain, provides a natural way to define the distribution of products of independent random variables (whereas the regular convolution deals with sums of variables). Without going too far into the details of this still evolving theory, we would like to mention what seem to us the important contributions of this work. First, parameter estimation based on log-cumulants gives low variance estimators, allowing the use of analysis windows of reduced sizes (figure 1). Secondly, this work has enlightened the relationships between the different distributions (Gamma, K, inverse Gamma, Weibull, log-normal, …) thanks to the Mellin convolution and to a diagram defined by the second and third log-cumulants (figure 2). Thirdly, the Fisher distribution has appeared as a “generic” distribution with 3 parameters adapted to a wide range of surfaces (urban areas, vegetation, etc.) [12].
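To make the method of log-cumulants concrete, here is a small sketch (our own illustration, not code from [3]) that estimates the parameters of a Gamma-distributed intensity. The second log-cumulant, i.e. the variance of the log-intensity, equals the trigamma function ψ'(L), which fixes the equivalent number of looks L by a one-dimensional inversion; the trigamma function is evaluated here by plain series summation to keep the sketch NumPy-only.

```python
import numpy as np

def trigamma(x, n=100_000):
    """psi'(x) evaluated by series summation, with a tail correction."""
    k = np.arange(n)
    return np.sum(1.0 / (x + k) ** 2) + 1.0 / (x + n)

def estimate_gamma_by_log_cumulants(intensity):
    """Method of log-cumulants for Gamma speckle: the second log-cumulant
    (variance of the log-intensity) equals psi'(L), which fixes the
    equivalent number of looks L; the mean reflectivity is estimated by
    the sample mean."""
    k2 = np.log(intensity).var()
    lo, hi = 0.1, 100.0               # trigamma is decreasing on this range
    for _ in range(60):               # bisection for trigamma(L) = k2
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > k2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), intensity.mean()

# A single 31x31 window of 3-look speckle with reflectivity 50: even on
# such a reduced window, the look number and reflectivity are recovered.
rng = np.random.default_rng(1)
window = rng.gamma(shape=3, scale=50 / 3, size=(31, 31)).ravel()
L_hat, mu_hat = estimate_gamma_by_log_cumulants(window)
```

The low variance of log-cumulant estimators is precisely what makes such small analysis windows usable.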

This work was first developed for amplitude or intensity images, and has later been adapted by different authors to polarimetric data. We would like to mention the work of Anfinsen, who extended the use of the Mellin transform to polarimetric data by developing the matrix-variate Mellin transform framework and exploiting it to better process polarimetric data [13].

III. SAR Image Denoising

Whereas the Mellin framework takes into account the variability of the scene within a region with a variety of distributions seen as Mellin products, denoising approaches try to suppress signal-dependent speckle variability to recover the scene reflectivity.

Non-local approaches and graph-cut based optimization have proven to lead to very efficient denoising methods. We will illustrate in this section how these recent and popular image processing approaches can be adapted to the case of SAR images.

A. Non-local approaches

The first family of methods described in the introduction is based on patch similarity. These methods are known as non-local approaches or NL-means [4]. The main idea of non-local methods is to find similar patches in the image. In the case of image denoising, this set of similar patches is then used to suppress the noise, for instance by averaging the central pixels of the patches.

Let us consider the Gaussian filter for comparison. Its principle is to average spatially close pixels to suppress the noise. Spatially close pixels can belong to different populations, though. Therefore, improvements of this basic idea have been proposed. Instead of taking “spatially close” pixels, we can take “radiometrically close” pixels [4]. In this case, the problem is to decide whether a pixel is “radiometrically” close to another one. And here comes the idea of patch comparison: a pixel can reasonably be assumed to be radiometrically close to another one if their surrounding patches are similar (see figure 3). To denoise a pixel s, the values of pixels t are averaged with a weight depending on the similarity of the two patches surrounding s and t. This is a powerful approach since there is no connectivity constraint between s and t, contrary to [14], [15], and far apart patches can be considered to denoise a given pixel (hence the term “non-local” denoising).
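The weighting scheme just described can be sketched in a few lines (a didactic re-implementation of the NL-means idea of [4], not the authors' code; the patch size, search window and filtering parameter h are illustrative choices):

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    """Minimal NL-means sketch for additive Gaussian noise: each pixel is
    replaced by a weighted average of pixels in a search window whose
    surrounding patches look alike; weights decay with the Euclidean
    patch distance. Borders are handled by edge padding."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode="edge")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s       # position in padded image
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = pad[ci + di - p:ci + di + p + 1,
                               cj + dj - p:cj + dj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    w = np.exp(-d2 / h ** 2)
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out

# Denoise a small noisy test image.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((16, 16))
denoised = nl_means(noisy)
```

A practical implementation would vectorize the patch comparisons; the nested loops are kept here for readability.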

This framework was initially developed for Gaussian noise: the denoising is done by averaging the noisy samples, and the similarity criterion is based on the Euclidean distance between the two patches. To adapt this framework to other kinds of noise while keeping the principle of patch comparison, Deledalle et al. have proposed a probabilistic framework [16]. The denoising task is expressed as a weighted maximum likelihood estimation, and the weight definition is established thanks to a probabilistic approach. Besides, this probabilistic framework leads to similarity weights formed by two terms, one related to the noisy data (likelihood similarity) and the other one to the denoised data (prior similarity). For this second term, an iterative scheme has been proposed which greatly improves the results when strong noise is present in the data. This framework can be applied to any noise having a known distribution, like Gamma or Poisson. In the case of SAR amplitude images, the denoising scheme estimates the reflectivity of each pixel s by a weighted maximum likelihood estimate,

\[ \hat{R}_s = \frac{\sum_t w(s,t)\, A_t^2}{\sum_t w(s,t)}, \]

where the weights w(s,t) are derived from the similarity of the patches surrounding s and t.

The final algorithm is thus rather simple, and the results are convincing, with preserved edges and smoothed homogeneous areas, as can be observed in figure 4.
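For speckle, the Euclidean patch distance is thus replaced by a statistical one. As a sketch (our simplification of the probabilistic framework of [16], assuming Gamma-distributed intensities with a known number of looks), a generalized likelihood ratio can serve as the patch dissimilarity, and the weighted maximum likelihood estimate of the reflectivity reduces to a weighted mean of the intensities:

```python
import numpy as np

def glr_dissimilarity(patch_a, patch_b, n_looks=1):
    """Generalized likelihood ratio dissimilarity between two intensity
    patches under Gamma speckle: tests, pixel by pixel, whether the two
    observations share the same underlying reflectivity. It is zero for
    identical patches and grows with dissimilarity."""
    return n_looks * np.sum(
        2.0 * np.log((patch_a + patch_b) / 2.0)
        - np.log(patch_a) - np.log(patch_b)
    )

def weighted_ml_reflectivity(intensities, weights):
    """Weighted maximum likelihood estimate of the reflectivity under
    Gamma speckle: the weighted mean of the intensity samples."""
    return np.sum(weights * intensities) / np.sum(weights)

# Weights w = exp(-dissimilarity / h) can then feed the estimator.
a = np.array([1.0, 2.0, 4.0])
b = np.array([1.1, 1.9, 4.2])
w = np.exp(-glr_dissimilarity(a, b) / 1.0)
```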

Other efficient denoising methods have been proposed in recent years, like wavelet based methods [18], [19], [20] or BM3D based approaches [21]. One of the strengths of the proposed probabilistic framework is that it allows the application of non-local methods to complex or vectorial data as soon as the noise is well modeled by a parametric distribution. Thus, it can be used efficiently to process interferometric or polarimetric data, with the speckle noise described by a zero-mean complex circular Gaussian distribution [10]. For instance, in the case of interferometric images, weighted likelihood estimators for reflectivity, interferometric phase and coherence are derived, and the weights measure the probability that the observations come from the same parameters for all the couples of pixels of the two patches. Figure 5 illustrates the potential of such approaches. Instead of computing local hermitian products to derive the interferometric information, and thus losing spatial resolution, such approaches can be used to compute interferograms at the nominal resolution of the data. The case of polarimetric data is similar, with the estimation of the underlying covariance matrix. The application of such a framework is described in [22].

Beyond the denoising application, patch similarity of amplitude, interferometric or polarimetric data can be very useful for change detection or movement monitoring.

Fig. 4. Illustration of the NL-means SAR denoising. Figure a) on the left is a 100-look image obtained by multi-looking a very high resolution image (image acquired by ONERA, multi-looked by CNES). This image can be considered as a ground truth. Figure b) is a 1-look image of resolution 1×1 meter. Figure c) is the denoised version of the 1-look image b). Fine details are well preserved by this approach.

Fig. 5. Illustration of NL-InSAR. On the top, the original interferometric data (amplitude, phase and coherence, with 1 look). On the bottom, the non-local estimation of amplitude, phase, and coherence with no loss of resolution. The weights of the likelihood estimations are computed using the similarity of the complex patches of the two interferometric images. Results are from [17].

B. Regularization Approaches

Other powerful approaches for denoising are regularization based methods, which have also been extensively studied in the past 10 years by the image processing and computer vision communities. The idea is to express the problem as an energy minimization, the energy being divided into two terms, one related to the noise distribution (likelihood term) and the other to the properties we expect from the solution (prior term). This energy can be derived for instance from a probabilistic approach (discrete point of view), but also from variational methods establishing a functional to minimize (continuous point of view). The likelihood term is usually linked to the model of the noise perturbing the data. The prior term, or regularization term, usually imposes the “smoothness” of the solution and is expressed through interactions between neighboring pixels. A popular model is a low total variation (the TV model [7]), corresponding to an almost piecewise constant image or, equivalently, to a sparse gradient (only a few values of the gradient can be non-zero). But other models like truncated quadratics or φ-functions can be chosen [23].

Beyond the difficult choice of the right model to express our prior knowledge of the scene, the minimization of the energy or functional is generally not easy. Indeed, in many cases, and especially for radar imagery, the neg-log-likelihood is not convex. In this case, usual continuous optimization methods similar to gradient descent cannot be applied or risk getting stuck in a local minimum. Recent approaches of combinatorial optimization based on graph cuts allow for the exact optimization of energies composed of a convex prior term (like TV minimization) and a (possibly non-convex) data term [8], [9]. These approaches build a multiple-layer graph, each layer corresponding to a possible gray level of the solution, and search for the minimum cut in this graph. The minimum cut gives the exact solution of the optimization problem in the discrete space (spatially discrete image and discrete gray level set). There are two main limitations to this important result. The first one is the quantization of the gray levels, which may not be easy for high dynamic range images like SAR data. It can be solved by combining a discrete optimization step and a continuous one [24]. The second limit is the memory size. Indeed, the size of the graph is the size of the image multiplied by the number of considered gray levels, and it should be stored in memory for the minimum cut computation. This size is prohibitive for remote sensing images, and cutting the image into blocks is not an acceptable solution. Recent approaches based on multi-label partition moves [25] or dichotomy [26] largely reduce the memory cost, but lose the optimality guarantee.
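The kind of discrete energy handled by these graph constructions, E(u) = Σ D(u_i, y_i) + λ Σ |u_i − u_{i+1}|, can be made concrete on a 1D chain, where it is exactly minimized by dynamic programming (our own illustrative sketch; graph cuts are what makes the same exact minimization tractable on full 2D images):

```python
import numpy as np

def exact_chain_tv(signal, labels, lam, data_cost):
    """Exact minimizer of sum_i data_cost(u_i, y_i) + lam * sum_i |u_i - u_{i+1}|
    over a discrete label set, computed by dynamic programming on a chain."""
    n, k = len(signal), len(labels)
    D = np.array([[data_cost(l, y) for l in labels] for y in signal])  # (n, k)
    pair = lam * np.abs(labels[:, None] - labels[None, :])             # (k, k)
    cost = D[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pair          # previous label -> current label
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], np.arange(k)] + D[i]
    u = np.empty(n, dtype=int)
    u[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):             # backtrack the optimal labels
        u[i - 1] = back[i, u[i]]
    return labels[u]

# A noiseless piecewise-constant signal is recovered exactly when the
# quadratic data term dominates the TV penalty.
sig = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])
labs = np.arange(8.0)
restored = exact_chain_tv(sig, labs, lam=0.5,
                          data_cost=lambda l, y: (l - y) ** 2)
```

Note that the data cost here may be any function of the label, convex or not, exactly as in the graph-cut constructions of [8], [9]; only the prior needs to be convex in the label difference.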

These models can bring interesting results for SAR imagery. The first application is the amplitude denoising of a radar image. In this case, an adapted prior can be defined. In [27], the scene is decomposed as the sum of two terms, a component with low total variation representing the “background” of the scene in a cartoon-like model, and a sparse component with few non-zero pixels representing the bright scatterers of the image. This model can be solved exactly using graph-cut optimization. Another interesting application is the joint regularization of the phase and amplitude of InSAR data [28]. In this case, it is possible to take into account the exact distribution of the M-look interferometric data for the likelihood term, and to introduce some prior knowledge preserving simultaneously phase and amplitude discontinuities. The phase and amplitude information are fortunately linked since they reflect the same scene: amplitude discontinuities usually have the same location as phase discontinuities and conversely. To combine the discontinuities, a disjunctive max operator has been used, providing well preserved fine structures [28]. Figure 6 shows an example of 3D reconstruction using a joint regularization of the interferometric phase.

These approaches can also be particularly useful for multichannel phase unwrapping [29]. Indeed, they provide a very efficient way to combine different interferometric phases in a multi-modal likelihood term, while a regularization term imposes smoothness constraints on the unwrapped phase. It is also possible to introduce atmospheric corrections in the optimization scheme in an iterative way. These approaches could provide a highly flexible framework to introduce prior knowledge in Digital Terrain Model reconstruction in multi-channel interferometry, or in ground movement monitoring in differential interferometry [30]. Figure 7 illustrates the global combination of multi-baseline interferograms with automatic atmospheric corrections using an affine model of phase variation with elevation [31].

IV. Conclusion

We have tried to illustrate in the previous sections how advanced image processing methods recently developed by the computer vision community can help SAR image processing. We have focused on three of them: distribution modeling, non-local methods, and regularization approaches with graph-cut optimization. Of course, the cited references are far from exhaustive on these different subjects, and other methods, like wavelet-based methods, would have deserved a more detailed presentation.

Another recent and powerful theory which might well have a great impact in the coming years is compressive sensing [32], [33]. This theory has shown that, in contrast with what the Shannon sampling theory suggests, only a few measurements are required to faithfully reconstruct many signals, provided the signal has a sparse representation in a suitable space (i.e., few non-zero coefficients in that representation). The reconstruction of sparse signals has a long history in the radar literature. Recent results in compressed sensing have fueled several works in the areas of compressed SAR acquisition systems [34], SAR tomography [35] and SAR GMTI data [36], to cite only a few. We refer the reader to the recent review [37] for more on this very active subject.
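The flavor of such reconstructions can be conveyed by an iterative soft-thresholding sketch (a generic l1 solver of our own, unrelated to the specific systems of [34]-[36]): a 3-sparse signal of length 100 is recovered from only 40 random linear measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for the l1-regularized least squares
    problem min_x 0.5*||Ax - y||^2 + lam*||x||_1, a standard way to
    recover a sparse signal from few linear measurements."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

# 3-sparse signal of length 100, 40 Gaussian measurements.
n, m = 100, 40
x_true = np.zeros(n)
x_true[[7, 30, 71]] = [1.5, -2.0, 1.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
```

The support of the true signal is identified, with the small amplitude bias typical of l1 penalties.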

Nevertheless, whatever the progress on low-level tasks such as denoising, it is unlikely that they will allow SAR image understanding without high-level methods. The influence of geometric configurations combined with distance sampling is predominant in the appearance of the objects in the image. Therefore, a step of object recognition highlighting the relationships between the different signals is usually necessary to fully understand SAR information. Many works have been carried out in this direction, like [38] for optical data, or [39], [40], [41] exploiting jointly SAR and optical images, or an external database. The object level that could be available with metric resolution is still difficult to reach with SAR images on their own. Dictionaries and learning methods could provide some keys for the next step of understanding.

Acknowledgments

I would like to thank Jean-Marie Nicolas for our long collaboration, and Loïc Denis and Jérôme Darbon for our more recent ones. Special thanks to all the past or current members of the SAR team of Telecom ParisTech, and particularly to the PhD students Charles Deledalle, Aymen Shabou and Hélène Sportouche, whose results have illustrated this letter. Acknowledgments also to ONERA and CNES for providing the images.

References

[1] A. Plaza. Computational issues in remote sensing data analysis. IEEE Geoscience and Remote Sensing Newsletter, (156):11–15, 2010.

[2] M. Zink. TanDEM-X: close formation achieved. IEEE Geoscience and Remote Sensing Newsletter, (157):23–25, 2010.

[3] J.-M. Nicolas. Introduction aux statistiques de deuxième espèce : applications des log-moments et des log-cumulants à l’analyse des lois d’image radar. Traitement du Signal, 19(3):139–167, 2002. English translation by S. Anfinsen: “Introduction to Second Kind Statistics: Application of Log-moments and Log-cumulants to Analysis of Radar Images”.

[4] A. Buades, B. Coll, and J.-M. Morel. Nonlocal image and movie denoising. International Journal of Computer Vision, 76(2):123–139, 2008.

[5] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):147–159, 2001.

[6] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9):1124–1137, 2004.

[7] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.

[8] H. Ishikawa. Exact optimization for Markov random fields with convex priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10):1333–1336, October 2003.

[9] J. Darbon and M. Sigelle. Image restoration with discrete constrained Total Variation part I: fast and exact optimization. Journal of Mathematical Imaging and Vision, 26(3):261–276, December 2006.

[19] S. Solbo and T. Eltoft. Homomorphic wavelet-based statistical despeckling of SAR images. IEEE Transactions on Geoscience and Remote Sensing, 42(4):711–721, 2004.

[20] T. Bianchi, F. Argenti, and L. Alparone. Segmentation-based MAP despeckling of SAR images in the undecimated wavelet domain. IEEE Transactions on Geoscience and Remote Sensing, 46(9):2728–2742, 2008.

[21] S. Parrilli, M. Poderico, C. V. Angelino, G. Scarpa, and L. Verdoliva. A non local approach for SAR image denoising. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2010.

[22] C.-A. Deledalle, F. Tupin, and L. Denis. Polarimetric SAR estimation based on non-local means. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2515–2518, 2010.

[23] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, 1987.

[24] A. Shabou, J. Darbon, and F. Tupin. A Markovian approach for InSAR phase reconstruction with mixed discrete and continuous optimization. IEEE Geoscience and Remote Sensing Letters, pages 526–530, 2010.

[25] A. Shabou, J. Darbon, and F. Tupin. A graph-cut based algorithm for approximate MRF optimization. IEEE International Conference on Image Processing (ICIP), pages 2413–2416, 2009.

[26] L. Denis, F. Tupin, J. Darbon, and M. Sigelle. SAR image regularization with fast approximate discrete minimization. IEEE Transactions on Image Processing, 18(7):1588–1600, 2009.

[27] L. Denis, F. Tupin, and X. Rondeau. Exact discrete minimization for TV+L0 image decomposition models. IEEE International Conference on Image Processing (ICIP), 2010.

[28] L. Denis, F. Tupin, J. Darbon, and M. Sigelle. Joint regularization of phase and amplitude of InSAR data: application to 3D reconstruction. IEEE Transactions on Geoscience and Remote Sensing, 47(11):3774–3785, 2009.

[29] V. Pascazio and G. Schirinzi. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters. IEEE Transactions on Image Processing, 11(12):1478–1489, 2002.

[30] A. Shabou. Multi-label MRF Energy Minimization with Graph-cuts: Application to Interferometric SAR Phase Reconstruction. PhD thesis, Telecom ParisTech, France, 2010.

[31] F. Chaabane, A. Avallone, F. Tupin, P. Briole, and H. Maître. Multitemporal correction of tropospheric effects in differential SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, May 2006.

[32] E. Candès. Compressive sampling. International Congress of Mathematicians, 3:1433–1452, 2006.

[33] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.

[34] G. Rilling, M. Davies, and B. Mulgrew. Compressed sensing based compression of SAR raw data. SPARS’09 Signal Processing with Adaptive Sparse Structured Representations, 2009.