John Kerekes
Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
Rochester, New York

I. Introduction

As passive optical systems develop higher spatial and spectral resolution, the data rate coming out of the sensor increases tremendously. This puts pressure on the communications link (power, bandwidth) and on storage and distribution requirements. Often the increase in the amount of “information” inherent in the data is not commensurate with the increase in data quantity. Data compression techniques offer a way to reduce the demands on system capacity without degrading data utility.
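To make the scale of the problem concrete, a back-of-the-envelope calculation can be done in a few lines of Python. All sensor parameters below are hypothetical, chosen only to illustrate the arithmetic, not taken from any particular system:

```python
# Hypothetical sensor parameters (illustrative assumptions only):
pixels_across = 6000          # detectors across the swath
line_rate_hz = 1000           # image lines read out per second
bands = 200                   # spectral band count
bits_per_pixel = 12           # quantization depth

raw_rate_bps = pixels_across * line_rate_hz * bands * bits_per_pixel
print(f"Raw data rate: {raw_rate_bps / 1e9:.1f} Gbit/s")

# Even modest compression ratios substantially ease the downlink burden:
for ratio in (2, 5, 10):
    print(f"Ratio {ratio:2d}: {raw_rate_bps / ratio / 1e9:.2f} Gbit/s")
```

For these assumed parameters the raw rate is 14.4 Gbit/s, which shows how quickly spatial and spectral resolution multiply into a communications and storage burden.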

II. Technology Status

The compression of remote sensing imagery involves a variety of system issues [1] including the scene, sensor and resulting data characteristics, the processing that is applied and the ultimate user application requirements. The best choice and impact of various compression approaches can be very dependent upon these system issues. In general, there are two approaches to image data compression, lossless and lossy, and the technology of compression can best be discussed in the context of these two approaches.

Lossless compression takes advantage of redundancy in the scene and, through efficient coding, allows exact reconstruction of the original data. Achievable reduction ratios are approximately 1.5 to 2 for single band images, but can be as high as 3 or 4 for multiband images when band-to-band correlation is present [2]. One technique predicts each pixel from its neighbors (spatial and/or spectral) together with a model of the differences, then entropy encodes/decodes the prediction residuals using Huffman, Rice or arithmetic codes. Compression occurs in these techniques by developing a "codebook" which assigns codes of short bit length to frequently appearing image grayscale levels and longer code words to infrequent levels. Thus, images with high spatial uniformity (small differences from pixel to pixel) can be highly compressed with no loss of information. However, these schemes can be sensitive to bit errors in the communications link and require a certain amount of overhead to ensure no loss of data.
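The predict-then-entropy-code idea above can be sketched in a short Python example: a previous-pixel predictor is applied to a smooth scan line, and Huffman code lengths are built for the residuals. The sample data and the helper `huffman_code_lengths` are illustrative assumptions, not any particular standard:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} via Huffman's algorithm (minimal sketch)."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreak, {symbol: current code length}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees deepens every leaf in them by one bit.
        merged = {s: length + 1 for s, length in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# A spatially uniform scan line: small pixel-to-pixel differences.
row = [100, 101, 101, 102, 103, 103, 104, 104, 105, 106]
# Previous-pixel predictor: residual = pixel - previous pixel.
residuals = [row[i] - row[i - 1] for i in range(1, len(row))]
lengths = huffman_code_lengths(residuals)
coded_bits = sum(lengths[r] for r in residuals)
raw_bits = 8 * len(residuals)  # assuming 8-bit pixels
print(f"raw {raw_bits} bits -> coded {coded_bits} bits")
```

Because the differences take only two distinct values here, each residual codes in a single bit, illustrating why high spatial uniformity compresses so well losslessly.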

Lossy compression techniques allow one to trade off compression ratio against image quality. Usable lossy compression ratios range from 4 or 5 with minimal image degradation to 40 or 50 with image quality that may still be adequate for qualitative interpretation. There are several approaches to data reduction [3]. Scalar or vector quantizers use discrete codebooks similar to lossless compression techniques, but achieve compression by limiting the number of possible codes. Model-based approaches such as fractal techniques represent the data with an algorithm and a minimal number of coefficients, and work best with images that have self-similarity at several scales. Transform techniques achieve compression by truncating and quantizing the coefficients and basis functions used to represent the image. Several studies have shown that for multispectral images a Karhunen-Loeve (or Principal Components) Transform (KLT) to spectrally decorrelate the bands, followed by a 2-D Discrete Cosine Transform (DCT) or Wavelet Transform to spatially compress the imagery, achieves the lowest information degradation among lossy compression techniques according to several metrics. However, the KLT requires the computation of the eigenvectors of the spectral covariance matrix of the scene, and unless these can be precomputed, this burden may make the approach infeasible for many applications. For single band images the DCT, which forms the basis for the popular JPEG image compression technique, appears to be the best choice for many applications.
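The KLT step can be sketched with NumPy: form the spectral covariance of a multiband image, take its eigenvectors as the transform basis, and keep only the leading components. The synthetic four-band image below is a contrived example with strong band-to-band correlation, assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-band image: one underlying scene scaled per band, plus noise.
base = rng.normal(size=(32, 32))
bands = np.stack([base * g + rng.normal(scale=0.05, size=(32, 32))
                  for g in (1.0, 0.9, 0.8, 0.7)], axis=-1)

pixels = bands.reshape(-1, 4)              # one row per pixel, one column per band
mean = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)         # spectral covariance matrix (4 x 4)
eigvals, eigvecs = np.linalg.eigh(cov)     # KLT basis = its eigenvectors
order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
eigvecs = eigvecs[:, order]

# Keep only the first principal component, then reconstruct:
k = 1
coeffs = (pixels - mean) @ eigvecs[:, :k]
recon = coeffs @ eigvecs[:, :k].T + mean
mse = np.mean((recon - pixels) ** 2)
print(f"MSE keeping {k}/4 spectral components: {mse:.4f}")
```

With the bands this strongly correlated, a single component retains nearly all of the variance; in practice the retained components would then be compressed spatially, e.g. with a 2-D DCT.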

It is important to emphasize that the choice of compression technique and ratio, and the resulting impact on the data, is very dependent upon the system characteristics. Features in the scene, atmospheric effects, sensor noise, scanning vs. staring sensors, spectral band misregistration, spectrally varying spatial response and the ultimate application all can affect compression issues. Also, most compression schemes require that statistical or other model parameters be computed from the data, or at least from similar data, and the speed of computation, accuracy and robustness of these parameters are important considerations.

III. Application Areas

Data compression can be applied in almost all remote sensing applications, but is of most use in systems that have very high spatial resolution and extremely high data rates, and whose imagery is to be used qualitatively. It has been fairly easy to show that typical remote sensing imagery can be compressed by factors of 5 to 10 with no visible degradation in image quality. Lossy compression schemes will be especially applicable to the high resolution (1 to 3 meter ground sample distance) imagery planned for commercial remote sensing systems of the late 1990s. In fact, data compression is one of the enabling technologies for such systems; without it the data rates would place extreme burdens on recording capacity, communication system power and antenna sizes. However, the utility of a compression scheme will vary with the type of data and the application. Accurate measurement of sea surface temperature from infrared data, for example, could place requirements on data accuracy that limit one to lossless compression approaches.

IV. User Requirements

In general, the user requirement concerning data compression is that no degradation of the information content of the data occur as a result of the compression. The metric that is used to determine this can be very application specific. Popular metrics range from subjective ones such as visual comparisons and interpretations, to content-independent ones such as mean-squared error or signal-to-noise ratio, and even to measures that consider machine processing and the final application of the data. Examples of these application metrics include accuracy in pixel classification (supervised or unsupervised/clustering), edge detection (region segmentation), scene parameter measurement (sea surface temperature, vegetation index, water vapor) and other nonliteral processing. These issues were considered in [4] in the context of environmental and crop monitoring where radiometric, spectral and spatial distortions are important effects. As a bound, however, compression schemes that introduce distortion less than the inherent noise of the image should be acceptable to all users.
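As a minimal example of the content-independent metrics mentioned above, the following Python sketch computes mean-squared error and the related peak signal-to-noise ratio between an original image and a simulated compressed version. Both the image and the distortion are fabricated for illustration:

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64))
# Simulate lossy-compression distortion as a small quantization error:
compressed = np.clip(original + rng.integers(-2, 3, size=original.shape), 0, 255)

print(f"MSE  = {mse(original, compressed):.2f}")
print(f"PSNR = {psnr(original, compressed):.1f} dB")
```

A metric such as PSNR is easy to compute but content-independent; whether a given PSNR is acceptable still depends on the application-specific measures (classification accuracy, parameter retrieval) discussed above.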

V. Recommendations for Further Activity

An area of emphasis in this topic should be the development of metrics for determining the effect of various compression schemes on the information content of the data. Many good compression algorithms exist, but the question that most often arises is how they affect the quality of the data for the end user. As mentioned above, this question is application specific, but without some type of standard metric the selection of a compression algorithm becomes ad hoc. Other aspects which should receive attention are the computational techniques and memory required to apply the compression algorithm. Compression schemes which can be applied to the data as collected should be encouraged.


[1] Vaughn, V.D. and T.S. Wilkinson, “System Considerations for Multispectral Image Compression Designs,” IEEE Signal Processing Magazine, January 1995, pp. 19-31.

[2] Tate, S.R., “Band Ordering in Lossless Compression of Multispectral Images,” Proceedings of the 1994 Data Compression Conference, IEEE Computer Society Press, pp. 311-320.

[3] Conference Record of The 27th Asilomar Conference on Signals, Systems & Computers, edited by A. Singh, Vol. 2, IEEE Computer Society Press, November, 1993.

[4] Lurie, J.B., B.W. Evans, B. Ringer, and M. Yeates, "Image Quality Measures to Assess Hyperspectral Compression Techniques," Proceedings of the Conference on Microwave Instrumentation and Satellite Photogrammetry for Remote Sensing of the Earth, SPIE Vol. 2313, 1994, pp. 2-14.