The possibility of extracting information from remotely sensed data, and of interpreting it, depends not only on the capabilities of the sensors but also on how those data are processed to convert the raw values into image displays that improve their information content for analysis and applications. To some extent it also depends on the knowledge and skills of the human interpreter (remote sensing is an art as well as a science!). The key to a favorable outcome lies in the methods of image processing, which rely on obtaining good approximations of the spectral response curves and tying these into the spatial context of the objects and classes making up the scene.
Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. The approach is conceptually simple. The digital image is fed into a computer one pixel at a time, each pixel carrying a brightness value, or digital number (DN). The computer is programmed to insert these data into an equation, or series of equations, and then store the results of the computation for each pixel. These results form a new digital image that may be displayed or recorded in pictorial format, or may be further manipulated by additional programs.
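The pixel-by-pixel scheme described above can be sketched in a few lines of Python with NumPy. The 4×4 array of DNs and the gain/offset equation below are purely illustrative, not values from any particular sensor:

```python
import numpy as np

# A hypothetical 4x4 single-band image of digital numbers (DNs).
dn = np.array([[ 10,  40,  80, 120],
               [ 20,  60, 100, 140],
               [ 30,  70, 110, 150],
               [ 40,  80, 120, 160]], dtype=np.float64)

# The "equation" applied to every pixel: here, a simple linear
# gain/offset transformation (coefficients chosen arbitrarily).
gain, offset = 1.5, 5.0
out = gain * dn + offset

# 'out' is a new digital image with the same dimensions as the
# input; it can be displayed, stored, or processed further.
```

In practice each pixel of each band is transformed the same way, so array operations like this replace an explicit loop over pixels.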
All image processing is aimed at enhancing the contrast between different land-cover types. Enhancement, in turn, is the selective transformation of radiance data from the scanner to gray levels on film. Useful enhancements commonly include contrast stretching, destriping, and edge enhancement. Contrast stretching is the process of expanding the scanner radiance values to fit the range of the film playback device. Commonly the radiance values at the extreme low and high ends of the frequency distribution are compressed into a single minimum or maximum gray level on film, a process referred to as saturation. Such compression of the end points permits the expansion of the middle radiance values over a larger range. A commonly used contrast stretch has a 2 percent saturation, which means that one percent of the low radiance values and one percent of the high radiance values are compressed to the minimum and maximum film density levels, i.e., black and white, respectively.
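A 2 percent saturation stretch of the kind just described can be sketched as follows. This is a minimal illustration assuming an 8-bit (0–255) display range; the function name and default are our own:

```python
import numpy as np

def percent_stretch(band, saturation=2.0):
    """Linear contrast stretch with end-point saturation.

    Half of 'saturation' percent of the pixels at each tail of
    the histogram are compressed to the output minimum/maximum
    (black/white); the remaining radiance values are expanded
    linearly over the full 0-255 display range.
    """
    lo = np.percentile(band, saturation / 2)
    hi = np.percentile(band, 100 - saturation / 2)
    stretched = (band.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Illustrative band whose DNs occupy only part of the 0-255 range.
band = np.arange(50, 150).reshape(10, 10)
display = percent_stretch(band)
```

After the stretch, the lowest one percent of values map to black (0) and the highest one percent to white (255), with the middle values spread over the gray levels in between.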
All these processing and classifying activities lead to some sort of end result, or "bottom line". The purpose is to gain new information, derive applications, and make action decisions. For example, a Geographic Information System (GIS) program will utilize a variety of data that may be gathered and processed simply to answer a question like: "Where is the best place in a region of interest to locate (site) a new electric power plant?" Both machines (usually computers) and humans are customarily involved in seeking the answer.
Although the possible forms of digital image manipulation are literally infinite, from the point of view of this course we discuss only some important operations related to the identification and extraction of thematic information, namely contrast stretching and density slicing, spatial filtering, principal component analysis, band ratioing, and supervised classification (e.g., maximum likelihood classification).
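Of the operations listed above, band ratioing is easy to illustrate. A widely used ratio is the Normalized Difference Vegetation Index (NDVI), computed from the near-infrared (NIR) and red bands; the tiny reflectance arrays below are illustrative stand-ins, not real sensor data:

```python
import numpy as np

# Hypothetical 2x2 reflectance values for the NIR and red bands.
nir = np.array([[0.50, 0.60],
                [0.40, 0.70]])
red = np.array([[0.10, 0.20],
                [0.30, 0.10]])

# NDVI = (NIR - red) / (NIR + red).
# Values approaching +1 indicate vigorous vegetation; values
# near 0 suggest bare soil, rock, or built-up surfaces.
ndvi = (nir - red) / (nir + red)
```

Because the ratio cancels much of the common illumination component in the two bands, it can suppress topographic shading while enhancing spectral differences between cover types.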
This website is hosted by
Department of Geology
Aligarh Muslim University, Aligarh - 202 002 (India)