Image Interpretation and Analysis
Images of the Earth can be acquired by cameras mounted on aircraft or by sensors on satellites in space. Images from aircraft are called aerial photographs; images from satellites are called satellite images. All images you see in Google Earth are taken from satellites. Images are fundamentally different from maps: maps are representational drawings of Earth features that have been labelled, while images are actual, unlabelled pictures of the Earth. Moreover, features in images are not always visible; roads, for instance, may be concealed beneath trees planted alongside them, and must be interpreted. You must interpret what you see on an image because it is not labelled for you.
Image interpretation is therefore defined as the act of examining images for the purpose of (1) identifying objects and (2) judging their significance. Interpreters study remotely sensed data and attempt through logical processes to detect, identify, measure, and evaluate the significance of objects, their patterns and spatial relationships, and environmental and cultural features.
Inspecting individual bands of a multispectral dataset
The tone of a single band displayed on the monitor is similar to a black-and-white photograph, ranging from white through shades of gray to black. Complete reflection of light by a feature gives a white tone, while complete absorption gives a black tone.
Interpretation of objects based on examination of a single band has inherent drawbacks and should not form the sole basis of identification. However, such an examination can help one understand the reflectance characteristics of various land cover features in different wavelength regions or spectral bands.
The tone of all objects depends upon:
a) The light reflected by the object in the spectral region of the band under examination
b) The light absorbed, scattered or transmitted by the object in the spectral region of the band under examination
c) The radiometric scale
d) The intensity of incident light (sun angle and azimuth)
Landsat Band Information
Landsat images are composed of seven different bands, each representing a different portion of the electromagnetic spectrum. In order to work with Landsat band combinations (RGB composites of three bands) we must first understand the reflectance characteristics of objects in each band.
Here is a list of the Thematic Mapper bands with the portion of the spectrum each records:
Band 1 – Blue (0.45–0.52 µm)
Band 2 – Green (0.52–0.60 µm)
Band 3 – Red (0.63–0.69 µm)
Band 4 – Near infrared (0.76–0.90 µm)
Band 5 – Shortwave infrared (1.55–1.75 µm)
Band 6 – Thermal infrared (10.40–12.50 µm)
Band 7 – Shortwave infrared (2.08–2.35 µm)
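A band combination is built by assigning one band each to the red, green and blue channels of the display. The following is a minimal sketch in Python with NumPy; the digital numbers and the choice of bands 4, 3, 2 (a common false-colour composite in which vegetation appears red) are illustrative assumptions, not values from any real scene.

```python
import numpy as np

def make_composite(red_band, green_band, blue_band):
    """Stack three single-band arrays into an RGB composite.

    Each band is linearly rescaled to 0-255 independently, so that
    brightness differences between bands do not dominate the display.
    """
    def rescale(band):
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        if hi == lo:                      # flat band: avoid divide-by-zero
            return np.zeros_like(band, dtype=np.uint8)
        return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

    return np.dstack([rescale(red_band), rescale(green_band), rescale(blue_band)])

# Hypothetical 2x2 digital numbers for Landsat bands 4, 3 and 2.
b4 = np.array([[200, 180], [40, 50]])   # near infrared
b3 = np.array([[30, 35], [20, 25]])     # red
b2 = np.array([[25, 30], [18, 22]])     # green

composite = make_composite(b4, b3, b2)
print(composite.shape)  # (2, 2, 3): rows, columns, RGB channels
```

Displaying `composite` with any image viewer would show the two NIR-bright pixels (vegetation-like) in red.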
Spectral Characteristics of Common Land Cover Features
There are several clues you can use to help you interpret images. Sometimes you can use the clues separately but most of the time you have to use more than one clue to figure out what you are seeing.
The first thing you should do with an image is to determine its scale, either actual or approximate. Actual scale can be determined by looking for the pixel dimensions in the metadata file that came with the image, or by measuring objects whose dimensions you know (cars, road widths, etc.); approximate scale can be found by simply comparing objects, some of whose sizes you have a general idea of. Given below are descriptions of the clues that are used to interpret imagery.
1. Shape: The shape of an object as it is viewed from above will be imaged by a sensor on board a satellite. Therefore the shape from a vertical view should be visualized. For example, the crown of a coniferous tree looks like a circle, while that of a deciduous tree has an irregular shape. Airports, harbors, factories etc., can also be identified by their shape. The general outline of objects can help you determine what they are. Some objects have very distinct shapes while others are more difficult to distinguish. Man-made features tend to have straight edges while natural features do not.
It is easy to identify the thin dark line in this image as a river because it does not follow a straight line path.
This straight feature, by contrast, is a man-made canal.
2. Size: A proper image scale should be determined before beginning to interpret the image. The ground distance covered by the image can be found by multiplying the number of pixels along a row by the pixel size specified in the metadata; comparing this with the corresponding distance on the displayed image gives the scale. The approximate size of an object can then be measured by multiplying its length on the image by the inverse of the scale. The size of an object can help you interpret what it is. In this example, there is a hall of residence in a university located next to private houses.
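The scale arithmetic above can be checked with a small worked example. All numbers here are hypothetical: a 30 m pixel image displayed so that one pixel occupies 1 mm.

```python
# Hypothetical example: 30 m pixels, displayed at 1 mm per pixel.
pixel_ground_size_m = 30.0        # pixel size from the metadata
pixel_image_size_m = 0.001        # 1 mm on the displayed image

# Scale = distance on the image / distance on the ground
scale = pixel_image_size_m / pixel_ground_size_m
print(f"scale 1:{pixel_ground_size_m / pixel_image_size_m:.0f}")  # scale 1:30000

# Ground size of an object = length measured on the image x inverse of the scale
measured_on_image_m = 0.004       # the object spans 4 mm on the image
ground_size_m = measured_on_image_m * (1 / scale)
print(ground_size_m)  # 120.0 m: plausibly a hall of residence, not a house
```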
Can you tell which is the hall of residence and which are the houses?
3. Pattern: Pattern is a regular, usually repeated shape representing a cluster of objects of a kind. For example, rows of houses or apartments, regularly spaced rice fields, interchanges of highways, orchards etc., can provide information from their unique patterns. Certain objects have a distinct pattern. Man-made features, such as cities, tend to have very regular patterns, while natural features do not.
This is an image of the regular street pattern in Aligarh.
And this one shows a regular pattern of agricultural fields around a village.
4. Tone: In the case of single-band data, the continuous gray scale varying from white to black is called tone. In a panchromatic band, each object shows its own tone according to its reflectance in the visible region. For example, dry sand has a high reflectance and therefore appears white, while wet sand reflects less and appears dark. In near-infrared bands, water appears dark and healthy vegetation white to light gray.
5. Color: Color is more convenient for the identification of object details. For example, vegetation types and species can be more easily interpreted by less experienced interpreters using color information. Sometimes false color images will give more specific information, depending on the spectral band and the object being imaged. For example, this true-colour image shows many different types of crops in agricultural fields. You can see this from the many different shades of green.
6. Shadow: Shadow is usually a visual obstacle for image interpretation. However, shadow can also give height information about towers, tall buildings etc., as well as shape information from the non-vertical perspective, such as the shape of a bridge. This is an image of buildings on the AMU campus. Shorter structures and trees cast smaller shadows, while taller ones cast longer shadows.
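The height information in a shadow follows from simple trigonometry: the structure, its shadow and the sun ray form a right triangle, so height = shadow length × tan(sun elevation). A sketch with hypothetical values (the shadow length would be measured from the image, the sun elevation taken from the acquisition metadata):

```python
import math

# Hypothetical measurements.
shadow_length_m = 25.0     # measured on the geo-referenced image
sun_elevation_deg = 40.0   # from the image acquisition metadata

# height = shadow length x tan(sun elevation angle)
height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
print(round(height_m, 1))  # 21.0 (metres)
```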
7. Association or Context: A specific combination of elements, geographic characteristics, configuration of the surroundings or the context of an object can provide the user with specific information for image interpretation. Sometimes you can identify an object by what is surrounding it, or what it is associated with. In the pictures below, can you tell which image is a mountain lake and which one is a high desert lake? What are the features around the lake that helped you make that decision?
One of the most important things you need to know when working with satellite imagery on a computer is spatial resolution. This refers to how much detail of the terrain you can see. For example, when you are zoomed out in Google Earth (looking at large areas with less detail), you are looking at Landsat Thematic Mapper satellite imagery that has a spatial resolution of 30 meters. As you zoom in, the satellite imagery changes to a very high spatial resolution imagery of about 1 meter, sometimes less. Google acquires these images from different sources. Note that not all areas in Google Earth have high spatial resolution imagery.
Image analysis refers to the extraction of meaningful information from digital images, mainly by means of digital image processing techniques. Image analysis tasks range from as simple as reading the metadata to as sophisticated as identifying a person from their face. Computer processing of image data is indispensable for image analysis, and can involve complex computation for the extraction of quantitative information. Image analysis largely includes the fields of computer (machine) vision and medical imaging, and makes heavy use of pattern recognition, digital geometry, and signal processing.
In order to take advantage of and make good use of digital data, we must be able to extract meaningful information from the imagery. Interpretation and analysis of digital imagery involves the identification and/or measurement of various targets in an image in order to extract useful information about them. Targets in digital images may be any feature or object which can be observed in an image, and have the following characteristics:
· Targets may be a point, line, or area feature. This means that they can have any form, from a bus in a parking lot or plane on a runway, to a bridge or roadway, to a large expanse of water or a field.
· The target must be distinguishable, that is, it must contrast with other features around it in the image.
Much interpretation and identification of targets in digital imagery is performed manually or visually, i.e. by a human interpreter. In many cases this is done using imagery displayed in a pictorial or photograph-like format, independent of what type of sensor was used to collect the data and how the data were collected; in this case we refer to the data as being in analog format. Digital images are represented in a computer as arrays of pixels, each pixel holding a digital number that represents its brightness level in the image. Digital imagery can be displayed as black-and-white (also called monochrome) images, or as colour images by combining different channels or bands representing different wavelengths.
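The pixel-array representation can be made concrete with a few lines of Python using NumPy; the digital numbers below are made up for illustration.

```python
import numpy as np

# A digital image is stored as an array of pixels, each holding a
# digital number (DN). For 8-bit data the DNs run from 0 (black)
# through shades of gray to 255 (white).
band = np.array([[0, 64],
                 [128, 255]], dtype=np.uint8)

print(band.shape)       # (2, 2): two rows and two columns of pixels
print(int(band[0, 0]))  # 0: this pixel would display as black
print(int(band[1, 1]))  # 255: this pixel would display as white
```

Displaying this single band gives a monochrome image; assigning three such bands to the red, green and blue channels gives a colour image.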
Digital processing may be used to enhance data (as described below) as a prelude to visual interpretation. Digital processing and analysis may also be carried out to automatically identify targets and extract information completely without manual intervention by a human interpreter. However, rarely is digital processing and analysis carried out as a complete replacement for manual interpretation. Often, it is done to supplement and assist the human analyst.
Common Procedures Adopted in Image Interpretation and Analysis
A logical prerequisite to image interpretation is image processing, which consists of the following: pre-processing of the raw data, image enhancement, image classification, and spatial feature extraction.
Initial processing on the raw data is usually carried out to correct for any distortion due to the characteristics of the imaging system and imaging conditions. Depending on the user's requirement, some standard correction procedures may be carried out by the ground station operators before the data is delivered to the end-user. These procedures include radiometric correction to correct for uneven sensor response over the whole image, and geometric correction to correct for geometric distortion due to Earth's rotation and other imaging conditions (such as oblique viewing). The image may also be transformed to conform to a specific map projection system. Furthermore, if the accurate geographical location of an area on the image needs to be known, ground control points (GCPs) are used to register the image to a precise map (geo-referencing).
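Geo-referencing with GCPs can be sketched as a least-squares fit of an affine transform that maps image (column, row) coordinates to map (x, y) coordinates. The GCP coordinates below are hypothetical, and real geo-referencing software typically uses more GCPs and higher-order transforms; this is only a minimal illustration in Python with NumPy.

```python
import numpy as np

# Hypothetical GCPs: image (col, row) paired with map (x, y) coordinates.
image_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
map_pts = np.array([[500000, 3100000],
                    [503000, 3100000],
                    [500000, 3097000],
                    [503000, 3097000]], dtype=float)

# Solve [col, row, 1] @ A = [x, y] in the least-squares sense.
design = np.hstack([image_pts, np.ones((len(image_pts), 1))])
coeffs, *_ = np.linalg.lstsq(design, map_pts, rcond=None)

def image_to_map(col, row):
    """Apply the fitted affine transform to one image coordinate."""
    return np.array([col, row, 1.0]) @ coeffs

print(image_to_map(50, 50))  # [ 501500. 3098500.]: the centre of the extent
```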
In order to aid visual interpretation, visual appearance of the objects in the image can be improved by image enhancement techniques such as grey level stretching to improve the contrast and spatial filtering for enhancing the edges.
Image enhancement is basically improving the interpretability or perception of information in images for human viewers, and providing 'better' input for other automated image processing techniques. The principal objective of image enhancement is to modify attributes of an image to make it more suitable for a given task and a specific observer. During this process, one or more attributes of the image are modified. The choice of attributes and the way they are modified are specific to a given task. Moreover, observer-specific factors, such as the human visual system and the observer's experience, will introduce a great deal of subjectivity into the choice of image enhancement methods. There exist many techniques that can enhance a digital image without spoiling it.
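Grey level stretching, mentioned above as a contrast-improving technique, can be sketched in a few lines of Python with NumPy. The percentile cut-offs and the sample digital numbers are illustrative assumptions.

```python
import numpy as np

def percent_stretch(band, lower=2, upper=98):
    """Linear grey level stretch between two percentiles of the histogram.

    DNs at or below the lower percentile map to 0, those at or above
    the upper percentile map to 255, and values in between are spread
    linearly over the full display range.
    """
    lo, hi = np.percentile(band, [lower, upper])
    stretched = (band.astype(float) - lo) / (hi - lo)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

# A low-contrast band whose DNs occupy only a narrow range (100-130).
band = np.array([[100, 110], [120, 130]], dtype=np.uint8)
out = percent_stretch(band)
print(out.min(), out.max())  # 0 255: the stretched band uses the full range
```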
Different land cover types in an image can be discriminated using some image classification algorithms using spectral features, i.e. the brightness and "colour" information contained in each pixel. The classification procedures can be "supervised" or "unsupervised".
In supervised classification, the spectral features of some areas of known landcover types are extracted from the image. These areas are known as the "training areas". Every pixel in the whole image is then classified as belonging to one of the classes depending on how close its spectral features are to the spectral features of the training areas.
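One simple version of this idea is the minimum-distance rule: assign each pixel to the class whose training-area mean is spectrally closest. The class means and pixel values below are hypothetical (red, near-infrared) DN pairs, purely for illustration.

```python
import numpy as np

# Hypothetical class means derived from training areas, as (red, NIR) DNs.
class_means = {
    "water":      np.array([30.0, 10.0]),   # dark in both bands
    "vegetation": np.array([25.0, 180.0]),  # bright in the near infrared
    "bare soil":  np.array([120.0, 130.0]),
}

def classify(pixel):
    """Assign the pixel to the class whose mean is spectrally closest."""
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(classify(np.array([28.0, 170.0])))  # vegetation
print(classify(np.array([32.0, 12.0])))   # water
```

Real classifiers (maximum likelihood, support vector machines, etc.) use more sophisticated decision rules, but the structure, training statistics followed by a per-pixel decision, is the same.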
In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters, depending on their spectral features. Each cluster will then be assigned a landcover type by the analyst.
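The best-known clustering algorithm for this purpose is k-means, sketched below in plain NumPy with hypothetical pixel spectra and a simple deterministic initialisation (operational tools use more robust variants such as ISODATA).

```python
import numpy as np

def kmeans(pixels, k, iterations=20):
    """Group pixels into k spectral clusters. The cluster labels carry
    no landcover meaning until the analyst assigns one to each cluster."""
    centres = pixels[:k].astype(float).copy()   # simple deterministic start
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster centre...
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centre to the mean of the pixels assigned to it.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two clearly separable spectral groups (hypothetical red/NIR DN pairs).
pixels = np.array([[30.0, 10], [32, 12], [28, 9],      # dark (water-like)
                   [25, 180], [27, 175], [24, 185]])   # bright NIR (vegetation-like)
labels = kmeans(pixels, k=2)
print(labels)  # the first three pixels share one label, the last three the other
```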
Each class of landcover is referred to as a "theme", and the product of classification is known as a "thematic map".
Spatial Feature Extraction:
In high spatial resolution imagery, details such as buildings and roads can be seen. The amount of detail depends on the image resolution. In very high resolution imagery, even road markings, vehicles, individual tree crowns, and groups of people can be seen clearly. Pixel-based methods of image analysis will not work successfully in such imagery. In order to fully exploit the spatial information contained in such imagery, algorithms utilizing the textural, contextual and geometrical properties are required. Such algorithms make use of the relationship between neighbouring pixels for information extraction. Incorporation of a priori information is sometimes useful. A multi-resolution approach (i.e. analysis at different spatial scales and combining the results) is also a useful strategy when dealing with very high resolution imagery. In this case, pixel-based methods can be used for the lower resolution imagery and merged with the results of contextual and textural methods applied to the higher resolution imagery.
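One of the simplest textural properties such algorithms use is the local variance of digital numbers in a moving window: smooth surfaces (water, roads) give low values, while heterogeneous surfaces (tree crowns, built-up areas) give high values. A minimal sketch in Python with NumPy, using synthetic arrays in place of real imagery:

```python
import numpy as np

def local_variance(band, size=3):
    """Texture measure: variance of DNs in a size x size moving window."""
    half = size // 2
    padded = np.pad(band.astype(float), half, mode="edge")
    out = np.empty(band.shape, dtype=float)
    rows, cols = band.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + size, j:j + size]
            out[i, j] = window.var()
    return out

smooth = np.full((5, 5), 100.0)                      # uniform, water-like
rough = np.indices((5, 5)).sum(axis=0) % 2 * 100.0   # checkerboard, high texture

print(local_variance(smooth).max())      # 0.0: no texture anywhere
print(local_variance(rough).min() > 0)   # True: texture everywhere
```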
This website is hosted by
Department of Geology
Aligarh Muslim University, Aligarh - 202 002 (India)