Another enhancement procedure that often yields valuable information of a different nature is spatial filtering. Although less commonly performed, this technique explores the distribution of pixels of varying brightness over an image and is especially effective at detecting and sharpening boundary discontinuities between adjacent sets of pixels with notably different DN values. Sharp changes in scene illumination, which are typically abrupt rather than gradual, produce patterns that we can express quantitatively as "spatial frequencies". Spatial frequency is defined as the number of cycles of change in image DN values per unit distance (e.g., 10 cycles/mm) along a particular direction in the image. An image with only one spatial frequency consists of equally spaced stripes (in a TV monitor, raster lines). For instance, a blank TV screen with the set turned on can show horizontal stripes. This situation corresponds to zero frequency in the horizontal direction and a measurable spatial frequency in the vertical, evidenced by the repeating line boundaries.
In general, images of practical interest contain several dominant spatial frequencies. Fine detail in an image involves a larger number of changes per unit distance than the gross image features. The mathematical technique for separating an image into its various spatial frequency components is called Fourier Analysis. After an image is separated into its components (by applying a "Fourier Transform"), it is possible to emphasize certain groups (or "bands") of frequencies relative to others and recombine the spatial frequencies into an enhanced image. Algorithms for this purpose are called "filters" because they suppress (de-emphasize) certain frequencies and pass (emphasize) others. Filters that pass high frequencies, and hence emphasize fine detail and edges, are called highpass filters. Lowpass filters, which suppress high frequencies, are useful for smoothing an image and may reduce or eliminate "salt and pepper" noise.
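The transform-filter-recombine sequence can be sketched in a few lines of numpy. This is a minimal illustration, not the tutorial's own processing chain: the function name, the circular cutoff, and the striped test pattern are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def fourier_lowpass(image, cutoff):
    """Sketch of frequency-domain lowpass filtering: transform, zero out
    spatial frequencies beyond `cutoff` samples from center, recombine."""
    F = np.fft.fftshift(np.fft.fft2(image))     # separate into frequency components
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    # distance of each frequency sample from the zero-frequency center
    dist = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
    F[dist > cutoff] = 0                        # suppress the high-frequency band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# A one-spatial-frequency test image: vertical stripes of period 4 pixels
stripes = np.tile([0.0, 0.0, 1.0, 1.0], (8, 16))   # shape (8, 64)
smoothed = fourier_lowpass(stripes, cutoff=4)
```

Because the stripe pattern's only nonzero frequencies lie well beyond the cutoff, the lowpass result is essentially a uniform image at the mean brightness; a highpass version would instead keep exactly the stripe components and discard the mean.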
Convolution filtering is a common mathematical method of implementing spatial filters. In the simplest case, each pixel value is replaced by the average over a square area centered on that pixel. Square sizes are typically 3 x 3, 5 x 5, or 9 x 9 pixels, but other sizes are acceptable. This square array is called a moving window, or kernel, because in the computer manipulation of an image the array can be envisioned as moving systematically from one pixel to the next. Applied as a lowpass filter, this averaging reduces deviations from local means and thus smooths the image. The difference between the input image and the lowpass image is the highpass-filtered output. Generally, spatially filtered images must be contrast stretched to use the full range of the image display. Even so, filtered images tend to appear flat unless strong repeating frequency patterns are brought out.
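The moving-window average and the lowpass/highpass relationship described above can be sketched as follows. This is a deliberately plain numpy implementation, assuming edge padding at the borders; the function name and the small test image are illustrative, not from the tutorial.

```python
import numpy as np

def mean_filter(image, size=3):
    """Lowpass convolution: replace each pixel by the mean of a
    size x size moving window centered on it (edges padded by repetition)."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    rows, cols = image.shape
    for i in range(rows):          # the kernel "moves" pixel by pixel
        for j in range(cols):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A toy scene with one sharp tonal boundary (DN 10 against DN 80)
image = np.array([[10, 10, 10, 80],
                  [10, 10, 10, 80],
                  [10, 10, 10, 80],
                  [10, 10, 10, 80]], dtype=float)
low = mean_filter(image, size=3)
high = image - low     # highpass output = input minus the lowpass image
```

Note how `high` is near zero in the uniform regions and large only at the boundary column, which is exactly why highpass output emphasizes edges.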
Next, we will apply three types of filters to TM Band 2 from Morro Bay. The first that we display is a lowpass (mean) filter product, which tends to generalize the image:
An edge enhancement filter highlights abrupt discontinuities, such as rock joints and faults, field boundaries, and street patterns:
In this example, the scene has been radically modified. The Sobel edge-enhancement algorithm finds an overabundance of discontinuities, but we chose this program (run in Idrisi) to emphasize the sharp boundaries that can result from applying this mode of enhancement.
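The Sobel operator itself is just a pair of 3 x 3 convolution kernels, one sensitive to horizontal change and one to vertical, whose results are combined into an edge-strength magnitude. The sketch below is a generic numpy version under the same edge-padding assumption as before, not Idrisi's implementation:

```python
import numpy as np

def sobel_magnitude(image):
    """Edge strength from the two standard Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # responds to left-right change
    ky = kx.T                                  # responds to up-down change
    pad = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = pad[i:i + 3, j:j + 3]
            gx[i, j] = (window * kx).sum()
            gy[i, j] = (window * ky).sum()
    return np.hypot(gx, gy)                    # gradient magnitude

# A step edge: dark left half, bright right half
step = np.zeros((5, 6))
step[:, 3:] = 100.0
edges = sobel_magnitude(step)
```

The output is zero over the flat regions and peaks along the step, which is why a Sobel product turns every tonal discontinuity in a real scene into a bright line.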
The highpass filter image for Morro Bay also brings out boundaries, but more subdued than above:
Here, streets and highways, and some streams and ridges, are greatly emphasized. The trademark of a highpass filter image is that linear features commonly appear as bright lines with a dark border. Details in the water are mostly lost. Much of the image is flat.
1-13: Comment further (evaluate) the three filter images shown above in terms of what information you extract visually. Include detrimental aspects. ANSWER
You will note in the answer to this question that the IDRISI filter products leave something to be desired: they are not fully convincing that spatial filtering produces helpfully different imagery. (Many of the linear patterns are spurious, i.e., they are artifacts of processing rather than real features.) To address that criticism, we are adding five images extracted from the textbook by Avery and Berlin (Macmillan Publ.; see the Overview for the reference). The Landsat TM 5 subscene shows a plateau landscape north of Flagstaff, AZ, with both sedimentary rocks and volcanic flows. The first image is simply a normal contrast-stretched version.
The next view shows this scene as a lowpass-filtered image.
This is followed by a highpass filter image in which the convolution matrix is 11 x 11 pixels. It is made by subtracting the lowpass filter image from the original (unstretched) data set. In this instance, the image is a form of edge enhancement.
As the number of pixels in the convolution window increases, the high-frequency components become more sharply defined. That is evident in this image, which uses a 51 x 51 pixel matrix.
The last image in this set illustrates directional first differencing, analogous to taking the first derivative in calculus: it establishes a gradient of spatial change as the filter, with its special algorithm, moves computationally across the pixel array making up the image. Directional filtering along different directions tends to enhance or disclose linear features that lie preferentially near the perpendicular to the traverse direction. In the image below, the array was moved diagonally across the scene. Note the similarity to the third image above.
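Directional first differencing reduces to subtracting from each pixel its neighbor offset along the traverse direction. A minimal numpy sketch, assuming positive offsets and a diagonal traverse (the function name and ramp test image are illustrative):

```python
import numpy as np

def directional_difference(image, di=1, dj=1):
    """First difference along direction (di, dj): each pixel minus its
    neighbor offset by (di, dj); a crude directional gradient."""
    # np.roll brings image[i - di, j - dj] to position (i, j)
    shifted = np.roll(np.roll(image.astype(float), di, axis=0), dj, axis=1)
    diff = image.astype(float) - shifted
    # rows/columns filled by wraparound are meaningless; zero them out
    diff[:di, :] = 0.0
    diff[:, :dj] = 0.0
    return diff

# On a smooth brightness ramp, the diagonal difference is constant:
ramp = np.arange(25, dtype=float).reshape(5, 5)
diff = directional_difference(ramp, di=1, dj=1)
```

On real imagery, features perpendicular to the traverse direction produce large differences and are enhanced, while features parallel to it contribute little, which is the directional selectivity described above.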