So far, we have considered mainly the nature and characteristics of EM radiation in terms of sources and behavior when interacting with materials and objects. It was stated that the bulk of the radiation sensed is either reflected or emitted from the target, generally through air, until it is monitored by a sensor. The subject of what sensors consist of and how they perform (operate) is important and wide ranging. It is also far too involved to receive an extended treatment in this Tutorial. However, a synopsis of some of the basics is warranted on this page. Some useful links to reviews of sensors and their applications are included in this NASA site. A more specific treatment is given on the CNES (French) Remote Sensing website. (We point out here that many readers of this Tutorial are now using a sophisticated sensor that uses some of the technology described below: the Digital Camera; more is said about this everyday sensor near the bottom of the page.)
Most remote sensing instruments (sensors) are designed to measure photons. The fundamental principle underlying sensor operation centers on what happens in a critical component - the detector. This is the concept of the photoelectric effect (for which Albert Einstein, who first explained it in detail, won his Nobel Prize [not for Relativity, which was a much greater achievement]; his discovery was, however, a key step in the development of quantum physics). This, simply stated, says that there will be an emission of negative particles (electrons) when a negatively charged plate of some appropriate light-sensitive material is subjected to a beam of photons. The electrons can then be made to flow as a current from the plate, are collected, and are then counted as a signal. A key point: the magnitude of the electric current produced (number of photoelectrons per unit time) is directly proportional to the light intensity. Thus, changes in the electric current can be used to measure changes in the photons (numbers; intensity) that strike the plate (detector) during a given time interval. The kinetic energy of the released photoelectrons varies with the frequency (or wavelength) of the impinging radiation. But different materials undergo photoelectric release of electrons over different wavelength intervals; each has a threshold wavelength at which the phenomenon begins and a longer wavelength at which it ceases.
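The threshold idea can be illustrated with a short calculation (a sketch, not part of the original Tutorial): a photon releases an electron only if its energy h·c/λ exceeds the material's work function. The 2.0 eV work function below is a hypothetical value chosen purely for illustration.

```python
# Sketch of the photoelectric threshold. A detector material emits
# photoelectrons only when the photon energy E = h*c / wavelength
# exceeds the material's work function. The 2.0 eV work function
# used below is a hypothetical example, not a real material's value.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, in electron-volts."""
    return H * C / wavelength_m / EV

def emits_photoelectrons(wavelength_m, work_function_ev):
    """True if light of this wavelength can release electrons."""
    return photon_energy_ev(wavelength_m) > work_function_ev

# A 2.0 eV work function implies a threshold wavelength of
# h*c / (2.0 eV), about 620 nm: shorter wavelengths release
# electrons, longer ones do not.
assert emits_photoelectrons(500e-9, 2.0)      # green light: above threshold
assert not emits_photoelectrons(700e-9, 2.0)  # red light: below threshold
```

Note that intensity (photon count) sets the current, while wavelength sets whether emission happens at all - the two distinct points made in the paragraph above.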
Now, with this principle established as the basis for the operation of most remote sensors, let us summarize several main ideas as to sensor types (classification) in these two diagrams:
The first is a functional treatment of several classes of sensors, plotted as a triangle diagram, in which the corner members are determined by the principal parameter measured: Spectral; Spatial; Intensity.
The second covers a wider array of sensor types:
From this imposing list, we shall concentrate the discussion on optical-mechanical-electronic radiometers and scanners, leaving the subjects of camera-film systems and active radar for consideration elsewhere in the Tutorial and holding the description of thermal systems to a minimum (see Section 9 for further treatment). The top group comprises mainly the geophysical sensors to be examined near the end of this Section.
The common components of a sensor system are shown in this table (not all need be present in a given sensor, but most are essential):
The two broadest classes of sensors are Passive (the energy leading to the radiation received comes from an external source, e.g., the Sun; the MSS is an example) and Active (energy generated from within the sensor system is beamed outward, and the fraction returned is measured; radar is an example). Sensors can be non-imaging (they measure the radiation received from all points in the sensed target, integrate this, and report the result as an electrical signal strength or some other quantitative attribute, such as radiance) or imaging (the electrons released are used to excite or ionize a substance like silver (Ag) in film or to drive an image-producing device like a TV or computer monitor, a cathode ray tube, an oscilloscope, or a battery of electronic detectors (see further down this page for a discussion of detector types); since the radiation is related to specific points in the target, the end result is an image [picture] or a raster display [for example, the parallel horizontal lines on a TV screen]).
Radiometer is a general term for any instrument that quantitatively measures the EM radiation in some interval of the EM spectrum. When the radiation is light from the narrow spectral band including the visible, the term photometer can be substituted. If the sensor includes a component, such as a prism or diffraction grating, that can break radiation extending over a part of the spectrum into discrete wavelengths and disperse (or separate) them at different angles to an array of detectors, it is called a spectrometer. One type of spectrometer (used in the laboratory for chemical analysis) passes multiwavelength radiation through a slit onto a dispersing medium which reproduces the slit as lines at various spacings on a film plate (discussed on page I-2a). The term spectroradiometer is reserved for sensors that collect the dispersed radiation in bands rather than discrete wavelengths. Most air/space sensors are spectroradiometers.
Sensors that instantaneously measure radiation coming from the entire scene at once are called framing systems. The eye, a photo camera, and a TV vidicon belong to this group. The size of the scene that is framed is determined by the apertures and optics in the system that define the field of view, or FOV. If the scene is sensed point by point (equivalent to small areas within the scene) along successive lines over a finite time, this mode of measurement makes up a scanning system. Most non-camera sensors operating from moving platforms image the scene by scanning.
Moving further down the classification tree, the optical setup for imaging sensors will be either an image-plane or an object-plane arrangement, depending on where the lens is placed relative to the point at which the photon rays are converged (focused), as shown in this illustration.
For the image plane arrangement, the lens receives parallel light rays after these are deflected to it by the scanner, with focusing at the end. For the object plane setup, the rays are focused at the front end (and have a virtual focal point in back of the initial optical train), and are intercepted by the scanner before coming to a full focus at a detector.
Another attribute in this classification is whether the sensor operates in a non-scanning or a scanning mode. This is a rather tricky pair of terms that can have several meanings, in that scanning implies motion across the scene over a time interval, whereas non-scanning refers to holding the sensor fixed on the scene or target of interest as it is sensed in a very brief moment. A film camera held rigidly in the hand is a non-scanning device that captures light almost instantaneously when the shutter is opened, then closed. But when the camera and/or the target moves, as with a movie camera, it is, in a sense, performing scanning. Conversely, the target can be static (not moving) while the sensor sweeps across the sensed scene; this too is scanning, in that the sensor is designed for its detector(s) to move systematically in a progressive sweep even as they also advance across the target. This is the case for the scanner you may have tied into your computer: its flatbed platform (the casing and glass surface on which a picture is placed) stays put while the illumination and detectors sweep beneath it. Scanning can also be carried out by putting a picture or paper document on a rotating drum (two motions: circular, plus a progressive shift in the direction of the drum's axis), in which case the scanning illumination is a fixed beam.
Two other related examples: A TV (picture-taking) camera containing a vidicon, in which light hitting the photon-sensitive surface produces electrons that are removed in succession (lines per inch is a measure of the TV's performance), can either stay fixed or can swivel to sweep over a scene (itself a spatial scanning operation), and can scan in time as it continues to monitor the scene. A digital camera contains an X-Y array of detectors that are discharged of their photon-induced electrons in a continuous succession that translates into a signal of varying voltage. The discharge occurs by scanning the detectors systematically. The camera itself can remain fixed or can move.
The gist of all this (to some extent obvious) is that the term scanning can be applied both to movement of the entire sensor and, in its more common meaning, to the process by which one or more components in the detection system either move the light-gathering, scene-viewing apparatus or read the radiation detectors one by one to produce the signal. Two broad categories of scanners are defined by the terms "optical-mechanical" and "optical-electronic", distinguished by the former containing an essential mechanical component (e.g., a moving mirror) that participates in scanning the scene and by the latter having the sensed radiation move directly through the optics onto a linear or two-dimensional array of detectors.
Another attribute of remote sensors, not shown in the classification, relates to the modes in which those that follow some forward-moving track (referred to as the orbit or flight path) gather their data. In doing so, they monitor a strip of the surface extending out to the sides of the path; the extent of this strip is known as the swath width. The width is determined by that part of the scene encompassed by the telescope's full angular FOV which is actually sensed by a detector array - this is normally narrower than the entire scene's width from which light is admitted through the external aperture (usually a telescope). The principal modes are diagrammed in these two figures:
The Cross-Track mode normally uses a rotating (spinning) or oscillating mirror (making the sensor an optical-mechanical device) to sweep the scene along a line traversing the ground that is very long (kilometers; miles) but also very narrow (meters; yards), or more commonly a series of adjacent lines. This is sometimes referred to as the Whiskbroom mode, from the image of sweeping a table from side to side with a small handheld broom. A general scheme of a typical Cross-Track Scanner is shown below.
The essential components (most are shared with Along-Track systems) of this instrument as flown in space are 1) a light-gathering telescope that defines the scene dimensions at any moment (not shown); 2) appropriate optics (e.g., lens) within the light path train; 3) a mirror (on aircraft scanners this may completely rotate; on spacecraft scanners this usually oscillates over small angles); 4) a device (spectroscope; spectral diffraction grating; band filters) to break the incoming radiation into spectral intervals; 5) a means to direct the light so dispersed onto an array or bank of detectors; 6) an electronic means to sample the photoelectric effect at each detector and then reset the detector to a base state to receive the next incoming light packet, resulting in a signal stream that relates to changes in light values coming from the ground targets as the sensor passes over the scene; and 7) a recording component that either reads the signal as an analog current that changes over time or converts the signal (usually onboard) to a succession of digital numbers, either of which is sent back to a ground station. A scanner can also have a chopper - a moving slit or opening that, as it rotates, alternately allows the signal to pass to the detectors or interrupts it (where there is no opening) and redirects it to a reference detector for calibration of the instrument response.
Each line is subdivided into a sequence of individual spatial elements that represent a corresponding square, rectangular, or circular area (ground resolution cell) on the scene surface being imaged (or in, if the target to be sensed is the 3-dimensional atmosphere). Thus, along any line is an array of contiguous cells from each of which emanates radiation. The cells are sensed one after another along the line. In the sensor, each cell is associated with a pixel (picture element) that is tied to a microelectronic detector; each pixel is characterized for a brief time by some single value of radiation (e.g., reflectance) converted by the photoelectric effect into electrons.
The areal coverage of the pixel (that is, the ground cell area it corresponds to) is determined by the instantaneous field of view (IFOV) of the sensor system. The IFOV is defined as the solid angle extending from a detector to the area on the ground it measures at any instant (see above illustration). IFOV is a function of the optics of the sensor, the sampling rate of the signal, the dimensions of any optical guides (such as optical fibers), the size of the detector, and the altitude above the target or scene. The electrons are removed successively, pixel by pixel, to form the varying signal that defines the spatial variation of radiance from the progressively sampled scene. The image is then built up from these variations - each assigned to its pixel as a discrete value called the DN (a digital number, made by converting the analog signal to digital values of whole numbers over a finite range [for example, the Landsat system range is 2^8 = 256 levels, which spread from 0 to 255]). Using these DN values, a "picture" of the scene is recreated on film (photo) or on a monitor (image) by converting a two-dimensional array of pixels, pixel by pixel and line by line along the direction of forward motion of the sensor (on a platform such as an aircraft or spacecraft), into gray levels in increments determined by the DN range.
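The analog-to-DN conversion just described can be sketched in a few lines; the radiance calibration limits (0 to 25 units) below are invented for illustration, whereas real sensors use band-specific calibration constants.

```python
# Sketch of quantizing an analog radiance value into an 8-bit DN
# (0-255, as in the Landsat example above). The r_min/r_max
# calibration limits are hypothetical.

def to_dn(radiance, r_min=0.0, r_max=25.0, bits=8):
    """Linearly scale a radiance value into a whole-number DN."""
    levels = 2 ** bits - 1                     # 255 for 8 bits
    frac = (radiance - r_min) / (r_max - r_min)
    frac = min(max(frac, 0.0), 1.0)            # clamp to the calibration range
    return round(frac * levels)

assert to_dn(0.0) == 0       # darkest calibrated radiance
assert to_dn(25.0) == 255    # brightest calibrated radiance
assert to_dn(12.5) == 128    # mid-range radiance maps near mid-scale
```

Reversing this mapping (DN back to gray level or estimated radiance) is how the "picture" described above is rebuilt pixel by pixel.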
The Along-Track Scanner has a linear array of detectors oriented normal to the flight path. The IFOV of each detector sweeps a path parallel with the flight direction. This type of scanning is also referred to as pushbroom scanning (from the mental image of cleaning a floor with a wide broom through successive forward sweeps). The scanner does not have a mirror looking off at varying angles. Instead there is a line of small sensitive detectors stacked side by side, each having some tiny dimension on its plate surface; these may number several thousand. Each detector is a charge-coupled device (CCD), as described in more detail below on this page. In this mode, the pixels that will eventually make up the image correspond to the individual detectors in the line array. Some of these ideas are evident in this image.
As the sensor-bearing platform advances along the track, at any given moment radiation from each ground cell along the ground line is received simultaneously at the sensor, and the collection of photons from every cell impinges, in the proper geometric relation to its ground position, on the individual detector in the linear array equivalent to that position. The signal is removed from each detector in succession in a very short time (milliseconds), the detectors are reset to a null state, and they are then exposed to new radiation from the next line on the ground reached by the sensor's forward motion. The result is a build-up of linear array data that forms a two-dimensional areal array. As signal sampling improves, it becomes feasible to expose sets of contiguous linear arrays - in effect, areal arrays - to radiation from the scene simultaneously and sample them all at once, increasing the equivalent area of ground coverage.
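The line-by-line build-up just described can be sketched as follows; this is a toy illustration with made-up radiance values, not an actual sensor readout.

```python
# Minimal sketch of along-track (pushbroom) image build-up: each platform
# advance exposes the whole linear detector array at once, and the
# sampled lines stack into a two-dimensional image.

def pushbroom_scan(scene_lines):
    """scene_lines: list of ground lines, each a list of cell radiances.
    Each line is read out in one shot by the linear array."""
    image = []
    for line in scene_lines:
        sampled = [round(v) for v in line]  # read every detector, then reset
        image.append(sampled)               # rows accumulate along-track
    return image

# Two successive ground lines (invented reflectance values):
scene = [[10.2, 55.7, 31.1],
         [11.0, 54.3, 30.8]]
img = pushbroom_scan(scene)
assert img == [[10, 56, 31], [11, 54, 31]]
```

The contrast with cross-track scanning is that here the whole line is captured in parallel; no mirror sweep subdivides it cell by cell.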
With this background, on to some more specific information. This next figure is a diagrammatic model of an electro-optical sensor that does not contain the means to break the incoming radiation into spectral components (essentially, this is a panchromatic system, in which the filter admits a broad range of wavelengths). The diagram contains some of the elements found in the Return Beam Vidicon (TV-like) on the first two Landsats. Below it is a simplified cutaway diagram of the Landsat Multispectral Scanner (MSS), which, through what is here called a shutter wheel or mount containing filters that each pass a limited range of wavelengths, adds the spectral aspect to the image scanning system, i.e., produces discrete spectral bands:
As this pertains to the Landsat Multispectral Scanner (a cross-track type), check this cutaway diagram:
The front end of a sensor is normally a telescopic system (in the image denoted by the label 11.6°) that gathers and directs the radiation onto a mirror or lens. The mirror rocks or oscillates back and forth rapidly over a limited angular range (the 2.9° to each side). In this setup, the scene is imaged only on one swing, here forward, and not scanned on the opposing or reverse swing (active scanning can occur on both swings, especially on slow-moving aircraft sensors). Some sensors allow the mirror to be pointed off to the side at specific fixed angles to capture scenes adjacent to the vertical-mode ground track (SPOT is an example). In some scanners, a chopper may be in the optic train near the mirror. It is a mechanical device that interrupts the signal either to modulate or synchronize it or, commonly, to allow a very brief blockage of the incoming radiation while the system looks at an onboard reference source of radiation of steady, known wavelength(s) and intensity in order to calibrate the final signals tied to the target. Other mirrors or lenses may be placed in the train to further redirect or focus the radiation.
The radiation - normally visible and/or Near and Short Wave IR, and/or thermal emissive in nature - must then be broken into spectral intervals, i.e., into broad to narrow bands. The width in wavelength units of a band or channel is defined by the instrument's spectral resolution (see top of page 13-5). The spectral resolution achieved by a sensor depends on the number of bands, their bandwidths, and their locations within the EM spectrum. Prisms and diffraction gratings offer one way to break selected parts of the EM spectrum into intervals; bandpass filters are another. In the above cutaway diagram of the MSS, the filters are located on the shutter wheel. The filters select the radiation bands that are sensed; detectors are placed where each wavelength-dependent band is sampled. For the filter setup, the spectrally-sampled radiation is carried along optical fibers to dedicated detectors.
Spectral filters fall into two general types: Absorption and Interference. Absorption filters pass only a limited range of radiation wavelengths, absorbing radiation outside this range. Interference filters reflect radiation at wavelengths lower and higher than the interval they transmit. Each type may be either a broad or a narrow bandpass filter. This is a graph distinguishing the two types.
These filters can further be described as high bandpass (IRT2; selectively removes shorter wavelengths) or low bandpass (RC830; absorbs longer wavelengths) types.
Absorption filters are made of either glass or gelatin; they use organic dyes to selectively transmit certain wavelength intervals. These filters are the ones commonly used in photography (the various colors [wavelengths] transmitted are designated by Wratten numbers).
Interference filters work by using thin films that reflect unwanted wavelengths and transmit others through a specific interval, as shown in this illustration:
A common type of specialized filter used in general optics and on many scanning spectroradiometers is the dichroic filter. This uses an optical glass substrate over which are deposited (in a vacuum setup) from 20 to 50 thin (typically 0.001 mm thick) layers of a dielectric material of special refractive index (or materials in certain combinations) that selectively transmits a specific range or band of wavelengths. Absorption is nearly zero. These can be either additive or subtractive color filters when operating in the visible range (see page 10-2). Another type is the polarizing filter. A haze filter removes or absorbs much of the scattering effects of atmospheric moisture and other haze constituents.
The next step is to get the spectrally separated radiation to appropriate detectors. This can be done through lenses or by detector positioning or, in the case of the MSS and other sensors, by channeling radiation in specific ranges to fiber optics bundles that carry the focused radiation to an array of individual detectors. For the MSS, this involves 6 fiber optics leads for the six lines scanned simultaneously to 6 detectors for each of the four spectral bands, or a total of 24 detectors in all.
In the early days of remote sensing, photomultipliers served as detectors. Most detectors today are made of solid-state semiconductor metals or alloys. A semiconductor has a conductivity intermediate between a metal and an insulator. Under certain conditions, such as interaction with photons, electrons in the semiconductor are excited and moved from a filled energy level (in the electron orbital configuration around an atomic nucleus) to another level, called the conduction band, which is deficient in electrons in the unexcited state. The resistance to current flow varies inversely with the number of incident photons. The process is best understood through quantum theory. Different materials respond to different wavelengths (actually, to photon energy levels) and are thus spectrally selective.
In the visible light range, silicon metal and PbO are common detector materials. Silicon photodiodes are used in this range. Photoconductor materials in the Near-IR include PbS (lead sulphide) and InAs (indium arsenide). In the Mid-IR (3-6 µm), InSb (indium antimonide) is responsive. The most common detector material for the 8-14 µm range is Hg-Cd-Te (mercury-cadmium-telluride); when operating, it is necessary to cool these detectors to cryogenic temperatures (using Dewar coolers) to optimize the efficiency of electron release. Other detector materials are also used and perform under specific conditions. This next diagram gives some idea of the variability of semiconductor detectivity over operating wavelength ranges.
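The materials and wavelength windows named above can be gathered into a small lookup. The exact range limits assumed here (e.g., silicon out to about 1.1 µm, PbS from about 1 to 3 µm) are typical published values rather than figures from this page.

```python
# The detector materials and wavelength windows discussed above, collected
# in a lookup that suggests a candidate detector for a given wavelength (µm).
# Range limits are typical values assumed for illustration.

DETECTORS = [
    ("Si",     0.4,  1.1),   # silicon photodiodes: visible / very near IR
    ("PbS",    1.0,  3.0),   # lead sulphide: near-IR
    ("InSb",   3.0,  6.0),   # indium antimonide: mid-IR
    ("HgCdTe", 8.0, 14.0),   # mercury-cadmium-telluride: thermal IR (cooled)
]

def candidate_detectors(wavelength_um):
    """Materials whose sensitive range covers the given wavelength."""
    return [name for name, lo, hi in DETECTORS if lo <= wavelength_um <= hi]

assert "InSb" in candidate_detectors(4.5)        # mid-IR
assert candidate_detectors(10.0) == ["HgCdTe"]   # thermal IR
```

This spectral selectivity is exactly why a multiband sensor pairs each wavelength interval with its own detector material.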
Other detector systems, less commonly used in remote sensing, function in different ways. The list includes photoemissive, photodiode, photovoltaic, and thermal (absorption of radiation) detectors. The most important now are CCDs, or Charge-Coupled Devices (the "D" is sometimes also read as "Detector"), which are explained in the next paragraph. This approach to sensing EM radiation was developed in the 1970s and led to the Pushbroom Scanner, which uses CCDs as the detecting sensor. The nature and operation of CCDs are reviewed in these two websites: CCD site 1 and CCD site 2. An individual CCD is an extremely small, light-sensitive silicon (micro)detector. Many individual detectors are placed on a chip side by side, either in a single row as a linear array or in stacked rows of linear arrays in X-Y (two-dimensional or areal) space. Here is a photograph of a CCD chip:
When photons strike a CCD detector, electronic charges develop whose magnitudes are proportional to the intensity of the impinging radiation during a short time interval (the exposure time). From 3,000 to more than 10,000 detector elements (the CCDs) can occupy a linear space less than 15 cm in length. The number of elements per unit length, along with the optics, determines the spatial resolution of the instrument. Using integrated circuits, each linear array is sampled very rapidly in sequence, producing an electrical signal that varies with the radiation striking the array. This changing signal goes through a processor to a recorder and, finally, is used to drive an electro-optical device to make a black and white image.
After the instrument samples the almost instantaneous signal, the array discharges electronically fast enough to allow the next incoming radiation to be detected independently. A linear (one-dimensional) array acting as the detecting sensor advances with the spacecraft's orbital motion, producing successive lines of image data (the pushbroom effect). Using filters to select wavelength intervals, each associated with a CCD array, leads to multiband sensing. The one disadvantage of current CCD systems is their limitation to visible and near IR (VNIR) intervals of the EM spectrum. (CCDs are also the basis for two-dimensional arrays - a series of linear CCDs stacked in parallel to extend over an area; these are used in the now popular digital cameras and are the sensor detectors commonly employed in telescopes of recent vintage.)
Each individual CCD corresponds to the "pixel" mentioned above. The size of the CCD is one factor in setting spatial resolution (smaller sizes represent smaller areas on the target surface); another factor is the height of the observing platform (satellite or aircraft); a third factor is tied to the use of a telescopic lens.
Once a scanner or CCD signal has been generated at the detector site, it needs to be carried through the electronic processing system. As stated above, one ultimate output is the computerized signal (commonly as DN [Digital Number] variations) used to make images or be analyzed by computer programs. Pre-amplification may be needed before the last stage. Onboard digitizing is commonly applied to the signal and to the reference radiation source used in calibration. The final output is then sent to a ground receiving station, either by direct readout (line of sight) from the spacecraft or through satellite relay systems like TDRSS (the Tracking and Data Relay Satellite System) or other geosynchronous communications satellites. Another option is to record the signals on a tape recorder and play them back when the satellite's orbital position permits direct transmission to a receiving station (this was used on many of the earlier satellites, including Landsat [ERTS], but is now almost obsolete because of the much improved satellite communications network).
The subject of sensor performance is beyond the scope of this page. Three common measures are mentioned here: 1) S/N (signal-to-noise ratio; the noise can come from internal electronic components or the detectors themselves); 2) NEΔP, the Noise Equivalent Power (for reflectance measurements); and 3) NEΔT, the Noise Equivalent Temperature (for thermal emission detectors).
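As a quick illustration of the first measure, S/N is often quoted in decibels; the sample powers used here are arbitrary.

```python
# Sketch of the signal-to-noise ratio measure mentioned above, expressed
# in decibels (a common engineering convention). The sample values are
# invented for illustration.

import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(S/N)."""
    return 10.0 * math.log10(signal_power / noise_power)

assert abs(snr_db(100.0, 1.0) - 20.0) < 1e-9    # S/N of 100:1 is 20 dB
assert abs(snr_db(2.0, 1.0) - 3.0103) < 1e-3    # doubling power adds ~3 dB
```

A higher S/N means smaller radiance differences in the scene can be distinguished from detector and electronics noise.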
Sensors flown on unmanned spacecraft tend to be engineering marvels. Their components are of the highest quality. Before flight they are tested and retested to look for any functional weaknesses. With all this "loving care" it is no wonder that they can cost millions of dollars to develop and fabricate. We show a photo of the MODIS sensor, which is now operating well on the Terra spacecraft launched in late 1999 and on Aqua two years later.
Finally, we need to consider one more vital aspect of sensor function and performance, namely the subject of resolution. There are three types: Spatial, Spectral, and Radiometric.
The spatial resolution concept is reviewed on page 10-3 as regards photographic systems and photogrammetry; check out that page at any time during these paragraphs. Here, we will attempt a generally non-technical overview of spatial resolution.
Most of us have a strong intuitive feeling for the meaning of spatial resolution. Think of this experiential example. Suppose you are looking at a forested hillside some considerable distance away. What you see is the presence of the continuous forest but at a great distance you do not see individual trees. As you go closer, eventually the trees, which may differ in size, shape, and species, become distinct as individuals. They have thus been resolved. As you draw much nearer, you start to see individual leaves. This means that the main components of an individual entity are now discernible and thus that category is being resolved. You can carry this ever further, through leaf macro-structure, then recognition of cells, and in principle with higher resolutions the individual constituent atoms and finally subatomic components. This last step is the highest resolution (related to the smallest sizes) achievable by instruments or sensors. All of these levels represent the "ability to recognize and separate features of specific sizes".
At this point in developing a feel for the meaning and importance of spatial resolution, consider this diagram:
At 1 and 2 meters, the scene consists of objects and features that, being resolvable, we can recognize and name. At 30 meters, the blocky pixels form a pattern but it is not very intelligible. However, when the image covers a wide area, as does a Landsat scene, the small area shown here becomes a small part of the total image, so that larger features appear sharp, discernible, and usually identifiable. As a general rule, high resolution images retain sharpness if the area covered is relatively small; IKONOS images (1-4 meter resolution) are a case in point.
The common sense definition of spatial resolution is often simply stated as the smallest size of an object that can be picked out from its surrounding objects or features. This separation from neighbors or background may or may not be sufficient to identify the object. Compare these ideas to the definition of three terms which have been extracted from the Glossary of Appendix D of this Tutorial:
These three terms will be defined as they apply to photographic systems (page 10-3 again). But resolution-related terms are also appropriate to electro-optical systems, standard optical devices, and even the human eye. The subject of resolution is more extensive and complicated than the above statements suggest. Let's explore the ideas in more detail. The first fundamental notion is to differentiate resolution from resolving power. The former refers to the elements, features, or objects in the target, that is, the scene being sensed from a distance; the latter concerns the ability of the sensor, be it electronic or film or the eye, to separate the smallest features in the target being sensed.
To help in visualizing effective (i.e., maximum achieved) spatial resolution, let's work with a target that contains the objects that will be listed, and let's use the human eye as the sensor, making you part of the resolving process, since this is the easiest way to grasp the notions involved. (A suggestion: review the description of the eye's functionality given in the answer to question I-1 [page I-1] in the Introductory Section.)
Start with a target that contains rows and columns of red squares, each bounded by a thin black line, placed in contact with each other. The squares have some size determined by the black outlines. At a certain distance, where you can see the whole target, it appears a uniform red. Walk closer and closer - at some distance you begin to see the black contrasting lines. You have thus begun to resolve the target, in that you can now state that there are squares of a certain size and these appear as individuals. Now decrease the black line spacing, making each square (or resolution cell) smaller. You must move closer to resolve the smaller squares. Or you must have improved eyesight (the rods and cones in the eye determine resolution; their sizes define the eye's resolution; if some are damaged, that resolution decreases). The chief variable here is distance to the target.
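The distance effect in this experiment can be estimated with a small-angle calculation: a square of size s first resolves when it subtends the sensor's angular resolution θ, i.e., at roughly d = s/θ. The ~1 arcminute figure used for θ below is a commonly quoted typical value for human vision, not a number from this Tutorial.

```python
# Rough sketch of the distance effect described above: a feature of size s
# is just resolvable at distance d ~ s / theta, where theta is the sensor's
# angular resolution. The ~1 arcminute value for the eye is a typical
# published figure, assumed here for illustration.

import math

EYE_RESOLUTION_RAD = math.radians(1.0 / 60.0)   # ~1 arcminute

def max_resolving_distance(feature_size_m, theta_rad=EYE_RESOLUTION_RAD):
    """Farthest distance at which a feature of this size is just resolvable."""
    return feature_size_m / theta_rad

# A 1 cm square resolves at roughly 34 m; halving the square halves the distance,
# matching the experiment: smaller squares force you to move closer.
d1 = max_resolving_distance(0.01)
d2 = max_resolving_distance(0.005)
assert abs(d1 - 2 * d2) < 1e-9
assert 34.0 < d1 < 35.0
```

The same proportionality is why orbital altitude appears directly in the spatial resolution of spaceborne sensors, discussed below.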
Now modify the experiment by replacing every other square with a green version but keeping the squares in contact. At considerable distance neither the red nor green individuals can be specifically discerned as to color and shape. They blend, giving the eye (and its brain processor) the impression of "yellowness" of the target (the effects of color combinations are treated also in Section 10). But as you approach, you start to see the two colors of squares as individuals. This distance at which the color pairs start to resolve is greater than the case above in which thin black lines form the boundary. Thus, for a given size, color contrast (or tonal contrast, as between black and white or shades of gray squares) becomes important in determining the onset of effective resolution. Variations of our experiment would be to change the squares to circles in regular alignments and have the spaces between the packed circles consist of non-red background, or draw the squares apart opening up a different background. Again, for a given size the distances at which individuals can first be discerned vary with these changing conditions. One can talk now in terms of the smallest individual(s) in a collection of varying size/shape/color objects that become visibly separable - hence resolved.
Three variables control the achieved spatial resolution: 1) the nature of the target features as just specified, the most important being their size; 2) the distance between the target and the sensing device; and 3) some inherent properties of the sensor, embodied in the term resolving power. For this last variable, in the eye the primary factor is the sizes and arrangements of the rods and cones in the retina; in photographic film it is determined in part by the size of the silver halide grains or specks of color chemicals in the emulsion formed after film exposure and subsequent development, although other properties of the camera/film system enter in as well.
For the types of spaceborne sensors discussed on this page, there are several variables or factors that specify the maximum (highest) resolution obtainable. Obviously, first are the spatial and spectral characteristics of the target scene features being sensed, including the smallest objects whose presence and identities are being sought. Next, of course, is the distance between target and sensor (orbital altitude). (For sensors in aircraft or spacecraft, the interfering aspects of the atmosphere can degrade resolution.) Last, the speed of the platform, be it a balloon, an aircraft, an unmanned satellite, or a human observer in a shuttle or space station, is relevant in that it determines the "dwell time" available to the sensor's detectors on the individual features from which the photo signals emanate.
Most targets have some kind of limiting area to be sensed that is determined by the geometric configuration of the sensor system being used. This is implied by the above-mentioned Field of View (FOV), outside of which nothing is "visible" at any moment. Commonly, this FOV is related to a telescope or circular tube (whose physical boundaries select or collimate the outer limits of the scene to be sensed) that admits only radiation from the target at any particular moment. The optics (mainly, the lens[es]) in the telescope are important to the resolving power of the instrument. Magnification is one factor, as a lens system that increases magnifying capability also improves resolution. The spectral distribution of the incoming photons also plays a role. But, for sensors like those on Landsat or SPOT, mechanical and/or electronic functions of the signal collection and detector components become critical factors in obtaining improved resolution. This resolution is also equivalent to the pixel size. (The detectors themselves influence resolution by their inherent Signal to Noise [S/N] capability; this can vary with spectral wavelength.) For an optical-mechanical scanner, the oscillation or rotation rate and arrangement of the scanning mirror motion will influence the Instantaneous Field of View (IFOV); this is one of three factors that closely control the final resolution (pixel size) achieved. The second factor is the size (dimensions) of an optical aperture that admits the light from an area on the mirror corresponding to the segment of the target being sensed. For the Landsat MSS, as an example, the width of ground being scanned across-track at any instant is 480 m. The focusing optics and scanning rate serve to break this width into 6 parallel scan lines that are directed to 6 detectors, thereby accounting for the 80 meters (480/6) of resolution associated with a pixel along each scan line.
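The MSS arithmetic just described can be sketched numerically. The figures (a 480 m strip split among 6 detectors per band) come from the paragraph above; the short computation simply divides the strip among the detectors:

```python
# The Landsat MSS views a 480 m wide strip of ground at once; focusing
# optics split that strip into 6 parallel scan lines, one per detector
# (figures taken from the text above).
strip_width_m = 480
detectors_per_band = 6

scan_line_width_m = strip_width_m / detectors_per_band
print(scan_line_width_m)  # → 80.0 m: the pixel dimension along each scan line
```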
The third factor, which controls the cross-track boundary dimension of each pixel, is the sampling rate of the continuous beam of radiation (light) sent by the mirror (usually through filters) to the detector array. For the Landsat MSS, all radiation that fits into the IFOV (sampling element or pixel) during the cross-track sweep along each scan line is collected after every 10 microseconds of mirror advance, each pixel being sampled independently, in a very short time, by electronic discharge of the detector. This sampling is done sequentially for the succession of pixels in the line. In 10 microseconds, the mirror's advance sweeps the IFOV about 57 m across the ground; its cutoff to form the instantaneous signal contained in the pixel thus establishes the other two sides of the pixel (those perpendicular to the sweep direction). In the first MSS (on Landsat-1), the dimensions thus imposed are equivalent to 79 by 57 meters (79 is the actual value in meters, but 80 meters [rounded off] is often quoted), owing to the nature of the sampling process. Some of the ideas in the above paragraph, which may still seem tenuous at the moment, may be more comprehensible after reading page I-16 of this Introduction, which describes in detail the operation of the Landsat Multispectral Scanner. That page further delves into this 79 x 57 pixel geometry.
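The link between sampling interval and cross-track pixel size can be shown with one line of arithmetic. The 57 m pixel dimension and 10 microsecond interval are the figures discussed above; the sweep speed below is back-calculated from them, not a value quoted in the text:

```python
# Cross-track pixel dimension set by the sampling rate: each detector is
# discharged once per sample interval, during which the mirror sweeps some
# ground distance. The sweep speed is inferred here from the stated values.
sample_interval_us = 10.0    # microseconds between detector discharges (text)
cross_track_pixel_m = 57.0   # MSS cross-track pixel dimension (text)

sweep_speed_m_per_us = cross_track_pixel_m / sample_interval_us
print(sweep_speed_m_per_us)  # → 5.7 m of ground swept per microsecond
```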
The relevant resolution determinants (as they pertain to pixel size) depend on the size of each fixed detector, which in turn governs the sampling rate. That rate must be in "sync" with the ability to discharge each detector in sequence fast enough to produce an uninterrupted flow of photon-produced electrons. This is constrained by the motion of the sensor: each detector must be discharged ("refreshed") quickly enough to record the next pixel, which represents the spatially contiguous part of the target next in line in the direction of platform motion. Other resolution factors apply to thermal remote sensing (such as the need to cool detectors to very low temperatures; see Section 9) and to radar (pulse durations, travel times, angular sweep, etc.; see Section 8).
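The timing constraint described above can be made concrete with a small calculation. The 57 m pixel and 10 microsecond discharge interval come from the preceding discussion; the 185 km scan-line length is a commonly cited Landsat scene width assumed here for illustration, not a figure given in the text:

```python
# How many samples each detector must deliver per scan line, and how long
# the active sweep lasts, for Landsat-MSS-like numbers.
swath_km = 185    # standard Landsat scene width (assumed, not from the text)
pixel_m = 57      # cross-track pixel dimension (from the text)
sample_us = 10    # detector discharge interval (from the text)

samples_per_line = swath_km * 1000 / pixel_m
active_scan_ms = samples_per_line * sample_us / 1000
print(round(samples_per_line))   # ≈ 3246 samples per detector per line
print(round(active_scan_ms, 1))  # ≈ 32.5 ms of active sweep per line
```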
Since the pixel size is a prime factor in determining spatial resolution, one may well ask about objects that are smaller than the ground dimensions represented by the pixel. These give rise to the "mixed pixel" concept that is discussed on page 13-2. A resolution anomaly is that an object smaller than a pixel, if it contrasts strongly with its surroundings within the ground space corresponding to the pixel dimensions sampled, may so strongly affect the DN or radiance value of that pixel as to darken or lighten it relative to neighboring pixels that lack the object. Thus, a 10 m wide light concrete road crossing a pixel's ground equivalent otherwise occupied by vegetation with dark leaves will, through the combined contributions, shift the averaged radiance of that pixel (here, raising it above its all-vegetation neighbors) enough to produce a visual contrast, so that the road is detectable along its linear trend in the image.
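The mixed-pixel effect amounts to an area-weighted average of the radiances within the pixel. A minimal sketch follows; the DN-like radiance values are hypothetical, chosen only to show the effect, while the 80 m pixel and 10 m road come from the example above:

```python
# Area-weighted radiance of a "mixed pixel": a bright 10 m concrete road
# crossing an 80 m pixel otherwise covered by dark vegetation.
# Radiance values are hypothetical (assumed), not from the text.
pixel_width_m = 80.0
road_width_m = 10.0
dn_road = 180.0        # bright concrete (assumed value)
dn_vegetation = 40.0   # dark leaves (assumed value)

road_fraction = road_width_m / pixel_width_m
mixed_dn = road_fraction * dn_road + (1 - road_fraction) * dn_vegetation
print(mixed_dn)  # → 57.5, noticeably brighter than the 40.0 of pure vegetation
```

Even though the road fills only one-eighth of the pixel, its high radiance pulls the averaged value far enough from the all-vegetation neighbors to make the road's trend visible.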
For sensed scenes that are displayed as photographic images, the optimum resolution is a combination of three actions: 1) the spatial resolution inherent to the sensor; 2) apparent improvements imposed during image processing; and 3) the further improvement that may reside in the photographic process. To exemplify this idea: Landsat MSS images produced as pictures by the NASA Data Processing Facility, part of Goddard's Ground Data Handling System, were of notable quality; they had a certain degree of image manipulation imposed in reaching their end-product pictures (later, this type of product was available through the EROS Data Center). But companies like the Earth Satellite Corp. took raw or corrected Landsat data and ran them through even more rigorous image processing algorithms, which yielded superior pictures. These could be enlarged significantly without discernible loss of detail. The finest end product was then achieved by printing the images, as generated electronically from the DN data, on a type of film called Cibachrome, which maximizes the sharpness of the scene and enriches its colors, so that a viewer would rate the end result as of the highest quality.
Now, if you haven't already done so, go to page 10-3 to review that classic method by which spatial resolution is determined in photographs, but also applicable to electronically-generated images.
One goal for space-operated sensors in recent years has been improved spatial resolution (now down to better than [smaller than] 1 meter) and greater spectral resolution (from band widths of about 10-30 nanometers [as pertains to the Landsat MSS] to 1 nanometer or less, which carries capabilities into the hyperspectral mode discussed on page I-24 of this Section and again in Section 13). Spectral resolution has also been considered above on this page. Suffice it to remind you that high spectral resolution allows spectral signatures to be plotted, and these are superior to band signatures as aids to material and class identification. As with spatial resolution, a price is paid to get better spectral resolution: the number of detectors must be significantly increased, and these must be physically placed to capture the wavelength-dependent radiation spread over part of the spectrum by a dispersing medium other than filters. This impacts signal handling onboard and data handling on the ground.
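The cost of narrower bands can be seen by counting channels. The sketch below assumes a 400-2500 nm visible-to-shortwave-infrared sensing range (an illustrative figure, not one quoted in the text) and compares the band widths mentioned above:

```python
# Number of spectral channels needed to cover an assumed 400-2500 nm range
# at progressively finer band widths; each channel needs its own detector(s).
range_nm = 2500 - 400
channels = {bw: range_nm // bw for bw in (30, 10, 1)}
print(channels)  # {30: 70, 10: 210, 1: 2100}
```

Going from 30 nm bands to 1 nm bands multiplies the detector count (and the data volume) thirty-fold, which is the price referred to above.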
We have not yet defined radiometric resolution on this page. It is a rather esoteric concept that relates to the levels of quantization that can be detected or established to improve scene quality (such as tonal contrast). Consider, for example, a range of radiation intensities (brightness levels). This continuous range can be subdivided into a set of values of steadily increasing intensity. Each subdivision is a "level" that in a black and white rendition of a scene is represented by some degree of grayness. A two-level rendition would consist of just black and white (all intermediate levels having been assigned to one or the other). A four-level scene would include two intermediate gray levels. A 64-level image would have a range of distinguishable gray tones increasing from black up to the highest (white). Most sensors convert intercepted radiation into a digital form, which consists of a number that falls within some range of values. Radiometric resolution defines this range of values. A sensor with 8-bit resolution (e.g., Landsat TM) has a range of 2^8 = 256 levels (since 0 is one level, the values run from 0 to 255). A 6-bit sensor (e.g., the Landsat-1 Multispectral Scanner [MSS]) has a range of 2^6 = 64 levels. To illustrate how this affects a scene representation, consider these panels (produced by Dr. S. Liew) that depict a scene at 2^1 (upper left), 2^2, 2^3, and 2^4 (lower right) - that is, 2, 4, 8, and 16 - gray levels, or quantized radiometric values:
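The subdivision into 2^n gray levels can be sketched as a simple quantization function. This is a minimal illustration of the idea, assuming a 0-255 input brightness range to match the 8-bit example above:

```python
# Quantize a continuous 0-255 brightness into 2**bits gray levels,
# as in the 2-, 4-, 8-, and 16-level panels described above.
def quantize(value, bits):
    """Map a 0-255 brightness to one of 2**bits levels (0 .. 2**bits - 1)."""
    levels = 2 ** bits
    return min(int(value / 256 * levels), levels - 1)

print(quantize(200, 1))  # 1-bit: only black (0) or white (1) → 1
print(quantize(200, 4))  # 4-bit: one of 16 gray levels → 12
print(quantize(200, 8))  # 8-bit: full 256-level range → 200
```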
Note that the human eye is inefficient at distinguishing differences in gray levels much beyond about 16. Thus, a 64-level image looks much like a 16-level image, and even a 256-level (0 to 255) image is only a little sharper than a 16-level one. Where higher radiometric resolution (a wider range of intensity levels) becomes important is in image processing by computer. The broader range - smaller increments from one level to the next - provides a better measure of variations in radiation intensity, which in turn supports better spatial and spectral discrimination. In classifying a scene, different classes are more precisely identified if radiometric precision is high.
This is brought home by examining ancillary information in a Landsat scene as it was processed at NASA Goddard. At the bottom of an 8 by 10 inch print (the standard) is annotation explained elsewhere in the Tutorial. Included is a 16-level gray scale or bar. When this 16-level bar was scanned to make the image below and then processed (in Photoshop), the resulting stretch of the full bar allowed only 11 levels to be distinguished in this image (scroll right to see the full extent of the bar). If only, say, 6 levels were separable (as printed), the picture would be "flat" (having less contrast). The goal of photo processing is to achieve the widest spread of levels, which usually optimizes the tonal balance in a black and white image.
For more insight into radiometry, you are referred to this Web site prepared by CNES.
To sum up the above treatment of resolution, considerable effort and expense are applied to designing and constructing sensors that maximize the resolution needed for their intended tasks. Greater spectral resolution (as now achievable in hyperspectral sensors) means that individual entities (classes; features) can be more accurately identified as the details of their spectral signatures are sensed. Superior spatial resolution permits ever smaller targets to be seen as individuals; many of the items we live with on the Earth's surface are just a few meters or less in size, so that if these can be resolved, their recognition could lead to their identification and hence their better classification. Increased radiometric resolution gives rise to sharper images and to better discrimination of different target materials in terms of their relative brightness.
The trend over the last 30 years has been toward improved resolution of each type. This diagram shows the trend for five of the satellite systems in orbit; the swath width (across-track distance) is also shown. (However, a reminder: coarser spatial resolution has its place, because it is what is available on satellites that deliberately seek wide fields of view so as to provide regional information at smaller scales, which can be optimal for certain applications.)
The field of view controls the swath width of a satellite image. That width, in turn, depends on the optics of the observing telescope, on electronic sampling limits inherent to the sensor, and on the altitude of the sensor. Normally, the higher the satellite's orbit, the wider the swath width and the lower the spatial resolution. Both altitude and swath width determine the "footprint" of the sensed scene, i.e., its across-track dimensions and the frequency of repeat coverage. Taken together with cloud cover variations, the number of "good" scenes obtained along the total adjacent swaths occupied during a stretch of coverage will vary, as suggested by this map of scene frequency for Landsat-7 over a 123-day period (note that the greatest "successes" occur over the United States, largely because the principal market for the imagery is there [not because of fortuitous cloud-free opportunities], upping the number of "tries" for scene acquisition).
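The altitude/field-of-view/swath relationship can be sketched with simple flat-Earth geometry (a reasonable approximation for narrow fields of view). The angle and altitude below are illustrative, roughly Landsat-like values assumed for the example, not figures from the text:

```python
import math

# Swath width from the sensor's full field-of-view angle and orbital
# altitude, ignoring Earth curvature.
def swath_width_km(altitude_km, fov_deg):
    half_angle = math.radians(fov_deg / 2)
    return 2 * altitude_km * math.tan(half_angle)

# Illustrative, roughly Landsat-like numbers (assumed):
print(round(swath_width_km(705, 14.9), 1))  # ≈ 184 km
```

The same geometry shows why a higher orbit widens the swath: for a fixed field-of-view angle, the ground width grows in direct proportion to altitude.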
As a closing ancillary thought for this page, we mention that today's popular Digital Cameras have much in common with some of the sensors described above. Light is recorded on CCDs after passing through a lens system and a means of splitting the light into primary colors. The light levels in each spectral region are digitized, processed by a computer chip, and stored (some cameras use removable disks) for immediate display on the camera's screen or for downloading to a computer for processing into an image that can be reproduced on a computer's printer. Alternatively, commercial processing facilities - including most larger drug stores - can take the camera or the disk and produce high quality photos from the input. Here are three websites that describe the basics of digital camera construction and operation: Site 1, Site 2, and Site 3. The next three diagrams show schematically the components of typical digital cameras:
Let's move on to get an inkling as to how remote sensing data are processed and classified (this subject is detailed in Section 1).