Some Elements of Photogrammetry

Let's move now from the spectral part of photogrammetry to the spatial part. Scale, mentioned before, is just the comparison of the dimensions of an object or feature in a photo or map to its actual dimensions in the target. We state scale in several ways, such as "six inches to the mile", "1/30,000", and, most commonly, "1:20,000". These mean that one measurement unit in the numerator (photo or map) is equivalent to the stated number of that unit in the denominator (scene). Thus, 1:20,000 simply states that one of any length unit, such as an inch, in the photo corresponds to 20,000 of that unit on the ground or in the air (a cloud, say). Or, one cm is equivalent to 20,000 cm. "Six inches to the mile" translates to: six inches in the photo represents 63,360 (5,280 ft x 12 in/ft) inches in the real world, which we can further reduce to 1:10,560, because six and 63,360 are both divisible by six. Note that if we enlarge or shrink a photo of a given scale, say by projecting a transparency onto a screen, then one inch on the screen no longer corresponds to the same denominator number but instead represents some other scale determined by the magnification factor. However, the effective resolution, the area covered, and the relative details remain the same.
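
To make this arithmetic concrete, here is a minimal Python sketch of the conversions; the 2x magnification at the end is an illustrative value, not from the text:

```python
# Converting a verbal scale statement to a representative fraction (RF).
# "Six inches to the mile": one mile = 5,280 ft x 12 in/ft = 63,360 inches.
photo_units = 6.0                    # inches measured on the photo
ground_units = 5280 * 12             # inches in one mile on the ground
rf_denominator = ground_units / photo_units
print(f"six inches to the mile = 1:{rf_denominator:,.0f}")    # 1:10,560

# Enlarging the photo (e.g., projecting it at 2x) changes the scale by
# the magnification factor, though resolution and coverage do not change:
magnification = 2.0
print(f"at 2x enlargement, scale = 1:{rf_denominator / magnification:,.0f}")
```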

We determine the scale of an aerial photo, expressed as its Representative Fraction (RF), from the height of the moving platform and the focal length of the camera, according to this equation: RF = f/H*, where H* = H - h, with H the height (elevation with reference to sea level) of the camera and h the elevation of a reference point on the surface, so that H - h is the distance between the platform and that point (assuming a flat ground surface; in rugged terrain, scale in effect varies with the elevations). We can also express RF through resolution and distance ratios: RF = rg/rs = d/D, where rg is ground resolution (in line pairs per meter; see below) and rs is the sensor system resolution (in line pairs per millimeter); d is the distance between two points in the photo and D is the actual distance between those points on the ground (the definition of scale).
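
A short Python sketch of the RF equation may help; the focal length and elevations below are illustrative values, not taken from the text:

```python
# RF = f / H*, where H* = H - h (keep all lengths in the same units).
f = 0.15          # focal length in meters (150 mm), illustrative
H = 3100.0        # camera elevation above sea level, meters, illustrative
h = 100.0         # elevation of the ground reference point, meters

H_star = H - h                     # platform-to-ground distance
rf = f / H_star                    # representative fraction, as a decimal
print(f"RF = 1:{1 / rf:,.0f}")     # 1:20,000

# By the d/D definition of scale, one centimeter on this photo covers:
D = 0.01 / rf                      # ground distance, meters
print(f"1 cm on the photo = {D:.0f} m on the ground")   # 200 m
```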

We can elucidate the roles of f and H* further with the aid of the next diagram, which, although not strictly correct in terms of optics and simplified to two dimensions, does allow us to visualize the effects of changing focal length and platform height:

Diagram showing how changing the focal length and platform height of a sensor affect the resulting photographic image.

Lines such as 1-1" or a-a' are light rays passing through the lens L. G is the ground. A' is at the focal plane (holding the film) for a focal length of f', and A" is the shift of this plane to a new value of f ". A"' is the location of the focal plane for a case in which the lens, L, is now at a lower elevation. A line on the ground, a-b, passing through the upper lens, L, is focused on plane A', such that, it has a film dimension of b'-a' (note that a - b is reversed in position but this does not matter because we can turn over a transparent negative).

When we lengthen the focal length to f" to bring the focus onto A", b'-a' expands to b"-a". Look next at what happens when we lower the camera (and airplane) to the A"' position: a-b in this new arrangement (where L, the lens location relative to the film, is at the same distance as in the first case, so that the focal length is once more f', i.e., f"' = f') is now expressed by b"'-a"', which for these conditions is even longer than b"-a".

In these situations the frame size of the film (x-y in the two-dimensional simplification) remains the same for A', A", and A"'. Therefore, when the focal plane moves to the A" location, the fraction of the imaged scene that a and b (ground points) occupy increases; that is, they take up a larger segment of it. In the A"' case, the film needed to display all of 1-2 would have to be greater (would need a larger frame), since x-y would have enclosed less of the scene. Both 1' and 2' for the A' case and 1" and 2" for the A" case fall within the original x-y frame dimensions. Keep in mind that the dimensions shown on the line G are ground-sized, whereas those in A', A", and A"' are film-sized, reduced relative to ground distances by the scales of the photographs.

We summarize these relations in a mnemonic:

Long is large/low is large/and larger is smaller.

Interpret this as follows: the scale becomes larger (the denominator becomes smaller) as we lengthen the focal length or lower the platform. A large(r) scale image covers a small(er) ground area (but has increased resolution). To appreciate how scale affects scene content, you may return to the various photos that we brought on-line on the previous page; the scale of each is printed alongside it. To further understand the role of changing the focal length, consider this figure:

The effect of changing focal length on image size.

In this figure, in which the airplane stays at the same altitude, the longer focal length camera images a smaller ground area but at a larger scale. Progressively decreasing the focal length yields ever greater ground coverage but at ever smaller scales (an inch covers more and more ground "real estate"). A panoramic camera has variable scale, largest straight down (at nadir) and decreasing outward.
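
This trade-off can be tabulated with a small Python sketch; the 230 mm frame size and 3,000 m altitude are assumed for illustration:

```python
# At a fixed altitude, lengthening the focal length enlarges the scale
# (smaller denominator) but shrinks the ground coverage of the fixed frame.
frame_side = 0.23      # 230 mm (9 inch) aerial film frame, meters, assumed
H = 3000.0             # flying height above ground, meters, assumed

for f_mm in (88, 150, 300):               # illustrative focal lengths
    f = f_mm / 1000.0                     # meters
    scale_denom = H / f                   # RF denominator (RF = f/H)
    footprint_km = frame_side * H / f / 1000.0   # ground side length covered
    print(f"f = {f_mm:3d} mm -> scale 1:{scale_denom:,.0f}, "
          f"frame covers {footprint_km:.1f} km on a side")
```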

Resolution has a popular meaning but is best defined in a technical sense. We normally think of resolution as the ability to separate and distinguish adjacent objects or items in a scene, be it in a photo or in real life. We specify resolution in terms of the smallest features we can discriminate. As we look outward, intuition (or experience) tells us that we can see small objects close to us, but if the same objects are far away, we probably cannot spot them (i.e., they are unresolved). But contrast also influences resolution: if two items are the same color, they may be hard to separate, but if they are sharply different in color, tone, or brightness, we can identify them more easily. Shape is a factor as well.

So, a rigorous definition of resolution, as measured quantitatively, relies on the ability to separate adjacent alternating black and white thin lines in a target. The resolution of a film is determined in a laboratory by photographing placard-sized charts, like the one shown below, containing black lines with different spacings on a white background (or the reverse)*. The resolution is then the smallest spacing at which we can still discriminate the pairs. This, of course, varies with the distance between camera and target.

A typical line-pair resolution chart.

The separation between light-dark lines varies in this target. The spatial frequency of these alternating lines can be given in line pairs/unit distance (width). The sharp light-dark pattern can be approximated by sinusoidal waves of specified frequencies; a common measure is given in cycles/millimeter. Depending on the relative brightnesses of the lines in a pair (or of contrasting objects in a real scene), the brightness amplitude between peak and trough of the waves will vary, or modulate. A technically more exacting measure of resolution is the Modulation Transfer Function (MTF), which plots modulation (the ratio of the sensor's [film; electronic] recorded amplitudes to the target amplitudes) against spatial frequency. For the human eye as a sensor, the ability to separate colors is optimal at low frequencies (wider spacings) and falls off rapidly at frequencies greater than 2 cycles per retinal degree (spacings closer than about 0.2 mm). Black and white pairs are optimally separated at 7 cycles per degree (about 0.05 mm) and fall off notably at both higher and lower frequencies. As one would expect, film MTFs are near maximum at low frequencies and drop progressively at higher frequencies.
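
A minimal numerical sketch of the modulation measure, with an assumed sensor that attenuates a full-contrast sinusoidal target to 60% of its amplitude:

```python
import numpy as np

def modulation(signal):
    # m = (max - min) / (max + min), the contrast of a sinusoidal pattern
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

x = np.linspace(0.0, 1.0, 1000)     # position across the target, mm
freq = 10.0                         # spatial frequency, cycles/mm

target = 0.5 + 0.5 * np.sin(2 * np.pi * freq * x)    # full-contrast target
recorded = 0.5 + 0.3 * np.sin(2 * np.pi * freq * x)  # hypothetical response

# MTF at this frequency = recorded modulation / target modulation
mtf = modulation(recorded) / modulation(target)
print(f"MTF at {freq:.0f} cycles/mm = {mtf:.2f}")    # 0.60
```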

(Note for the record: In real scenes, light reaching the film or sensor consists of bundles of varying wavelengths (spectrally diverse) that act as a collection of different sine waves. To specify their influence on the recording medium, such as film, they can be subjected to a procedure called Fourier Analysis, which breaks them systematically into the actual wavelengths and amplitudes present. This is often applied when the spatial domain (variations of tonal patterns) of an image is being studied in scene interpretation. See pages 506-507 of Lillesand and Kiefer, Remote Sensing and Image Interpretation, 5th Ed., J. Wiley, 2000, for a brief overview of this subject.)

In practice, we could place a resolution target in a scene (for example, painting black lines with different spacings on a concrete airport runway or road) to determine resolution under aerial conditions. Ground resolution is then the number of black/white line pairs within some width (normally one meter) that we can just discern in aerial photos taken at a particular height. Depending on the camera, film-resolving power, and platform height (the system), a pair in the photo will either blend visually (not resolvable) or can be distinguished. We express system resolution, rs (which combines the effects of sensor and film factors), in line pairs/mm within the print. A formula for ground resolution, rg (in line pairs/meter), applicable to just-separable ground lines, is: rg = f x rs/H. A typical example is a case where the lens focal length is 150 mm, the system resolution is 60 line pairs/mm, and the height is 3,000 meters, so that rg is 3 line pairs/meter. From the relation 1/rg = width on the ground of one line pair, this width is 0.33 m (each line is half that value). This means that the airborne camera can resolve an object on the ground that has a dimension of 0.165 m (about 6.5 inches), if it contrasts with its surroundings, using a film of appropriate resolving power. If the aircraft flew higher, the camera could not detect an object of this size.
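
The worked example translates directly to code; the values are those given in the text:

```python
# rg = f x rs / H gives line pairs per meter when f is in mm, rs in line
# pairs per mm, and H in meters (the mm-to-m mismatch is what converts
# lp/mm at the film into lp/m on the ground).
f = 150.0       # focal length, mm
rs = 60.0       # system resolution, line pairs/mm
H = 3000.0      # flying height, m

rg = f * rs / H                  # 3.0 line pairs per meter
pair_width = 1.0 / rg            # ground width of one line pair: 0.33 m
line_width = pair_width / 2.0    # a single line: 0.165 m (~6.5 inches)
print(f"rg = {rg:.1f} lp/m; smallest resolvable line ~ {line_width:.3f} m")
```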

Resolution in film (negatives and prints) is governed, in part, by the size distribution of the silver grains. In Landsat's Multispectral Scanner and Thematic Mapper (MSS/TM), and other electronic sensors, image resolution ties closely to the size of the pixels or to the dimensions of individual detectors in the arrays of Charge-Coupled Devices (CCDs), such as on SPOT. At first thought, it would seem that we cannot resolve objects smaller than the ground dimensions represented in an individual pixel/detector. However, if the spectral characteristics of a subresolution spot on the ground are sufficiently different from surrounding areas, they can affect the average brightness of the pixel so that the spot is visible in the image. An example is roads that are narrower than a 30 m (98 ft) TM pixel yet are quite visible in a TM image if they form a bright, reflecting surface set within dark vegetation (such as dirt roads amid sagebrush in Wyoming).
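
A simple area-weighted mixing calculation shows why; the road width and reflectance values below are assumptions for illustration:

```python
# A bright road narrower than one pixel still raises the pixel's average
# brightness enough to stand out against dark vegetation.
pixel_width = 30.0        # meters, a TM pixel
road_width = 5.0          # meters, narrower than the pixel (assumed)
road_reflectance = 0.40   # bright dirt road (assumed)
veg_reflectance = 0.05    # dark sagebrush (assumed)

fraction = road_width / pixel_width
mixed = fraction * road_reflectance + (1.0 - fraction) * veg_reflectance
print(f"pure vegetation pixel: {veg_reflectance:.3f}")
print(f"pixel crossed by road: {mixed:.3f}")   # ~0.108, roughly double
```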

In an aerial photo, when we view features at ground points away from the principal point (the optical center, usually at nadir, i.e., normal to a flat surface), that is, along slant directions, they may appear to lean away from the center, especially if they are tall (e.g., buildings) or have high relief. This distortion worsens when the aircraft flies low to acquire large-scale photos. This is one type of displacement, and it is evident near the edges of the 1:4,000 aerial photo of a neighborhood in Harrisburg shown on page 10-1. A better example is the top image in Section 11, page 11-4. There are other modes of displacement, such as apparent lateral movements of image points along slopes of differing angles.
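
The standard photogrammetric relief-displacement relation, d = r x h / H (not stated explicitly in the text), quantifies this lean; the numbers below are illustrative:

```python
# Relief displacement: d = r * h / H, where r is the radial distance of
# the feature from the principal point on the photo, h the feature's
# height, and H the flying height above the datum.
r = 0.08        # radial distance on the photo, meters (8 cm), assumed
h = 50.0        # building height, meters, assumed
H = 1500.0      # flying height above ground, meters, assumed

d = r * h / H
print(f"displacement on the photo: {d * 1000:.1f} mm")   # ~2.7 mm outward

# Halving H doubles d, which is why the lean worsens in low-altitude,
# large-scale photography.
print(f"at H = {H / 2:.0f} m: {r * h / (H / 2) * 1000:.1f} mm")
```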

A big plus for aerial photography: mission controllers can fly aerial photo missions at any time during the day, though these usually occur between about 10:00 AM and 2:00 PM (in summer, to avoid afternoon storms). Typically, the aircraft traverses the region to be photographed along back-and-forth flight lines, acquiring pictures at intervals that allow about 50% overlap between successive photos and 20% to 50% sidelap between lines. The camera is usually mounted below the plane, near its nose. Film in the camera advances automatically at time intervals synchronized with the speed of the aircraft. Especially in color photos, but also in black and white photos, blue and ultraviolet light scattered by the atmosphere may degrade the film image. We can reduce this degradation by using a haze filter that absorbs the ultraviolet and the very shortest visible blue wavelengths.
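
Timing the exposures for the stated 50% forward overlap reduces to simple arithmetic; the frame footprint and ground speed below are assumed values:

```python
# Exposure spacing for a given forward overlap along a flight line.
footprint = 4600.0    # ground side length of one frame, meters (assumed)
overlap = 0.50        # 50% forward overlap between successive photos
speed = 100.0         # aircraft ground speed, m/s (assumed, ~360 km/h)

air_base = footprint * (1.0 - overlap)   # ground distance between exposures
interval = air_base / speed              # seconds between exposures
print(f"expose every {air_base:.0f} m along track, "
      f"i.e., every {interval:.0f} s")
```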

NASA has a stable of support aircraft that operate various sensors, including cameras, to gather ground reference data for remote sensing experiments (see page 13-4 which discusses this). An example of a small-scale image (about 1:150,000) obtained during a U-2 flight, which operated at an altitude of about 18,000 m (59,000 ft) over Utah (resolution about 5 meters), closes this section on aerial photography.

Small scale photo taken from a high-flying NASA U-2 aircraft that shows a part of the Colorado Plateau where the Green River has entrenched into the rock strata.

For anyone interested in viewing more aerial-type photos, including perhaps one of your home region, consult the on-line Net Home Page called TerraServer, sponsored by Microsoft. Aerial imagery, either individual photos or sections of orthophotoquads, of selected large parts of the United States has been digitized from data collected by the U.S. Geological Survey. Photo resolution ranges from less than 2 to about 12 meters. Imagery taken with the KVR-1000 camera (resolution: 2 meters) flown on several Russian satellites shows regions in the rest of the world, mainly in Europe; these data are marketed worldwide as part of the SPIN-2 program. All photos are black and white.

Of course, the principles we have been applying in these three pages to aerial photos pertain in many respects to space imagery as well. Mapping once done primarily with air photos can now be done almost as well with space products. The main drawback is resolution, and that limitation is rapidly disappearing with the declassification of high resolution military imagery and with the ever-improving resolution capability of satellites now being flown or on the drawing boards for the foreseeable future. As we shall see in the next Section, the ability to gather data from space that pertain to three-dimensional surface variations allows the space image mapping community to essentially duplicate all the advantages once exclusive to air-based photography.


* As an aside, the writer [NMS] played a small but helpful role in the process by which the United States government decided to lift the ban on high resolution sensors that had been imposed because of military considerations. Around 1987, the Soviet space program - strapped for hard cash - began selling on the open world market selected photos obtained by their cosmonauts and later by sensors on unmanned military satellites. A NASA official visiting Moscow was given one such product - a false color (but with no green band) KVR-1000 photo that covered part of the northwest Oregon coast extending past Portland, OR. The Russians claimed at the time that this now-commercial product had better than 2 meter resolution. Concerned NASA Headquarters people wanted that claim tested and verified. I was given the photo and told to "solve the problem". But there was no travel money to check the site itself. So I contacted a colleague who taught a remote sensing course at Oregon State University, Dr. Charles Rosenfeld of the Geography Dept. (and later the Commanding General of the Oregon Air National Guard). He came up with a neat solution: he took his class to the airport outside Astoria, OR. On the blacktop runway, a series of white lines 6 feet apart had been painted perpendicular to the runway length as a marker guide for landing. These were just visible with a magnifying glass on the Russian photo. This proved the veracity of their 2 meter claim.

Source: http://rst.gsfc.nasa.gov