Now that we’ve shown what a GIS can do, let’s consider some of the mechanics of conducting a GIS analysis. Today, we do this analysis almost exclusively on computers. We show some typical hardware in the following diagram (missing is a scanner):
From B. Davis, GIS: A Visual Approach, ©1996. Reproduced by permission of Onword Press, Santa Fe, NM.
Many organizations have developed software packages for GIS analysis. The most widely used are ARC/INFO (Unix and Windows NT platforms) and ArcView (a smaller PC desktop version), marketed by the Environmental Systems Research Institute (ESRI) of Redlands, California, a company founded in 1969 by Jack and Laura Dangermond. Another program, called GRASS, was developed by the Army Corps of Engineers; although the Corps no longer supports it, you can learn about this tool at Baylor University's GRASSLINKS.
The following flow chart outlines a general system design for procedures and steps in a typical GIS data handling routine:
A similar analysis flow sequence appears in the USGS GIS tutorial referenced on page 15-5; its subsection headings reproduce the sequence: Data Capture (scanning and digitizing) --> Data Integration --> Projection and Registration --> Data Structuring --> Modeling --> Overlay --> Data Output.
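The flow above can be sketched as a simple pipeline of stages. Every function here is a hypothetical placeholder standing in for a real GIS operation, not an actual GIS API:

```python
# Sketch of the general GIS data-handling flow described above.
# Every function is a hypothetical placeholder, not a real GIS API.

def capture(sources):
    """Data capture: scan or digitize each source into a raw layer."""
    return {name: {"data": None} for name in sources}

def project_and_register(layers):
    """Put all layers into a common map projection and align them."""
    for layer in layers.values():
        layer["projection"] = "common"
    return layers

def structure(layers):
    """Encode each layer in a common data structure (here, raster)."""
    for layer in layers.values():
        layer["format"] = "raster"
    return layers

def model_and_overlay(layers):
    """Combine the layers according to a decision model."""
    return {"overlay_of": sorted(layers)}

result = model_and_overlay(structure(project_and_register(
    capture(["soils", "slope", "landcover"]))))
print(result)  # {'overlay_of': ['landcover', 'slope', 'soils']}
```

The point of the chaining is that each stage consumes the previous stage's product, just as the flow chart implies.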
The objective in any GIS operation is to assemble a database containing all the ingredients that models and other decision-making procedures can manipulate into a series of outputs for the problem-solving effort.
Some input data may already exist in digital form, but we must convert most of it from maps, tables, or other sources. We can scan some maps or satellite/aerial imagery and treat the resulting products as digital inputs in which individual pixels serve as data-cell equivalents. Note that the scanning system records pixels directly as color values; we must recode these values (geocoding) if we want them associated with specific attributes. Although recent technology has automated much of this conversion, in many instances we still must manually digitize maps, using a digitizing board or tablet, such as we show here:
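The recoding step can be illustrated with a small look-up table that maps raw scanned pixel values to thematic attributes; the values and class names below are invented for illustration:

```python
# Geocoding sketch: recode raw scanned pixel values into thematic
# attributes via a look-up table. All values are hypothetical.

lookup = {0: "water", 1: "forest", 2: "urban"}  # raw value -> attribute

# A tiny scanned "map": each cell holds a raw pixel value.
scanned = [
    [0, 0, 1],
    [1, 1, 2],
    [2, 2, 2],
]

geocoded = [[lookup[value] for value in row] for row in scanned]
print(geocoded[0])  # ['water', 'water', 'forest']
```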
Behind this table, on which a map is mounted, are closely spaced electrical wires arranged in a grid that we can reference as x-y coordinates. The operator (here, Bill Campbell, Chief, NASA GSFC Code 935, who authored the chapter on GIS in the Landsat Tutorial Workbook) places a mobile puck with centered crosshairs over each point on the map and clicks a button to enter its position, along with a numerical code that records its attribute(s) into a computer database. He then moves to the next point, repeats the process, and enters a tie command.
Any map consists of points, lines, and polygons that locate spots or enclose patterns (information fields) that describe particular attributes or theme categories. Consider this situation that refers to several fields (e.g., different types of vegetation cover) separated by linear boundaries:
We can capture the information contained in each field by either of two methods of geocoding: vector or raster. In the center panel, the vector approach creates polygons that approximate the curvature of each field boundary with straight-line segments; if a field has irregular boundaries, we may need many lines. Each line is a vector defined by two end points, whose positions we mark by coordinates during digitizing. We show that process in this diagram:
From B. Davis, GIS: A Visual Approach, ©1996. Reproduced by permission of Onword Press, Santa Fe, NM.
Each point has a unique coordinate value. Two end points, or nodes, define a line. We specify a polygon by a series of connected nodes (any two adjacent lines share a node), which must close (the main pitfall is that some polygons fail to close, so the operator must repeat or repair the digitizing). We then label each polygon with a code (numerical or alphabetical); a look-up table associates each code with the attribute(s) it represents. In this way, we can enter into a digital database all map fields large enough to be conveniently circumscribed, along with their category values.
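A minimal sketch of this vector encoding, with a digitized polygon, a closure check, and a look-up table tying a code to its attribute (the coordinates, code, and attribute name are all invented):

```python
# Vector geocoding sketch: a polygon is a list of (x, y) nodes.
# Coordinates and the attribute code below are illustrative only.

polygon = [(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)]  # last node repeats the first

def is_closed(nodes):
    """A digitized polygon is valid only if it closes on itself."""
    return len(nodes) >= 4 and nodes[0] == nodes[-1]

# Look-up table: code character -> attribute it represents.
lookup = {7: "deciduous forest"}
field = {"code": 7, "nodes": polygon}

print(is_closed(field["nodes"]))      # True
print(lookup[field["code"]])          # deciduous forest
```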
In the raster approach, we overlay on the map, manually or by scanning, a grid array of cells of some specific size. As shown in the right panel (grid format, above), an irregular polygon then includes a number of cells completely contained within it. The system records each cell's location within the grid and a relevant code number for each data element assigned to it. Some cells, however, straddle field boundaries; a preset rule assigns each such cell to one of the adjacent fields, usually according to the proportion of the cell each field covers. The array of cells that comprises a field only approximates the field's shape, but for most purposes the inaccuracy is tolerable for making calculations. While the raster approach can be less precise, it has the advantage of producing numbers that are more easily handled.
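The preset assignment rule for boundary-straddling cells is often a simple majority rule, which can be sketched as follows (the coverage fractions are invented):

```python
# Raster geocoding sketch: assign each grid cell to the field that
# covers the larger fraction of it (a common "majority" rule).
# Coverage fractions below are invented for illustration.

def assign_cell(coverage):
    """coverage: dict mapping field code -> fraction of the cell covered."""
    return max(coverage, key=coverage.get)

# A cell straddling fields 1 and 2 goes to the dominant field:
print(assign_cell({1: 0.35, 2: 0.65}))  # 2
# A cell entirely inside field 1 trivially goes to field 1:
print(assign_cell({1: 1.0}))            # 1
```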
Generally, grid cells are larger than the enclosed pixels in pictorial map displays, but the cluster of pixels within a polygon approximates the shape of the field. The relation of cells to pixels makes this raster format well adapted to digital manipulation. The size of a cell depends partly on the internal variability of the represented feature or property. Smaller cells increase accuracy but also require more data storage. Note that multiple data layers referenced to the same grid cell share this spatial dimensionality but have different coded values for the various attributes associated with any given cell.
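Multiple data layers referenced to the same grid can be pictured as parallel arrays keyed by theme; querying one cell returns its coded value in every layer. All values below are hypothetical:

```python
# Multi-layer sketch: several themes referenced to the same grid cells.
# Every value here is invented for illustration.

layers = {
    "landcover": [[1, 1], [2, 3]],          # coded land-cover classes
    "elevation": [[120, 135], [150, 160]],  # meters
    "soil":      [[4, 4], [5, 4]],          # soil-type codes
}

def query(row, col):
    """Return each theme's coded value for one grid cell."""
    return {theme: grid[row][col] for theme, grid in layers.items()}

print(query(1, 0))  # {'landcover': 2, 'elevation': 150, 'soil': 5}
```

Because every layer shares the same rows and columns, overlay operations reduce to cell-by-cell comparisons across the layers.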
Data management is sensitive to storage and retrieval methods and to file structures. A good management software package should be able to:
Developing a GIS can be a costly, complex, and somewhat frustrating experience for the novice. We stress that database design and encoding are major tasks that demand time, skilled personnel, and adequate funds. However, once a GIS is developed, the information possibilities are exciting, and the intrinsic worth of the output more than compensates for the marginal costs of handling the various kinds of data. In plain language, GIS is a systematic, versatile, and comprehensive way to present, interpret, and recast spatial (geographic) data into more intelligible output.