Multispectral pattern recognition

Multispectral remote sensing is the collection and analysis of reflected, emitted, or back-scattered energy from an object or area of interest in multiple bands (regions) of the electromagnetic spectrum (Jensen, 2005). Subcategories of multispectral remote sensing include hyperspectral remote sensing, in which hundreds of bands are collected and analyzed, and ultraspectral remote sensing, in which many hundreds of bands are used (Logicon, 1997). The main purpose of multispectral imaging is the potential to classify the image using multispectral classification, which is a much faster method of image analysis than human interpretation.

Multispectral remote sensing systems

Remote sensing systems gather data via instruments typically carried on satellites in orbit around the Earth. The remote sensing scanner detects the energy that radiates from the object or area of interest. This energy is recorded as an analog electrical signal and converted into a digital value through an analog-to-digital (A-to-D) conversion. There are several multispectral remote sensing systems that can be categorized in the following way:
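
As a rough illustration of this analog-to-digital step, the following Python sketch linearly quantizes a simulated radiance signal into 8-bit digital numbers; the radiance values, dynamic range, and bit depth are assumptions chosen for illustration rather than the characteristics of any particular sensor.

```python
import numpy as np

# Simulated at-sensor radiance samples; the values and range are illustrative only.
radiance = np.array([12.7, 48.3, 95.1, 140.6])
l_min, l_max = 0.0, 150.0      # assumed dynamic range of the detector
bits = 8                       # assumed radiometric resolution (8 bits -> DN 0..255)

# Linear analog-to-digital conversion of the continuous signal into digital numbers (DNs).
scale = (2 ** bits - 1) / (l_max - l_min)
dn = np.clip(np.round((radiance - l_min) * scale), 0, 2 ** bits - 1).astype(np.uint8)
print(dn)  # -> [ 22  82 162 239]
```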

Multispectral imaging using discrete detectors and scanning mirrors

  • Landsat Multispectral Scanner (MSS)
  • Landsat Thematic Mapper (TM)
  • NOAA Geostationary Operational Environmental Satellite (GOES)
  • NOAA Advanced Very High Resolution Radiometer (AVHRR)
  • NASA and ORBIMAGE, Inc., Sea-viewing Wide field-of-view Sensor (SeaWiFS)
  • Daedalus, Inc., Aircraft Multispectral Scanner (AMS)
  • NASA Airborne Terrestrial Applications Sensor (ATLAS)

Multispectral imaging using linear arrays

  • SPOT 1, 2, and 3 High Resolution Visible (HRV) sensors and Spot 4 and 5 High Resolution Visible Infrared (HRVIR) and vegetation sensor
  • Indian Remote Sensing System (IRS) Linear Imaging Self-scanning Sensor (LISS)
  • Space Imaging, Inc. (IKONOS)
  • Digital Globe, Inc. (QuickBird)
  • ORBIMAGE, Inc. (OrbView-3)
  • ImageSat International, Inc. (EROS A1)
  • NASA Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)
  • NASA Terra Multiangle Imaging Spectroradiometer (MISR)

Imaging spectrometry using linear and area arrays

  • NASA Jet Propulsion Laboratory Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
  • Compact Airborne Spectrographic Imager 3 (CASI 3)
  • NASA Terra Moderate Resolution Imaging Spectrometer (MODIS)
  • NASA Earth Observer (EO-1) Advanced Land Imager (ALI), Hyperion, and LEISA Atmospheric Corrector (LAC)

Satellite analog and digital photographic systems

Multispectral classification methods

A variety of methods can be used for the multispectral classification of images:

  • Algorithms based on parametric and nonparametric statistics that use ratio- and interval-scaled data, and nonmetric methods that can also incorporate nominal-scale data (Duda et al., 2001),
  • Supervised or unsupervised classification logic,
  • Hard or soft (fuzzy) set classification logic to create hard or fuzzy thematic output products,
  • Per-pixel or object-oriented classification logic, and
  • Hybrid approaches

Supervised classification

In this classification method, the identity and location of some of the land-cover types are obtained beforehand from a combination of fieldwork, interpretation of aerial photography, map analysis, and personal experience. The analyst locates sites that have characteristics similar to the known land-cover types. These areas are known as training sites because their known characteristics are used to train the classification algorithm for eventual land-cover mapping of the remainder of the image. Multivariate statistical parameters (means, standard deviations, covariance matrices, correlation matrices, etc.) are calculated for each training site. All pixels inside and outside of the training sites are then evaluated and allocated to the class whose characteristics they most closely resemble.
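
A minimal sketch of this allocation step, assuming the image has already been read into a NumPy array of shape (rows, cols, bands) and that training pixels have been extracted per class; minimum distance to means is used here only as one simple notion of "most similar", and the function and class names are illustrative.

```python
import numpy as np

def class_means(training_pixels):
    """training_pixels maps a class name to an (n_pixels, n_bands) array of
    brightness values drawn from that class's training sites."""
    return {name: pixels.mean(axis=0) for name, pixels in training_pixels.items()}

def classify_min_distance(image, means):
    """Allocate every pixel to the class whose training mean vector is closest
    in spectral space (Euclidean distance)."""
    rows, cols, bands = image.shape
    flat = image.reshape(-1, bands).astype(float)
    names = list(means)
    centers = np.stack([means[n] for n in names])             # (n_classes, n_bands)
    dist = np.linalg.norm(flat[:, None, :] - centers[None, :, :], axis=2)
    return np.array(names)[dist.argmin(axis=1)].reshape(rows, cols)
```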

Classification scheme

The first step in the supervised classification method is to identify the land-cover and land-use classes to be used. Land-cover refers to the type of material present on the site (e.g. water, crops, forest, wetland, asphalt, and concrete). Land-use refers to the modifications made by people to the land cover (e.g. agriculture, commerce, settlement). All classes should be selected and defined carefully to properly classify remotely sensed data into the correct land-use and/or land-cover information. To achieve this purpose, it is necessary to use a classification system that contains taxonomically correct definitions of classes. If a hard classification is desired, the classes should be:

  • Mutually exclusive: there is no taxonomic overlap between any classes (e.g., rain forest and evergreen forest are distinct classes).
  • Exhaustive: all land-covers in the area have been included.
  • Hierarchical: sub-level classes (e.g., single-family residential, multiple-family residential) are created so that they can be aggregated into a higher-level category (e.g., residential), as illustrated in the sketch after this list.
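
As a small illustration of these three properties, the sketch below encodes a hypothetical two-level scheme in which every detailed class has exactly one parent category; the class names and structure are illustrative and are not taken from any of the published systems listed next.

```python
# Hypothetical two-level land-cover/land-use scheme: the level-II classes are
# mutually exclusive, together they are meant to be exhaustive for the study area,
# and each one rolls up into a single level-I (higher) category.
SCHEME = {
    "residential": ["single-family residential", "multiple-family residential"],
    "forest": ["evergreen forest", "rain forest"],
    "water": ["open water"],
}

PARENT = {sub: top for top, subs in SCHEME.items() for sub in subs}

def generalize(detailed_class):
    """Map a detailed (level-II) class to its higher (level-I) category."""
    return PARENT[detailed_class]

print(generalize("multiple-family residential"))  # -> residential
```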

Some examples of hard classification schemes are:

  • American Planning Association Land-Based Classification System
  • United States Geological Survey Land-use/Land-cover Classification System for Use with Remote Sensor Data
  • U.S. Department of the Interior Fish and Wildlife Service
  • U.S. National Vegetation Classification System
  • International Geosphere-Biosphere Program IGBP Land Cover Classification System

Training sites

Once the classification scheme is adopted, the image analyst may select training sites in the image that are representative of the land-cover or land-use of interest. If the environment from which the training data were collected is relatively homogeneous, the training data can be extended across the image. If different conditions are found across the site, however, the training data cannot simply be extended to the whole area. To solve this problem, a geographical stratification should be done during the preliminary stages of the project, and all differences (e.g. soil type, water turbidity, crop species, etc.) should be recorded on the imagery. The selection of training sites is then made on the basis of this geographical stratification, and the final classification map is a composite of the individual stratum classifications.

After the data are organized into the different training sites, measurement vectors are created. Each measurement vector contains the brightness values of one pixel in every band for its training class. The mean, standard deviation, variance-covariance matrix, and correlation matrix are then calculated from these measurement vectors.
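
A brief sketch of these calculations, assuming one training class is delineated by a boolean mask over an image array of shape (rows, cols, bands); the function and variable names are illustrative.

```python
import numpy as np

def training_statistics(image, mask):
    """Summarize the measurement vectors of one training class.

    image: (rows, cols, bands) array of brightness values.
    mask:  (rows, cols) boolean array marking the pixels of the training site(s)."""
    vectors = image[mask].astype(float)               # (n_pixels, n_bands) measurement vectors
    return {
        "mean": vectors.mean(axis=0),                 # per-band mean brightness
        "std": vectors.std(axis=0, ddof=1),           # per-band standard deviation
        "cov": np.cov(vectors, rowvar=False),         # variance-covariance matrix
        "corr": np.corrcoef(vectors, rowvar=False),   # band-to-band correlation matrix
    }
```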

Once the statistics from each training site are determined, the most effective bands for each class should be selected. The objective of this band selection is to eliminate bands that provide redundant information. Graphical and statistical methods can be used to achieve this objective (a brief statistical sketch follows the list below). Some of the graphical methods are:

  • Bar graph spectral plots
  • Cospectral mean vector plots
  • Feature space plots
  • Cospectral parallelepiped or ellipse plots
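
As a simple statistical complement to these plots, the sketch below flags band pairs whose brightness values are strongly correlated in the training data, since one band of each such pair is a candidate for elimination; the 0.95 threshold is an arbitrary illustration.

```python
import numpy as np

def redundant_band_pairs(vectors, threshold=0.95):
    """List band pairs whose absolute correlation exceeds a threshold.

    vectors: (n_pixels, n_bands) measurement vectors pooled from the training sites."""
    corr = np.corrcoef(vectors, rowvar=False)
    n_bands = corr.shape[0]
    return [(i, j, round(float(corr[i, j]), 3))
            for i in range(n_bands)
            for j in range(i + 1, n_bands)
            if abs(corr[i, j]) > threshold]
```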

Classification algorithm

The last step in supervised classification is selecting an appropriate algorithm. The choice of a specific algorithm depends on the input data and the desired output. Parametric algorithms assume that the data are normally distributed; if they are not, one of the more common nonparametric algorithms should be used instead.
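
As one example of a parametric approach, the sketch below scores a pixel against each class with a multivariate Gaussian log-likelihood built from the training mean vector and covariance matrix. Maximum likelihood is a common choice under the normality assumption, though the text above does not prescribe a specific algorithm; the layout of the statistics dictionary is an assumption.

```python
import numpy as np

def gaussian_log_likelihood(x, mean, cov):
    """Log-likelihood of pixel vector x under a multivariate normal class model
    (constant terms dropped, since only the ranking between classes matters)."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

def classify_maximum_likelihood(x, class_stats):
    """Assign pixel x to the class with the highest Gaussian log-likelihood.
    class_stats maps class names to dicts holding "mean" and "cov" entries."""
    return max(class_stats, key=lambda name: gaussian_log_likelihood(
        x, class_stats[name]["mean"], class_stats[name]["cov"]))
```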

Unsupervised classification

Unsupervised classification (also known as clustering) is a method of partitioning remote sensor image data in multispectral feature space and extracting land-cover information. Unsupervised classification requires less input information from the analyst than supervised classification because clustering does not require training data. The process consists of a series of numerical operations that search for the spectral properties of pixels. From this process, a map with m spectral classes is obtained. Using the map, the analyst tries to assign or transform the spectral classes into thematic information of interest (e.g. forest, agriculture, urban). This process may not be easy because some spectral clusters represent mixed classes of surface materials and may not be useful. The analyst has to understand the spectral characteristics of the terrain to be able to label clusters as a specific information class. There are hundreds of clustering algorithms; two of the most conceptually simple are the chain method and the ISODATA method.
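
The relabeling step can be illustrated with a small sketch, assuming a clustering algorithm has already produced a map of spectral-cluster identifiers; the cluster-to-theme table stands in for the analyst's interpretation and is purely illustrative.

```python
import numpy as np

# Hypothetical output of a clustering run: each pixel carries a spectral-cluster id (0..m-1).
cluster_map = np.array([[0, 0, 3],
                        [2, 3, 3],
                        [1, 1, 2]])

# Analyst-supplied interpretation of each spectral cluster. Several clusters may share a
# theme, and a mixed or uninterpretable cluster can be left as "unlabeled".
cluster_to_theme = {0: "water", 1: "forest", 2: "agriculture", 3: "forest"}

theme_map = np.vectorize(lambda c: cluster_to_theme.get(c, "unlabeled"))(cluster_map)
print(theme_map)
```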

Chain method

The algorithm used in this method operates in a two-pass mode (it passes through the multispectral dataset twice). In the first pass, the program reads through the dataset and sequentially builds clusters (groups of points in spectral space); once this pass is complete, a mean vector is associated with each cluster. In the second pass, a minimum distance to means classification algorithm is applied to the dataset pixel by pixel, and each pixel is assigned to one of the mean vectors created in the first pass.
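
A simplified sketch of the two-pass idea follows; the spectral distance threshold, the running-mean update, and the omission of the published refinements are simplifying assumptions.

```python
import numpy as np

def chain_pass_one(pixels, radius=15.0):
    """Pass 1: read the pixels sequentially and build clusters in spectral space.
    A pixel joins the nearest existing cluster if it lies within `radius`
    (an illustrative threshold); otherwise it starts a new cluster."""
    means, counts = [], []
    for x in pixels.astype(float):
        if means:
            dist = np.linalg.norm(np.array(means) - x, axis=1)
            k = int(dist.argmin())
            if dist[k] <= radius:
                counts[k] += 1
                means[k] = means[k] + (x - means[k]) / counts[k]   # update the running mean
                continue
        means.append(x.copy())
        counts.append(1)
    return np.array(means)

def chain_pass_two(pixels, means):
    """Pass 2: minimum distance to means; label each pixel with the nearest cluster."""
    dist = np.linalg.norm(pixels[:, None, :].astype(float) - means[None, :, :], axis=2)
    return dist.argmin(axis=1)
```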

ISODATA method

The Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm used for multispectral pattern recognition was developed by Geoffrey H. Ball and David J. Hall at Stanford Research Institute.[2]

The ISODATA algorithm is a modification of the k-means clustering algorithm, with added heuristic rules based on experimentation. In outline:[3]

INPUT: dataset, user-specified configuration values.

Initialize cluster points for the k-means algorithm randomly.

DO UNTIL termination conditions are satisfied:
    Run a few iterations of the k-means algorithm.
    Split a cluster point into two if the standard deviation of the points in the cluster is too high.
    Merge two cluster points into one if the distance between their means is too low.
    Delete a cluster point if it contains too few data points.
    Delete data points that are too distant from their cluster point.
    Check heuristic conditions for termination.

RETURN: the clusters found.

There are many possible heuristic conditions for termination, depending on the implementation.
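
The sketch below follows the outline above in Python; the thresholds, the split and merge rules, the fixed number of outer iterations in place of a heuristic stopping test, and the omission of the distant-point deletion step are all simplifying assumptions rather than the published parameter set.

```python
import numpy as np

def isodata(data, k_init=5, outer_iters=10, kmeans_iters=3,
            min_cluster_size=5, max_std=20.0, min_merge_dist=10.0, seed=None):
    """Simplified ISODATA sketch: k-means refinement plus heuristic split/merge/delete
    steps. All thresholds are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    data = data.astype(float)
    centers = data[rng.choice(len(data), size=k_init, replace=False)]

    for _ in range(outer_iters):                      # fixed iteration count stands in
        for _ in range(kmeans_iters):                 # for a heuristic termination test
            labels = np.linalg.norm(data[:, None] - centers[None], axis=2).argmin(axis=1)
            centers = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(len(centers))])

        kept = []
        for i, c in enumerate(centers):
            members = data[labels == i]
            if len(members) < min_cluster_size:       # delete a sparse cluster
                continue
            spread = members.std(axis=0)
            if spread.max() > max_std and len(members) >= 2 * min_cluster_size:
                kept += [c + spread / 2.0, c - spread / 2.0]   # split an elongated cluster
            else:
                kept.append(c)
        centers = np.array(kept) if kept else centers

        merged, used = [], set()                      # merge cluster pairs whose means
        for i in range(len(centers)):                 # lie very close together
            if i in used:
                continue
            for j in range(i + 1, len(centers)):
                if j not in used and np.linalg.norm(centers[i] - centers[j]) < min_merge_dist:
                    merged.append((centers[i] + centers[j]) / 2.0)
                    used.add(j)
                    break
            else:
                merged.append(centers[i])
        centers = np.array(merged)

    labels = np.linalg.norm(data[:, None] - centers[None], axis=2).argmin(axis=1)
    return centers, labels
```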

References

  1. ^ Ran, Lingyan; Zhang, Yanning; Wei, Wei; Zhang, Qilin (2017-10-23). "A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features". Sensors. 17 (10): 2421. Bibcode:2017Senso..17.2421R. doi:10.3390/s17102421. PMC 5677443. PMID 29065535.
  2. ^ Ball, Geoffrey H.; Hall, David J. (1965). Isodata, a Novel Method of Data Analysis and Pattern Classification. Stanford Research Institute.
  3. ^ Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; Le Moigne, Jacqueline (February 2007). "A Fast Implementation of the Isodata Clustering Algorithm". International Journal of Computational Geometry & Applications. 17 (1): 71–103. doi:10.1142/S0218195907002252. ISSN 0218-1959.
  • Ball, Geoffrey H., Hall, David J. (1965) Isodata: a method of data analysis and pattern classification, Stanford Research Institute, Menlo Park, United States. Office of Naval Research. Information Sciences Branch
  • Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern Classification. New York: John Wiley & Sons.
  • Jensen, J. R. (2005). Introductory Digital Image Processing: A Remote Sensing Perspective. Upper Saddle River : Pearson Prentice Hall.
  • Belokon, W. F. et al. (1997). Multispectral Imagery Reference Guide. Fairfax: Logicon Geodynamics, Inc.