Environmental Remote Sensing, GEOG 2021, Lecture 4: Image classification

Purpose: categorising data; data abstraction/simplification; data interpretation; mapping.

For land cover mapping, a land cover class is used as a surrogate for other information of interest (i.e. relevant information/characteristics are assigned to a land cover class).

Multispectral image classification
A very widely used method of extracting thematic information. It uses multispectral (and other) information to separate different land cover classes based on

spectral response, texture, etc., i.e. on their separability in feature space.

Basis for the 'classifying' method: pattern recognition
Any or all of the following properties of an image can be used to differentiate between land cover classes:

spectral, spatial, temporal, directional [time/distance-resolved (LIDAR)].

Spatial pattern recognition: uses spatial context to distinguish between classes, e.g. measures of image texture, or the spatial context of 'objects' derived from the data.

Temporal pattern recognition: the ability to distinguish based on spectral or spatial

considerations may vary over the year, so variations in image DN (or derived data) over time are used to distinguish between cover types, e.g. variations in VI over agricultural crops.

Directional pattern recognition: surfaces with different structures tend to show different trends in reflectance as a function of view and illumination angles.

Spectral pattern recognition: the most widely used approach; land cover classes are distinguished by differences in spectral reflectance (or, more

typically, image DN) in different wavebands, i.e. classes are separated in feature space. The different spectral responses of different materials are used to separate them, e.g. by plotting red vs NIR DN values.

Approaches to Classification
We need some form of automated (rule-based) classification algorithm to distinguish one surface type from another:
Supervised classification

Unsupervised classification

Supervised classification: training stage (significant user input/expertise)
Areas of the cover types of interest are identified (from maps, ground survey, or spectral characteristics) in the bands of an image (after Lillesand, Kiefer and Chipman, 2004). Areas of interest are delineated by the user;

spectral information on the cover types is gathered for these areas as training data (a subset of the whole image). These define the classes into which all remaining pixels will be placed according to their DN values. Plotting the training data in feature space shows whether distinct clusters appear.

Supervised classification: classification stage
We need a rule (or rules) to decide into which class to put a given pixel, e.g. minimum distance to means (MDM): for each land cover class, calculate the mean vector in feature space (i.e. the mean value in each waveband)

, and put every pixel into the nearest class/cluster, defining a distance limit beyond which a pixel remains unclassified. A simple and fast technique, but it has major limitations.

Feature space clusters (e.g. 2 channels of information): are all clusters separate?

Supervised classification: MDM
Find the closest cluster mean for each pixel. Simple and quick, but what about points 1 and 2? MDM is insensitive to the variance of the clusters.
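As a minimal sketch of the MDM rule (the class names and two-band DN values below are invented for illustration, not taken from the lecture):

```python
import numpy as np

def mdm_classify(pixels, training, max_dist=None):
    """Minimum distance to means: assign each pixel to the class whose
    mean vector in feature space is nearest; optionally leave pixels
    unclassified (None) beyond a distance limit."""
    names = list(training)
    # mean DN vector per class, i.e. the mean value in each waveband
    means = np.array([training[c].mean(axis=0) for c in names])
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = np.array([names[i] for i in d.argmin(axis=1)], dtype=object)
    if max_dist is not None:
        labels[d.min(axis=1) > max_dist] = None
    return labels

# toy two-band training data: 'water' dark in both bands,
# 'vegetation' bright in the second (NIR-like) band
training = {
    "water": np.array([[10, 5], [12, 6], [9, 4]]),
    "vegetation": np.array([[30, 80], [28, 85], [32, 78]]),
}
print(mdm_classify(np.array([[11, 5], [29, 82]]), training))
```

The distance threshold is the only safeguard here: a pixel far from every mean stays unclassified, but the rule still ignores how spread out each cluster is, which is exactly the limitation noted above.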

Can we improve?

Supervised classification: parallelepiped (box)
Assign boundaries around the spread of each class in feature space, i.e. take account of the variance. Typically the minimum/maximum DN in a class define the limits, giving a rectangle in 2D, a box in 3D (if we have > 2 bands), and so on. Pixels outside these regions are unclassified (which is good or bad, depending on what you want!). Problems arise if class regions overlap, or if there is high covariance between bands (a rectangular box shape is then inappropriate). The algorithm can be modified by using stepped boundaries with a series of

rectangles to partially overcome such problems. A simple and fast technique that takes some account of variations in the variance of each class. Simple boxes are defined by the min/max limits of each training class; where these overlap, stepped boxes can be used instead.
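A sketch of the simple (non-stepped) box classifier, reusing the same invented toy training data; the min/max limits per band come straight from the training DNs:

```python
import numpy as np

def box_classify(pixels, training):
    """Parallelepiped (box) classifier: a pixel belongs to a class when its
    DN lies within the min/max limits of that class's training data in
    every band; pixels in no box stay unclassified (None).  Overlapping
    boxes are ambiguous -- here we simply take the first match."""
    boxes = {c: (t.min(axis=0), t.max(axis=0)) for c, t in training.items()}
    labels = []
    for p in pixels:
        match = next((c for c, (lo, hi) in boxes.items()
                      if np.all(p >= lo) and np.all(p <= hi)), None)
        labels.append(match)
    return labels

training = {
    "water": np.array([[10, 5], [12, 6], [9, 4]]),
    "vegetation": np.array([[30, 80], [28, 85], [32, 78]]),
}
# inside the water box, inside the vegetation box, outside both
print(box_classify(np.array([[10, 5], [29, 80], [50, 50]]), training))
```

Unlike MDM, the boxes follow each class's spread, but they remain axis-aligned rectangles, so high covariance between bands is still handled poorly.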

Supervised classification: Gaussian maximum likelihood
Assumes the data in a class are (unimodal) Gaussian (normally) distributed; a class is then defined by a mean vector and a covariance matrix. The probability of a pixel belonging to each class is calculated from probability density functions defined by this information. This can be represented as equiprobability contours, and a pixel is assigned to the class to which it has the highest probability of belonging.
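The maximum-likelihood rule can be sketched as follows, comparing log-density values built from each class's mean vector and covariance matrix (toy data and invented class names again; each class's covariance matrix is assumed non-singular):

```python
import numpy as np

def ml_classify(pixels, training):
    """Gaussian maximum likelihood: model each class by the mean vector and
    covariance matrix of its training DNs, then assign each pixel to the
    class with the highest Gaussian (log) probability density."""
    stats = {}
    for c, t in training.items():
        mu = t.mean(axis=0)
        cov = np.cov(t, rowvar=False)     # assumed non-singular
        stats[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    labels = []
    for p in pixels:
        # log multivariate-normal density, constant terms dropped
        ll = {c: -0.5 * (logdet + (p - mu) @ icov @ (p - mu))
              for c, (mu, icov, logdet) in stats.items()}
        labels.append(max(ll, key=ll.get))
    return labels

training = {
    "water": np.array([[10, 5], [12, 6], [9, 4]]),
    "vegetation": np.array([[30, 80], [28, 85], [32, 78]]),
}
print(ml_classify(np.array([[11, 5], [29, 82]]), training))
```

The equiprobability contours mentioned above are surfaces of constant Mahalanobis distance, the quadratic form (p - mu)^T C^-1 (p - mu) inside the log density.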

We now use probability rather than distance in feature space: which class is each pixel most likely to belong to? Pixel 1 is now correctly assigned to the corn class. Much more sophisticated, but computationally expensive compared with the distance-based methods.

Supervised classification: decision tree
Classify in steps, where the classifier only has to distinguish between two or more classes at each step; various types of classifier can be combined as appropriate in such methods.

Classification Accuracy
How do we tell if a classification is any good?

Classification error matrix (aka confusion matrix or contingency table): needs 'truth' data, i.e. sample pixels of known class.
How many pixels of KNOWN class X are incorrectly classified as anything other than X (errors of omission)? A so-called Type II error, or false negative. Divide the correctly classified pixels in each class of the truth data by the COLUMN totals (producer's accuracy).
How many pixels are incorrectly classified as class X when they should be some other known class (errors of commission)? A so-called Type I error, or false positive. Divide the correctly classified pixels in each class by the ROW totals (user's accuracy).
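The error-matrix bookkeeping can be sketched like this (hypothetical labels; here rows are the classified map and columns the reference data):

```python
import numpy as np

def accuracy_report(truth, predicted, classes):
    """Build the error (confusion) matrix with classified labels as rows
    and reference ('truth') labels as columns, then derive producer's
    accuracy (diagonal over column totals, i.e. 1 - omission error),
    user's accuracy (diagonal over row totals, i.e. 1 - commission
    error) and overall accuracy."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(truth, predicted):
        m[idx[p], idx[t]] += 1
    diag = np.diag(m)
    producers = diag / m.sum(axis=0)   # per reference (truth) class
    users = diag / m.sum(axis=1)       # per mapped class
    return m, producers, users, diag.sum() / m.sum()

truth     = ["corn", "corn", "corn", "water", "water", "sand"]
predicted = ["corn", "corn", "water", "water", "water", "sand"]
m, prod, user, overall = accuracy_report(truth, predicted,
                                         ["corn", "water", "sand"])
# one 'corn' pixel was mapped as 'water': an omission error for corn
# (producer's accuracy 2/3) and a commission error for water
# (user's accuracy 2/3)
```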

Errors of commission and errors of omission can be read off the matrix for each class. The original training data can be used to test the result, but this only tells us how well the classifier can

classify the training areas. Ideally, an independent set of samples is used to give a better 'overall' accuracy estimate.

Unsupervised Classification (clustering)
Little input from the user is required (few assumptions), but this means the results can be hard to interpret (they may not represent classes we recognise). Pixels are clustered in feature space based on some measure of their proximity

. Interpretation of the resulting classes can be useful, e.g. in picking up variations within what would otherwise be distinguished as a single class (e.g. stressed/unstressed crop in a single field). Clusters can also be of little intrinsic value in themselves: e.g. sunlit trees vs shaded trees is perhaps not a useful discrimination if one simply wants to classify 'trees', so clusters may have to be combined.

Unsupervised Classification: K-means
A large number of clustering algorithms exist. For K-means, the desired number of clusters is input, and the algorithm is typically initiated with arbitrarily located 'seeds' for the cluster means

. Each pixel is then assigned to the closest cluster mean, revised mean vectors are computed for each cluster, and this is repeated until some convergence criterion is met (e.g. the cluster means don't move between iterations). Computationally expensive, because it is iterative.

Unsupervised classification: ISODATA (Iterative Self-Organising Data Analysis)
The same as K-means, but the number of clusters can now vary (by splitting/merging):
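The plain K-means loop just described, which ISODATA extends with splitting and merging, can be sketched as follows (toy two-band pixels; seeding and the empty-cluster case are simplified):

```python
import numpy as np

def kmeans(pixels, k, n_iter=100, seed=0):
    """Plain K-means: seed k cluster means at randomly chosen pixels,
    assign every pixel to its nearest mean, recompute the means, and
    repeat until the means stop moving.  Assumes no cluster ever ends
    up empty (fine for this toy example)."""
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new, means):      # convergence: means did not move
            break
        means = new
    return labels, means

pixels = np.array([[10, 5], [12, 6], [9, 4], [30, 80], [28, 85], [32, 78]])
labels, means = kmeans(pixels, k=2)
# the two spectral groups separate into two clusters
```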

- Start with a user-defined number of randomly located clusters.
- Assign each pixel to the nearest cluster (mean spectral distance).
- Re-calculate cluster means and standard deviations.
- If the distance between two clusters is less than some threshold, merge them.
- If the standard deviation in any one dimension is greater than some threshold, split the cluster in two.
- Delete clusters with a small number of pixels.
- Re-assign pixels, re-calculate cluster statistics, and so on, until the change in the clusters falls below some fixed threshold.

ISODATA example: 2 classes, 2 bands
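The steps above can be sketched roughly as follows (all threshold values are invented for the toy data; a real implementation would tune them and guard more edge cases, and a true merge would average the two means rather than keep the first):

```python
import numpy as np

def isodata(pixels, k_init=2, min_pix=2, max_sd=15.0, min_dist=10.0,
            n_iter=20, seed=0):
    """ISODATA sketch: K-means-style assignment, but clusters are split
    when the standard deviation in any band exceeds max_sd, merged when
    two means are closer than min_dist, and deleted when they hold fewer
    than min_pix pixels.  Returns the final cluster means."""
    rng = np.random.default_rng(seed)
    means = [pixels[i].astype(float)
             for i in rng.choice(len(pixels), k_init, replace=False)]
    for _ in range(n_iter):
        m = np.array(means)
        labels = np.linalg.norm(pixels[:, None, :] - m[None, :, :],
                                axis=2).argmin(axis=1)
        means = []
        for i in range(len(m)):
            c = pixels[labels == i]
            if len(c) < min_pix:              # delete small clusters
                continue
            sd = c.std(axis=0)
            if sd.max() > max_sd:             # split along the widest band
                off = np.zeros(c.shape[1])
                off[sd.argmax()] = sd.max()
                means += [c.mean(axis=0) - off, c.mean(axis=0) + off]
            else:
                means.append(c.mean(axis=0))
        merged = []        # merge: keep only means not too close to a kept one
        for mu in means:
            if all(np.linalg.norm(mu - v) >= min_dist for v in merged):
                merged.append(mu)
        means = merged
    return np.array(means)

pixels = np.array([[10, 5], [12, 6], [9, 4], [30, 80], [28, 85], [32, 78]])
print(isodata(pixels))
```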

[Figure: initial cluster means a and b in DN feature space (band 1 vs band 2); pixels 1 and 2 are assigned to clusters a and b, whose means move towards them; all pixels are then assigned and the statistics updated; if the standard deviation of cluster a is too large, it is split in two and the process repeats.]

Hybrid Approaches

Useful if there is large variability in the DN of individual classes: clustering concepts from unsupervised classification are used to derive subclasses for individual classes, followed by standard supervised methods. For example, a K-means algorithm can be applied to (test) subareas to derive class statistics, and the derived clusters then used to classify the whole scene. This requires that all classes of interest are represented in the test areas; clustering algorithms may not always determine all relevant classes in an image, e.g. linear features (roads etc.) may not be picked up by the methods described above.

Postclassification filtering
The result of a classification from RS data can often appear rather 'noisy'.

Can we aggregate information in some way? The simplest and most common way is majority filtering: a kernel is passed over the classification result and the class which occurs most commonly in the kernel is used. This may not always be appropriate; the particular method for spatial aggregation of categorical data depends on the application to which the data are to be put, e.g. successive aggregations will typically lose scattered data of a certain class but keep tightly clustered data.
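A minimal sketch of majority filtering with a 3 x 3 kernel (pure NumPy plus Counter; a hypothetical, unoptimised implementation):

```python
import numpy as np
from collections import Counter

def majority_filter(classes, size=3):
    """Pass a size-by-size kernel over a classified image and replace
    each pixel by the class occurring most often in its window (ties go
    to the first value counted); edge windows are simply truncated."""
    h, w = classes.shape
    r = size // 2
    out = classes.copy()
    for i in range(h):
        for j in range(w):
            win = classes[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = Counter(win.ravel().tolist()).most_common(1)[0][0]
    return out

# a lone 'noisy' pixel of class 2 in a field of class 1 is smoothed away
img = np.ones((5, 5), dtype=int)
img[2, 2] = 2
filtered = majority_filter(img)
```

Note the aggregation behaviour described above: a scattered single pixel of class 2 disappears, whereas a tight block of class-2 pixels larger than half the window would survive.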

[Figure: a majority filter applied to a classification result.]
