Generating an Accuracy Report

After validating all the samples in the sample collection, you can go back to the Validation Session Details page to generate a report. To do this, click on Create a new report in the section called Reports based on this validation session. This will take you to the Create your Report screen (Figure 14).

Start by giving the report a name and a description. You can then select from different pre-defined styles for the color scheme of the report.

Next, select the statistical values or accuracy measures that you want to add to the report. Currently, the following measures are supported (a short computational sketch follows the list):

  1. User Accuracy: The user accuracy is the reliability of the classes in the classified map: it is the fraction of correctly classified samples among all samples classified (mapped) as a given class.

  2. Producer Accuracy: The producer accuracy is the accuracy of your classification: it is the fraction of correctly classified samples among all samples of a given ground truth (reference) class.

  3. Overall Accuracy: The overall accuracy is the total number of correctly classified samples (the diagonal elements of the confusion matrix) divided by the total number of samples.

  4. Average Mutual Information (AMI): The average mutual information measures the dependence between two variables. AMI provides a means of assessing the similarity of maps with different themes, i.e., the amount of information that one map can predict from the other (Finn, 1993; Foody, 2006).

  5. Quantity Disagreement: The amount of difference between a reference map and a comparison map that is due to the less than perfect match in the proportions of the categories (Pontius and Millones, 2011).

  6. Allocation Disagreement: The amount of difference between a reference map and a comparison map that is due to the less than optimal match in the spatial allocation of the categories, given the proportions of the categories in the reference and comparison maps (Pontius and Millones, 2011).

  7. Kappa: Cohen's kappa coefficient is a statistic that measures inter-rater agreement for qualitative (categorical) items. It is generally thought to be a more robust measure than a simple percent agreement calculation, since kappa takes the agreement occurring by chance into account.

  8. Portmanteau Accuracy: The portmanteau accuracy describes the overall accuracy when the data are collapsed into two classes: the land cover type of interest, and all other land cover types combined into a single class.

  9. Portmanteau Accuracy Partial: Partial portmanteau accuracy eliminates true negatives from the calculation, so this measure is the number of correctly mapped samples in a class, divided by the total number of samples mapped or validated in a class.
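To make the relationships between these measures concrete, the sketch below shows how several of them can be derived from a confusion matrix in which rows are the mapped (classified) classes and columns are the reference (validated) classes. This is a minimal illustration in Python, not the LACO-Wiki implementation; the function name accuracy_measures and the three-class sample counts are purely hypothetical.

```python
import numpy as np

def accuracy_measures(cm):
    """Derive common accuracy measures from a square confusion matrix.

    cm[i, j] = number of samples mapped as class i and validated as class j.
    (Assumes every class has at least one mapped and one reference sample.)
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    row_sums = cm.sum(axis=1)   # samples mapped as each class
    col_sums = cm.sum(axis=0)   # samples validated as each class

    user_accuracy = diag / row_sums        # per mapped class
    producer_accuracy = diag / col_sums    # per reference class
    overall_accuracy = diag.sum() / total

    # Cohen's kappa: overall agreement corrected for chance agreement.
    chance = (row_sums * col_sums).sum() / total**2
    kappa = (overall_accuracy - chance) / (1 - chance)

    # Quantity and allocation disagreement (Pontius and Millones, 2011),
    # computed on class proportions: quantity disagreement is the mismatch
    # in class proportions; allocation disagreement is the remainder of the
    # total disagreement.
    p = cm / total
    quantity = 0.5 * np.abs(p.sum(axis=1) - p.sum(axis=0)).sum()
    allocation = (1 - overall_accuracy) - quantity

    return {
        "user_accuracy": user_accuracy,
        "producer_accuracy": producer_accuracy,
        "overall_accuracy": overall_accuracy,
        "kappa": kappa,
        "quantity_disagreement": quantity,
        "allocation_disagreement": allocation,
    }

# Hypothetical confusion matrix for three land cover classes:
cm = [[50,  3,  2],
      [ 5, 40,  5],
      [ 2,  3, 45]]
print(accuracy_measures(cm))
```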

Finally, generate your report by clicking the Create Report button. The report can then be downloaded in Excel format. An example is shown in Figure 15.

You have now successfully uploaded, created, sampled and validated your first data set using the LACO-Wiki online tool and downloaded your first accuracy report.
