Socially Responsible Data Labeling


Blog by Hamed Alemohammad at Radiant Earth Foundation: “Labeling satellite imagery is the process of applying tags to scenes to provide context or confirm information. These labeled training datasets form the basis for machine learning (ML) algorithms. The labeling undertaking (in many cases) requires humans to meticulously and manually assign labels to the data, allowing the model to learn patterns and generalize them to other observations.

For a wide range of Earth observation applications, training data labels can be generated by annotating satellite imagery. Images can be labeled at the scene level, assigning a single class to the entire image (e.g., water body), or at the object level, annotating specific features within the image. However, annotation tasks can only identify features that are observable in the imagery. For example, with Sentinel-2 imagery at 10-meter spatial resolution, one cannot detect finer features of interest such as crop types, but one can distinguish large croplands from other land cover classes.
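To make the scene-level versus object-level distinction concrete, here is a minimal sketch of how the two kinds of labels might be represented; the field names, classes, and coordinates are illustrative assumptions, not the LandCoverNet schema.

```python
# Illustrative only: field names and classes are hypothetical examples,
# not the actual LandCoverNet schema.

# Scene-level label: a single class describes the whole image chip.
scene_label = {
    "image_id": "tile_0001",
    "label": "water_body",
}

# Object-level labels: individual features within the image are delineated
# (here as simple pixel-coordinate polygons) and each assigned a class.
object_labels = {
    "image_id": "tile_0002",
    "annotations": [
        {"class": "cropland", "polygon": [(10, 12), (48, 12), (48, 60), (10, 60)]},
        {"class": "woody_vegetation", "polygon": [(70, 5), (95, 5), (95, 30), (70, 30)]},
    ],
}
```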

Human error in labeling is inevitable and introduces uncertainty and errors into the final labels. As a result, it is best practice to have each image examined multiple times and then to assign a majority or consensus label. In general, significant human resources and financial investment are needed to annotate imagery at large scales.
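As a rough illustration of that consensus step, the sketch below aggregates the labels assigned by several annotators to the same image using a simple majority vote; the function name, agreement threshold, and example data are assumptions for the sake of the example, not the procedure used for LandCoverNet.

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.5):
    """Return the majority label and its agreement ratio for one image.

    annotations: list of class labels assigned by different annotators.
    min_agreement: fraction of annotators that must agree; below this,
    the image is flagged for further review instead of receiving a label.
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(annotations)
    if agreement < min_agreement:
        return None, agreement  # no consensus; send back for review
    return label, agreement

# Example: three annotators labeled the same image chip.
label, agreement = consensus_label(["cropland", "cropland", "bare_ground"])
print(label, round(agreement, 2))  # cropland 0.67
```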

In 2018, we identified the need for a geographically diverse land cover classification training dataset that required human annotation and validation of labels. We proposed to Schmidt Futures a project to generate such a dataset to advance land cover classification globally. In this blog post, we discuss what we’ve learned developing LandCoverNet, including the keys to generating good quality labels in a socially responsible manner….(More)”.