Using Multi-Task Learning for Semantic Segmentation in Agriculture

ORAL

Abstract

Combining aerial imagery with computer vision and machine learning is ushering in a new era of precision agriculture. Traditionally, farmers could only manually inspect the edges of a field, limiting their ability to identify issues within the field. To address this, we collect high-resolution aerial imagery and use computer vision to analyze the entire field. With this information, farmers can identify problems early and accurately to reduce costs, improve yields, and ultimately further sustainability. We analyze the images with a U-Net architecture to segment the entire field into different classes representing soil, weeds, crops, and unmanaged areas including roads, waterways, and field boundaries. There is a hierarchical relationship among the classes: weeds and crops are both forms of vegetation, and managed areas contain both soil and vegetation. We explicitly encode this structure in the model by adding segmentation tasks for the different levels of the hierarchy. In addition, we add a global classification task to predict the status of the entire field. The model achieves an IoU of 40% on the single multi-class segmentation task; adding the hierarchical segmentation heads and the field-status classification improves the IoU by 4.5%.
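
To make the multi-task setup concrete, the following is a minimal sketch of how such an architecture could be wired up, assuming a PyTorch implementation. It is not the authors' actual model: the layer widths, class counts, and names (MultiTaskUNet, head_fine, status_head, etc.) are illustrative assumptions. A small U-Net-style encoder-decoder emits one segmentation head per level of the class hierarchy plus a global field-status classifier on the bottleneck features.

```python
# Illustrative sketch only: a U-Net-style encoder-decoder with hierarchical
# segmentation heads and a global field-status classifier. All sizes, class
# counts, and names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiTaskUNet(nn.Module):
    def __init__(self, in_ch=3, n_fine=4, n_mid=3, n_coarse=2, n_status=2):
        super().__init__()
        # Encoder
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Decoder with skip connections
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        # One segmentation head per level of the class hierarchy, e.g.:
        #   fine   -> soil / weeds / crops / unmanaged
        #   mid    -> soil / vegetation / unmanaged
        #   coarse -> managed / unmanaged
        self.head_fine = nn.Conv2d(32, n_fine, 1)
        self.head_mid = nn.Conv2d(32, n_mid, 1)
        self.head_coarse = nn.Conv2d(32, n_coarse, 1)
        # Global field-status classifier on the bottleneck features.
        self.status_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_status)
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))  # bottleneck
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return {
            "fine": self.head_fine(d1),
            "mid": self.head_mid(d1),
            "coarse": self.head_coarse(d1),
            "status": self.status_head(e3),
        }


# Usage example with dummy data: each segmentation head returns per-pixel logits,
# the status head returns one logit vector per image.
model = MultiTaskUNet()
outputs = model(torch.randn(2, 3, 128, 128))
print({name: tuple(t.shape) for name, t in outputs.items()})
```

During training, the total objective would typically be a weighted sum of per-task losses (e.g., cross-entropy for each segmentation head and for the field-status classifier); the specific weighting is a design choice not specified in the abstract.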

Publication: We plan to submit a publication derived from this work to ECCV 2022.

Presenters

  • Daniel Marley

    Intelinair

Authors

  • Daniel Marley

    Intelinair

  • Jennifer Hobbs

    Intelinair, Northwestern University