Canopy gap fraction estimation from digital hemispherical images using sky radiance models and a linear conversion method

Mait Lang, Andres Kuusk, Matti Mõttus, Miina Rautiainen, Tiit Nilson

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Thresholding methods for estimating canopy indices from gamma-corrected digital hemispherical images are influenced both by the image interpreter and by the applicability of the automatic interpretation algorithm. The matrix output data of camera sensors (i.e., raw data) can be used to avoid gamma correction; nevertheless, the influence of the interpreter or the thresholding algorithm remains. In this paper, we present a new methodology that applies a previously published linear conversion method for raw data to a single below-canopy image to obtain unbiased estimates of canopy gap fraction. The new methodology derives an above-canopy reference image from a below-canopy image using the sky radiance information available in canopy gaps. A sky radiance model and interpolation are used to create the above-canopy image, which could alternatively be obtained with a second sensor. To evaluate the new method, we conducted test measurements in three mature forest stands. Comparison of the image-processing output with LAI-2000 Plant Canopy Analyzer data showed high agreement and repeatability. The proposed method can be implemented in future versions of hemispherical image analysis programs to create unbiased comparison data for other manual or automatic image-processing methods designed for plant canopies.
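
The core idea described in the abstract, reconstructing an above-canopy reference image from the sky visible in canopy gaps and then taking a per-pixel radiance ratio, can be sketched in a few lines. The sketch below is illustrative only and not the authors' implementation: the function name, the precomputed `gap_mask` input, and the use of plain nearest-neighbour interpolation in place of a fitted sky radiance model are all assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def gap_fraction_map(below_raw, gap_mask):
    """Per-pixel canopy transmittance from a single raw hemispherical image.

    below_raw : 2-D float array of linear (raw, non-gamma-corrected)
                sensor values for the below-canopy image.
    gap_mask  : 2-D boolean array, True where a pixel shows unobstructed
                sky (assumed to be identified beforehand; hypothetical input).
    """
    # Sky radiance samples taken from the visible gaps.
    ys, xs = np.nonzero(gap_mask)
    sky_samples = below_raw[gap_mask]

    # Reconstruct the above-canopy reference image by interpolating the
    # gap samples over the whole hemisphere. The paper fits a sky radiance
    # model before interpolating; nearest-neighbour interpolation is used
    # here only as a simple stand-in.
    yy, xx = np.mgrid[0:below_raw.shape[0], 0:below_raw.shape[1]]
    above_ref = griddata((ys, xs), sky_samples, (yy, xx), method="nearest")

    # Linear conversion: with linear raw data, the transmitted fraction at
    # each pixel is the ratio of below-canopy to above-canopy radiance.
    above_ref = np.maximum(above_ref, 1e-9)  # guard against division by zero
    return np.clip(below_raw / above_ref, 0.0, 1.0)
```

The gap fraction for a given zenith-angle ring (comparable to an LAI-2000 ring) would then be the mean of this transmittance map over the ring's pixels.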

Original language: English
Pages (from-to): 20-29
Number of pages: 10
Journal: Agricultural and Forest Meteorology
Volume: 150
Issue number: 1
DOIs
Publication status: Published - 15 Jan 2010
MoE publication type: A1 Journal article-refereed

Keywords

  • Digital camera
  • Gap fraction
  • Plant area index
  • Plant canopies
