
Browsing by Subject "UAV"


  • Änäkkälä, Mikael (2020)
    The number of drones has increased in both the private and corporate sectors. There is also interest in using drones in agriculture, since they make it easy to monitor large fields. The automatic flight systems of drones are simple to use, and a more accurate overview of a field can be obtained with drones than by making observations from its edge. Aerial photographs can be used to plan further measures for the field, such as pesticide spraying or fertilizer spreading. Drones can also be used to estimate crop biomass and to observe crop development as a time series over the growing season. The aim of this study was to explore the use of multispectral images and 3D models in crop monitoring. Crop leaf area index (LAI), biomass and chlorophyll content were measured. The study included eight different plant/fertilization levels. A multispectral camera and an RGB camera were used to estimate crop features. With the multispectral camera, the reflectance values of the vegetation, which describe how much of the incoming solar radiation is reflected back from the vegetation, could be determined. The multispectral camera had five spectral bands (blue, green, red, red edge and NIR), from which the NDVI vegetation index was calculated. The reflectance values and vegetation indices were compared to the dry matter mass, LAI and chlorophyll content determined from the vegetation. 3D models were created from the RGB camera images to calculate crop volumes, and the calculated volumes were compared to crop dry matter mass and LAI measurements. Linear regression analysis was used to examine the relationship between the variables calculated from the images and the parameters determined from the crops in the field.
According to these results, the variables determined from the multispectral images explained the dry matter mass and leaf area index of the crop slightly less well than the 3D models derived from the RGB images. The strongest dependence in the multispectral camera data was between faba bean LAI and NDVI (R² = 0.85). Overall, the relationship between the multispectral camera's reflectance/index data and the crop parameters was weak: the average coefficient of determination was 0.15 for crop dry matter mass, 0.14 for chlorophyll content and 0.21 for LAI. The highest coefficient of determination for 3D model crop volume was with the dry matter mass of oats (R² = 0.91). The mean coefficient of determination was 0.69 for the relationship between plant dry matter mass and 3D model volume, and 0.57 for the relationship between plant leaf area index and 3D model volume. Based on these results, of the multispectral camera data the NDVI index was best suited to determining crop dry matter mass, leaf area index and chlorophyll content. However, the dependencies between the spectral bands/NDVI index and the plant properties differed between crops. The 3D models produced stronger dependences for estimating crop dry matter mass and leaf area index than the quantities determined from the multispectral images. Analyzing the data with more sophisticated methods that utilize several spectral bands and indices at the same time would probably have been more efficient than the linear regression used in this study. Removing errors caused by external factors from the multispectral images proved very difficult; in particular, the reflectance values of dry soil differed clearly from those of vegetation.
Further studies are needed to develop vegetation indices that can reduce errors caused by external factors. In addition, image data processing should be developed to utilize multiple spectral bands and vegetation indices when determining the relationship between crop characteristics and the variables measured from images. Imaging techniques for different plant species should also be investigated, as different plants have different reflectance values.
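The two core calculations in this abstract are the NDVI index from the red and NIR bands and the coefficient of determination (R²) of a simple linear fit. A minimal sketch of both, assuming band reflectances and field measurements are given as NumPy arrays (this is an illustration, not the thesis's actual processing pipeline):

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a*x + b."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a, b = np.polyfit(x, y, 1)            # least-squares slope and intercept
    ss_res = np.sum((y - (a * x + b)) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot
```

For example, `r_squared(ndvi_values, lai_measurements)` would reproduce the kind of R² figures quoted above (e.g. 0.85 for faba bean LAI vs. NDVI). Healthy vegetation reflects strongly in NIR and absorbs red light, so NDVI values near 1 indicate dense green canopy, while bare dry soil yields much lower values.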
  • Vuornos, Taneli (2023)
    Dead wood is an integral part of forest biodiversity in boreal forests. Some 5,000 (25 %) of Finland's forest-dwelling species depend on decaying dead wood during their life cycle, and the loss of dead wood in forest ecosystems has been identified as the number one reason for species endangerment. Conventional dead wood mapping is done by counting and measuring dead wood on field plots or by aerial laser scanning, both of which can be time- and resource-consuming. UAV-borne aerial imaging provides cost-effective imagery with high spatial and temporal resolution in comparison to conventional aerial imaging and satellite-based imagery. A convolutional neural network (CNN) is a deep learning algorithm that has shown promise in recognizing spatial patterns; the strengths of CNNs are end-to-end learning and transfer learning, and they have been used for mapping both standing and downed dead wood. This thesis further investigates the usability of a method for detecting downed coarse woody debris (CWD) in a coniferous boreal forest from RGB UAV imagery using a CNN-based segmentation approach. CWD was digitized from an orthomosaic created from the UAV imagery on 68 square 100 × 100 m virtual plots surrounding 9 m radius circular field plots. The plots were divided into 57 training plots for training the CNN and 11 test plots for evaluating model performance. The effects of different loss functions and of data augmentation on segmentation performance were evaluated. The numbers of digitized and segmented CWD objects were compared to the number of CWD objects on the field plots, and the effect of canopy cover and basal area on the detection rate was assessed. The CNN model segmented 324 m² of CWD from the 11 virtual test plots, from which 469 m² of CWD had been digitized, resulting in a 69 % segmented-to-digitized CWD ratio.
The best-performing model achieved a precision of 0.722, a recall of 0.500, a Dice score of 0.591, and an intersection over union (IoU) of 0.42. The sample size of field-measured CWD from the field plots was relatively small, and neither canopy cover nor basal area was found to have a statistically significant (at the 0.05 level) effect on the CWD detection rate. For the digitized CWD detection rate, canopy cover had a p-value of 0.059 and basal area a p-value of 0.764; for the model-segmented CWD detection rate, the p-values were 0.052 and 0.884, respectively.
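The four evaluation metrics reported above are standard for binary segmentation. A generic sketch of how they are computed from a predicted mask and a ground-truth mask (the mask arrays here are hypothetical, not data from the thesis):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, recall, Dice and IoU for binary masks (True = CWD pixel)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()       # pixels correctly segmented as CWD
    precision = tp / pred.sum()               # share of predicted CWD pixels that are correct
    recall = tp / gt.sum()                    # share of true CWD pixels that were found
    dice = 2 * tp / (pred.sum() + gt.sum())   # harmonic mean of precision and recall
    iou = tp / np.logical_or(pred, gt).sum()  # intersection over union
    return precision, recall, dice, iou
```

Dice and IoU are monotonically related by Dice = 2·IoU / (1 + IoU); the reported Dice of 0.591 and IoU of 0.42 are consistent with this identity (2 × 0.42 / 1.42 ≈ 0.592).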