Image analysis

Plant ID app (part 2): REST API

In part 1 of this blog post, we downloaded ~25,000 images of 100 plant species and trained a deep learning classification model. The 100 plant species are included in the Danish stream plant index (DVPI). In part 2, we create a REST API with endpoints/services that can be accessed from a very simple landing page. All code from parts 1 and 2 of this blog post can be found on GitHub.
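As a rough illustration of what such a prediction endpoint can look like (not the repository's actual code), the sketch below uses Flask with a TorchScript model; the model file, label file and field names are placeholder assumptions.

```python
# Minimal sketch of a /predict endpoint (Flask), not the app's actual code.
# Model file, class list and preprocessing values are placeholder assumptions.
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)

model = torch.jit.load("plant_classifier.pt")  # hypothetical exported model
model.eval()
class_names = [line.strip() for line in open("species.txt")]  # hypothetical label file

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the image as a multipart form field named "file"
    img = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    top = torch.topk(probs, k=5)
    return jsonify({
        "predictions": [
            {"species": class_names[int(i)], "probability": float(p)}
            for p, i in zip(top.values, top.indices)
        ]
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A landing page (or `curl -F file=@plant.jpg http://localhost:8000/predict`) can then POST an image and receive the top predicted species as JSON.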

Plant ID app (part 1): Data and model training

Plant species can be truly difficult to tell apart, and this job often requires expert knowledge. However, when images are available, computer vision methods can be used to guide us in the right direction. Deep learning methods are very useful for image analysis, and training convolutional neural networks has become the standard way to solve a wide range of image tasks, including segmentation and classification. Here, we will train a lightweight image classification model to identify 100 different plant species.
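For a flavour of what such training looks like, here is a minimal transfer learning sketch with a lightweight torchvision backbone; the data path, backbone choice and hyperparameters are illustrative assumptions, not the exact setup from the post.

```python
# Sketch of transfer learning with a lightweight backbone (torchvision).
# Assumes images arranged in one folder per species; paths and hyperparameters
# are illustrative, not the values used in the post.
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=tfms)  # hypothetical path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the classification head
# with one output per plant species
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(train_ds.classes))
model = model.to(device)

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```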

Parsing sonar data in Python using NumPy

Recreational-grade sonar equipment can collect vast amounts of data. Unfortunately, the data is often hidden in some kind of proprietary binary format. However, efforts in reverse engineering such formats have made it possible to extract the information. I have spent time tracking down some of this information, which has also resulted in an R package that can read the ‘.sl2’ and ‘.sl3’ file formats collected using Lowrance sonar equipment. See also the sllib Python library, which fills a similar gap.
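The general NumPy pattern is to describe the binary record layout with a structured dtype and read it directly from the file. The field names, types and header size below are purely illustrative; the real ‘.sl2’/‘.sl3’ layouts come from the reverse-engineering work referenced above.

```python
# General pattern for reading a proprietary binary log with NumPy structured
# dtypes. Field names, types and header size are illustrative only; the actual
# '.sl2'/'.sl3' layouts are documented by the reverse-engineering efforts
# mentioned in the post.
import numpy as np

HEADER_BYTES = 8  # assumed fixed-size file header

record_dtype = np.dtype([
    ("frame_offset", "<u4"),   # little-endian unsigned 32-bit int
    ("water_depth",  "<f4"),   # little-endian 32-bit float
    ("speed",        "<f4"),
    ("longitude",    "<i4"),
    ("latitude",     "<i4"),
    ("timestamp_ms", "<u4"),
])

with open("sonar_log.sl2", "rb") as f:   # hypothetical file name
    header = f.read(HEADER_BYTES)
    records = np.fromfile(f, dtype=record_dtype)

print(records["water_depth"][:10])       # vectorized access to a single field
```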

Creating mosaics from Sentinel 2 satellite imagery

Satellite imagery is collected at large scale and made freely available by institutions such as ESA and NASA. This data is collected at high spatial (10-30 m) and temporal (~2 weeks) resolution, making it ideal for many applications. However, going from raw satellite imagery to nice-looking image mosaics can be quite a challenge. Here, I show how to use the gdalcubes R package to produce a nationwide image mosaic of Denmark.
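The post itself uses gdalcubes in R; purely as a point of comparison (and not the post's workflow), already-downloaded tiles can also be stitched with GDAL's Python bindings, as sketched below with placeholder file paths.

```python
# Not the post's gdalcubes/R workflow: a rough alternative sketch that stitches
# already-downloaded Sentinel-2 GeoTIFF tiles into one mosaic using GDAL's
# Python bindings. File paths are placeholders, and the tiles are assumed to
# share the same projection and resolution.
import glob

from osgeo import gdal

tiles = sorted(glob.glob("sentinel2_tiles/*.tif"))  # hypothetical tile folder

# Build a virtual mosaic first (cheap, no pixel copying), then write a GeoTIFF
vrt = gdal.BuildVRT("mosaic.vrt", tiles)
gdal.Translate("denmark_mosaic.tif", vrt,
               creationOptions=["COMPRESS=DEFLATE", "TILED=YES"])
vrt = None  # flush and close the dataset
```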

Semantic segmentation using U-Net with PyTorch

Deep learning is here to stay and has revolutionized the way data is analyzed. Furthermore, it is straightforward to get started. Recently, I played around with the fastai library to classify fish species but wanted to go further behind the scenes and dig deeper into PyTorch. As part of another project, I have used a U-Net to perform semantic segmentation of ‘pike’ in images. Training was done on Google Colab and on a local GPU-powered workstation, which is excellent for smaller experiments.
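To illustrate the U-Net idea (an encoder-decoder with skip connections), here is a minimal PyTorch sketch for binary segmentation such as pike vs. background; the channel sizes and depth are illustrative, not the architecture used in the project.

```python
# Minimal U-Net-style model in PyTorch: an encoder-decoder with skip
# connections, sketched for binary segmentation (e.g. pike vs. background).
# Channel sizes and depth are illustrative, not the project's exact setup.
import torch
from torch import nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        self.enc1 = double_conv(3, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # raw logits; apply sigmoid for a probability mask

model = SmallUNet()
mask_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```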

Fish species classification using deep learning and the fastai library

Deep learning is everywhere. The surge of new methods for analyzing all kinds of data is astonishing. Image analysis in particular has been impacted by deep learning, with new methods and rapid improvements in model performance for many different tasks. Convolutional neural networks (CNNs) can be used to classify images with high accuracy, and new libraries have made it easier than ever to build and train such networks. The best thing is that you do not need large amounts of data or specialized GPU hardware to experiment with techniques such as transfer learning, where you only need to fine-tune the last part of a pre-trained network.
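A fastai-style transfer learning run can be very short; the sketch below assumes images organised in one folder per species, and the folder name and number of fine-tuning epochs are placeholders rather than the post's exact setup.

```python
# Sketch of fastai transfer learning for fish species classification.
# Assumes one sub-folder per species; path and epochs are placeholders.
from fastai.vision.all import *

path = Path("fish_images")  # hypothetical folder with one sub-folder per species

dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms(),
)

# Start from a pre-trained ResNet and fine-tune (head first, then all layers)
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(3)

learn.show_results()
```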