Search Results

(Total results 2)
Search Report

  • 1. Balasubramaniam, Sowbaranika. Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Motion-activated cameras, commonly known as camera traps, play a crucial role in wildlife monitoring, biodiversity conservation, and basic ecological science, capturing a diverse range of species in their natural habitats. However, the sheer volume of images generated makes manual analysis by ecologists time-consuming and inefficient. This is particularly frustrating because many images contain no animals at all: camera traps are triggered by wind-blown vegetation or miss the animal because of slow camera response. This thesis addresses these challenges through the automation of image classification and the development of smart camera traps that run artificial intelligence on edge devices. First, we develop a species classification pipeline, consisting of an object detector followed by a species classifier, for a novel camera trap setting. We train a convolutional neural network on 1.2M images to identify 45 species, achieving around 89% accuracy at the class level without manual intervention. This work demonstrates how deep learning combined with human oversight can enable automatic or semi-automatic, precise, and efficient surveys of small animals in the wild, particularly in novel settings or for novel species. The proposed pipeline reaches 95% accuracy in image classification, with 11% of images requiring manual review at the class level, allowing images to be annotated automatically while maintaining accuracy. Because camera traps generate large numbers of images, the delay between data capture and image processing can lead to the loss of critical information; real-time labeling at capture time mitigates this. Following the traditional classifier, we therefore investigate methods for deploying an AI model on the camera trap itself. Smart camera traps would result in real (open full item for complete abstract)

    Committee: Tanya Berger-Wolf (Advisor); Christopher Stewart (Committee Member); Wei-Lun Chao (Committee Member)

    Subjects: Computer Engineering
  • 2. Ramesh Babu, Reshma. Imageomics Approach to Understanding Visual Biological Trait Similarities using Butterfly Mimicry as a Model System

    Master of Science, The Ohio State University, 2023, Computer Science and Engineering

    One of the central goals of evolutionary biology is to quantify the fine variation in adaptive traits subject to natural selection and to identify the causal genes. One system in which natural selection can be characterized precisely is the tropical Heliconius butterflies. Two species, Heliconius erato and H. melpomene, both warn off predators with bold wing patterns and have evolved to resemble each other (mimicry). This thesis presents ongoing work to extend machine learning (ML) algorithms to capture the biological mechanisms underlying mimicry in Heliconius butterflies. We apply an imageomics approach, which uses biology-guided ML to extract biological traits from images. Traditionally, a common point of contact between biology and computer science has been using ML and computer vision to classify species. Going beyond species classification to identify biological traits that are shared or differ between mimic pairs requires restructuring ML models to incorporate biologically relevant information. We translate the biological question into computational abstractions in several ways. First, we ask whether the similarity and difference of mimic pairs carry computationally detectable information in images. Building on previous work in this field, we use a convolutional neural network with triplet loss to create a latent embedding space, with different variations of the triplet chosen across species and co-mimic pairs. Pairwise comparisons across subspecies show that traditionally hypothesized co-mimics are significantly closer to each other in the embedding than other subspecies pairs are, answering our question in the affirmative. Having identified the pattern elements central to mimicry, we leverage biological knowledge of wing pattern development to link phenotypic variation with specific genotypes. The need to place landmarks manually is a common limitation of morphometric studies in biology; here, we automate this process (open full item for complete abstract)

    Committee: Wei-Lun Chao (Committee Member); Tanya Berger-Wolf (Advisor)

    Subjects: Biology; Computer Science; Conservation; Ecology; Genetics
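The confidence-based triage described in the first abstract (auto-accepting high-confidence classifier predictions and routing the rest to human review) can be sketched as follows. This is an illustrative NumPy sketch, not the thesis code; the `triage` function name, the threshold value, and the toy probabilities are all assumptions.

```python
import numpy as np

def triage(probs, threshold=0.9):
    """Split softmax outputs into auto-labeled and manual-review sets.

    probs: (N, C) array of per-class probabilities.
    Returns (auto_idx, review_idx, labels), where labels[i] is the argmax
    class for auto-labeled images and -1 for images needing review.
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=1)          # top-1 confidence per image
    labels = probs.argmax(axis=1)     # top-1 class per image
    auto = conf >= threshold          # confident enough to auto-label?
    labels = np.where(auto, labels, -1)
    return np.flatnonzero(auto), np.flatnonzero(~auto), labels

# Toy batch of three images over three classes.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> auto-labeled as class 0
    [0.50, 0.30, 0.20],   # uncertain -> manual review
    [0.05, 0.93, 0.02],   # confident -> auto-labeled as class 1
])
auto_idx, review_idx, labels = triage(probs, threshold=0.9)
print(auto_idx.tolist(), review_idx.tolist(), labels.tolist())
# -> [0, 2] [1] [0, -1, 1]
```

Raising the threshold trades a smaller auto-labeled fraction for higher accuracy on it, which is the knob behind the abstract's "95% accuracy with 11% manual review" figure.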
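The triplet loss mentioned in the second abstract trains an embedding in which an anchor image is pulled toward a positive example (e.g. its co-mimic) and pushed away from a negative (e.g. a non-mimic subspecies). A minimal sketch of the standard triplet loss, assuming squared Euclidean distances and toy 2-D embeddings — not the thesis implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors:
    max(0, ||a - p||^2 - ||a - n||^2 + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the co-mimic (positive) is already near the anchor
# and the non-mimic (negative) is far, so the loss is zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # co-mimic embedding
n = np.array([2.0, 0.0])   # different subspecies
print(triplet_loss(a, p, n))  # -> 0.0 (d_pos=0.01, d_neg=4.0, margin=1.0)
```

Varying how the triplets are drawn (within species vs. across co-mimic pairs), as the abstract describes, changes which notion of similarity the embedding space encodes.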