
Maasai Mara Classifier

Species Classification in a Camera Trap Project

This repository describes how to use computer vision to classify species in ecological camera trap imagery. The steps are:

  • Train a species classifier on camera trap images and their associated collected labels
  • Apply an animal detector to a larger (unlabeled) set of images and extract the corresponding crops
  • Apply the trained species classifier to the expanded set of animal crops

The figure above illustrates the species classification process.

The original training, test, and validation camera trap images are available on LILA BC here: https://lila.science/datasets/biome-health-project-maasai-mara . Images containing humans have been removed, and some image files were corrupted, but the corresponding animal crops are available upon request.

Prerequisites

Set up Classification Environment

We start by creating a conda environment with the required dependencies.

git clone https://github.com/omipan/camera_traps_classifier/
cd camera_traps_classifier

# Create conda environment
conda create -n ct_classifier python=3.8
conda activate ct_classifier

# Install requirements
pip install -r requirements.txt
# Install PyTorch; the CUDA build below assumes an NVIDIA GPU. See
# https://pytorch.org/get-started/locally/ for a version that suits your setup.
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

Training a species classifier

Preprocessing

Given a labeling procedure that has produced a set of species labels and bounding box coordinates (i.e. the boxes that contain the tagged species), extract the crops and prepare a new metadata file for the downstream machine learning steps. The preprocessing step can also include filtering (e.g. removing very small images or images with missing metadata), sampling, grouping of species, etc.
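As an illustration, the crop extraction might look like the sketch below. The column names (`image_path`, `label`, `x_min`, `y_min`, `x_max`, `y_max`) and the output layout are assumptions for this example, not the repository's actual schema:

```python
import os
import pandas as pd
from PIL import Image

def extract_crops(meta_csv, image_root, out_dir, min_side=32):
    """Crop labeled bounding boxes and write a new metadata file.

    Assumed input columns: image_path, label, x_min, y_min, x_max, y_max
    (absolute pixel coordinates). Boxes smaller than `min_side` pixels on
    either side are filtered out, as an example of a preprocessing filter.
    """
    os.makedirs(out_dir, exist_ok=True)
    meta = pd.read_csv(meta_csv)
    rows = []
    for i, r in meta.iterrows():
        box = (r.x_min, r.y_min, r.x_max, r.y_max)
        if box[2] - box[0] < min_side or box[3] - box[1] < min_side:
            continue  # drop very small boxes
        img = Image.open(os.path.join(image_root, r.image_path))
        crop_name = f"crop_{i:06d}.jpg"
        img.crop(box).save(os.path.join(out_dir, crop_name))
        rows.append({"crop_path": crop_name, "label": r.label})
    # Metadata for the downstream training step
    pd.DataFrame(rows).to_csv(os.path.join(out_dir, "crops_meta.csv"), index=False)
```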

Move training data to the right directory

The data used to train the classifier should be saved under the data/ directory. Before continuing, make sure it is in the following format:

  • data/dataset_name/dataset_name_crops/ The folder that contains all the *.jpg image crops
  • data/dataset_name/dataset_name_meta.csv The dataframe with the metadata (i.e. image paths, labels, etc.) required to run the routines
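For illustration, a minimal metadata file might look like the fragment below; the exact column names the training script expects are defined in the repository, so treat these as placeholders:

```csv
crop_path,label
dataset_name_crops/crop_000001.jpg,zebra
dataset_name_crops/crop_000002.jpg,impala
```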

Train the classifier

Launch the script that trains the classifier in a supervised way:

conda activate ct_classifier

python train_classifier.py --backbone convnext_tiny --transfer_learning --save_test_predictions --dataset dataset_name --metafile dataset_meta.csv

Applying MegaDetector to filter empties and keep animal image crops

Download and Set up MegaDetector

First, we set up MegaDetector and its dependencies based on the instructions described here. We suggest the following setup:

cd camera_traps_classifier/detection
git clone https://github.com/ecologize/CameraTraps
git clone https://github.com/Microsoft/ai4eutils
cd CameraTraps
conda env create --file envs/environment-detector.yml
conda activate cameratraps-detector
export PYTHONPATH="$PYTHONPATH:$HOME/path_to_directory/camera_traps_classifier/detection/CameraTraps:$HOME/path_to_directory/camera_traps_classifier/detection/ai4eutils"
pip install tensorflow==2.10 # only needed when using MDv4

The detection environment is set up now. In the future, whenever you start a new shell, you just need to do:

conda activate cameratraps-detector
export PYTHONPATH="$PYTHONPATH:$HOME/path_to_directory/camera_traps_classifier/detection/CameraTraps:$HOME/path_to_directory/camera_traps_classifier/detection/ai4eutils"

Then, we download the MegaDetector model from this link and place it under the detection/CameraTraps/ folder. Note that we used version 4 in our experiments, but you are encouraged to try newer versions.

Move the custom crop extraction script

We move our custom script for extracting animal detection crops into the classification directory under CameraTraps/:

mv crop_detections_custom.py CameraTraps/classification/

Create a list with all the images to apply MegaDetector

Under CameraTraps/, store a JSON file, e.g. camera_trap_images.json, with all the image paths that we want to run animal detection on. This will be passed as a parameter to the detection and crop extraction routines below.
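A list like this can be generated with a few lines of Python; the images directory passed in is a placeholder for your own data location:

```python
import json
from pathlib import Path

def build_image_list(images_dir, out_json="camera_trap_images.json"):
    """Collect all .jpg paths under images_dir (recursively) into a
    JSON list suitable as input to the detection routines."""
    paths = sorted(str(p) for p in Path(images_dir).rglob("*.jpg"))
    with open(out_json, "w") as f:
        json.dump(paths, f, indent=1)
    return paths
```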

Run MegaDetector on the camera trap images to extract detections

From the CameraTraps directory, we run the following script to apply MegaDetector to the input batch of images:

conda activate cameratraps-detector

python detection/run_detector_batch.py md_v4.1.0.pb camera_trap_images.json camera_trap_image_detections.json --threshold 0.65 --checkpoint_frequency 1000

Note that 0.65 is the confidence threshold used to filter out uncertain MegaDetector detections. You are encouraged to pick a value that works for your project's requirements.

Extract the image crops associated with the detected animals

Given original images and the newly acquired animal detections, we crop the boxes to use them for inference in the subsequent step.

conda activate cameratraps-detector
pip install azure-storage-blob # not used directly, but imported at various points within the MD scripts
mkdir animal_crops
python classification/crop_detections_custom.py --detections_json camera_trap_image_detections.json --cropped_images_dir animal_crops --images_dir "" --threshold 0.65 --detections_crops_df_path df_animal_crops.csv
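Conceptually, the crop script walks MegaDetector's output JSON and crops every above-threshold animal detection. A minimal sketch, assuming MegaDetector's standard output format (normalized `[x, y, width, height]` boxes, category `"1"` = animal) rather than the repository's actual implementation:

```python
import json
import os
from PIL import Image

def crop_animal_detections(detections_json, images_dir, out_dir, threshold=0.65):
    """Crop animal detections above `threshold` from a MegaDetector
    output file and save them as individual .jpg crops."""
    os.makedirs(out_dir, exist_ok=True)
    with open(detections_json) as f:
        results = json.load(f)
    crops = []
    for entry in results["images"]:
        img = None
        for k, det in enumerate(entry.get("detections", [])):
            # Keep confident animal detections only (category "1")
            if det["category"] != "1" or det["conf"] < threshold:
                continue
            if img is None:
                img = Image.open(os.path.join(images_dir, entry["file"]))
            x, y, w, h = det["bbox"]  # normalized [x_min, y_min, width, height]
            W, H = img.size
            box = (int(x * W), int(y * H), int((x + w) * W), int((y + h) * H))
            stem = os.path.splitext(os.path.basename(entry["file"]))[0]
            crop_name = f"{stem}_crop{k}.jpg"
            img.crop(box).save(os.path.join(out_dir, crop_name))
            crops.append(crop_name)
    return crops
```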

Apply the trained species classifier to the expanded dataset

Finally, the trained classifier is applied to the extracted animal crops, and a final file with all the predictions (and corresponding confidences) is stored under ml_inference/:

conda activate ct_classifier
python inference.py --dataset dataset_name --detections_file detection/CameraTraps/df_animal_crops.csv --inference_model mmct_convnext_tiny_2024_04_22__01_10_49 --metafile dataset_name_meta.csv --animal_crop_dir detection/CameraTraps/animal_crops
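Under the hood, inference amounts to running each preprocessed crop through the trained network and recording the softmax confidence of the top class. A hypothetical sketch (the actual model loading and preprocessing live in the repository's scripts):

```python
import torch
import torch.nn.functional as F

def classify_crops(model, crop_tensors, class_names):
    """Return (predicted label, confidence) for a batch of
    preprocessed crop tensors of shape (N, 3, H, W)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(crop_tensors), dim=1)
        conf, idx = probs.max(dim=1)
    return [(class_names[int(i)], float(c)) for i, c in zip(idx, conf)]
```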

Acknowledgements

The detection component of this codebase depends on the MegaDetector GitHub repository.
