Background
Camera traps are environmental research tools used for passive, remote wildlife monitoring. Triggered by a motion sensor, these cameras can gather large quantities of data about wildlife presence and activity without a researcher needing to be present.
However, the data produced can also be challenging to sort through due to sheer quantity. Especially when attempting to identify rare or elusive species, there might be thousands of “blanks” that are triggered by the movement of vegetation in the wind or by a non-target species. Though this research technique is relatively low-intensity in setup and data collection, the inefficiency of data analysis can negate these benefits.
A More Efficient Solution?

If you’ve recently taken a picture of an animal with a cell phone, you might have noticed that the phone offers identification suggestions for that animal. This neat feature is known as visual intelligence. However, visual intelligence technology is not just a shortcut for identifying animals you encounter; it is also becoming increasingly relevant for wildlife and biodiversity research.
This environmental innovation is being paired with camera trapping technology to streamline camera trap data analysis. In 2019, Google released a website called Wildlife Insights, which uses the AI (artificial intelligence) model SpeciesNet to process camera trap images. Since then, the model has been trained to identify nearly 1300 different wildlife species and over 230 taxonomic classes, and it can also identify “blanks” that contain no animal.
How Does it Work?
Each image that the SpeciesNet model correctly identifies contributes to its training dataset. This dataset is massive and contains images with variable lighting quality, image resolution, background or foreground vegetation, and target species placement in the frame, making the tool more versatile and accurate across many different images. Even if the target species is blurry or partially obscured by vegetation, the SpeciesNet pattern recognition AI often accurately identifies the animal, and these more challenging identifications help hone the technology even further.
Currently, the SpeciesNet model detects 99.4% of images containing animals and is correct 98.7% of the time when identifying that there is an animal present. When making a species-level identification, the model is accurate 94.5% of the time.
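These figures correspond to two standard classification metrics: the detection rate (recall, the share of animal images the model finds) and precision (the share of the model's "animal" flags that are correct). A minimal sketch of the arithmetic, using illustrative counts invented for this example rather than real Wildlife Insights data:

```python
# Hypothetical counts for a batch of camera trap images.
# These numbers are illustrative only, chosen to match the
# percentages quoted above; they are not from Wildlife Insights.
animal_images = 8000   # images that actually contain an animal
detected = 7952        # animal images the model correctly flags
flagged_total = 8057   # all images the model flags as containing an animal

recall = detected / animal_images      # share of real animals found (~99.4%)
precision = detected / flagged_total   # share of flags that are correct (~98.7%)

print(f"recall:    {recall:.1%}")
print(f"precision: {precision:.1%}")
```

The distinction matters in practice: a high recall means few animals are accidentally discarded as blanks, while a high precision means reviewers waste little time on falsely flagged images.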

Why Does it Matter?
Using the current Wildlife Insights interface, researchers or even casual camera trappers can upload images. The SpeciesNet model immediately identifies which images have animals in them and classifies as many species as possible. The user can then verify each of the identifications, which creates opportunities for data analysis while also continuing to train the SpeciesNet model for increased accuracy.
Rather than requiring researchers to click through images for hours, waiting for just one potentially blurry shot of the target species, this technology significantly reduces the number of images to review by eliminating blanks. The species identification function also allows the researcher to simply verify the identification as accurate, instead of needing to type out data manually. When an image set is fully processed, the Wildlife Insights website can be used to generate tables, charts, and graphics of the species identification data, so there is no need to export data into other software.
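The triage step described above, setting aside confident blanks and queuing everything else for human verification, can be sketched in a few lines. This is a hedged illustration, not the actual Wildlife Insights pipeline: the `Prediction` class, the `triage` function, and its confidence threshold are all hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """Hypothetical model output for one camera trap image."""
    image: str
    label: str        # e.g. "blank" or a species name
    confidence: float  # model confidence in [0, 1]

def triage(predictions, blank_threshold=0.95):
    """Split predictions into discardable blanks and a human review queue.

    Only images the model calls "blank" with high confidence are set
    aside; animals and uncertain blanks go to a reviewer.
    """
    blanks, review = [], []
    for p in predictions:
        if p.label == "blank" and p.confidence >= blank_threshold:
            blanks.append(p)    # confidently empty: set aside
        else:
            review.append(p)    # animal or uncertain: human verifies
    return blanks, review

# Example with three mock predictions:
preds = [
    Prediction("img_001.jpg", "blank", 0.99),
    Prediction("img_002.jpg", "white-tailed deer", 0.91),
    Prediction("img_003.jpg", "blank", 0.60),  # too uncertain to discard
]
blanks, review = triage(preds)
print(len(blanks), len(review))  # 1 blank set aside, 2 for human review
```

Keeping low-confidence blanks in the review queue reflects the human-validation practice the article recommends: the cost of a reviewer glancing at an empty frame is far lower than the cost of silently discarding a rare species.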
The Future
As this technology becomes more advanced, the necessary time to process camera trap data will likely continue to decrease. With this added efficiency, camera trapping may become an even more effective research tool for remote, passive species and biodiversity monitoring.
Though the AI identification and elimination of blanks can substantially cut down on data processing time, the algorithm can still make mistakes. Researchers caution that all species identifications made by the AI should go through careful human validation, especially while the technology is still in its early stages.
With these best practices in mind, this technology is a promising step towards more efficient and less invasive camera trapping and data processing.