ZSL puts machine learning to work on studying camera trap data

Jessica Twentyman, February 1, 2018
Summary:
Conservation charity ZSL is working with Google to develop models that identify wildlife species from image data captured in the wild.

When it comes to studying the biodiversity of an area or monitoring a particular wildlife species, one of the most effective tools that conservationists and scientists use is the camera trap.

These are digital cameras, rigged with sensors that detect heat or motion and left in a particular location.

When an animal comes into range, the sensors detect its presence and automatically trigger the camera’s shutter, capturing a photo (or a sequence of photos) of the passer-by.

Camera traps are used extensively by the Zoological Society of London (ZSL), the organization perhaps best-known for running London Zoo, but also an international scientific, conservation and educational charity in its own right.

But while the technology has proved invaluable in monitoring Sumatran tigers in Indonesia and Liberia’s pygmy hippopotamus population, for example, it comes with a downside, says Sophie Maxwell, head of conservation technology at ZSL – a vast quantity of image data that needs to be analysed:

Projects that deploy a large number of camera traps over several months can send back millions of images. In the past, scientists would sit there manually inspecting each image and then creating the species identification data for each one that is needed to create a biodiversity report.

Basically, she says, it could take around nine months to produce a report on even an average-sized project, by which time the situation may well have changed, due to poaching or incursion by a predator species. That, she adds, could mean the conservation strategy for animals in that area needs to change in light of the newer information.

Speeding up the process

For some years, ZSL has been exploring how it might use technology to speed up data processing – by recruiting ‘citizen scientists’, for example, to help in the manual image-inspection effort. But it now seems that artificial intelligence, and more specifically machine learning, could have a far more dramatic effect on both the speed and the accuracy of that work, which is where a partnership between ZSL and Google comes into play.

Since the start of 2017, ZSL has been one of several organisations working with the tech giant to refine its Cloud AutoML Vision tool, alongside fashion retailer Urban Outfitters and media and entertainment giant Disney.

Google Cloud AutoML Vision is the first service in the Cloud AutoML suite, all of which is aimed at helping businesses with limited in-house expertise tap into machine learning. AutoML Vision, for example, helps them build custom machine learning models for image recognition, using a drag-and-drop interface to upload images, train and manage models, and then deploy them directly on Google Cloud.
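For a sense of what that workflow involves under the hood, here is a minimal sketch using the Cloud AutoML Python client of the era (google-cloud-automl, v1beta1); the project, bucket and dataset names are placeholders for illustration, not details of ZSL’s actual setup.

# Minimal sketch: train a custom AutoML Vision model on labelled camera-trap
# images. Assumes the google-cloud-automl Python client (v1beta1); all names
# and IDs below are illustrative placeholders.
from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = client.location_path("my-gcp-project", "us-central1")  # placeholder project

# 1. Create a dataset to hold labelled camera-trap images.
dataset = client.create_dataset(parent, {
    "display_name": "camera_trap_species",
    "image_classification_dataset_metadata": {},
})

# 2. Import images from a CSV manifest in Cloud Storage that pairs each
#    gs:// image path with a species label. (In practice you would wait for
#    this long-running operation to finish before training.)
client.import_data(dataset.name, {
    "gcs_source": {"input_uris": ["gs://my-bucket/camera_trap_labels.csv"]},
})

# 3. Train a custom image-classification model on that dataset.
client.create_model(parent, {
    "display_name": "camera_trap_species_v1",
    "dataset_id": dataset.name.split("/")[-1],
    "image_classification_model_metadata": {"train_budget": 1},  # one compute hour
})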

The technology has proved a good fit at ZSL, says Maxwell:

Previously, AI and machine learning technologies have been pretty inaccessible to an organisation like us. You need a data scientist, really, to write your own model, and the pre-trained models that are commercially available tend to be pretty basic - they can distinguish a cloud from a cup, but not deliver the intricate level of detail we need, like identifying a particular species within a group.

So what we’ve been doing with Google is refining the usability of the AutoML Vision tool. We’re working to build models based on existing data that we hold, so that for locations where we regularly do camera trapping, such as Borneo or Costa Rica, we have the models ready when new data comes in, so we can compare this year’s data with last year’s data, for example. Over time we’ll have custom models for particular locations, particular species sets, for particular environments - such as forest, savannah or Antarctica. So far, the results are looking really good.

Cloud-hosted ML models

This is exciting work, she says, because the conservation technology team is a relatively small group of eight people without specialist data science skills, yet it has been able to build such models, host them in the Google Cloud and open them up to other conservation groups around the world.

It will also be possible to call these cloud-based models from inside an application via an API. That’s already happening with ZSL’s Instant Detect technology for wildlife and threat monitoring, which relies on connected sensors and camera traps to track animals, identify intrusions by poachers, and alert rangers to situations to which they might need to respond.
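As an illustration, a call to a deployed model from application code might look something like the sketch below, again using the Cloud AutoML Python client; the model ID, score threshold and file name are assumptions made for the example, not details of Instant Detect.

# Minimal sketch: ask a deployed AutoML Vision model to classify one
# camera-trap image. Assumes the google-cloud-automl Python client (v1beta1);
# the project and model IDs are illustrative placeholders.
from google.cloud import automl_v1beta1 as automl

prediction_client = automl.PredictionServiceClient()
model_name = prediction_client.model_path(
    "my-gcp-project", "us-central1", "ICN1234567890")  # placeholder model ID

# Read the image captured in the field.
with open("trap_image.jpg", "rb") as f:
    payload = {"image": {"image_bytes": f.read()}}

# Request a species prediction, keeping only reasonably confident labels.
response = prediction_client.predict(model_name, payload, {"score_threshold": "0.7"})
for result in response.payload:
    print(result.display_name, result.classification.score)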

But this is just a start, says Maxwell:

Our longer-term goal is to provide a health check for the planet, and that’s a very big ambition obviously, but as it starts to become possible to perform more of these studies and get the results back more quickly, our view of biodiversity will grow and grow.

In a sense, she says, technology trends are on ZSL’s side. Sensors are getting cheaper all the time, as are cloud-based processing and storage. Companies like Google are clearly working to democratize once arcane technologies such as machine learning and get them into new hands. Maxwell concludes that this is welcome, but more could be done:

But we’re a charity, so we rely on partnerships and funding and a helping hand is always welcome. With more support, the kind of work we could get done would be amazing.
