
Speeding swarms of sensor robots

A new algorithm ensures that robotic environmental sensors will be able to focus on areas of interest without giving other areas short shrift.
Image caption: One of two Slocum gliders owned and operated by the USC Center for Integrated Networked Aquatic PlatformS (CINAPS). Image: Smith et al.

Concerns about the spread of radiation from damaged Japanese nuclear reactors — even as scientists are still trying to assess the consequences of the year-old Deepwater Horizon oil spill — have provided a painful reminder of just how important environmental monitoring can be. But collecting data on large expanses of land and sea can require massive deployments of resources.

At the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in May, MIT researchers will present a new algorithm enabling sensor-laden robots to focus on the parts of their environments that change most frequently, without losing track of the regions that change more slowly. At the same conference, they’ll present a second paper describing a test run of the algorithm on underwater sensors that researchers at the University of Southern California (USC) are using to study algae blooms.

Developed by Daniela Rus, a professor of computer science and electrical engineering, and postdocs Mac Schwager and Stephen Smith (now an assistant professor at the University of Waterloo in Ontario), the algorithm is designed for robots that will monitor an environment for long periods of time, tracing the same routes over and over. It assumes that the data of interest — temperature, the concentration of chemicals, the presence of organisms — fluctuate at different rates in different parts of the environment. In ocean regions with strong currents, for instance, chemical concentrations might change more rapidly than they do in more sheltered areas.

Floor it

In its current version, the algorithm assumes that researchers already have a mathematical model of the rates at which conditions change in different parts of the environment. The algorithm simply determines how the robots should adjust their velocities as they trace their routes. For instance, given particular rates of change along a route, would it make more sense to make one pass in an hour, slowing down considerably in areas of frequent change, or to make four or five passes, collecting less detailed data but taking more regular samples?
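To make that tradeoff concrete, here is a toy calculation in Python. Everything in it (the five-segment route, the rates of change, the time budget, and the two candidate policies) is invented for illustration; it is not the researchers' algorithm, only the flavor of the question it answers.

```python
import numpy as np

# Toy model: a loop route split into five segments, each with its own
# rate of change. All rates, budgets, and policies are invented.
rates = np.array([0.1, 0.1, 1.0, 0.1, 0.1])  # change per unit time
budget = 10.0                                # total patrol time available

def peak_staleness(dwell_fracs, n_passes):
    """Split the budget into n_passes laps, spending dwell_fracs[i] of
    each lap at segment i. Return the worst accumulated change any
    segment sees between consecutive visits."""
    lap_time = budget / n_passes
    wait = lap_time - dwell_fracs * lap_time  # time between visits to i
    return float(np.max(rates * wait))

uniform = np.full(5, 0.2)            # constant speed everywhere
weighted = rates / rates.sum()       # linger where change is fastest

print("one slow, weighted pass :", peak_staleness(weighted, 1))  # ~2.86
print("five fast, uniform passes:", peak_staleness(uniform, 5))  # 1.6
```

In this made-up example, five fast passes leave less unmeasured change behind than one slow, weighted pass, but shuffling the rates can flip the answer, which is part of what makes choosing a policy, and proving it optimal, difficult.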

“From a practical point of view, it seems like an easy problem,” says Calin Belta, an assistant professor of mechanical engineering, systems engineering and bioinformatics at Boston University, who was not involved in the research. But it turns out to be a monstrously complex calculation. “It’s very hard to come up with a mathematical proof that you can really optimize the acquired knowledge,” he adds.

The MIT researchers draw an analogy with dust accumulating on a floor — dust that’s cleared whenever a sensor passes nearby. Because environmental change occurs at different rates in different areas, the dust piles up unevenly. The researchers were able to show that, with their algorithm, the height of the piles of dust would never exceed some limit: Only so much change could occur in any area before the sensor would measure it.
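A minimal simulation of that analogy, with invented rates and dwell times, shows the bound emerging: the tallest pile stops growing once the patrol settles into its loop.

```python
import numpy as np

# Each segment accumulates "dust" at its own rate and is wiped clean
# whenever the sensor visits it. Rates and dwell times are assumptions
# for illustration only.
rates = np.array([0.1, 0.1, 1.0, 0.1, 0.1])
dwell = np.array([0.7, 0.7, 2.2, 0.7, 0.7])  # time spent at each segment

dust = np.zeros_like(rates)
peak = 0.0
for lap in range(1000):               # patrol the same loop many times
    for i in range(len(rates)):
        dust += rates * dwell[i]      # every pile grows as time passes
        peak = max(peak, dust.max())  # tallest pile seen so far
        dust[i] = 0.0                 # visiting segment i clears its pile

print("tallest pile ever observed:", round(peak, 2))  # settles at ~5.0
```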

Ups and downs

Although the MIT researchers’ algorithm is designed to control robots’ velocity, the first robots on which it was tested don’t actually have velocity controllers. USC researchers have been studying harmful algae blooms using commercial robotic sensors designed by the Massachusetts company Webb Research. Because the sensors are intended to monitor ocean environments for weeks on end, they have to use power very sparingly, so they have no moving parts. Each sensor is shaped like an airplane, with an inflatable bladder on its nose. When the bladder fills, the sensor rises to the surface of the ocean; as the bladder empties, the sensor glides downward.

The more rapidly the bladder fills and empties, the steeper the sensor’s trajectory up and down, and the longer it takes to traverse a given distance — so it’s possible to concentrate the sensor’s attention in a particular location. Working with colleagues in the USC computer science department, the MIT team developed an interface that allows ocean researchers to specify regions of interest by drawing polygons around them on a digital map and indicating their priority with a numerical rating. The new algorithm then determines a trajectory for the sensor that will maximize the amount of data it collects in high-priority regions, without neglecting lower-priority regions.
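The sketch below mimics that style of interface. The ray-casting polygon test is standard, but the region names, the priorities, and the simple rule of splitting survey time in proportion to priority are assumptions made for illustration; they are not the actual USC/MIT software.

```python
def point_in_polygon(pt, poly):
    """Standard ray-casting test: is pt inside the polygon?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Hypothetical regions drawn on a map, with numerical priority ratings.
regions = {
    "bloom_core":  {"poly": [(0, 0), (4, 0), (4, 3), (0, 3)], "priority": 5},
    "outer_shelf": {"poly": [(4, 0), (9, 0), (9, 3), (4, 3)], "priority": 1},
}

# One simple allocation rule: survey time proportional to priority.
mission_hours = 48.0
total = sum(r["priority"] for r in regions.values())
for name, r in regions.items():
    print(f"{name}: {r['priority'] / total * mission_hours:.1f} h of survey time")

print(point_in_polygon((2.0, 1.5), regions["bloom_core"]["poly"]))  # True
```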

At the moment, the algorithm depends on either a prior estimate of the rates of change in an environment or on researchers’ prioritization of regions. But in principle, a robotic sensor should be able to deduce rates of change from its own measurements, and the MIT researchers are currently working to modify the algorithm so that it can revise its own computations in light of new evidence. “That’s going to be a hard problem as well,” Belta says. “But they have the right background, and they’re strong, so I think they might be able to do it.”

The researchers also envision that the algorithm could prove useful for fleets of robots performing tasks other than environmental monitoring, such as tending produce, or — in a more literal application of the vacuuming-dust metaphor — cleaning up environmental hazards, such as oil leaking from underwater wells.
