Making robots more perceptive
This idea scales to networks of robots working in tandem to accomplish even more complex tasks. In this case, when one machine falters or fails to collaborate with the others, it can cause chaos: Picture a drone flying away from its fleet and failing to photograph its assigned area, or a self-driving car getting too close to another and disrupting a carefully designed platoon.
Making networks like these smarter, more functional, and more efficient is the subject of two research projects at Lehigh University led by Nader Motee, an associate professor of mechanical engineering and mechanics at the P.C. Rossin College of Engineering and Applied Science. Motee is the principal investigator for both projects, with funding totaling $1.13 million.
Real-time perception and planning
With $680,000 in support over four years from the Office of Naval Research (ONR), Motee is investigating how to represent streaming data (e.g., images taken by an onboard high-frame-rate camera) efficiently for feature extraction, learning, planning, and control objectives.
For context, Motee uses the example of map classification using a fleet of flying robots. Although a single robot could accomplish this task, he says, the process might take hours or days. The timeline shrinks to minutes when a hundred or so robots do similar, more focused tasks, but the robots must communicate with one another to exchange relevant information and increase efficiency and resiliency while working in uncertain environments.
"The challenge is to figure out which pairs of robots should talk to one another, and how often, and what information to share," Motee says.
The amount of data the cameras on these flying robots collect is staggering—anywhere between 200 and 1,000 frames per second, with each frame four to five megabytes in size. The robots must process this data in real time because storing it all would be impossible. But not all data is relevant to the success of the task, nor does every robot in the fleet need to receive every bit of data from every other robot. It is important to represent the data efficiently to enable real-time learning, task planning, and control.
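A quick back-of-the-envelope calculation shows why onboard processing is unavoidable at these rates. The frame rates and sizes below come from the article; the fleet size of 100 robots echoes the map-classification example, and everything else is illustrative.

```python
# Estimate raw camera-stream data rates for the fleet described above.
# Frame rates and sizes are from the article; the fleet size is the
# ~100-robot example mentioned earlier.
frame_rates = (200, 1_000)   # frames per second (low, high)
frame_size_mb = (4, 5)       # megabytes per frame (low, high)
fleet_size = 100

low = frame_rates[0] * frame_size_mb[0]    # MB/s per robot, low end
high = frame_rates[1] * frame_size_mb[1]   # MB/s per robot, high end

print(f"Per-robot stream: {low / 1000:.1f}-{high / 1000:.1f} GB/s")
print(f"Fleet of {fleet_size}: {fleet_size * low / 1000:.0f}-"
      f"{fleet_size * high / 1000:.0f} GB/s")
```

Even at the low end, a single robot produces nearly a gigabyte per second, which is why the data must be compressed into task-relevant representations rather than stored or broadcast wholesale.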
Motee says the human brain works in a similar way. "If my assigned task is to run to get out of my ajar office door, knowing that there's a laptop on the desk on my other side will become irrelevant to the task," he says. "That's not informative data. However, I do care that there's a chair on my path that could block me if I run in that specific direction. And if there were other people in the area doing a similar task, I'd want to make sure they were aware of this relevant information, as well."
Getting robots to make these determinations for themselves is an extremely complicated but important step in the long-term development of artificial intelligence and autonomy. This is at the heart of the military's interest in his research.
"[This work] will be relevant for time-sensitive missions and tasks when humans cannot stay in the loop to monitor the deployed robots," Motee says. "Achieving long-term autonomy and using onboard intelligent mechanisms will help robots survive for long periods of time during their missions in uncertain environments."
Risk-aware planning and control
Motee has also received funding for another vein of robotics research with a military connection: a three-year, $450,000 project supported by the Air Force Office of Scientific Research (AFOSR) that's primarily focused on risk analysis of nonlinear dynamical networks. The goal is to improve robot planning and control by transforming a robot's dynamic behaviors from nonlinear (in finite dimensions) to linear (in infinite dimensions).
Robots behave in a nonlinear manner. "For instance, if we change the input signal to a robot by 10 percent," he explains, "its output will not change by the same 10 percent. Their nonlinear behavior makes control design and task planning problems very challenging."
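The nonlinear-to-linear transformation the project pursues can be sketched with a toy example. The specific two-state system and the lifting below are illustrative choices (in the spirit of Koopman operator theory, where such liftings are generally infinite-dimensional), not the project's actual models: adding the extra coordinate x1² makes the nonlinear dynamics evolve exactly linearly.

```python
import numpy as np

# Illustrative nonlinear discrete-time system in a 2-D state (x1, x2).
# The parameters lam and mu are arbitrary choices for this sketch.
lam, mu = 0.9, 0.5

def nonlinear_step(x):
    """One step of the nonlinear dynamics in the original coordinates."""
    x1, x2 = x
    return np.array([lam * x1, mu * x2 + (lam**2 - mu) * x1**2])

# Lifted state z = (x1, x2, x1^2). In these coordinates the same
# dynamics are exactly linear: z_next = A @ z.
A = np.array([[lam, 0.0, 0.0],
              [0.0, mu,  lam**2 - mu],
              [0.0, 0.0, lam**2]])

x = np.array([1.0, -0.5])
z = np.array([x[0], x[1], x[0] ** 2])

for _ in range(10):
    x = nonlinear_step(x)
    z = A @ z

# The first two lifted coordinates track the nonlinear state exactly.
print(np.allclose(z[:2], x))  # True
```

Once the dynamics are linear in the lifted coordinates, the well-developed machinery of linear control and risk analysis can be brought to bear; the difficulty in general is that the lifted space is infinite-dimensional, as the article notes.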
Motee's team will investigate what conditions make nonlinear systems—such as power networks and platoons of self-driving cars—more prone to failure.
"These are two great examples where local failures may result in global, or systemic, failures," he says. "If a tree branch falls on a power line, it may cause that line to fail, which may cause nearby power lines to overload and fail. These local events may result in a global power outage, which is referred to as a systemic failure."
"With platoons of self-driving cars," he continues, "if the two leading cars fail to maintain a safe distance from each other and collide, it will disrupt all other cars in the platoon and result in several collisions."
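The platoon example can be sketched with a minimal simulation. The car-following rule and the numbers here are assumptions made for illustration only: each car loses some gap before it reacts to a sudden stop, and any overrun shoves the next gap, so tight spacing turns one local failure into a chain of collisions.

```python
def cascade_failures(gaps, reaction_loss=3.0, safe_gap=1.0):
    """Model a sudden stop at the head of a platoon.

    gaps: initial inter-car gaps in meters, front to back.
    Each following car loses `reaction_loss` meters of gap before
    reacting, plus whatever its predecessor overran. Any gap that
    shrinks below `safe_gap` counts as a collision, and the shortfall
    propagates backward. Returns indices of colliding gaps.
    """
    collisions = []
    overrun = 0.0
    for i, gap in enumerate(gaps):
        remaining = gap - reaction_loss - overrun
        if remaining < safe_gap:
            collisions.append(i)
            overrun = safe_gap - remaining  # shortfall shoves the next car
        else:
            overrun = 0.0
    return collisions

print(cascade_failures([10, 10, 10]))  # ample spacing: []
print(cascade_failures([3, 3, 3, 3]))  # tight spacing: [0, 1, 2, 3]
```

With generous spacing the disturbance dies out after the first car; with tight spacing every gap fails in sequence, which is exactly the local-to-systemic failure pattern the project aims to characterize and prevent.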
This project will explore how engineers can mitigate the effects of local failures in networks and prevent them from escalating into systemic failures.
Provided by Lehigh University