13-06-2017 14:30  Graduate Seminar

Obstacle Avoidance in an Unstructured Environment

During the past few years there has been tremendous advancement in self-driving vehicles, mostly fueled by private companies that aim to make the technology a reality within the next decade or so. Most of the research has concentrated on road driving, be it urban or inter-urban, with learning systems lying at the center of the most successful schemes reported so far. It is therefore not surprising that companies that grew developing computer software and hardware now lie at the center of the self-driving revolution. Off-road autonomous driving, on the other hand, continues to be among the most ambitious challenges faced by autonomous vehicles. Learning is made more difficult by the unstructured nature of the off-road scenario, and consequently more traditional methods are currently the most successful. These methods use a rather rich sensing suite that may include two or more cameras, a lidar, and a radar. Sensing abundance naturally provides robustness through redundancy: if one sensor becomes unavailable due to a hard failure or inappropriate environmental conditions, the other sensors can still provide continuous operation. The data obtained from the sensors by adequate processing is then used by a control scheme to follow the designed trajectory as closely as possible while steering away from obstacles. Among various alternatives, Model Predictive Control (MPC) has proven to be an effective choice for autonomous driving, given its intrinsic capabilities for dealing with obstacles detected on the fly and for providing a framework in which various constraints, for instance driving comfort, can be incorporated. This research concentrates on the problem of self-driving using MPC, assuming that the only sensor available to the system is a forward-looking monocular camera with a relatively large field of view. The starting point of the approach is the observation that MPC requires that the full state of the system be available for computing the control action.
If this is not the case, then an estimator must be designed to reconstruct the state, loosely justified by the separation principle. To implement this structure, a nonlinear method for jointly optimizing the camera motion and scene structure, the Bundle Adjustment (BA) algorithm, is used as a state estimator. The features detected and reconstructed by the algorithm are fed into the MPC as potential obstacles constraining the control action. In this talk we will present the BA/MPC self-driving scheme, including an overview of the control computation and of the tuning required for the BA. Substantial effort was dedicated to showing how the overall approach works in practice, an endeavor that proved to be more difficult than expected. We believe that some of the lessons learnt during this stage can also be of use to future research. This research was done in collaboration with the MPC Lab at the Mechanical Engineering Department of UC Berkeley, which shared with us their MPC self-driving algorithm.
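To give a flavor of how MPC incorporates obstacles detected on the fly as constraints on the control action, the following sketch plans over a short receding horizon for a simple 2-D point model with velocity inputs and a single keep-out zone. This is a minimal hypothetical illustration (the model, cost weights, parameters, and solver choice are all assumptions), not the BA/MPC algorithm presented in the talk.

```python
# Minimal receding-horizon MPC sketch: drive a 2-D point toward a goal
# while keeping a safety margin around one detected obstacle.
import numpy as np
from scipy.optimize import minimize

DT, N = 0.2, 10                    # time step and prediction horizon
V_MAX = 1.0                        # per-axis input (velocity) bound
GOAL = np.array([3.0, 0.0])
OBSTACLE = np.array([1.5, 0.05])   # obstacle center (assumed known)
R_SAFE = 0.5                       # required clearance radius

def rollout(x0, u_flat):
    """Integrate the trivial model x_{k+1} = x_k + DT * u_k."""
    u = u_flat.reshape(N, 2)
    xs = [np.asarray(x0, dtype=float)]
    for k in range(N):
        xs.append(xs[-1] + DT * u[k])
    return np.array(xs[1:])        # predicted positions x_1 .. x_N

def cost(u_flat, x0):
    """Quadratic tracking cost plus a small input-effort penalty."""
    xs = rollout(x0, u_flat)
    u = u_flat.reshape(N, 2)
    return np.sum((xs - GOAL) ** 2) + 0.1 * np.sum(u ** 2)

def obstacle_margin(u_flat, x0):
    """Inequality constraint: >= 0 when every predicted position
    keeps at least R_SAFE clearance from the obstacle."""
    xs = rollout(x0, u_flat)
    return np.linalg.norm(xs - OBSTACLE, axis=1) - R_SAFE

def mpc_step(x0):
    """Solve the horizon problem, return only the first input."""
    res = minimize(cost, np.zeros(2 * N), args=(x0,),
                   bounds=[(-V_MAX, V_MAX)] * (2 * N),
                   constraints={"type": "ineq",
                                "fun": obstacle_margin, "args": (x0,)})
    return res.x.reshape(N, 2)[0]

# Closed-loop simulation: re-plan at every step, apply the first input.
x = np.zeros(2)
traj = [x.copy()]
for _ in range(30):
    x = x + DT * mpc_step(x)
    traj.append(x.copy())
traj = np.array(traj)
```

The closed-loop trajectory detours around the keep-out zone and settles near the goal; re-solving at each step is what lets newly detected obstacles enter the problem as fresh constraints.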

Location: 1061
Speaker: Barak Pinkovich
Affiliation: Dept. of Electrical Engineering, Technion