[sumo-dev] SUMO for learning autonomous driving in urban environments
Hi,
I'm a member of Prof. Ken Goldberg's AUTOLAB at UC Berkeley. Recently, our group has been interested in learning autonomous driving policies for navigating cluttered urban environments in the presence of other vehicles and pedestrians.
So far we have been using our own in-house simulator, but we would like to see whether we can leverage some of SUMO's features and functionality for this task.
We have spoken with the developers of the FLOW extension, which provides an interface for learning optimal traffic control policies. We are interested in building or extending a very similar interface, targeted instead at low-level control of a single vehicle with realistic dynamics.
Some of the requirements we have are:
- Precise, realistic 2D dynamics for the user vehicle. We would like to use steering and acceleration controls, rather than just a velocity control, for the vehicles
- Ability to model realistic pedestrian interactions with user vehicles
- Ability to collect occupancy-grid and 2D LIDAR observations from the environment (a possible TraCI approach is sketched after this list)
- A realistic supervisor for training imitation learning policies
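
To make the observation requirement concrete, this is roughly what we had in mind: a minimal sketch against the TraCI Python API, using a context subscription to gather nearby vehicle positions and rasterizing them into a grid. The vehicle id "ego", the sensing radius, the grid size, and the config file name are all placeholders, not anything from an existing setup:

    # Minimal sketch: occupancy-grid observations from a TraCI context
    # subscription. "ego", RADIUS, CELLS, and "scenario.sumocfg" are
    # placeholder assumptions.
    import numpy as np
    import traci
    import traci.constants as tc

    RADIUS = 50.0               # sensing range in meters (assumption)
    CELLS = 64                  # grid is CELLS x CELLS
    RES = 2.0 * RADIUS / CELLS  # meters per cell

    traci.start(["sumo", "-c", "scenario.sumocfg"])
    # Report positions of all vehicles within RADIUS of "ego" each step
    # (the subscription requires that "ego" has already departed).
    traci.vehicle.subscribeContext(
        "ego", tc.CMD_GET_VEHICLE_VARIABLE, RADIUS, [tc.VAR_POSITION])

    def occupancy_grid():
        # Rasterize neighbor positions into an ego-centered binary grid.
        ex, ey = traci.vehicle.getPosition("ego")
        grid = np.zeros((CELLS, CELLS), dtype=np.uint8)
        results = traci.vehicle.getContextSubscriptionResults("ego") or {}
        for veh_id, values in results.items():
            if veh_id == "ego":
                continue
            x, y = values[tc.VAR_POSITION]
            col = int((x - ex + RADIUS) / RES)
            row = int((y - ey + RADIUS) / RES)
            if 0 <= row < CELLS and 0 <= col < CELLS:
                grid[row, col] = 1
        return grid

    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        grid = occupancy_grid()  # feed to the learning agent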
The requirements we have questions about are the 2D vehicle dynamics and the supervisor vehicles. It seems that the SUMO vehicle model does not allow control over the vehicle's heading angle. We are also not sure whether the car-following models can be extended to act as supervisors in an imitation learning setting: to be reliable supervisors, they need to provide a corrective action whenever the learned agent makes a mistake.
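
One workaround we have been considering for the dynamics is to integrate the continuous vehicle state ourselves and place the vehicle in SUMO each step via traci.vehicle.moveToXY, which accepts an angle. A minimal sketch, assuming a kinematic bicycle model; the wheelbase, step length, vehicle id "ego", and policy() are all placeholders:

    # Minimal sketch: external steering/acceleration control through a
    # kinematic bicycle model, placing the vehicle with moveToXY.
    # Assumes traci.start(...) was already called and a vehicle with the
    # (hypothetical) id "ego" has departed.
    import math
    import traci

    DT = 0.1         # must match the simulation's --step-length (assumption)
    WHEELBASE = 2.7  # meters (assumption)

    class BicycleModel:
        def __init__(self, x, y, theta, v):
            self.x, self.y, self.theta, self.v = x, y, theta, v

        def step(self, accel, steer):
            # Standard kinematic bicycle update; theta in radians, CCW from east.
            self.x += self.v * math.cos(self.theta) * DT
            self.y += self.v * math.sin(self.theta) * DT
            self.theta += self.v / WHEELBASE * math.tan(steer) * DT
            self.v = max(0.0, self.v + accel * DT)

    def policy(state):
        # Placeholder for a learned policy: returns (accel [m/s^2], steer [rad]).
        return 0.5, 0.0

    x, y = traci.vehicle.getPosition("ego")
    # SUMO angles are degrees, clockwise from north; convert to math convention.
    ego = BicycleModel(x, y,
                       math.radians(90.0 - traci.vehicle.getAngle("ego")),
                       traci.vehicle.getSpeed("ego"))

    while traci.simulation.getMinExpectedNumber() > 0:
        accel, steer = policy((ego.x, ego.y, ego.theta, ego.v))
        ego.step(accel, steer)
        # keepRoute=2 allows free placement instead of snapping to the route.
        traci.vehicle.moveToXY("ego", "", 0, ego.x, ego.y,
                               angle=(90.0 - math.degrees(ego.theta)) % 360.0,
                               keepRoute=2)
        traci.simulationStep()

For the supervisor, is there a way to query, through TraCI, the speed a car-following model would choose given the ego vehicle's current gap and leader? If so, we could use that as a corrective longitudinal action whenever the learned policy deviates; lateral corrections would presumably still need our own controller.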
We would really appreciate any advice you can provide on this task. If we could leverage some of SUMO's functionality for a general autonomous driving problem, we believe this would be a very valuable tool for AV researchers.
Thanks,
Jerry