Re: [sumo-dev] SUMO for learning autonomous driving in urban environments

The 2D vehicle dynamics in SUMO are limited to a specific range of behaviors (multiple vehicles on the same lane, overtaking within a lane, virtual lane formation, lateral encroachment). These are enabled by activating the sublane model.
The model is based on longitudinal and lateral speed but does not address steering angle.
The vehicle angle can be set by the user and also changes according to lateral manoeuvres, but this is only used for visualization.
If you were to use your own model that translates steering/acceleration controls into lateral speeds, then you could reflect this in SUMO.
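To illustrate the last point, here is a minimal sketch of how an external steering/acceleration model could be integrated and its pose pushed into SUMO via TraCI's moveToXY. The kinematic bicycle model, the wheelbase, the step length, and the vehicle ID are all illustrative assumptions, not part of SUMO itself:

```python
import math

# Assumed parameters (not from SUMO):
WHEELBASE = 2.7   # meters
DT = 0.1          # integration step in seconds

def bicycle_step(x, y, heading, speed, accel, steer):
    """Advance the pose one step with a simple kinematic bicycle model.

    heading is in radians, math convention (0 = east, counter-clockwise).
    """
    speed += accel * DT
    heading += (speed / WHEELBASE) * math.tan(steer) * DT
    x += speed * math.cos(heading) * DT
    y += speed * math.sin(heading) * DT
    return x, y, heading, speed
```

In a running TraCI session the resulting pose could then be written back each step, e.g. (note SUMO angles are in degrees, 0 = north, clockwise, so the heading must be converted):

```python
# angle_deg = (90.0 - math.degrees(heading)) % 360.0
# traci.vehicle.moveToXY("ego", "", -1, x, y, angle=angle_deg, keepRoute=2)
```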

SUMO does allow for vehicle-pedestrian interactions when their paths cross at intersections and also when they share the same road lane. The default pedestrian model is tailored for speed and is not very realistic with regard to crowd dynamics. There are ongoing plans to couple other pedestrian simulations, though.

SUMO gives you vehicle and pedestrian positions as points rather than shapes. The internal model for collision avoidance is based on bounding boxes. You can retrieve the length, width and angle of these bounding boxes for generating lidar responses/occupancy grids.
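As a sketch of reconstructing those footprints: the helper below turns position, angle, length and width into rectangle corners. It assumes SUMO's conventions, i.e. that getPosition() returns the front-center of the vehicle and getAngle() is in degrees with 0 = north, clockwise; the function name is my own:

```python
import math

def vehicle_footprint(front_x, front_y, angle_deg, length, width):
    """Return the four bounding-box corners of a vehicle.

    Assumes SUMO conventions: (front_x, front_y) is the front-center
    of the vehicle, angle_deg is 0 = north, clockwise.
    """
    heading = math.radians(90.0 - angle_deg)        # to math convention
    fx, fy = math.cos(heading), math.sin(heading)   # forward unit vector
    rx, ry = math.sin(heading), -math.cos(heading)  # right unit vector
    half_w = width / 2.0
    rear_x, rear_y = front_x - length * fx, front_y - length * fy
    return [
        (front_x + half_w * rx, front_y + half_w * ry),  # front-right
        (front_x - half_w * rx, front_y - half_w * ry),  # front-left
        (rear_x - half_w * rx, rear_y - half_w * ry),    # rear-left
        (rear_x + half_w * rx, rear_y + half_w * ry),    # rear-right
    ]
```

The inputs would typically come from traci.vehicle.getPosition, getAngle, getLength and getWidth; rasterizing these rectangles onto a grid, or ray-casting against them, then yields occupancy-grid or lidar-style observations.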

The car-following models in SUMO are encapsulated from the rest of the code and can be extended with custom code (or new models added to the existing set) with some C++ proficiency. This has repeatedly been done by third parties.


On Tue, Oct 30, 2018 at 19:35, Hankun Zhao <jerryz123@xxxxxxxxxxxx> wrote:

I'm a member of Prof. Ken Goldberg's AUTOLAB at UC Berkeley. Recently, our group has been interested in learning autonomous driving policies for navigating cluttered urban environments, in the presence of other vehicles and pedestrians.

Currently, we have been using our own in-house simulator, but we would like to see if we can leverage some of the features and functionality of SUMO for this task.

We have spoken with the developers of the FLOW extension, which provides an interface for learning optimal traffic control policies. We are interested in building or extending a very similar interface, except targeted at low-level control of a vehicle with realistic dynamics.

Some of the requirements we have are:
 - Precise, realistic 2D dynamics of the user vehicle. We would like to use steering-acceleration controls, instead of just a velocity control for the vehicles
 - Ability to model realistic pedestrian interactions with user vehicles
 - Ability to collect occupancy-grid and 2D LIDAR observations from the environment
 - Realistic supervisor, for training imitation learning policies

The requirements which we have questions about are the 2D vehicle dynamics and the supervisor vehicles. It seems like the SUMO vehicle model does not allow for control over the vehicle's angle. Also, we are not sure if the vehicle-following models can be extended to operate as supervisors in an imitation learning setting. Specifically, to be reliable supervisors, they need to be able to provide a corrective action to the vehicle when a learned agent makes a mistake.

We would really appreciate any advice you can provide on this task. If we could leverage some of the functionality of SUMO, but for a general autonomous driving problem, we could provide a very valuable tool to AV researchers.

