Robocar Software Overview

An overview of the Derp Learning autonomous vehicle system architecture.

1. Dataflow

Clean, consistent data is vital to a successful autonomous vehicle project; the heart of our system is designed around managing data generation and transmission. Let's break down what we want to do with that data.

Below is a generic mock-up of how our system handles data. There are three major categories of operations: collecting raw inputs, or "sense", shown in teal; performing data transformations, or "plan", shown in orange; and dispatching aggregated data, or "act", shown in green.

[Figure: data recording flow]

Every operation interacts with a temporary storage object we'll call the system state vector by writing data, reading data, or both. At every time step we capture all of the data stored in the state vector to a log file before starting the next step.

Looking at the above flow chart, it's clear that we want to sense before we plan and plan before we act. It's also likely that some of our components will need to perform both sense and act operations.
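As a concrete (if simplified) illustration, here is a minimal sketch of capturing the state vector to a log file once per step. The field names and the state.csv layout are assumptions for illustration, not the project's actual schema.

```python
import csv
import time

# Hypothetical state vector: one flat mapping that holds everything the
# components wrote during the current time step.
state_vector = {"timestamp": 0.0, "speed": 0.0, "steer": 0.0}

def log_state(state, writer):
    """Append the full state vector as one row of the log file."""
    state["timestamp"] = time.time()
    writer.writerow([state[key] for key in sorted(state)])

with open("state.csv", "w", newline="") as log_file:
    writer = csv.writer(log_file)
    writer.writerow(sorted(state_vector))  # header row: field names
    for _ in range(3):  # stand-in for three control-loop steps
        # ... sense / plan / act phases would update state_vector here ...
        log_state(state_vector, writer)
```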

2. Control Loop Overview

To maintain good sequential separation, the vehicle control loop is split into three phases: sense, plan, and act (SPA). For each complete loop of the vehicle's control system we first complete all sense-related component tasks, then all planning-related tasks, and finally we act.
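In outline, the loop is just three passes over the same component list. This sketch assumes the Sense/Plan/Act method names described in the next paragraph; it is an illustration of the phase ordering, not the project's verbatim loop.

```python
def control_loop(components, state_vector, steps):
    """Run complete SPA cycles: all components sense, then all plan, then all act."""
    for _ in range(steps):
        for component in components:
            component.Sense(state_vector)  # gather raw inputs
        for component in components:
            component.Plan(state_vector)   # turn inputs into decisions
        for component in components:
            component.Act(state_vector)    # dispatch commands and feedback
        # ... the state vector would be logged here before the next step ...
```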

For uniformity and flexibility we use a template class for all components, with three public functions: Sense, Plan, and Act. All component operations go in the function corresponding to the appropriate loop phase. For instance, code receiving the joystick's control input lives in the joystick's Sense function. Likewise, code controlling the joystick's indicator lights lives in the joystick's Act function. As the joystick has no data processing responsibilities, its Plan function is left empty and is skipped over by the control loop.
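A minimal sketch of that template, using the joystick as the example. The shape of the Component base class follows the description above; the joystick field names and helper methods (_poll_axes, _set_lights) are hypothetical stand-ins for real hardware code.

```python
class Component:
    """Template every component follows; unused phases stay as no-ops."""
    def Sense(self, state): pass
    def Plan(self, state): pass
    def Act(self, state): pass

class Joystick(Component):
    def Sense(self, state):
        # Record the operator's control input in the state vector.
        state["joy_steer"], state["joy_speed"] = self._poll_axes()

    # No Plan override: the joystick does no data processing, so the
    # control loop falls through to the empty base-class method.

    def Act(self, state):
        # Drive the indicator lights from the current vehicle state.
        self._set_lights(autonomous=state.get("auto", False))

    def _poll_axes(self):
        return 0.0, 0.0  # placeholder for real hardware polling

    def _set_lights(self, autonomous):
        pass  # placeholder for real LED control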

To make this design modular with respect to the components present on a given vehicle or test environment, we store the component list in a configuration file for each build. For each operation phase the control loop then calls each listed component in sequence and performs whatever actions are in the corresponding phase function.
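One way this might look in practice, building on the Joystick sketch above; the config file layout and the registry are assumptions for illustration, not the project's actual config format.

```python
import json

# Hypothetical registry mapping config names to component classes,
# extended per build with Camera, IMU, and so on.
COMPONENT_REGISTRY = {"Joystick": Joystick}

def load_components(config_path):
    """Build the component list for this vehicle from its config file."""
    # Assumed layout: {"components": ["Camera", "IMU", "Joystick", ...]}
    with open(config_path) as config_file:
        names = json.load(config_file)["components"]
    return [COMPONENT_REGISTRY[name]() for name in names]
```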

Below is an example of the control loop in action on six components: camera, IMU, joystick, CNN predictor, controller, and Micro Maestro. This loop assumes a single CNN is being used for image processing. Components which are not interacted with during a given phase have been grayed out.

[Figure: SPA overview]

  1. Sense: The camera, IMU, and joystick components are all polled and their data is recorded in state.csv (image data is recorded in a separate directory).
  2. Plan: Image data is read by the perception CNN, which predicts a steering angle; this prediction is also recorded in the state table. If the car is in autonomous mode, the CNN output is then read by the controller, which sets a speed and passes the CNN prediction through a low-pass filter to reduce noise (a sketch of such a filter follows this list).
  3. Act: The vehicle state table is read and a feedback signal is sent to the controller to keep the operator in the loop. Finally, the Micro Maestro reads the speed and steering information in the state table and generates control signals for the vehicle's motors.
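The low-pass filter mentioned in the Plan step could be as simple as an exponential moving average; the alpha value and the state field name below are illustrative guesses, not tuned project values.

```python
class LowPassFilter:
    """First-order IIR filter: y[t] = alpha * x[t] + (1 - alpha) * y[t-1]."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        if self.value is None:
            self.value = sample  # seed with the first sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value

# Usage in the controller's Plan phase (hypothetical field name):
# steer = steer_filter.update(state["cnn_steer_prediction"])
```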

With this structure in place we can easily add, edit, and remove the elements which control the car, receive diagnostic feedback, and record system logs for training and debugging.
Have a question? Leave a comment!
