NANI? Autonomous KANSEI DORIFTO (DRIFT) Car

Fei Cheung
9 min read · Feb 9, 2020
NANI?! KANSEI DORIFTO?!

It is suggested that you read this post with this YouTube video playing in the background.

How to Drift? Movie: Initial D (HK)

Autonomous cars are one of the biggest things of the coming decade. Companies like Tesla and Google have already invested billions in creating sophisticated hardware and software to deliver more reliable and safer self-driving vehicles. Although it sounds like rocket science, that won’t stop the maker community from building their own versions.

Brilliant people have already started their own autonomous vehicle projects, for example Donkey Car, which is a great open-source project to start with! Unlike a commercial autonomous vehicle, it is built for autonomous racing, and it is meant to be dangerous. 😏

I spent some time with my friends creating our version of the Donkey Car. You can check our work here, for example a cheap Chinese LIDAR library and techniques for generalisation. With a single-board computer like the Raspberry Pi and a simple ConvNet model, the results already look pretty good.

Left: first trial. Middle: a trial race. Right: stable autonomous driving. All using a simple ConvNet model.

Yet racing is not my ultimate goal. What I want to accomplish requires more power: physical power, computational power (and financial power as well). I want to create an autonomous drifting car (and play Deja Vu while it runs; I hope you are listening as well =p). And guess what? The Jetson Nano came just in time! I spent some time getting Donkey Car running on the Jetson Nano. With the GPU on board, there is a lot more room for a more sophisticated DNN model, and hopefully it can learn how to drift!

Finally, every piece comes together. Let’s do it!

Anthony Wong as Bunta Fujiwara

Let’s start with the hardware upgrade first!

The old HSP 94186 does serve its purpose for simple driving tasks: it is easy to set up and cheap to start with. But it is obviously not designed for drifting. The battery size and type don’t fit well with high power output. The brushed motor and its corresponding ESC (electronic speed controller) make it rather hard to control. The steering angle is not large enough either.

And most importantly, the tires are not suitable for drifting.

Old HSP 94186 model (left), with Pi, camera, LIDAR, and PWM controller installed (right)

The Sakura XIS is a more advanced RC car (with a higher price as well). It has a few advantages compared to the old car.

Components of the new car. The key changes are the Jetson Nano and a new car frame with more advanced components.

The most significant changes are as follows:

  1. A brushless, sensored ESC and motor. They provide a more stable output, especially when driving at low speed.
  2. A power upgrade: a bigger battery, able to provide more current to the throttle and steering.
  3. The Jetson Nano.
  4. Drifting tires.
  5. A larger steering angle.

With everything set, let’s gather the training data!

The first plan is to try supervised learning and see how far it can go. The car is controlled with a PS4 controller, and the control, sensor, and camera data are collected while I drive. The training data is then used to train a DNN model to mimic the driver’s behaviour.
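Conceptually, the recording loop is very simple. Below is a minimal sketch of what it might look like; `controller.read()`, `controller.recording`, and the file layout are hypothetical placeholders for illustration, not the actual Donkey Car API.

```python
# Minimal sketch of a data-recording loop. `controller` is a hypothetical
# PS4-pad wrapper, not the real Donkey Car API; the camera is plain OpenCV.
import csv
import time

import cv2


def record(controller, out_dir, hz=20):
    cam = cv2.VideoCapture(0)                       # on-board camera
    with open(out_dir + "/log.csv", "w") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "steering", "throttle", "t"])
        i = 0
        while controller.recording:                 # hold a button to record
            ok, frame = cam.read()
            if not ok:
                continue
            steering, throttle = controller.read()  # stick values in [-1, 1]
            img_path = "%s/%06d.jpg" % (out_dir, i)
            cv2.imwrite(img_path, frame)            # one frame per label
            writer.writerow([img_path, steering, throttle, time.time()])
            i += 1
            time.sleep(1.0 / hz)                    # roughly constant sample rate
    cam.release()
```

Each saved frame is paired with the exact steering and throttle values at that moment, which is all supervised learning needs.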

But it is not as easy as it sounds. It is hard to drift the car without properly fine-tuning it, and the choice of tires matters. Even when the car can initiate a drift, it is hard to control and keep it in the drifting state. I learned this the hard way. 🤦‍♂️

Countless crashes 💀
Scratched and broken parts 🤦‍♂️

But like the old man said, practice makes perfect.😉

Years of gaming experience finally become useful (manual control)

The key to drifting is to oversteer and increase the throttle sharply at the same time. The rear wheels lose grip due to the high throttle and swing out. You need to control the drift angle and throttle precisely to balance the oversteer: if the throttle is too high, the car loses control and spins; if it is too low, it won’t drift. The same goes for the steering. Well-balanced steering and throttle control are essential for drifting.

Now it is time for the software part. Let’s build a custom neural network model.

The standard model for vision tasks is the Convolutional Neural Network (ConvNet). In short, it uses convolution operations to extract useful features from the input. After the features are extracted, a fully connected layer is very often used to combine them and predict the output.

Credit: https://en.wikipedia.org/wiki/Convolutional_neural_network

A simple ConvNet model can already do many things, and it can run pretty fast (around 20 Hz) on a Raspberry Pi 3.
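For the curious, such a model fits in a few lines of Keras. This is only a rough sketch (the layer sizes are illustrative, not the exact network used here):

```python
# Rough Keras sketch of a simple ConvNet driver. Layer sizes are
# illustrative; input is a 120x160 RGB camera frame.
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input
from tensorflow.keras.models import Model

img_in = Input(shape=(120, 160, 3))
x = Conv2D(24, 5, strides=2, activation="relu")(img_in)   # feature extraction
x = Conv2D(32, 5, strides=2, activation="relu")(x)
x = Conv2D(64, 5, strides=2, activation="relu")(x)
x = Flatten()(x)
x = Dense(100, activation="relu")(x)                      # fully connected head
steering = Dense(1, activation="tanh", name="steering")(x)
throttle = Dense(1, name="throttle")(x)

model = Model(inputs=img_in, outputs=[steering, throttle])
model.compile(optimizer="adam", loss="mse")
```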

But it is not sufficient for my case. The problem is its stateless nature: the prediction depends only on the current input, with no memory. When humans drive, we control the car based on what we have seen and what we predict will happen; we make decisions based on a sequence of actions.

Credit: https://colah.github.io/posts/2015-09-NN-Types-FP/

You can see that at every time step, the neural network generates a hidden state S, which is fed into the next time step and used for the next prediction. With this extra bit of information, the model can better predict the output based on past (and maybe future) events. There are many types of sequence models with more advanced structures, such as the LSTM and GRU. Hopefully a sequence model will help our car learn how to drift.
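Wiring a recurrent layer behind the ConvNet looks something like this in Keras (a sketch; the window length and layer sizes are assumptions): the same ConvNet runs on every frame of a short window via TimeDistributed, and the LSTM carries the hidden state S between steps.

```python
# Sketch of a ConvNet + LSTM sequence model. SEQ_LEN and layer sizes
# are assumptions for illustration.
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, Input,
                                     LSTM, TimeDistributed)
from tensorflow.keras.models import Model

SEQ_LEN = 5                                              # frames per window
seq_in = Input(shape=(SEQ_LEN, 120, 160, 3))
x = TimeDistributed(Conv2D(24, 5, strides=2, activation="relu"))(seq_in)
x = TimeDistributed(Flatten())(x)                        # features per time step
x = LSTM(128)(x)                                         # hidden state S flows across steps
out = Dense(2)(x)                                        # steering + throttle

model = Model(seq_in, out)
model.compile(optimizer="adam", loss="mse")
```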

Things didn’t go as well as expected…

At first, I tried the following models:

  1. A simple ConvNet with categorical output as control.
  2. An LSTM-based model with categorical output.
  3. An LSTM-based model with linear output.

Left: simple ConvNet model. Middle: LSTM with categorical output. Right: LSTM with linear output 🤦‍♂️

All of them failed, but we can learn a few things here.

  1. The simple ConvNet is not able to drift at all. All it does is increase the throttle and crash.
  2. The LSTM-based model with categorical output can initiate a drift, but it cannot control it and always oversteers.
  3. The model with linear output can learn to steer and increase the throttle, but it is not ‘drastic’ enough to induce a drift. My theory is that the nature of regression limits sudden changes in control (see the sketch of the two heads below).
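The difference between those two output heads boils down to a few lines. A hedged sketch (the 15-bin steering discretisation is my illustration, not the exact setup):

```python
# Sketch of the two output heads compared above; `features` is the
# flattened ConvNet/LSTM output. The 15-bin split is an assumption.
from tensorflow.keras.layers import Dense


def categorical_head(features):
    # Steering discretised into bins; softmax picks one bin, so the
    # control can jump abruptly between bins. Trained with cross-entropy.
    return Dense(15, activation="softmax", name="steering_bins")(features)


def linear_head(features):
    # One regressed value in [-1, 1]. MSE training averages the labels,
    # which smooths away the sudden inputs that trigger a drift.
    return Dense(1, activation="tanh", name="steering")(features)
```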

Hmmm, let’s try a more advanced model and see what we can get.

A Many-to-Many RNN Approach

Image source: Andrej Karpathy

I tried using a many-to-many RNN approach to predict the future steering, throttle, and IMU values, hoping it would force the agent to learn to counter-steer after triggering a drift and to act ahead of time. But no luck 🤷‍♂️: the model drove forward without turning at all.
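Concretely, the setup looked roughly like the sketch below: with return_sequences=True the LSTM emits a prediction at every time step, and each step’s target is the next control and IMU reading, so the model is pushed to anticipate rather than react. (The dimensions and the 6-axis IMU layout are my assumptions.)

```python
# Sketch of a many-to-many model predicting future steering, throttle
# and IMU values. SEQ_LEN and IMU_DIM are assumptions for illustration.
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, Input,
                                     LSTM, TimeDistributed)
from tensorflow.keras.models import Model

SEQ_LEN, IMU_DIM = 5, 6                          # 6 = accel + gyro axes
seq_in = Input(shape=(SEQ_LEN, 120, 160, 3))
x = TimeDistributed(Conv2D(24, 5, strides=2, activation="relu"))(seq_in)
x = TimeDistributed(Flatten())(x)
x = LSTM(128, return_sequences=True)(x)          # one output per time step
out = TimeDistributed(Dense(2 + IMU_DIM))(x)     # [steering, throttle, imu...] per step

model = Model(seq_in, out)
model.compile(optimizer="adam", loss="mse")      # targets are the *next* readings
```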
I tried enabling autopilot mode while the car was drifting. What is interesting is that the model kept boosting the throttle to lose grip, but the steering didn’t move at all.

It doesn’t turn at all…

After a few days, I analysed the video clips and observed something.

The model is able to counter-steer, but only when it is already too late: the car has lost control and spins around. I have a hypothesis:

Maybe it is not about using a more advanced model. Maybe the model is just not responding fast enough.

Super slow inference speed: models using an RNN take around 100–130 ms per inference on the Jetson.

I suspected it would be better if I increased the frame rate. The model inference is pretty slow at around 120 ms per prediction, which converts to less than 10 Hz.
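Measuring this is straightforward. A generic timing sketch (where `model` is the trained network from the earlier sketches):

```python
# Generic timing sketch: average latency of model.predict over n runs.
import time

import numpy as np


def measure_hz(model, input_shape, n=100):
    dummy = np.zeros((1,) + input_shape, dtype=np.float32)
    model.predict(dummy)                  # warm-up; the first call is always slower
    t0 = time.time()
    for _ in range(n):
        model.predict(dummy)
    dt = (time.time() - t0) / n
    print("%.0f ms per inference = %.1f Hz" % (dt * 1000.0, 1.0 / dt))

# measure_hz(model, (5, 120, 160, 3))    # ~120 ms here, i.e. below 10 Hz
```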

A Recurrent Neural Network is generally slower than other types of neural network: by its nature, the recurrence cannot be parallelised well, so the GPU’s magic doesn’t work on it. I looked into TensorRT and tried to use it to freeze the model, but no luck there either. The standard utilities in TensorFlow 1.14 kept throwing errors, and I couldn’t solve them in a short amount of time.
I decided to search for another way to increase the inference speed.

FPS Matters!

Luckily, there is an alternative: Keras has CUDA (cuDNN) implementations of its recurrent layers. To further increase the inference speed, I used only one layer of GRU and abandoned the LSTM. The inference time dropped to around 60–65 ms, and the result looks pretty good!
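The change itself is tiny: swap the plain recurrent layer for its cuDNN-backed counterpart, which runs the whole recurrence inside one fused GPU kernel. A sketch, using the same illustrative dimensions as before:

```python
# Sketch of the faster model: a single cuDNN-backed GRU layer replaces
# the LSTM. Available as tf.keras.layers.CuDNNGRU in TF 1.x (GPU only).
from tensorflow.keras.layers import (Conv2D, CuDNNGRU, Dense, Flatten,
                                     Input, TimeDistributed)
from tensorflow.keras.models import Model

seq_in = Input(shape=(5, 120, 160, 3))
x = TimeDistributed(Conv2D(24, 5, strides=2, activation="relu"))(seq_in)
x = TimeDistributed(Flatten())(x)
x = CuDNNGRU(128)(x)           # fused cuDNN kernel, much faster than LSTM()
out = Dense(2)(x)              # steering + throttle

model = Model(seq_in, out)
```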

Deja vu! I’ve been in this place before!

After countless failures, finally, it can DORIFTO! (Eurobeat intensifies!!!)

Lesson Learned

Sometimes it is not about how advanced the model is: a simple model, a good-quality dataset, and fast inference matter more. When provided with sufficient information, a simple model can achieve a lot.

What Next?

The original plan was to drift in a circular pattern, which is harder than it sounds.

First, I am not able to control the car precisely along a circular path. It is hard to keep a fixed centre while maintaining enough throttle to sustain the drift. If I cannot provide correct labels, the car will never learn (a disadvantage of supervised learning).

Using RL (Reinforcement Learning) is a better approach, but it comes with its own difficulties: we either need an excellent physics simulator, or we need to spend a lot of time letting the physical car learn (and crash).

The model is also highly overfitted to a controlled environment. I am starting to train a model on an actual track to see how it works in a more complex environment. It will require a more sophisticated control policy (maybe even actual inertia drifting!).

KANSEI DORIFTO! (Manual control)

There are many models we could try, and some fine-tuning of the current model as well. I didn’t tune any hyperparameters; I just let it be.

Upgrading to TF 2.0, and maybe trying PyTorch. They are more stable than the old TF 1.14 and easier to debug.

RNNs are slow because the computation of the hidden state cannot be parallelised efficiently. I will try using convolutions instead and see if they can perform the same task. I am also looking into the Transformer model to see if it would work on the car.

Acknowledgement

I started this project as a way to learn machine learning and to try to build something with it. I would like to thank my friends Felix and Macro, who started this project with me. They are the real experts in the field of machine learning, and I learnt a lot from them. And of course, the ultimate goal for me was to create a meme for autonomous DRIFT, and I finally made it, haha.

Initial D (HK). One of my favourite movies, and it inspired me to start this project.

The code for training and driving is on my GitHub page.

