Getting started with Jetson Nano and Donkey Car, aka an autonomous car

Jetson Nano

I started this project with two of my brilliant friends, Felix and Marco, who have rich experience in machine learning. We had participated in an autonomous car competition, where we tried a lot of different approaches: pixel-value thresholding, style transfer, sequential models, and more. You can check out our work and this wonderful article written by Felix.

One of my friends, Jonathan, has set up a store selling a full Donkey Car kit. If you want to save the time of gathering the individual parts, it is perfect for you.

Update: I have fixed a few bugs/typos and uploaded an SD card image. You can find the image at the bottom of the article.

Update (18/10/2019):
Several commands have been updated to accommodate the Jetson Nano SD card image JP4.2.2. Please also check the official Donkey Car documentation for more information.

  • Disabling the buggy rtl8192cu driver does not work well on the new image and limits the bit rate to 1 Mb/s. Please follow the link below to reinstall the driver instead.
  • Nvidia has released TensorFlow and PyTorch builds specifically for Jetson. Some of the installation steps have been updated accordingly.

Jetson Nano is a powerful and efficient single-board computer made for (buzzword alert) AI on the edge. At just USD 99, it gives the maker community every opportunity to harness the power of machine learning.

I have been playing around with the Donkey Car for some time using a Raspberry Pi. I absolutely love it and appreciate the effort from the community. I am able to train it with a simple CNN, but the computational power soon falls short when I add more sensors, for example an IMU or a lidar. And a computationally intensive model will not run at a good frame rate on the Pi, or may not run at all. I need something more powerful but not too expensive. 😛 Something below USD 100.

left: with lidar and IMU installed; right: standard Donkey Car setup

And here it is: the Jetson Nano. Unlike the Raspberry Pi, the Jetson Nano was released just a few weeks ago and there are few tutorials and projects about it. I had a difficult time setting up the Donkey Car and decided to write a (and my first 😆) tutorial on how to set things up. Let’s get started.

For the Jetson Nano part, you will need a Jetson Nano, a micro SD card and a wifi USB dongle.

I am pretty surprised that the dev kit doesn’t come with onboard wifi and Bluetooth. I followed the advice from an Nvidia tutorial to write the image to the SD card and bought the suggested wifi USB dongle: the Edimax EW-7811Un.

left: micro SD card; right: Edimax EW-7811Un

All you need to do is follow the Nvidia tutorial and boot up the device. Once you see the welcome screen: congratulations!

Image credit: Nvidia

However… while I was testing it, the wifi kept disconnecting every few minutes and I could not download and install the packages that I needed. I spent two days 😕 trying to find a solution, and the following command will make life easier.

echo "blacklist rtl8192cu" | sudo tee -a /etc/modprobe.d/blacklist.conf

This disables the buggy driver and the wifi seems to return to normal, but the bit rate is limited to only 1 Mb/s. Another (better) solution is to recompile the driver: please follow the instructions from this git repo and reinstall the driver. You may need to look into its troubleshooting section and disable the power management function to make the wifi work properly.

Start installing the packages

This is an embedded system dedicated to machine learning. It won’t be complete without a machine learning framework! You can find the following information in the Nvidia forums too.

Let’s start with TensorFlow first!

sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev
sudo apt-get install python3-pip
sudo pip3 install -U pip
sudo pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor protobuf keras-applications keras-preprocessing wrapt google-pasta
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.14.0+nv19.7

It is going to take a long time and things may seem frozen. It took around 45 minutes on my machine to set things up.

Remember to test things to ensure everything is properly installed. Make sure there are no error messages.

donkey@donkey-desktop:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>>
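Beyond firing up the REPL, a quick way to check whether a package is importable without actually loading it (useful because importing TensorFlow on the Nano takes a while) is `importlib.util.find_spec`. A small stdlib-only sketch; the package names in the loop are just examples:

```python
import importlib.util

def has_package(name):
    """Return True if `name` can be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

# Check a few of the packages installed above (names are illustrative):
for pkg in ("numpy", "h5py", "tensorflow"):
    print(pkg, "installed" if has_package(pkg) else "MISSING")
```

This only inspects the import machinery, so it finishes instantly even for heavy packages.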

Why not install PyTorch too?

wget <PyTorch wheel URL from the Nvidia forum> -O torch-1.2.0a0+8554416-cp36-cp36m-linux_aarch64.whl
pip3 install numpy torch-1.2.0a0+8554416-cp36-cp36m-linux_aarch64.whl

Again, test things before moving forward.

donkey@donkey-desktop:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0a0+b457266
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
CUDA available: True
>>>

Last but not least, Keras, which we will need for the Donkey Car.

pip install doesn’t work for me, so I used the following method:

sudo apt-get install python3-scipy
sudo apt-get install python3-keras


donkey@donkey-desktop:~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using TensorFlow backend.
>>>

The software part is mostly finished. Let’s move on to the hardware.

I am not going to go through the Donkey Car installation procedure step by step. Newcomers, please visit here for more information. I am going to highlight some key points to make things work on the Jetson Nano.

  1. PCA9685 PWM driver
  2. Camera

First, update the GPIO library

Nvidia has already provided a GPIO library, and what is amazing is that it exposes the same API as RPi.GPIO, so almost nothing needs to be changed to port an RPi library to the Jetson Nano. Follow the instructions from the Nvidia GitHub repo to install the library, and you can test the GPIO too. Remember to set the gpio group for the user as well.

Second, install the PCA9685 servo driver library for controlling the steering and throttle

pip3 install Adafruit_PCA9685

Connect the PCA9685 to the Jetson Nano. You should be able to see the pin numbers from the silkscreen marking.

VCC <-> 3.3 V
GND <-> GND
SDA <-> SDA (pin 3)
SCL <-> SCL (pin 5)
Markings on the PCA9685 (left), back of the Jetson Nano (middle), front of the Jetson Nano (right)

And as usual, test the connection.

donkey@donkey-desktop:/opt/nvidia$ sudo i2cdetect -y -r 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- --

Look at address 0x40: it is our PCA9685. For those who want to know more about the I2C protocol, you can visit here.
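The PCA9685 itself is just a 16-channel PWM generator: the servo and ESC expect a pulse of roughly 1–2 ms repeated at around 60 Hz, and the chip encodes that pulse as a 12-bit (0–4095) on-time count. A quick sanity check of the maths in plain Python, no hardware needed (the 60 Hz frequency and pulse widths are typical RC values, not numbers from this article):

```python
def pulse_to_ticks(pulse_us, freq_hz=60, resolution=4096):
    """Convert a servo pulse width in microseconds to a PCA9685 tick count.

    The PCA9685 splits each PWM period (1/freq_hz seconds) into
    `resolution` ticks, so the on-time in ticks is the pulse width
    expressed as a fraction of the period.
    """
    period_us = 1_000_000 / freq_hz          # one PWM period in microseconds
    return int(round(pulse_us * resolution / period_us))

# A 1.5 ms pulse is the servo centre; 1.0 and 2.0 ms are the extremes.
print(pulse_to_ticks(1500))   # → 369
print(pulse_to_ticks(1000))   # → 246
print(pulse_to_ticks(2000))   # → 492
```

These are the same kinds of numbers you will later tune as steering and throttle calibration values in the Donkey config.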

To access the I2C channel, the user needs to be added to the i2c group, and you will need to reboot for the change to take effect.

sudo usermod -a -G i2c username
sudo reboot

Check the group setting for the user:

donkey@donkey-desktop:~$ groups
i2c adm cdrom sudo audio dip video plugdev lpadmin gdm sambashare gpio

There it is.

Still with me? The next step is to set up the camera. The bad news is that the Pi Camera v1 doesn’t work with the Jetson Nano, and neither does the wide-angle camera I had been using for a long time.

leftmost: Pi Camera 2; left: Pi Camera 2 carrier board with wide-angle camera; right: old Pi wide-angle camera. Rubik’s cube for scale ;P

The purchase guide: avoid any camera with the OV5647 chip and use one with the IMX219 chip instead. The IMX219 driver is pre-installed in the image.

CSI connector on the Jetson Nano

Open the lock, put the cable in the slot, close the lock. Done. Just be careful of the cable orientation: you can look into the connector to see which side the pins are facing. Power things up and everything should be fine. You can check this tutorial to see how to play with the camera.

Running face detection. The detection is not optimised since I am using a wide-angle camera.
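For the curious: on JetPack 4.2.x the CSI camera is read through a GStreamer pipeline built around the `nvarguscamerasrc` element (older releases used `nvcamerasrc`). A sketch of the kind of pipeline string you would hand to OpenCV; the resolution and frame rate below are illustrative, not the Donkey defaults:

```python
def gstreamer_pipeline(width=1280, height=720, fps=21, flip=0):
    """Build a GStreamer pipeline string for the Jetson CSI camera.

    nvarguscamerasrc captures from the IMX219 sensor, nvvidconv does
    hardware-accelerated format conversion and flipping, and the final
    videoconvert hands BGR frames to the appsink (what OpenCV expects).
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )

pipeline = gstreamer_pipeline()
print(pipeline)
# On the Nano you would then open it with:
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```

If the picture comes out upside down, changing `flip` (e.g. `flip=2` for a 180° rotation) is usually all you need.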

The last step: install the Donkey Car module

I have forked the original Donkey Car repo and made the necessary changes to make things work. You can download the Donkey Car library from my repo and start installing the package.

git clone <my donkeycar fork>
cd donkeycar
pip3 install -e .

For those who are interested, I edited the following things:

  1. Added a new camera class
  2. Added a default bus to the Actuator parts
  3. Added an int typecast for a variable to make training work

After a long wait (~45 min) for all the necessary packages to be set up, you can create your own car folder with the following command.

donkey@donkey-desktop:~/sandbox$ donkey createcar d2

You need to make a few changes to use the new camera.

# from donkeycar.parts.camera import PiCamera
from donkeycar.parts.camera import CSICamera

And also add the new camera parts to the vehicle.

#cam = PiCamera(resolution=cfg.CAMERA_RESOLUTION)    
#V.add(cam, outputs=['cam/image_array'], threaded=True)
cam = CSICamera(resolution=cfg.CAMERA_RESOLUTION)
V.add(cam, outputs=['cam/image_array'], threaded=False)
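To see why only the camera class and the `V.add()` call need to change, it helps to know the pattern Donkey uses: each part exposes a `run()` method, and the vehicle loop stores whatever `run()` returns under the output names given to `V.add()`. A toy sketch of that pattern, with simplified stand-in classes rather than the real donkeycar implementation:

```python
class FakeCamera:
    """Stand-in for CSICamera: returns a new 'frame' each loop."""
    def __init__(self):
        self.frame = 0

    def run(self):
        self.frame += 1
        return f"frame-{self.frame}"

class Vehicle:
    """Minimal vehicle loop: call each part, store outputs by name."""
    def __init__(self):
        self.parts = []
        self.memory = {}

    def add(self, part, outputs):
        self.parts.append((part, outputs))

    def start(self, loops):
        for _ in range(loops):
            for part, outputs in self.parts:
                result = part.run()
                for name in outputs:
                    self.memory[name] = result

V = Vehicle()
V.add(FakeCamera(), outputs=['cam/image_array'])
V.start(loops=3)
print(V.memory['cam/image_array'])   # → frame-3
```

Because every part speaks through this shared memory, swapping `PiCamera` for `CSICamera` leaves the rest of the pipeline (the model, the actuators, the web server) untouched.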

Everything should work just fine up to this step, and you can start driving and creating your own dataset!

donkey@donkey-desktop:~/sandbox/d2$ python3 manage.py drive

You can then log into the web server and control the car.

And the most exciting part! You can train your car locally on Jetson Nano!

Training through an ssh session, using the Tegra X1 GPU of the Jetson Nano.

Have fun!

Controlling the car from Jetson Nano and my PS4 controller. I was too lazy to dismount the Pi 😜


  1. While I was writing this article, I found a USB Bluetooth dongle in my junk box. I plugged it into the Jetson Nano and it works! And I didn’t need any extra settings to connect it to my PS4 controller. Yay! You can use the controller by following this tutorial. It is critical to get a good dataset, and a proper controller helps a lot.
  2. The steering and throttle seem to jam each other. The reason is the power-hungry Jetson Nano and motor: the momentary current draw is so large that a significant voltage drop distorts the ESC signal. There are two solutions: use a separate power supply for the servo driver, or add a large capacitor to prevent the voltage drop. I prefer the latter, but I didn’t have any capacitors at hand, so you can see from the photo that I cut a USB cable, soldered two jumper wires to it, and connected it to the servo driver.
Extra power to V+ and V- on the PCA9685

3. If you want the whole SD card image, you can find it here. The ID and password are both “donkey”.

4. You may also want to create a swap partition/file for the Jetson Nano since it only has 4 GB of memory.

sudo fallocate -l 6G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
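To confirm the swap file is actually active after a reboot, you can read `/proc/swaps`. A small parser sketch in plain Python; the sample text below is illustrative output in the `/proc/swaps` format, not captured from my Nano:

```python
def parse_swaps(text):
    """Parse /proc/swaps-style output into (name, size_kb) pairs."""
    entries = []
    for line in text.splitlines()[1:]:          # skip the header row
        fields = line.split()
        if len(fields) >= 3:
            entries.append((fields[0], int(fields[2])))
    return entries

# Illustrative /proc/swaps content after the commands above:
sample = """Filename                Type       Size     Used  Priority
/var/swapfile           file       6291452  0     -2
"""
print(parse_swaps(sample))   # → [('/var/swapfile', 6291452)]
# On the Nano itself: parse_swaps(open('/proc/swaps').read())
```

If the list is empty, the `swapon` step did not take, or the `/etc/fstab` entry was not picked up at boot.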




Hongkonger, Maker, Teacher. Interested in all kind of stuff, from physics to Machine Learning. Lead Engineer in 2019 CES innovation award honoree.
