I’ll be giving another lightning talk this evening downtown at the San Diego JS meetup. I get to talk about the Autonomous Tank project and the code behind that.
Since I no longer have a corporate-sponsored license of Microsoft PowerPoint, I had to improvise for my overhead slideshow this time. So I did what most coders would do in this scenario: I coded something for the task.
I managed to snag some great track data today at the venue. To collect it, I wrote a service that takes a snapshot every second while I manually drove around the track a few times.
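A minimal sketch of such a snapshot service, with the actual webcam capture injected as a callable so the timing loop can run without hardware (the capture function and filename pattern here are stand-ins, not the project’s code):

```python
import time

def snapshot_loop(capture, interval=1.0, count=10):
    """Call capture() every `interval` seconds and collect the results.

    `capture` is whatever grabs a frame from the webcam; it is injected
    here so the loop can be exercised without a camera attached.
    """
    frames = []
    for i in range(count):
        frames.append(capture(i))
        if i < count - 1:
            time.sleep(interval)
    return frames

# Example with a stand-in capture function:
shots = snapshot_loop(lambda i: f"track_{i:04d}.jpg", interval=0.01, count=3)
# shots == ['track_0000.jpg', 'track_0001.jpg', 'track_0002.jpg']
```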
With the data in hand back at home, I was able to start processing the images from the webcam.
A New Perspective
I thought I’d compensate for the lines-of-perspective effect so that the trending portion of the software could have accurate data. Since Jimp doesn’t have a skew function, and since its convolute() method didn’t work as expected even with the right matrix for this, I ended up writing my own prototype, which now works as shown below.
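The prototype itself isn’t reproduced here, but the general approach can be sketched in plain Python as a stand-in: resample each row over a narrower central span near the top of the frame, so distant track features widen toward their true proportions. The half-width-at-the-top camera geometry is an assumption for illustration only.

```python
def unskew(img):
    """Compensate for perspective by stretching each row horizontally.

    `img` is a 2D list of pixel values (row 0 = top of frame, farthest
    away). Rows near the top are sampled over a narrower central span,
    then stretched back out to full width.
    """
    h = len(img)
    w = len(img[0])
    out = []
    for y, row in enumerate(img):
        # Fraction of the row to keep: full width at the bottom,
        # half width at the top (an assumed camera geometry).
        keep = 0.5 + 0.5 * (y / max(h - 1, 1))
        span = max(int(w * keep), 1)
        start = (w - span) // 2
        # Resample the central `span` pixels back out to full width.
        out.append([row[start + (x * span) // w] for x in range(w)])
    return out
```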
I decided to build a very cool-looking robotic tank kit which is made by OSEPP. They have a variety of grown-up toys like this in the geekspace.
I guess I’ve been inspired lately by some of the local meetups which involve races with autonomously-driven cars.
A surprising amount of hardware is going into this project, along with several programming languages all at once. I’ve had to bounce back and forth between Python and C as I interface the Raspberry Pi Zero W computer with the Arduino Mega 2560 R3 Plus board. This Arduino doesn’t come with Bluetooth, Wi-Fi or even an Ethernet jack, so I opted to add the Pi since it’s inexpensive and comes with a full operating system. The Pi also hosts a webcam, which initially makes the remote-control features easy. Later, that same camera will be used to generate images to be processed for autonomous driving.
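For the Pi-to-Arduino interface, motor commands have to cross a serial link. Here is a hedged sketch of framing and parsing such a command in Python; the byte layout (start byte, two signed track speeds, checksum) is an assumed protocol for illustration, not necessarily what the project actually uses:

```python
def frame_command(left, right):
    """Pack a motor command for a Pi-to-Arduino serial link.

    Speeds are -100..100 percent per track, sent as signed bytes.
    Layout (assumed): start byte 0xA5, left, right, checksum.
    """
    def clamp(v):
        return max(-100, min(100, v))
    l = clamp(left) & 0xFF
    r = clamp(right) & 0xFF
    checksum = (l + r) & 0xFF
    return bytes([0xA5, l, r, checksum])

def parse_command(packet):
    """Inverse of frame_command; returns (left, right) or None if bad."""
    if len(packet) != 4 or packet[0] != 0xA5:
        return None
    l, r, checksum = packet[1], packet[2], packet[3]
    if (l + r) & 0xFF != checksum:
        return None
    # Convert back from unsigned byte to signed speed.
    sign = lambda b: b - 256 if b > 127 else b
    return sign(l), sign(r)
```

On the Arduino side, the matching C code would read four bytes, verify the start byte and checksum, and drive the motor controller from the two speeds.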
I decided to design some plastic parts for the tank. It’s now looking awesome and has some quick-release pins, and I’ve purchased a 12-battery AA charger and batteries for the project since it seems to be hungry for power.
The first three attempts at managing the tracks for steering didn’t seem accurate enough, at least by my own driving-related expectations. I finally resorted to trigonometry in the last set of calculations; this appears to be a more natural steering interface.
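As a sketch of what trigonometry-based steering can look like, the following maps a joystick position to left and right track speeds using atan2() and hypot(); the exact mixing here is one plausible formulation, not necessarily the project’s:

```python
import math

def stick_to_tracks(x, y):
    """Map a joystick position to (left, right) track speeds.

    x, y are in [-1, 1]; forward is +y. The heading comes from atan2()
    and the magnitude from hypot(); the two are then mixed into a
    differential-drive pair for the tracks.
    """
    magnitude = min(math.hypot(x, y), 1.0)
    angle = math.atan2(y, x)      # 0 = hard right, pi/2 = straight ahead
    turn = math.cos(angle)        # +1 = hard right, -1 = hard left
    drive = math.sin(angle)       # +1 = forward, -1 = reverse
    left = magnitude * (drive + turn)
    right = magnitude * (drive - turn)
    # Clamp to [-1, 1] for the motor controller.
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)
```

Pushing straight ahead gives both tracks full forward; pushing straight right spins the tracks in opposite directions for a turn in place.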
It looks like the first two phases of the project are now complete and I’m well into the third (autonomous) phase now.
Autonomous (Self-Driving) Mode
Next up is the part where a service takes snapshots from the camera and uses them to make steering decisions. The strategy here is to use image processing to find the road, so to speak: our position relative to the path ahead, as well as any competitors also on the track.
The first interesting piece of the data processing involves some linear algebra and a variety of matrices, each of which performs a distinct function. You multiply a kernel matrix element-wise against each 3×3 neighborhood of pixels and sum the products to replace the center pixel’s value. The first and most useful matrix is named findEdgesKernel and looks like this:
-1 -1 -1
-1  8 -1
-1 -1 -1
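As a sketch of how such a kernel gets applied, here is a plain-Python 3×3 convolution over a grayscale image represented as a 2D list of 0–255 values; the real code would run against Jimp or camera buffers instead:

```python
FIND_EDGES_KERNEL = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to a grayscale image (2D list of ints).

    Each interior pixel becomes the sum of its 3x3 neighborhood
    multiplied element-wise by the kernel, clamped to 0..255; the
    one-pixel border is left at 0 for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))
    return out
```

With this kernel, flat regions cancel to zero (the center weight of 8 balances the eight -1 neighbors), while pixels on the bright side of a boundary, such as the masking-tape track against the floor, stand out.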
This produces a new, simpler image which should highlight only the track (masking tape) for the path ahead. This part is working quite well, so the next step is to process the resulting path image to determine how the tank should steer, both now and in the near future.
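One simple way to start on that next step, sketched here as an assumption rather than the project’s actual method, is to take the centroid of the bright pixels in the edge image and turn its horizontal offset into a steering value:

```python
def steering_offset(edge_img):
    """Estimate where the track is relative to the image center.

    Returns a value in [-1, 1]: negative means the bright (track)
    pixels lie to the left of center, positive to the right. This
    simple version weights all bright pixels equally; weighting lower
    rows more heavily would favor the nearer part of the path.
    """
    w = len(edge_img[0])
    total = weighted_x = 0
    for row in edge_img:
        for x, v in enumerate(row):
            if v > 128:              # assumed brightness threshold
                total += 1
                weighted_x += x
    if total == 0:
        return 0.0                   # no track found; hold course
    center = (w - 1) / 2
    return (weighted_x / total - center) / center
```

That offset could then feed the same differential-drive mixing used for manual steering.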