🚗 Autonomous Electric Car

Video preview
A 1:10 scale RC monster truck was modified into a fully self-driving electric vehicle using a rotating LiDAR scanner, an electronic speed controller, and a front-mounted RGB-depth camera.

How it works

Robot Operating System (ROS) is an open-source collection of libraries and tools widely used in the field of robotics [1]. Its ability to facilitate seamless communication between electronic and software components (nodes) has contributed to its widespread adoption among engineers and developers. A node that sends data to another node is called a publisher. A node that receives data from a publisher is called a subscriber.
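As a quick illustration of this publisher/subscriber model (not code from this project), here is a minimal rospy pair; the topic name "chatter" and the message type are arbitrary examples, and the two nodes would run as separate processes.

```python
import rospy
from std_msgs.msg import String

def talker():
    """Publisher node: sends a String message to the "chatter" topic at 10 Hz."""
    rospy.init_node("talker")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(String(data="hello"))
        rate.sleep()

def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

def listener():
    """Subscriber node: receives every message published to "chatter"."""
    rospy.init_node("listener")
    rospy.Subscriber("chatter", String, callback)
    rospy.spin()    # keep the node alive, handling callbacks
```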
 
The “brain” of this system is an NVIDIA Jetson Nano single-board computer. All data received from publishers and sent to subscribers must first pass through this board. The Bosch BNO055 inertial measurement unit (IMU) measures linear acceleration and rotation about the x, y, and z axes and communicates with the Jetson Nano over I2C. Integrating acceleration with respect to time gives speed. A VESC VI electronic speed controller controls both the speed of the vehicle and the angle of the front tires, using a multiplexer (MUX) to route the required electric signals to either the motor driver or the servo that turns the front tires. Note that heading angle (θ) and front tire angle (δ) are NOT the same.
notion image
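For concreteness, here is a minimal sketch of how speed can be obtained from the IMU in ROS by integrating forward acceleration over time. The topic name and the absence of any drift correction are simplifying assumptions, not the project's exact code.

```python
import rospy
from sensor_msgs.msg import Imu

class SpeedEstimator:
    """Integrates forward acceleration from sensor_msgs/Imu messages to estimate speed."""

    def __init__(self):
        self.speed = 0.0
        self.last_stamp = None
        rospy.Subscriber("/imu", Imu, self.imu_callback)   # topic name is an assumption

    def imu_callback(self, msg):
        t = msg.header.stamp.to_sec()
        if self.last_stamp is not None:
            dt = t - self.last_stamp
            # integrate acceleration along the vehicle's forward (x) axis
            self.speed += msg.linear_acceleration.x * dt
        self.last_stamp = t
```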
 
Every 100 milliseconds, the RPLIDAR A2M8 scanner completes one full rotation and takes 720 distance measurements, one every half degree. 7,200 distance measurements per second is far more than we need, so in the interest of efficiency, we only consider a 20 degree window in front of the vehicle. The vehicle moves in the direction of the largest gap in which obstacles are more than 0.6 meters away.
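Below is a hedged sketch of this gap-following step, assuming the scan is a 720-element array with index 0 pointing straight ahead and angles growing counter-clockwise; the function name and structure are mine, not the project's exact implementation.

```python
import numpy as np

SCAN_RES_DEG = 0.5          # RPLIDAR A2M8: one reading every half degree
WINDOW_DEG = 20.0           # front viewing window
CLEARANCE = 0.6             # a point counts as "open" beyond 0.6 m

def largest_gap_heading(ranges):
    """Pick a heading (degrees, 0 = straight ahead) toward the widest open gap."""
    n_half = int(WINDOW_DEG / 2 / SCAN_RES_DEG)          # readings per half-window
    idx = np.arange(-n_half, n_half + 1) % len(ranges)   # wrap around the scan
    window = np.asarray(ranges)[idx]
    open_mask = window > CLEARANCE

    # find the longest run of consecutive open readings in the window
    best_len, best_start, run_start = 0, None, None
    for i, is_open in enumerate(open_mask):
        if is_open and run_start is None:
            run_start = i
        if (not is_open or i == len(open_mask) - 1) and run_start is not None:
            run_end = i + 1 if is_open else i
            if run_end - run_start > best_len:
                best_len, best_start = run_end - run_start, run_start
            run_start = None

    if best_start is None:
        return None                                      # dead end: no gap, stop
    mid = best_start + best_len / 2                      # center of the gap
    return (mid - n_half) * SCAN_RES_DEG                 # degrees from straight ahead
```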
Video preview
The vehicle will stop if it encounters a dead end. The speed of the device is given by the following equation:
$$v = v_s \left[ 1 - \exp\left( -\frac{\max(d_{ob} - d_{stop},\, 0)}{d_\tau} \right) \right]$$
where $v$ is the vehicle speed, $v_s$ is the obstacle-free speed (two meters per second), $d_{ob}$ is the distance to the nearest obstacle, i.e. the closest LiDAR scan distance within the vehicle's front viewing window, $d_{stop}$ is the minimum stopping distance (0.4 meters), and $d_\tau$ is the decay constant of the exponential (1.5).
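Translated directly into code, the speed law looks like this (constants as quoted above):

```python
import numpy as np

V_S = 2.0        # obstacle-free speed, m/s
D_STOP = 0.4     # minimum stopping distance, m
D_TAU = 1.5      # exponential decay constant

def target_speed(d_ob):
    """d_ob: distance to the nearest obstacle in the front window, in meters."""
    return V_S * (1.0 - np.exp(-max(d_ob - D_STOP, 0.0) / D_TAU))

# e.g. target_speed(0.4) == 0.0 (stop), target_speed(2.0) ≈ 1.31 m/s
```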

THIS IS NOT GOOD ENOUGH

While simply guiding the vehicle in the direction of the largest gap may be enough to keep it moving without hitting obstacles, this algorithm alone is not sufficient for fully self-driving behaviour. Suppose there is a slight bend in the road before a much sharper turn. The front viewing window may be offset from the gentle bend, and the vehicle may lock onto a corner and stop when one would expect it to take the sharper turn.

If that sounds confusing, let the following video illustrate:
Video preview
There needs to be some way to re-center the vehicle after it swerves around an obstacle. The real-world experiments occurred indoors in a narrow hallway littered with obstacles. The mostly straight walls flanking either side of the vehicle allowed me to set up a convex optimization problem, similar to the ones I helped solve in my first research job, to build virtual “lanes” around the vehicle. These lanes can then be used to make sure the car returns to the center of its path after avoiding an obstacle.
 
The quadprog Python library was used to solve this particular convex optimization problem. Keep in mind that the math required to solve quadratic convex optimization problems isn't taught until grad school, and I attempted this project in my third year of undergrad. While I am familiar with these kinds of problems from my aforementioned research position, a lot of what's happening under the hood when the quadprog library is used is well beyond my knowledge. If you would like a broad explanation of how the quadratic convex optimization works in the getWalls function, click here. If not, just know that the outputs of getWalls are two 1D NumPy arrays representing the vectors normal to the left and right walls, $w_l$ and $w_r$.
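For a flavour of how quadprog can be used for this kind of wall fit, here is a hypothetical sketch, not the project's getWalls. It poses the fit as minimizing $\|w\|^2$ subject to $p_i^T w \le -1$ for the wall points $p_i$, which makes the wall plane $w^T p = -1$ and the distance to it $1/\|w\|$, consistent with the distance formula used below. The function and variable names are mine.

```python
import numpy as np
from quadprog import solve_qp

def fit_wall_normal(points):
    """Hypothetical sketch: fit a wall normal w from 2-D LiDAR points.

    Solves  min ||w||^2  s.t.  p_i^T w <= -1  for every wall point p_i,
    so the wall plane is w^T p = -1 and the distance to it is 1/||w||.
    """
    P = np.asarray(points, dtype=float)        # shape (N, 2)
    G = np.eye(2)                              # quadratic term: 1/2 w^T G w
    a = np.zeros(2)                            # no linear term
    # quadprog enforces C^T w >= b, so p_i^T w <= -1 becomes -p_i^T w >= 1
    C = -P.T                                   # shape (2, N)
    b = np.ones(P.shape[0])
    w, *_ = solve_qp(G, a, C, b)
    return w                                   # vector normal to the wall

# distance from the vehicle (at the origin) to the fitted wall:
# dis = 1.0 / np.sqrt(w @ w)
```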
 
The distance from the vehicle to the left wall can be found by computing $dis_l = \frac{1}{\sqrt{\mathbf{w}_l^T \mathbf{w}_l}}$; the right wall distance is computed in the same way. We define $\hat{w}_l$ as $dis_l\, w_l$, $\tilde{d}_{lr}$ as $dis_l - dis_r$, and $\dot{d}_{lr}$ as $v\,(\hat{w}_l - \hat{w}_r)$, where $v$ is the speed determined by the IMU constantly integrating its acceleration readings. We must also apply some control theory: $k_p$ is the proportional steering constant and $k_d$ is the derivative steering constant. Trial and error were used to determine the appropriate $k_p$ and $k_d$ values, which both came out to 4. The following table describes the effects observed when varying $k_p$ and $k_d$.
  • High $k_p$: Leads to more responsive steering but can overshoot the desired angle, causing oscillations
  • Low $k_p$: May run into obstacles and not follow walls well
  • High $k_d$: Smooths the response by damping rapid changes but may prevent steering entirely if too high
  • Low $k_d$: Jittery, shaky steering
Remember when I explained the difference between heading angle (θ) and front tire angle (δ)? We can only control δ directly, since θ depends on the entire history of speed and steering values. Therefore, this new steering algorithm only deals with δ. Using the following equations, we can determine the front tire angle and, from it, the heading angle, where $l$ is the distance between the axles in meters (0.27017):
$$\delta = \tan^{-1}\left( \frac{l \left( k_p \tilde{d}_{lr} + k_d \dot{d}_{lr} \right)}{v^2 \left( -\hat{w}_{Ly} + \hat{w}_{Ry} \right)} \right)$$

$$\theta = \int_0^t \frac{v(t)}{l} \tan(\delta(t)) \, dt$$
The readings from the front of the vehicle are still used when computing speed using the exact same algorithm as before.
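A minimal sketch of this steering law, assuming the wall normals are 2D NumPy vectors [x, y] and that $\dot{d}_{lr}$ uses the y components of the unit normals (as the denominator of the δ equation suggests); the names and structure are mine, not the project's code.

```python
import numpy as np

KP, KD = 4.0, 4.0          # proportional and derivative steering constants
WHEELBASE = 0.27017        # distance between the axles, in meters

def steering_angle(w_l, w_r, v):
    """Compute the front tire angle delta from the left/right wall normals.

    Assumes v > 0 and that the two unit-normal y components differ, so the
    denominator is nonzero.
    """
    dis_l = 1.0 / np.sqrt(w_l @ w_l)            # distance to left wall
    dis_r = 1.0 / np.sqrt(w_r @ w_r)            # distance to right wall
    w_l_hat = dis_l * w_l                       # unit normals
    w_r_hat = dis_r * w_r
    d_lr = dis_l - dis_r                        # lateral offset from the centerline
    d_lr_dot = v * (w_l_hat[1] - w_r_hat[1])    # its rate of change (y components)
    num = WHEELBASE * (KP * d_lr + KD * d_lr_dot)
    den = v**2 * (-w_l_hat[1] + w_r_hat[1])
    return np.arctan(num / den)
```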
Video preview
The above video shows a virtual simulation of the vehicle navigating through a hallway with some obstacles. The environment is RViz, a 3D visualization tool built into ROS. This testing environment was provided by a professor who specializes in autonomous electric vehicle research and optimization. Red dots denote points close to the LiDAR, and purple dots denote points that are far from the LiDAR scanner.

Say Cheese!📷

The problem with relying solely on the LiDAR scanner is that the vehicle is completely incapable of seeing anything below the plane of the LiDAR beams themselves. The Intel RealSense D435 depth camera solves this problem by allowing the vehicle to detect smaller obstacles in front of it.
 
The camera determines the distance to an obstacle using stereo vision, the same principle behind human depth perception and the parallax method astronomers use to measure distances to stars. Stereo vision involves looking at something from two viewpoints a known distance apart and computing the distance to it from how much the image shifts between the two viewpoints.
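The core relation is simple: depth is inversely proportional to how far a feature shifts (the disparity) between the two imagers. A tiny sketch, where the 5 cm baseline is an illustrative assumption rather than a measured parameter of this camera:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: Z = f * B / disparity (f in pixels, B in meters)."""
    return focal_px * baseline_m / disparity_px

# e.g. a 20-pixel shift with f = 322.282 px and an assumed 5 cm baseline:
# depth_from_disparity(20, 322.282, 0.05)  ->  about 0.81 m
```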
 
The tricky angle projection calculations required for this are mostly handled by the camera's internal circuitry, so the camera outputs the z coordinate (or depth) of what it sees. 921,600 distance measurements are captured by default, one z coordinate for each pixel in the 1280x720 array. This is far too much data for the Jetson Nano to handle, so only every 16th pixel in the y direction and every 32nd pixel in the x direction are sampled. Some of the pixels close to the ground must be ignored, since if they are detected, the very surface the vehicle drives on will be interpreted as an obstacle. The only data we care about is the narrow window in front of the camera corresponding to what the LiDAR also sees. Knowing only a pixel's x and y position in the camera frame, we can find the x, y, and z coordinates of any obstacle the camera sees, so long as we know certain intrinsic parameters defined by the camera manufacturer.

Let $d$ be the distance the camera returns for a given pixel. $c_x$ and $c_y$ are the center pixel coordinates of the camera in the x and y directions respectively: $c_x = 640$, $c_y = 360$. The focal lengths in the x and y directions ($f_x$, $f_y$) are both 322.282 (in pixels). The x, y, and z coordinates of every pixel can be found using the equations below, where $u$ and $v$ are that pixel's coordinates in the x and y directions respectively:
$$X = \frac{(u - c_x)\, d}{f_x} \qquad Y = \frac{(v - c_y)\, d}{f_y} \qquad Z = d$$
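Putting the subsampling and the deprojection together, a minimal sketch (intrinsics as quoted above; the array layout and the zero-means-no-depth convention are assumptions, not the project's exact code):

```python
import numpy as np

CX, CY = 640.0, 360.0          # principal point, as quoted above
FX, FY = 322.282, 322.282      # focal lengths in pixels, as quoted above

def deproject(depth_frame, stride_x=32, stride_y=16):
    """Return (X, Y, Z) points for a subsampled 720x1280 depth frame."""
    points = []
    for v in range(0, depth_frame.shape[0], stride_y):      # rows (y)
        for u in range(0, depth_frame.shape[1], stride_x):  # columns (x)
            d = depth_frame[v, u]
            if d == 0:            # assume 0 means "no depth reading"
                continue
            X = (u - CX) * d / FX
            Y = (v - CY) * d / FY
            points.append((X, Y, d))
    return np.array(points)
```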
Now comes the LiDAR integration. We iterate through every chosen pixel, and if it returns a distance closer than what the LiDAR saw, we assume there is a low obstacle below the LiDAR's field of view. The LiDAR pathfinding logic is left unchanged, except that the corresponding LiDAR scan distance value is replaced by what the camera sees instead. The camera operates at roughly 80 Hz and the LiDAR at 10 Hz, so to avoid needlessly increasing the computational intensity of the algorithm, a loop was added in the camera callback to match the camera and LiDAR sampling frequencies. Fair warning: the camera is susceptible to overheating, which diminishes performance if the car is left running for more than 20 minutes.
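A rough sketch of that fusion and throttling step, with the mapping from sampled pixels to LiDAR scan indices left abstract; the class, names, and structure are mine, not the project's code.

```python
CAMERA_HZ, LIDAR_HZ = 80, 10
FRAME_SKIP = CAMERA_HZ // LIDAR_HZ     # process 1 of every 8 camera frames

class CameraLidarFusion:
    def __init__(self):
        self.frame_count = 0
        self.lidar_ranges = None        # updated by the LiDAR callback

    def camera_callback(self, camera_points):
        """camera_points: (scan_index, distance) pairs for the sampled pixels."""
        self.frame_count += 1
        if self.frame_count % FRAME_SKIP or self.lidar_ranges is None:
            return                      # skip frames to match the LiDAR rate
        for scan_index, camera_dist in camera_points:
            if camera_dist < self.lidar_ranges[scan_index]:
                # a low obstacle below the LiDAR plane: trust the camera
                self.lidar_ranges[scan_index] = camera_dist
```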

RESULTS

The following video shows the autonomous electric vehicle travelling through a long hallway with turns and obstacles, capturing the full behaviour of the system.
Video preview

Hardware

notion image
 
notion image

ARRMA GRANITE 4X4 3S 1:10 SCALE RC MONSTER TRUCK

notion image
  • 80 km/h (50 mph) top speed
  • 11.1 V 5000 mAh LiPo battery
  • 40 minute battery life at 2 meters per second
  • 6 V, 4 A BLX100 brushless 3S ESC motor driver
  • BLX 3660 4-pole 3S brushless motor (up to 3200 rpm per volt)
  • Supports regenerative braking
  • 4-wheel drive
  • 476 mm length × 342 mm width × 182 mm height (with LiDAR)

Bosch BNO055 Inertial Measurement Unit

notion image
  • 9 degrees of freedom
  • Supports 100 Hz quaternion and Euler vector measurements
  • 20 Hz magnetic field strength vector support
  • 2.4 V, 12.3 mA
  • Supports both UART and I2C communication protocols

NVIDIA Jetson Nano

notion image
  • Quad-core ARM Cortex-A57 MPCore CPU
  • NVIDIA Maxwell GPU with 128 CUDA cores
  • Supports GPIO, I2C, I2S, SPI, UART communication protocols
  • 4 USB 3.0 ports, Ethernet and HDMI support
  • 40 GPIO pins
  • 4 GB RAM, 16 GB storage

RPLIDAR A2M8

notion image
  • 0.2-12 meter range
  • Rotates up to 15 times a second
  • 0.45° resolution

Intel RealSense D435 RGB-Depth Camera

notion image
  • 87° × 58° field of view
  • 1280 × 720 resolution
  • 80-90 fps
  • Supports RGB and distance measurement using stereo vision

VESC VI Speed Controller

notion image
  • Supports USB, CAN, UAVCAN, 2x UART, SPI, I²C communication protocols
  • 11.1-60 volt range
  • Outputs up to 80 amps
  • 20 μA hibernation mode
  • Supports field-oriented control of brushless and brushed DC motors

References

[1] “ROS Wiki,” ros.org, https://wiki.ros.org/ (accessed Jun. 22, 2025).
[2] “1/10 GRANITE 3S 4X4 RTR Brushless Monster Truck,” ARRMA / Horizon Hobby, https://www.arrma-rc.com/fr/product/1-10-granite-3s-4x4-rtr-brushless-monster-truck/ARA4302V3.html (accessed Jun. 22, 2025).
[3] K. Townsend and E. Herrada, “Adafruit BNO055 Absolute Orientation Sensor,” Adafruit Learning System, https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/overview (accessed Jun. 22, 2025).
[4] “Jetson Nano,” NVIDIA Developer, https://developer.nvidia.com/embedded/jetson-nano (accessed Jun. 22, 2025).
[5] T. Huang, “RPLIDAR-A2 Laser Range Scanner,” SLAMTEC, https://www.slamtec.com/en/Lidar/a2 (accessed Jun. 22, 2025).
[6] “Intel RealSense Depth Camera D435,” Intel RealSense Store, https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d435.html (accessed Jun. 22, 2025).
[7] “VESC 6 MkVI,” Trampa Boards, https://trampaboards.com/vesc-6-mkvi--the-amazing-trampa-vesc-6-mkvi--gives-maximum-power-original-p-27536.html (accessed Jun. 23, 2025).
[8] “quadprog,” GitHub, https://github.com/quadprog/quadprog/tree/master/quadprog (accessed Jun. 22, 2025).