YOUR NEXT SUPERCOMPUTER WILL BE YOUR CAR

Mobile computing originally meant laptops, then smartphones. Increasingly, though, new cars are powerful computing platforms — and starting to pack more compute power than laptops and even desktops. That trend is set to explode with the introduction of more-extensive automated safety features and self-driving options. Stuffed with CPUs, GPUs, cameras, sensors, and networking hardware, cars are pushing the state of the art in many technologies. In particular, the growing use of cameras, and of AI-inspired software that analyzes their video in real time, is driving an explosion in the compute power found in cars.

Beyond the backup camera: Bird's-eye view and more

Most of us are familiar with the soon-to-be-mandatory backup cameras that provide an invaluable aid for getting in and out of parking spaces and driveways. But high-end vehicles now feature quite a few more cameras: in front to scout traffic and detect lane lines, on the sides to help avoid other vehicles, and all around to provide a very cool “bird’s-eye” view of the car and its surroundings.

While the sensors in these cameras are typically fairly standard, the systems require a lot of innovation in image processing to make them effective. For example, Infiniti’s simulated 360-degree bird’s-eye view stitches together images from four ultra-wide-angle cameras (on the side mirrors, grille, and license-plate holder) and then corrects the substantial distortion to provide a more-or-less natural-looking view of the car from above. This makes it much easier to park accurately and to maneuver in tight spaces.
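
To get a feel for what that stitching involves, here is a minimal Python/OpenCV sketch of the basic steps: undistort each wide-angle camera's frame, warp it onto a common top-down ground plane, and composite the results. The camera intrinsics (K, D) and the ground-plane homography (H) are stand-ins for a real calibration, and the compositing is deliberately naive; this is just the general idea, not Infiniti's actual pipeline.

```python
# Sketch of a "bird's-eye" view: undistort each wide-angle camera image,
# warp it onto a common top-down ground plane, and paste the results into
# one composite canvas. K, D, and H are placeholder calibration data.
import cv2
import numpy as np

def top_down_view(frame, K, D, H, out_size=(800, 800)):
    """Undistort one fisheye frame and warp it onto the ground plane."""
    # Remove the heavy fisheye distortion using the camera intrinsics K and
    # distortion coefficients D (from a one-time calibration).
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    # Project the undistorted image onto a top-down plane using a homography
    # H estimated from known points on the ground around the car.
    return cv2.warpPerspective(undistorted, H, out_size)

def birds_eye(frames, calibs, out_size=(800, 800)):
    """Composite several per-camera top-down views into one canvas."""
    canvas = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)
    for frame, (K, D, H) in zip(frames, calibs):
        warped = top_down_view(frame, K, D, H, out_size)
        # Naive compositing: keep the brighter pixel. Production systems
        # blend the seams and mask out the vehicle body instead.
        canvas = np.maximum(canvas, warped)
    return canvas
```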

Even the now-commonplace backup cameras are getting upgraded, thanks to the availability of more computing power and an assist from machine learning software. Many of the image processors coupled to these cameras are now augmented with object recognition to help prevent collisions with pedestrians, and some are integrated with rear cross-traffic sensors as well. Silicon vendors have been racing to outdo each other with auto-specific application processors and architectures, a trend that's on full display at any Embedded Vision Alliance event.
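
As a toy illustration of the kind of object recognition involved, here is a short Python sketch that runs OpenCV's stock HOG-based people detector on a rear-camera frame. Real backup-camera systems use far more capable, hardware-accelerated models; this only shows the general detect-and-warn flow, and the confidence threshold is an arbitrary choice.

```python
# Toy pedestrian detection on a rear-camera frame using OpenCV's built-in
# HOG + linear-SVM people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def pedestrians_in_frame(frame, min_confidence=0.5):
    """Return bounding boxes of likely pedestrians in a camera frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    return [box for box, w in zip(boxes, weights) if float(w) > min_confidence]

# In a backup-camera loop, any detection in the projected path of the car
# would trigger a warning chime or, in an automated system, a brake request.
```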

Mobileye, the leading provider of both after-market and OEM camera-based vehicle safety systems, has a custom vision-processing chip and board, the EyeQ2, inside its 500-series add-on vehicle safety cameras. Interestingly, the system’s image sensor is only VGA resolution, but it features very high dynamic range to allow for operation in tricky lighting conditions. In parallel, Intel has just snapped up vision chip startup Movidius, with automotive expected to be a key market for its high-performance, low-power Myriad family of chips.

[Image: Xilinx wants to provide the silicon to be the eyes, ears, and much of the brain of your future car]

The after-market Mobileye systems only serve to warn the driver, and aim to provide at least two seconds of advance notice of a potential accident. Cameras tied into automated safety systems also need to accurately estimate object distances, not just position and motion, since they directly control braking and possibly other car functions. This is often accomplished by aligning the images from multiple cameras and using software to compute the depth of objects based on the disparity of where they appear in each camera’s image. However, that approach is far from foolproof. Most of the time there is at least one additional radar or lidar whose data is fused in real time with the vision data to achieve better results in a wider variety of conditions.
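
Here is a rough sketch of that disparity-to-depth idea in Python, using OpenCV's basic block matcher on a rectified stereo pair. The focal length and camera baseline below are illustrative placeholder values, not figures from any production system.

```python
# Stereo depth estimation sketch: match blocks between the left and right
# camera images to get a disparity map, then convert disparity to metric
# depth with  depth = focal_length * baseline / disparity.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.3):
    """left_gray/right_gray: 8-bit, rectified grayscale images."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns disparity in fixed point, scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0              # zero/negative disparity means "no match"
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth                       # per-pixel distance in meters
```

The weakness the paragraph above points to is visible right in the math: objects with little texture or repeated patterns produce unreliable disparities, and small disparity errors at long range translate into large depth errors.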

Cameras for style and fun
[Image: VW concept for a small display that could be used along with a camera to replace an external side mirror]

In the future, cameras may take on even more roles. For example, Tesla and other auto manufacturers have proposed that the requirement for side-view mirrors be relaxed to allow low-profile cameras to do the job. Coupled with an internal monitor, the result would be a more aerodynamic profile for the car, especially helpful for range-hungry electric vehicles. Also, look for dash cams to become more common as a factory-installed option instead of only as a consumer add-on. In the meantime, user-installed dash cams don’t have to be just about safety and recording accidents. Waylens’s new Horizon dash cam will couple with an OBD2 connector to provide you with action-cam-like footage of your adventures overlaid with car performance data.

Radar and lidar used to augment machine vision

While cameras are currently the only way to perform certain important functions like tracking lane lines, for other tasks like collision avoidance, they aren’t always the best solution. They can be fooled by some high-contrast scenes (which may be what happened in the now infamous Florida Tesla crash), can’t always estimate the distance to other vehicles or objects accurately, and don’t do well in poor weather. For that reason, almost all autonomous vehicle projects also feature one or more non-visual ways to “see” objects in the world around them — typically either radar or lidar.
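
To see why fusing radar with vision helps, consider a toy one-dimensional example: combine a noisy camera-based distance estimate with a more precise radar return by inverse-variance weighting, which is the core idea behind the Kalman filter update step used in many fusion stacks. The noise figures below are made up purely for illustration.

```python
# Toy sensor fusion: blend a camera-based distance estimate (noisier,
# especially in bad weather) with a radar return (more precise in range)
# by inverse-variance weighting. Variances are illustrative only.
def fuse_range(camera_range_m, camera_var, radar_range_m, radar_var):
    """Return the fused distance estimate and its variance."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_range_m + w_rad * radar_range_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)
    return fused, fused_var

# Example: the camera thinks the car ahead is 31 m away (sigma ~ 3 m), the
# radar says 29.5 m (sigma ~ 0.5 m); the fused estimate (~29.5 m) leans
# heavily on the radar, and its uncertainty is smaller than either sensor's.
distance, variance = fuse_range(31.0, 3.0 ** 2, 29.5, 0.5 ** 2)
```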

Tesla has just shifted its primary sensing system from its Mobileye-designed cameras to its in-vehicle radar, after a heavy investment in advanced signal processing to help keep the radar from getting confused by metallic objects and other edge cases. Many current “self-driving” car projects, including Google’s, rely on lidar, which is harder to fool, but so far is still larger and more expensive than radar or cameras. Velodyne, the leading maker of automotive lidar, expects prices to continue to fall, though. So expect to see at least some use of radar, and eventually lidar, in nearly every new car in a few years.

An exception to the typical use of radar or lidar for autonomous vehicles is Nvidia’s DAVE-2, which essentially taught itself to drive: a neural network trained in the cloud on nothing but camera footage from real cars and the accompanying time-synced steering data. While its goals are, so far at least, much more limited and research-oriented than those of the car companies, it’s impressive that it can drive correctly on a variety of roads after just a few months of learning, using only vision input.
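
For a sense of what "end-to-end" means here, below is a much-simplified PyTorch sketch of the idea: a small convolutional network that maps a front-camera frame directly to a steering command, trained against the angles a human driver actually used at each moment. The layer sizes are illustrative and are not Nvidia's published architecture.

```python
# Simplified sketch of the end-to-end idea behind DAVE-2: a small CNN that
# maps a front-camera frame directly to a steering angle, learned from
# frames paired with recorded human steering. Layer sizes are illustrative.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 8, 100), nn.ReLU(),
            nn.Linear(100, 1),            # predicted steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# Training pairs each camera frame with the steering angle the human driver
# used at that moment and minimizes the mean squared error between the two.
model = SteeringNet()
dummy_frame = torch.randn(1, 3, 66, 200)  # Nvidia's paper used 66x200 crops
steering = model(dummy_frame)
```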

A supercomputer in your trunk
[Image: The low-power version of Drive supports what Nvidia calls AutoCruise functionality, and consumes only about 10 watts of power]

Whether an augmented car uses only cameras or fuses their output with other sensors, a tremendous amount of data is collected that needs to be analyzed and acted on in real time. In addition, vehicle telematics, GPS, and map data all need to be integrated for any truly autonomous car. All that data requires quite a bit of horsepower to process. While the main training of the AI systems used for self-driving can be done in the cloud on massive computing clusters, the car itself needs to both run the resulting neural network (or other resulting algorithm) in real time and adapt to changing conditions, possibly incorporating new training data that is cloud-sourced from other drivers. Almost every autonomous vehicle project now includes some type of support for learning from all of its vehicles as they drive and gather performance and map data, like the system Nvidia and TomTom just announced.
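
To make that real-time constraint concrete, here is a schematic Python loop: every camera frame has to be captured, run through the deployed network, and turned into control outputs before the next frame arrives. All of the function names are placeholders standing in for camera drivers, the deployed model, and the vehicle control bus.

```python
# Schematic of the real-time constraint on in-car inference. Every function
# passed in is a placeholder for real camera, model, and vehicle-bus code.
import time

FRAME_BUDGET_S = 1.0 / 30      # a 30 fps camera leaves roughly 33 ms per frame

def drive_loop(capture_frame, run_network, send_controls, log_overrun):
    while True:
        start = time.monotonic()
        frame = capture_frame()           # grab the latest camera frame
        controls = run_network(frame)     # steering/braking from the network
        send_controls(controls)           # push commands to the vehicle bus
        elapsed = time.monotonic() - start
        if elapsed > FRAME_BUDGET_S:
            log_overrun(elapsed)          # missed the real-time deadline
        else:
            time.sleep(FRAME_BUDGET_S - elapsed)
```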


The low-end CPU that controls your engine or your entertainment system isn’t up to the task of automatically navigating your car through traffic. The result has been innovation in what are essentially portable supercomputers. Nvidia’s Drive PX 2 is showing up at the high end in fully autonomous test vehicles (others, like the Google cars, have several traditional computers crammed into their trunks). Nvidia has now released a compact, low-power version of Drive for basic automated safety functions, while its larger siblings are designed for more-complex autonomous applications.
When most of us think of computing in our car, we think about the infotainment system, which has itself become quite a technology hotbed. But increasingly, the real computing power in your car will be used for the AI-assisted vision and spatial-sensing systems that help you drive, or help the car drive you. Just as the once-far-fetched idea of using your car battery to power your house has become a possibility with Tesla’s battery systems, at some point you may be running your high-end games on your car’s GPU while it sits in the garage, and streaming them to your TV.
