By: Jonathan Wong
Photo: AUDI AG
Originally seen in Audi Magazine.
More than ever, companies are determined to get vehicles moving in ways they haven't before. Though we are still a long way from the technology we once saw only in science fiction movies, the automobile has become increasingly sophisticated: a powerful computer on wheels, thanks to advanced onboard systems with autonomous capability. This technological progress is driven by a desire to improve our lives by addressing persistent challenges in society.
Already set to impact many industries, including retail and healthcare, artificial intelligence (or AI) is the heart and soul of future technology, evolving as we do. It can change how a product is sold to a consumer, can help prevent disease or can allow us to create smarter city infrastructure—all driven by data.
In the automotive world, AI learns how we behave on the road, analyzing driving conditions the same way a human being would. It can sense how other drivers behave and use other key data, such as weather and traffic, with the goal of helping us avoid the mistakes that so often lead to collisions.
One important collaborator in Audi's artificial intelligence advancements is Audi Electronics Venture GmbH (AEV), which developed a mono camera project that uses AI to generate a precise 3D model of a vehicle's surroundings at all times. A front-facing camera positioned on the car's hood acts as the sensor, capturing the scene ahead of the car within a 120-degree angle and delivering 15 images per second at 1.3 megapixels.
The images are processed by a neural network, which classifies them into object categories; the system then distinguishes between vehicles, buildings, pedestrians, road markers and traffic signs. "We show the network input scenes to allow it to learn," said Maximilian Muehlegg, who works in software development and machine learning at AEV. "Over time, the network gradually learns what a car, a tree, a person and a house are, and once that training is complete, it can be shown scenes it hasn't seen yet."
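The classification step can be pictured as the network emitting a score per category and the system keeping the most probable one. A minimal sketch, assuming a softmax over raw scores; the category list comes from the article, but the scores and the `classify` helper are purely illustrative, not Audi's actual pipeline:

```python
import math

# Object categories named in the article; the real network's label set
# and scoring are internal to Audi's system.
CATEGORIES = ["vehicle", "building", "pedestrian", "road marker", "traffic sign"]

def softmax(scores):
    """Convert raw network scores into one probability per category."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Return the most probable category and its confidence."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CATEGORIES[best], probs[best]

# With these made-up scores, "pedestrian" wins by a wide margin.
label, confidence = classify([0.2, 0.1, 2.5, 0.3, 0.9])
```

In a real network the scores would come from many stacked layers trained on the labeled scenes Muehlegg describes; only this final "pick the best category" step is shown here.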
The AI module uses neural networks to calculate distance information, creating ISO lines (virtual boundaries that mark a constant distance) which, combined with depth estimates, produce the 3D model of the vehicle's surroundings. During training, the neural network is fed multiple videos recorded with a stereo camera so that it observes a variety of road situations. As a result, the network independently derives the rules it needs to produce 3D information from images taken with the mono camera alone.
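The stereo training signal rests on simple triangulation: two cameras a known distance apart see the same point at slightly shifted pixel positions (the disparity), and nearer objects shift more. A minimal sketch of that relationship; the focal length and baseline values are made-up illustrations, not the specifications of AEV's camera rig:

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    """Triangulate depth from a stereo pair.

    disparity_px: horizontal pixel shift of a point between the two images.
    focal_px:     camera focal length in pixels (illustrative value).
    baseline_m:   distance between the two cameras in meters (illustrative).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# With these illustrative parameters, a point that shifts 30 px
# between the stereo images lies 10 m away.
print(depth_from_disparity(30))  # 10.0
```

During training the network effectively learns to predict this depth from a single image, so at inference time the geometry no longer requires the second camera.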
“Our goal is to train the AI to determine how far away an object is, similar to the way the human eye works to gauge distance,” Muehlegg said. “The two cameras provide slightly offset images of the same scene, and the network learns by matching one image to the other. Once it learns to match well enough, you can remove one camera—just as, if you cover one eye, we can still ‘roughly’ say how far away an object is without using the other eye, which is essentially how this system works.
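The "matching one image to another" Muehlegg describes can be sketched as searching, for each patch in the left image, for the best-aligned patch along the same row of the right image. A toy sum-of-absolute-differences matcher over one image row, under the assumption of rectified images; real systems learn this matching rather than hand-coding it, and the pixel rows here are invented for illustration:

```python
def best_disparity(left_row, right_row, x, window=1, max_disp=5):
    """Find how far the patch around left_row[x] has shifted in right_row.

    Compares a small window around x in the left row against windows
    shifted left by 0..max_disp pixels in the right row, and returns
    the shift with the lowest sum of absolute differences (SAD).
    """
    def sad(d):
        return sum(
            abs(left_row[x + k] - right_row[x - d + k])
            for k in range(-window, window + 1)
        )
    # Only consider shifts that keep the window inside the right row.
    candidates = [d for d in range(max_disp + 1) if x - d - window >= 0]
    return min(candidates, key=sad)

# The bright patch [9, 7, 9] sits 2 pixels further left in the right
# image, so the matcher recovers a disparity of 2 at x = 4.
left = [0, 0, 0, 9, 7, 9, 0, 0]
right = [0, 9, 7, 9, 0, 0, 0, 0]
print(best_disparity(left, right, x=4))  # 2
```

Combined with the triangulation above, a recovered disparity becomes a depth; once the network has internalized enough of these correspondences, one camera suffices, just as one covered eye still permits rough depth judgments.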
“Once you know what’s in front of you, the next step is figuring out how to react, and we can construct a virtual 3D environment—like a video game—to ‘turn’ the camera to avoid potential hazards,” Muehlegg said. “The beauty is we only need a mono camera, whereas multiple sensors are traditionally required to achieve the same feat. It’s astonishing to realize what the sensor is capable of when we try to get 100% information out of it.”
As artificial intelligence continues to help us make strides toward an autonomous future, Audi is taking multiple steps forward, presenting drivers with new possibilities for how to use their time in their vehicles. Once that's settled, maybe we can make some of those old sci-fi fantasies come true.