
Oxbotica 'deepfakes' are teaching AVs

Autonomous vehicle (AV) software specialist Oxbotica is using 'deepfake' technology to develop cars for future deployment - thus minimising the need for testing on roads.
By Adam Hill June 29, 2020 Read time: 2 mins
Oxbotica AI uses colour coding to generate fake images and synthesise different markings onto the road

Deepfaking uses deep learning artificial intelligence (AI) to generate fake photo-realistic images "to test countless scenarios".

Oxbotica can generate thousands of these in minutes, allowing it to change weather, cars, buildings and time of day - thus exposing AVs to "near infinite variations of the same road scene" without the need for real-world testing. 

While deepfake techniques have been used to mislead, shock and scam by realistically doctoring video footage to make it seem as though someone has said or done something they have not, the company uses them to reproduce scenes in adverse conditions - even putting rainwater on lenses - or to confront vehicles with rare occurrences.

Oxbotica already uses gamers in its R&D and believes that deepfake algorithms can help make AVs safer.

“Using deepfakes is an incredible opportunity for us to increase the speed and efficiency of safely bringing autonomy to any vehicle in any environment," said Paul Newman, co-founder and CTO at Oxbotica.

"What we’re really doing here is training our AI to produce a syllabus for other AIs to learn from. It’s the equivalent of giving someone a fishing rod rather than a fish. It offers remarkable scaling opportunities."

He says there is no substitute for real-world testing, but suggests that the AV sector has become preoccupied with the number of miles travelled "as a synonym for safety".

"And yet, you cannot guarantee the vehicle will confront every eventuality, you’re relying on chance encounter," he says.
 
Oxbotica says the technology can reverse road signage or 'class switch' - which is where an object such as a tree is replaced by another, such as a building - or change the lighting of an image to mimic different times of the day or year.

The company says: "The data is generated by an advanced teaching cycle made up of two co-evolving AIs, one is attempting to create ever more convincing fake images while the other tries to detect which are real and which have been reproduced."

Engineers have designed a feedback mechanism through which the two AIs compete to outsmart each other.

"Over time, the detection mechanism will become unable to spot the difference, which means the deepfake AI module is ready to be used to generate data to teach other AIs."
