
Update on autonomous cars: mastering city street driving

May 14, 2014
In a recent blog post, Chris Urmson, director of Google’s self-driving car project, has given an update on technology that he says sees better than the human eye.

Google’s autonomous vehicles have logged nearly 700,000 miles on the streets of the company’s hometown, Mountain View, California. Urmson says a mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area.

He claims that Google has improved its software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.

Urmson says: “As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer. As we’ve encountered thousands of different situations, we’ve built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it). We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously.”
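Urmson’s description of “software models of what to expect” suggests a simple probabilistic picture: each detected object is assigned a set of candidate behaviours with learned likelihoods, and the planner stays cautious whenever a risky behaviour remains plausible. The Python sketch below is purely illustrative, not Google’s code; the class, function, probabilities and threshold are all invented for this example.

    # A minimal, hypothetical sketch of the idea Urmson describes:
    # assign likelihoods to the behaviours a detected object might take,
    # then plan around even low-probability dangerous outcomes.
    from dataclasses import dataclass

    @dataclass
    class Behavior:
        name: str
        probability: float   # in a real system, learned from observed situations
        dangerous: bool      # would this behaviour conflict with our path?

    # Toy model for a car approaching a red light, echoing Urmson's example:
    # stopping is likely, running the light is unlikely but still modelled.
    car_at_red_light = [
        Behavior("stops at red light", probability=0.98, dangerous=False),
        Behavior("runs the red light", probability=0.02, dangerous=True),
    ]

    def required_caution(behaviors, risk_threshold=0.01):
        """Yield if the total probability of dangerous behaviours exceeds the threshold."""
        risk = sum(b.probability for b in behaviors if b.dangerous)
        return "yield/brake" if risk > risk_threshold else "proceed"

    print(required_caution(car_at_red_light))  # -> "yield/brake"

Even with the red-light runner modelled at only two per cent likelihood, the toy planner chooses to yield, mirroring Urmson’s point that the unlikely cases are anticipated alongside the likely ones.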

With nearly 700,000 autonomous miles under its belt, Google is growing more optimistic that it is heading toward an achievable goal—a vehicle that operates fully without human intervention.

