
Update on autonomous cars: mastering city street driving

May 14, 2014 | Read time: 2 mins
In a recent blog post, Chris Urmson, director of Google’s self-driving car project, has given an update on the technology, which he says is better than the human eye.

Google’s autonomous vehicles have logged nearly 700,000 miles on the streets of the company’s hometown, Mountain View, California. Urmson says a mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area.

He claims that Google has improved its software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.

Urmson says: “As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer. As we’ve encountered thousands of different situations, we’ve built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it). We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously.”
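Google has not published its software, but the idea Urmson describes can be sketched in a few lines: for each tracked road user, the system consults behaviour models learned from logged driving and weighs every possibility, from the likely to the unlikely. The sketch below is purely illustrative; the situation names, probabilities, and function names are assumptions, not Google’s actual design.

```python
# Illustrative sketch only -- not Google's code. It shows the general idea
# of ranking an observed road user's possible behaviours by learned
# probability, from the likely to the unlikely.

from dataclasses import dataclass

# Hypothetical behaviour priors, of the kind that could be learned from
# thousands of logged driving situations.
BEHAVIOUR_PRIORS = {
    "car_at_red_light": {"stop": 0.98, "run_red_light": 0.02},
    "cyclist_arm_extended": {"turn": 0.90, "continue_straight": 0.10},
}

@dataclass
class TrackedObject:
    object_id: int
    situation: str  # key into BEHAVIOUR_PRIORS

def predict_behaviours(obj: TrackedObject):
    """Return the object's possible behaviours, most probable first."""
    priors = BEHAVIOUR_PRIORS.get(obj.situation, {})
    return sorted(priors.items(), key=lambda kv: kv[1], reverse=True)

# A planner would run this for every tracked object at once -- hundreds
# of them simultaneously, unlike a human driver.
for behaviour, p in predict_behaviours(TrackedObject(1, "car_at_red_light")):
    print(f"{behaviour}: {p:.0%}")
```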

With nearly 700,000 autonomous miles under its belt, Google is growing more optimistic that it is heading toward an achievable goal—a vehicle that operates fully without human intervention.
