Affectiva and Nuance to offer assistance
December 6, 2018 | Read time: 2 mins
US company Affectiva plans to develop a joint automotive assistant that detects driver distraction and drowsiness and voices recommendations, such as navigating to a coffee shop. The solution is intended to adapt its dialogue to a motorist’s emotional state, based on facial and verbal expressions.


The integrated solution will combine the Affectiva Automotive AI solution with US-based Nuance Communications’ Dragon Drive platform.

Affectiva Automotive AI measures facial expressions and emotions such as anger and surprise, as well as verbal expressions, in real time. It also flags indicators of drowsiness, such as yawning, eye closure and blink rates, along with physical or mental distraction.

Through the partnership, Dragon Drive will enable the in-car assistant to interact with passengers via emotional and cognitive state detection. It currently facilitates this interaction through gesture, touch, gaze detection and voice recognition powered by natural language understanding.

Stefan Ortmanns, executive vice president and general manager of Nuance Automotive, says these additional modes of interaction will help its OEM partners develop automotive assistants that can ensure the safety and efficiency of connected and autonomous cars.

In the future, the automotive assistant may also be able to take control of semi-autonomous vehicles if the driver displays signs of physical or mental distraction.
