
Affectiva and Nuance to develop humanised automotive assistant

September 7, 2018

US company Affectiva plans to develop a joint automotive assistant with Nuance Communications which detects driver distraction and drowsiness and voices recommendations, such as navigating to a coffee shop. The solution is intended to align its dialogue with a motorist’s emotional state, based on facial and verbal expressions.

The integrated solution will combine the Affectiva Automotive AI solution with US-based Nuance’s Dragon Drive platform.

Affectiva Automotive AI measures facial expressions, emotions such as anger and surprise, and verbal expressions in real time. It also displays icons indicating drowsiness cues, such as yawning, eye closure and blink rate, as well as physical or mental distraction.

Through the partnership, Dragon Drive will enable the in-car assistant to interact with passengers via emotion and cognitive state detection. The platform currently facilitates this interaction through gesture, touch, gaze detection and voice recognition powered by natural language understanding.

Stefan Ortmanns, executive vice president and general manager of Nuance Automotive, says these additional modes of interaction will help Nuance’s OEM partners develop automotive assistants which can ensure the safety and efficiency of connected and autonomous cars.

In the future, the automotive assistant may also be able to take control of semi-autonomous vehicles if the driver displays signs of physical or mental distraction.

