Mobileye and Lucid partner on autonomous vehicles

January 4, 2017
US-based electric vehicle developer Lucid Motors is to collaborate with Israeli company Mobileye to enable autonomous driving capability on Lucid vehicles.

Lucid plans to launch its first car, the Lucid Air, with a complete sensor set for autonomous driving, including camera, radar and LiDAR sensors.  Mobileye will provide the primary computing platform, full eight-camera surround view processing, sensor fusion software, Road Experience Management (REM) crowd-based localisation capability and reinforcement learning algorithms for driving policy.  These technologies will offer a full advanced driver assistance system (ADAS) suite at launch and then enable a logical and safe transition to autonomous driving functionality through over-the-air software updates.
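
As a rough illustration of that launch strategy, the minimal Python sketch below (an illustration only, with invented names, not Lucid's or Mobileye's actual software) shows how a car shipped with the full sensor set could have additional driving functions switched on later through over-the-air software updates.

    from dataclasses import dataclass, field

    @dataclass
    class VehicleSoftware:
        # Full sensor set fitted at the factory, per the Lucid Air plan above.
        sensors: tuple = ("trifocal_camera", "surround_cameras_x5", "radar", "lidar")
        # Only the ADAS suite is enabled when the car first ships.
        enabled_features: set = field(default_factory=lambda: {"adas"})

    def apply_ota_update(vehicle: VehicleSoftware, new_features: set) -> None:
        # Later software releases enable functions the hardware already supports.
        vehicle.enabled_features |= new_features

    car = VehicleSoftware()
    apply_ota_update(car, {"autonomous_driving"})   # hypothetical feature name
    print(car.enabled_features)                     # contains 'adas' and 'autonomous_driving'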

In common with other Mobileye programs, the camera set includes a forward-facing trifocal-lens camera and an additional five cameras surrounding the vehicle. In addition, Mobileye will offer sensor fusion software that incorporates data from radar and LiDAR sensors, along with the camera set, in order to build the critical environmental model necessary to facilitate autonomous driving.
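
To make the fusion step concrete, the short Python sketch below (an illustration using invented names and a deliberately simple nearest-neighbour rule, not Mobileye's actual software) clusters camera, radar and LiDAR detections into a single list of tracked objects, i.e. a toy version of the environmental model described above.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        sensor: str        # "camera", "radar" or "lidar"
        position: tuple    # (x, y) in metres, vehicle frame
        confidence: float

    @dataclass
    class TrackedObject:
        position: tuple
        confidence: float

    def fuse(detections, gate=2.0):
        """Greedily merge detections lying within `gate` metres of an existing
        object, averaging positions weighted by confidence."""
        objects = []
        for d in detections:
            for obj in objects:
                if abs(obj.position[0] - d.position[0]) < gate and \
                   abs(obj.position[1] - d.position[1]) < gate:
                    w = obj.confidence + d.confidence
                    obj.position = (
                        (obj.position[0] * obj.confidence + d.position[0] * d.confidence) / w,
                        (obj.position[1] * obj.confidence + d.position[1] * d.confidence) / w,
                    )
                    obj.confidence = min(1.0, w)
                    break
            else:
                objects.append(TrackedObject(d.position, d.confidence))
        return objects

    detections = [
        Detection("camera", (12.0, 1.5), 0.7),
        Detection("radar",  (12.4, 1.2), 0.6),
        Detection("lidar",  (40.0, -3.0), 0.9),
    ]
    print(fuse(detections))   # two objects: one fused near (12.2, 1.4), one at (40.0, -3.0)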

Mobileye's REM system is intended to give the vehicle highly accurate localisation capability. Lucid vehicles will benefit from near real-time updates to the collaborative, dynamic Global Roadbook high-definition mapping system. In turn, data generated by Lucid vehicles can be used to enhance the autonomous driving software and will also contribute to the aggregation of Mobileye's Global Roadbook.
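
The sketch below gives a simplified picture of the crowd-sourcing idea (invented names and a plain averaging scheme; it is not Mobileye's REM implementation): vehicles report landmark positions into a shared Roadbook, and any vehicle can then estimate its own position from the offsets it measures to landmarks it recognises.

    from collections import defaultdict
    from statistics import mean

    class Roadbook:
        """Running collection of landmark positions reported by many vehicles."""
        def __init__(self):
            self.observations = defaultdict(list)   # landmark_id -> [(x, y), ...]

        def report(self, landmark_id, position):
            self.observations[landmark_id].append(position)

        def landmark(self, landmark_id):
            # Aggregate all reports for a landmark into a single map position.
            pts = self.observations[landmark_id]
            return (mean(p[0] for p in pts), mean(p[1] for p in pts))

    def localise(roadbook, seen):
        """Estimate the vehicle position from offsets to recognised landmarks.
        `seen` maps landmark_id -> (dx, dy) measured relative to the vehicle."""
        estimates = []
        for lid, (dx, dy) in seen.items():
            lx, ly = roadbook.landmark(lid)
            estimates.append((lx - dx, ly - dy))
        return (mean(e[0] for e in estimates), mean(e[1] for e in estimates))

    book = Roadbook()
    book.report("sign_17", (105.2, 8.1))   # reported by one vehicle
    book.report("sign_17", (104.8, 7.9))   # reported by another vehicle
    print(localise(book, {"sign_17": (30.0, 2.0)}))   # roughly (75.0, 6.0)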
