
Hesai takes long view with new ADAS Lidar products

AT512 has 300m range while ultra-thin ET25 is designed to sit behind windshield
By Adam Hill January 19, 2024 Read time: 2 mins
AT512 measures 160x100x45mm, and offers a 120-degree horizontal field of view and a 25.6-degree vertical field of view (image: Hesai Technology)

Hesai Technology has launched two advanced driver-assistance system (ADAS) Lidar products: the ultra-long-range AT512 and the ultra-thin ET25, which is designed to be installed behind a vehicle’s windshield.

The AT512 is aimed at automotive OEMs, and offers a range of 300m at 10% reflectivity with a maximum range of over 400m.

Hesai says the sensor has a point rate of 12.3 million points per second, giving it the "highest resolution point cloud among any Lidar manufacturer at 2400x512".

The product measures 160x100x45mm, and offers a 120-degree horizontal field of view and a 25.6-degree vertical field of view.

The manufacturer says it will "greatly enhance" ADAS by improving vehicles' perception capabilities: its ultra-high-resolution 3D environmental scans improve a vehicle's ability to detect objects at long range.

It suggests that AT512-equipped ADAS systems "will have 40% more reaction time to avoid dangerous road conditions and significantly improve transportation safety".

“The AT512 represents a tremendous breakthrough in Lidar technology and provides unprecedented performance improvements in all of the key areas our customers care about, such as range, resolution, thermal, power consumption and form factor," said David Li, co-founder and CEO of Hesai.

"Our core belief that intelligent manufacturing needs to be part of our R&D efforts has helped us move beyond traditional 1550 nanometer laser-based Lidar to more advanced 905 nanometer technology as borne out by the superior performance, quality and reliability of our AT512."

Meanwhile, the ultra-thin ET25 ADAS Lidar is designed to be installed behind a vehicle’s windshield, making it easier to integrate and keeping the sensor clear of dirt and debris.

The ET25 provides 250m of range at 10% reflectivity and received a 2024 CES Innovation Award for best new product.
