Velodyne develops Lidar sensor for AV mobility

Velodyne has released a Lidar solution which it says uses surround-view technology to meet the specifications for autonomous mobility.
By Ben Spencer, February 13, 2020
Velodyne Alpha Prime (Source: Velodyne)

Velodyne claims Alpha Prime provides 360-degree surround perception and a 40-degree vertical field of view, while also offering capabilities that help improve vehicle safety and enable more precise mapping.

The solution is expected to detect dark vehicles, low-reflectivity pavement and low-visibility pedestrians at long distances. The sensor also offers advanced negative obstacle detection for potholes and cracks in the road, as well as high resolution and laser calibration to localise vehicles without GPS, the company adds.
 
