Cognitive Technologies launches 4D Radar for self-driving cars
November 7, 2018 | Read time: 2 mins
Cognitive Technologies says its 4D Imaging Radar for self-driving cars carries out vertical scanning without using mechanical components and can detect objects with an accuracy of over 97%.


The 4D radar is expected to detect the coordinates, speed and shape of road-scene objects in all weather conditions.

According to Cognitive, the solution supports SAR (synthetic-aperture radar) technology, which is used to build a map of the environment around the vehicle. This technology also allows the car to see potholes and curbs.

The radar detects objects at a distance of 300 metres, across azimuth angles of more than 100 degrees and elevation angles of up to 20 degrees, the company adds.

Azimuth is the angle formed between a reference direction and a line from the observer to a point of interest.
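To illustrate how the quoted azimuth and elevation angles relate to an object's position, the short Python sketch below converts a single detection from spherical sensor coordinates (range, azimuth, elevation) to Cartesian coordinates. It is a minimal illustration only, not Cognitive's implementation; the axis convention and function name are assumptions.

```python
import math

def detection_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one radar detection from spherical coordinates
    (range, azimuth, elevation) to Cartesian x/y/z in the sensor frame.

    Assumed convention: x points forward along the boresight, y to the
    left, z up; azimuth is measured in the horizontal plane and
    elevation above it.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# Example: an object 150 m away, 30 degrees to the left, 5 degrees up
print(detection_to_cartesian(150.0, 30.0, 5.0))
```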

The product also comes with video cameras and Cognitive's low-level data fusion technology to offer improved computer vision capabilities.
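The article does not describe how the camera and radar data are combined, but one common low-level fusion step is to project radar detections into the camera image so that pixel-level features can be attached to each return. The sketch below shows such a projection with a pinhole camera model; the function name, transform and intrinsics are illustrative assumptions, not Cognitive's API.

```python
import numpy as np

def project_radar_point_to_image(point_xyz, radar_to_camera, K):
    """Project a 3D radar detection into camera pixel coordinates.

    point_xyz        -- (3,) point in the radar frame, metres
    radar_to_camera  -- 4x4 homogeneous transform from radar to camera frame
    K                -- 3x3 camera intrinsic matrix

    Returns (u, v) pixel coordinates, or None if the point lies
    behind the image plane.
    """
    p_radar = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous
    p_cam = radar_to_camera @ p_radar                             # into camera frame
    if p_cam[2] <= 0:                                             # behind the camera
        return None
    uvw = K @ p_cam[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```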

Olga Uskova, president of Cognitive Technologies, says the company intends to produce up to 4.5 million radars per year by 2022.
