Tuesday, November 5, 2024

Neuromorphic Chip Developer SynSense Steps Into the Field of Smart Cockpits by Initiating a Technological Exploration With BMW

SynSense, a leading provider of neuromorphic intelligence and solutions, announced that it will work with BMW to advance the integration of neuromorphic chips into smart cockpits and explore related fields. This marks the entry of SynSense's brain-like technology into the smart-cockpit domain.


“Unlike traditional solutions, neuromorphic technology simulates biological neural systems. It is an innovation in the architecture of chips, and has features such as low end-to-end latency, extremely low power consumption, and real-time sensing and computing,” said Ning Qiao, founder and CEO of SynSense.

SynSense was founded in 2017 in Zurich, Switzerland, and moved its headquarters to China in 2020. With founding members from the Institute of Neuroinformatics (INI) in Zurich, SynSense grew out of the highly innovative, research-dedicated environment of the University of Zurich. The company specializes in real-time IoT signal processing and edge AI computing, and develops neuromorphic algorithms and hardware designs.

This exploration with BMW in the area of neuromorphic technology will focus on SynSense's dynamic visual intelligence SoC, Speck, which combines SynSense's low-power SNN vision processor with an event-based sensor on a single chip. It performs both neuromorphic sensing and computing, and uses a fully asynchronous circuit design. It can capture visual information in real time, recognize and detect objects, and perform other vision-based detection and interaction functions.

The essence of dynamic visual technology lies in event-based vision. Rather than recording an entire scene at a fixed frame rate, it captures only changes in the scene. The resulting output contains only operationally relevant data, and visual processing consumes power only when an event triggers computing, which reduces data redundancy and latency while also protecting privacy.
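The difference from frame-based capture can be sketched in a few lines of Python. This is a toy illustration of the event-based principle described above, not SynSense's actual sensor interface; the function name and threshold are hypothetical:

```python
# Toy sketch of event-based (DVS-style) vision: instead of emitting full
# frames, emit an event only for pixels whose brightness changes beyond
# a threshold. A static scene therefore produces no data at all.

def frame_to_events(prev_frame, curr_frame, threshold=10):
    """Compare two grayscale frames (lists of rows of pixel values) and
    emit (x, y, polarity) events for pixels that changed by more than
    `threshold`. Polarity is +1 for a brightness increase, -1 for a decrease."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(c - p) > threshold:
                events.append((x, y, 1 if c > p else -1))
    return events

# An unchanged background produces no events; only the changed pixel does.
prev = [[100, 100], [100, 100]]
curr = [[100, 100], [100, 180]]
print(frame_to_events(prev, curr))  # [(1, 1, 1)]
print(frame_to_events(prev, prev))  # []
```

Because downstream computation runs only when the event list is non-empty, power draw scales with scene activity rather than frame rate, which is the property the article attributes to Speck.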

"Speck doesn't need caches or additional cameras. It can capture visual event information, process it in real time, and conduct smart scene analysis with less than 1 milliwatt of power consumption and 5-10 milliseconds of end-to-end latency," the CEO added.
