Sunday, December 22, 2024

TI unlocks scalable edge AI performance in smart camera applications with new vision processor family

Building on its innovations in edge intelligence, Texas Instruments (TI) has introduced a new family of six Arm Cortex-based vision processors that let designers add more vision and artificial intelligence (AI) processing at lower cost and with better energy efficiency in applications such as video doorbells, machine vision and autonomous mobile robots.

This new family, which includes the AM62A, AM68A and AM69A processors, is supported by open-source evaluation and model development tools, and common software that is programmable through industry-standard application programming interfaces (APIs), frameworks and models. This platform of vision processors, software and tools helps designers easily develop and scale edge AI designs across multiple systems while accelerating time to market.

“In order to achieve real-time responsiveness in the electronics that keep our world moving, decision-making needs to happen locally and with better power efficiency,” said Sameer Wasson, vice president, Processors, Texas Instruments. “This new processor family of affordable, highly integrated SoCs will enable the future of embedded AI by allowing for more cameras and vision processing in edge applications.”

Scalable AI camera performance at the edge with vision processors

TI’s new vision processors bring intelligence from the cloud to the real world by eliminating cost and design complexity barriers when implementing vision processing and deep learning capabilities in low-power edge AI applications.

These processors feature a highly integrated system-on-chip (SoC) architecture that combines Arm Cortex-A53 or Cortex-A72 central processing units, a third-generation TI image signal processor, internal memory, interfaces, and hardware accelerators that deliver from 1 to 32 tera operations per second (TOPS) of AI processing for deep learning algorithms.

SOURCE: PR Newswire