ROHM Semiconductor announced it has developed AI-equipped MCUs (AI MCUs) – the ML63Q253x-NNNxx / ML63Q255x-NNNxx – that enable fault prediction and degradation forecasting from sensing data across a wide range of devices, including industrial equipment such as motors. These MCUs are the industry’s first* to execute both learning and inference independently, without relying on a network connection.
As the need for efficient operation of equipment and machinery continues to grow, early failure detection and improved maintenance efficiency have become key challenges. Equipment manufacturers are seeking solutions that allow real-time monitoring of operational status while avoiding network latency and security risks. Standard AI processing models, however, typically depend on network connectivity and high-performance CPUs, which add cost and complicate installation.
In response, ROHM has developed AI MCUs that enable standalone AI learning and inference directly on the device. These network-independent solutions support early anomaly detection before equipment failure – contributing to more stable, efficient system operation by reducing maintenance costs and the risk of line stoppages.
The new products adopt a simple 3-layer neural network algorithm to implement ROHM’s proprietary on-device AI solution “Solist-AI™.” This enables the MCUs to perform learning and inference independently, without the need for cloud or network connectivity.
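ROHM has not published the internals of Solist-AI™, but a three-layer feedforward network with on-device training is a well-understood technique. The sketch below is an illustrative example only, written in plain C as might run on a small MCU: the layer sizes, sigmoid activation, squared-error loss, feature count, and toy data are all assumptions for demonstration, not ROHM's actual design.

```c
/*
 * Illustrative sketch only: a minimal 3-layer (input / hidden / output)
 * feedforward network with on-device training via stochastic gradient
 * descent. All sizes and hyperparameters are assumed for demonstration.
 */
#include <stdio.h>
#include <math.h>

#define N_IN  4   /* features derived from sensor data (assumed) */
#define N_HID 8
#define N_OUT 1   /* single anomaly score */

static float w1[N_HID][N_IN], b1[N_HID];   /* input -> hidden weights */
static float w2[N_OUT][N_HID], b2[N_OUT];  /* hidden -> output weights */
static float hid[N_HID];                   /* hidden activations */

static float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

/* Forward pass: returns an anomaly score in [0, 1]. */
static float forward(const float *x)
{
    for (int j = 0; j < N_HID; j++) {
        float s = b1[j];
        for (int i = 0; i < N_IN; i++)
            s += w1[j][i] * x[i];
        hid[j] = sigmoidf(s);
    }
    float s = b2[0];
    for (int j = 0; j < N_HID; j++)
        s += w2[0][j] * hid[j];
    return sigmoidf(s);
}

/* One SGD step on a single (input, target) pair; squared-error loss. */
static void train_step(const float *x, float target, float lr)
{
    float y  = forward(x);
    float dy = (y - target) * y * (1.0f - y); /* dLoss/d(output preact.) */

    for (int j = 0; j < N_HID; j++) {
        /* Gradient w.r.t. hidden preactivation, using pre-update w2. */
        float dh = dy * w2[0][j] * hid[j] * (1.0f - hid[j]);
        w2[0][j] -= lr * dy * hid[j];
        for (int i = 0; i < N_IN; i++)
            w1[j][i] -= lr * dh * x[i];
        b1[j] -= lr * dh;
    }
    b2[0] -= lr * dy;
}

int main(void)
{
    /* Toy data (assumed): "normal" readings -> 0, "fault" readings -> 1. */
    const float normal[N_IN] = { 0.1f, 0.2f, 0.1f, 0.2f };
    const float fault[N_IN]  = { 0.9f, 0.8f, 0.9f, 0.7f };

    /* Small fixed nonzero init so training can break symmetry. */
    for (int j = 0; j < N_HID; j++) {
        w2[0][j] = 0.01f * (float)(j + 1);
        for (int i = 0; i < N_IN; i++)
            w1[j][i] = 0.01f * (float)(i - j);
    }

    /* On-device learning phase, then inference on the same samples. */
    for (int epoch = 0; epoch < 2000; epoch++) {
        train_step(normal, 0.0f, 0.5f);
        train_step(fault,  1.0f, 0.5f);
    }
    printf("score(normal) = %.3f, score(fault) = %.3f\n",
           forward(normal), forward(fault));
    return 0;
}
```

The key property this sketch demonstrates is the one the announcement highlights: both the training loop and the inference pass fit in a few kilobytes of code and static data, using no floating-point library beyond expf(), so neither step requires a cloud service, a GPU, or a network connection.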
AI processing models are generally classified into three types: cloud-based, edge, and endpoint AI. Cloud-based AI performs both training and inference in the cloud, while edge AI uses a combination of cloud and on-site systems, such as factory equipment and PLCs connected via a network. Typical endpoint AI conducts training in the cloud and performs inference on local devices, so a network connection is still required. Furthermore, these models typically perform inference in software, necessitating GPUs or high-performance CPUs.
SOURCE: GlobeNewswire