Flex Logix® Technologies, Inc., supplier of the most efficient AI edge inference accelerator and the leading supplier of eFPGA IP, announced production availability of its InferX™ X1M boards. At roughly the size of a stick of gum, the new InferX X1M boards pack high-performance inference capabilities into a low-power M.2 form factor for space- and power-constrained applications such as robotic vision, industrial, security, and retail analytics.
“With the general availability of our X1M board, customers designing edge servers and industrial vision systems can now incorporate superior AI inference capabilities with high accuracy, high throughput and low power on complex models,” said Dana McCarty, Vice President of Sales and Marketing for Flex Logix’s Inference Products. “By incorporating an X1M board, customers can not only design exciting new AI capabilities into their systems, but also reach production ramp faster than if they designed their own custom card.”
About the InferX X1M Board
Featuring Flex Logix’s InferX X1 edge inference accelerator, the InferX X1M board offers the most efficient AI inference acceleration for advanced edge AI workloads such as YOLOv5. The boards are optimized for large models and megapixel images at batch=1, providing customers with the high-performance, low-power object detection and other high-resolution image processing capabilities needed for edge servers and industrial vision systems.
The InferX X1M board fits within the low-power requirements of the M.2 specification. To help customers get to market quickly, Flex Logix also provides a suite of software tools to accompany the boards. This includes tools to port trained ONNX models to run on the X1M and a simple runtime framework to support inference processing on both Linux and Windows.
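As a rough illustration of that porting workflow, the sketch below loads and validates a trained ONNX model with the standard open-source onnx Python package before handing it to a conversion step; the inferx_compiler module and its compile_model function are hypothetical placeholder names, not the actual Flex Logix tools, which are not detailed in this announcement.

```python
# Hypothetical sketch of porting a trained ONNX model to the X1M.
# Only the `onnx` package calls are real; `inferx_compiler` and its
# compile_model() function are placeholder names, not Flex Logix APIs.
import onnx
# import inferx_compiler  # hypothetical porting tool provided with the board

def port_model(onnx_path: str, output_path: str) -> None:
    model = onnx.load(onnx_path)        # load the trained model
    onnx.checker.check_model(model)     # verify it is a well-formed ONNX graph
    # The real tool chain would convert the graph into an X1M binary here:
    # inferx_compiler.compile_model(model, output=output_path)

port_model("yolov5s.onnx", "yolov5s.x1m")
```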
The software tools also include an InferX X1 driver with external APIs that let applications easily configure and deploy models, as well as internal APIs that handle the low-level functions controlling and monitoring the X1M board.
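A minimal sketch of how an application might use such external APIs follows; InferXDevice, its method names, and the camera_frames helper are assumptions made for illustration only, not the documented X1M driver interface.

```python
# Hypothetical sketch of an application-facing inference loop.
# InferXDevice and its methods are assumed names, not the real driver API.
import numpy as np

class InferXDevice:
    """Stand-in for the driver's external, application-facing API."""

    def deploy_model(self, path: str) -> None:
        ...  # load a ported model onto the board

    def infer(self, frame: np.ndarray) -> np.ndarray:
        ...  # run one batch=1 inference on a single frame

    def get_status(self) -> dict:
        ...  # board health surfaced via the driver's internal APIs

def camera_frames(count: int = 3):
    """Placeholder frame source: yields blank 1080p RGB frames."""
    for _ in range(count):
        yield np.zeros((1080, 1920, 3), dtype=np.uint8)

# Typical flow: deploy the ported model once, then stream frames at batch=1.
device = InferXDevice()
device.deploy_model("yolov5s.x1m")
for frame in camera_frames():
    detections = device.infer(frame)
```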