Semidynamics, an advanced computing company developing memory-centric AI infrastructure for large-scale inference, announced a strategic investment from SK hynix, one of the world’s leading memory manufacturers. The investment reflects a shared conviction that memory architecture, not compute alone, will define the economics of next-generation AI inference, where cost per token is the metric that matters.
As large language models scale, and as agentic, multi-turn workloads demand persistent context across longer inference sessions, system performance is increasingly constrained by memory capacity and data movement rather than raw compute. Semidynamics can deliver multiples of the memory capacity available in conventional HBM-based inference systems, supporting larger models, larger KV caches, and longer contexts. Together, these capabilities enable more users per rack, which directly lowers cost per token.
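The link between context length and memory capacity can be made concrete with a back-of-envelope sketch. The model dimensions below are illustrative assumptions for a generic large transformer, not Semidynamics or SK hynix figures: a transformer's KV cache stores one key and one value vector per layer per token, so per-user cache size grows linearly with context length, and total capacity demand grows with the number of concurrent users.

```python
# Rough KV-cache sizing sketch (illustrative assumptions only).
# A transformer caches one key (K) and one value (V) vector per
# layer per token, so cache size scales linearly with context length.

def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Per-user KV-cache size in bytes:
    2 (K and V) * layers * kv_heads * head_dim * context_len * element size."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

# Hypothetical 70B-class model: 80 layers, 8 KV heads (grouped-query
# attention), head dimension 128, FP16 (2 bytes per element).
per_user = kv_cache_bytes(80, 8, 128, context_len=128_000)
print(f"{per_user / 2**30:.1f} GiB per user")  # ~39.1 GiB at 128k context
```

Under these assumptions, a single user with a 128k-token context consumes tens of gigabytes of cache before any model weights are counted, which is why systems with more memory capacity can hold more concurrent sessions per rack.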
Headquartered in Barcelona, Semidynamics is one of the few processor companies to have designed its proprietary implementation of the open RISC-V architecture from first principles around the memory wall: not as a retrofit to an existing compute architecture, but as its founding thesis. The architecture incorporates Semidynamics’ proprietary Gazzillion® memory subsystem technology, supported by a growing patent portfolio, and is engineered to reduce the data-movement bottlenecks that constrain today’s AI infrastructure. Gazzillion® is Semidynamics’ proprietary latency-tolerance technology: a design philosophy embedded throughout the processor, from the core and tensor unit through to the memory subsystem, that keeps the system productive during the long memory access times that stall conventional AI accelerators.
The company recently completed a 3nm silicon tape-out with TSMC, its first, and one of the first achieved by a European semiconductor company at that process node, marking a significant milestone on its roadmap to deliver high-performance AI inference processors and vertically integrated systems.
Designed for the Memory Wall
This investment reflects the growing importance of tight architectural alignment between processors and advanced memory technologies. Through this collaboration, the two companies will explore opportunities to co-optimize Semidynamics’ architecture with next-generation memory technologies to support increasingly demanding AI inference workloads.
Semidynamics’ memory-centric architecture is designed to handle the workloads placing the greatest pressure on today’s AI infrastructure: agentic reasoning systems that execute multi-step inference over long contexts, maintain stateful sessions, and operate continuously rather than handling discrete requests. These workloads are fundamentally data-movement problems. By optimizing how data flows through the system, the architecture reduces the bandwidth and latency bottlenecks that determine cost per token at scale.
“SK hynix’s investment is a direct reflection of where AI infrastructure is heading: systems where memory architecture is as strategically important as compute. We built Semidynamics around that thesis, and this partnership strengthens our position as we bring our inference platform to market at a moment when the industry has recognized that token economics are a memory problem as much as a compute problem,” said Roger Espasa, Founder and CEO of Semidynamics.
“AI workloads are fundamentally memory-bound problems, and the industry has been underinvesting in architecture-level solutions. Semidynamics is one of the few companies that has built from first principles around this constraint,” said Heejin Chung, SVP, Head of Venture Investment, SK hynix America.
SOURCE: Semidynamics