Mar 29, 2026
Can Your NPU Run DOOM? Chimera Can.
Is your NPU DOOMed? Quadric's Chimera GPNPU runs every AI model — and a complete DOOM engine. Find out why Quadric is different.
AI Processor IP + software stack built to keep pace with your model roadmap.
One IP foundation
Inference. 1–864 TOPS.
One architecture
Low latency. Power efficient.
One toolchain
Any chip. Your models.
The Problem
Multiple accelerator IPs. Multiple toolchains. And NPUs that can't keep up with the models you need to run.
There's a better way to build.
The Solution
Licensable Processor IP for end-to-end inference. One toolchain across your chip line.
The Opportunity
The next billion AI users won't connect to a datacenter. They'll run inference locally—on silicon built for it.
CONSUMER: Multimodal AI on PCs and mobile. Voice, gesture, and personalization. Privacy and latency—solved.
ENTERPRISE: Edge servers, smart printers, inspection, security, and analytics—at the edge where data lives.
ROBOTICS: Multimodal perception and decision-making. Sensor fusion, navigation, and manipulation.
AUTOMOTIVE: Vision, radar, and LiDAR processing with ASIL B/D support. From in-cabin monitoring to full ADAS.
Here's how it's built.
For Architects
Single-batch performance. Lower power. Deterministic timing. Simpler code.
Optimized for single-batch latency. Immediate inference for real-time responsiveness.
No round trips to separate vector units or DSPs. Data stays local, memory bandwidth drops.
Data movement is instruction-encoded—no routing decisions, no contention. Every cycle is predictable.
Single instruction stream. No partitioning code between NPU, DSP, and CPU. One codebase.
For Developers
Compile most models. Extend the rest in C++.
Graph Compiler auto-compiles hundreds of models. Import ONNX from PyTorch or TensorFlow and run.
No waiting on Quadric for new ops. Extensible in Python via ChiPy™ or in C++, with an LLVM compiler included.
Instruction Set Simulator (ISS) gives cycle-accurate performance prediction before tape-out.
One processor handles pre-processing, inference, and post-processing. No partitioning across NPU, DSP, CPU.
Stay Updated
Mar 29, 2026
Is your NPU DOOMed? Quadric's Chimera GPNPU runs every AI model — and a complete DOOM engine. Find out why Quadric is different.
Mar 13, 2026
At Quadric, we have long argued that heterogeneous NPU designs — those that stitch together multiple specialized fixed-function engines — carry an unavoidable hidden cost: data has to move. A lot. And data movement burns power, adds latency, and creates silicon-area overhead that scales with every new generation of AI models. Now, Intel has made that case for us.
Mar 3, 2026
The ChiPy DSL is Quadric's Python framework for building complete on-chip pipelines. Using YOLOX-M as a case study, we show how backbone inference, box decoding, and NMS run entirely on the Chimera GPNPU — no host CPU intervention, no DDR round-trips, just Python compiled to silicon.
Jan 14, 2026
Tripling product revenues, comprehensive developer tools, and scalable inference IP for vision and LLM workloads position Quadric as the platform for on-device AI.
Jan 14, 2026
Quadric today announced that TIER IV, Inc., of Japan has signed a license to use the Chimera AI processor SDK to evaluate and optimize future iterations of Autoware, open-source software for autonomous driving pioneered by TIER IV.
Dec 11, 2025
Former EVP/GM of Synopsys IP Division Appointed Independent Board Member BURLINGAME, Calif., Dec. 11, 2025 /PRNewswire/ -- Quadric® today announced the appointment of Joachim Kunkel as an independent member of the company's Board of Directors. Kunkel joins a revamped Quadric Board in anticipation of an imminent closing of Quadric's Series C fundraising.
Get the latest on AI silicon, product updates, and industry insights delivered to your inbox.
Join Our Mailing List
In datacenters, yes. But also in pockets, dashboards, factories, and devices you haven't imagined yet. We built the silicon that gets it there.