SPAIQ: the engine behind the quality
Small Pixels Artificial Intelligence Quality enhancement.
Born to solve bandwidth constraints in broadcasting, SPAIQ began by processing video at the pre-encoding stage through our products Stream and ScaleUp. Today, this foundation has evolved into a versatile AI framework. By integrating new architectures like FrameFlex for temporal fluidity, we have moved beyond simple optimization, redefining video performance across the entire digital ecosystem.
What SPAIQ AI delivers
- Cleaner input: fewer artifacts, less noise, and restored structure.
- Stronger compression resistance: the same perceptual quality at lower bitrates.
- Real-time performance: latency under 100 ms.
- Seamless integration: fully encoder-agnostic.
What SPAIQ guarantees
- No Hallucinations: Restorative AI that respects the source, adding zero fake details.
- No Data Retention: We process your pixels without storing any of your data.
- No Infrastructure Lock-in: Works on-prem or in the cloud, adapting to your hardware.
- No Complex Setup: Requires zero modifications to your existing encoding chain.
THE PIXEL ZOO
Adaptive AI for every content type
SPAIQ is engineered for universal versatility, delivering high-performance results across 95% of all video content right out of the box. Yet, we believe that true excellence requires the ability to adapt. Unlike generic AI solutions that apply a single rule to everything, our ‘Pixel Zoo’ offers the option to deploy specialized neural networks fine-tuned for specific content types, ensuring peak performance for your unique visual needs.
Trained on massive, proprietary datasets and refined using perceptual modeling, Pixel Zoo learns patterns and features invisible to the human eye, delivering the most accurate version of each frame.
Sports
Live events
Locks onto fast motion, ensuring edge stability and structural clarity.
Gaming
Computer Generated
Cleans gradients and eliminates digital artifacts for a pristine look.
Surveillance
Security monitoring
Restores visibility and definition even in challenging, low-quality conditions.
Nature
Documentaries
Preserves fine textures and organic details often lost in compression.
Movies & Series
TV Content
Respects the director’s intent with balanced contrast and high fidelity.
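Conceptually, the Pixel Zoo routing above amounts to mapping a detected content class to a specialized model, with a general-purpose fallback. A minimal sketch, where the model names, the `select_model` helper, and the fallback are illustrative assumptions rather than SPAIQ's actual internals:

```python
# Hypothetical sketch of content-aware model routing in the spirit of
# the Pixel Zoo. Model identifiers are invented for illustration.

SPECIALIZED_MODELS = {
    "sports": "pixelzoo-sports",        # fast motion, edge stability
    "gaming": "pixelzoo-gaming",        # clean gradients, CG artifacts
    "surveillance": "pixelzoo-surv",    # low-light restoration
    "nature": "pixelzoo-nature",        # fine organic textures
    "movies": "pixelzoo-cinema",        # balanced contrast, fidelity
}

# General-purpose model covering content outside the specialized classes.
DEFAULT_MODEL = "spaiq-general"

def select_model(content_class: str) -> str:
    """Return the specialized model for a content class, or the general one."""
    return SPECIALIZED_MODELS.get(content_class, DEFAULT_MODEL)
```

Any content class without a dedicated network simply falls back to the out-of-the-box model, which is how a specialized option can coexist with universal coverage.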
INDUSTRIES
Where our technology makes a difference
01
Live Broadcast & Streaming
(VOD + OTT)
Real-time clarity with minimal latency. Bandwidth reduction without losing perceptual quality.
02
Security & Surveillance
Enhanced visibility for mission-critical monitoring.
03
Videoconferencing
Professional clarity at the source. Ultra-low latency enhancement for seamless remote communication.
04
Embedded & Mobile
High-performance AI on low-power hardware.
05
Video Archives & Library Monetization
Validated Performance
Lab Tested: Efficiency & VMAF Scores
Scientific Validation
- +15 points VMAF: significant perceptual gain at the same bitrate. (VMAF, or Video Multimethod Assessment Fusion, developed by Netflix with the University of Southern California, is the gold standard for measuring video quality. Unlike older metrics, VMAF is designed to predict how human viewers actually perceive sharpness and detail, so a 15-point gain translates into a better experience for your audience.)
- -50% Bitrate: Cut bandwidth requirements in half while preserving the original viewing experience.
- Massive Efficiency: Higher quality, lower storage costs, and reduced carbon footprint.
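The bandwidth claim is easy to sanity-check arithmetically. A back-of-the-envelope calculation, where the 6 Mb/s HD baseline is an illustrative assumption rather than a SPAIQ figure:

```python
# Back-of-the-envelope savings from halving the bitrate.
# The 6 Mb/s baseline is an assumed typical HD streaming rate.

baseline_mbps = 6.0                   # assumed HD bitrate
reduced_mbps = baseline_mbps * 0.5    # the claimed -50% bitrate

def gigabytes_per_hour(mbps: float) -> float:
    """Convert a bitrate in megabits per second to gigabytes per hour."""
    return mbps * 3600 / 8 / 1000     # Mb/s -> Mb/h -> MB/h -> GB/h

saved = gigabytes_per_hour(baseline_mbps) - gigabytes_per_hour(reduced_mbps)
print(f"{saved:.2f} GB saved per viewing hour")  # 1.35 GB per stream-hour
```

Scaled across millions of concurrent stream-hours, that per-stream saving is what drives the bandwidth, storage, and carbon figures above.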
Viewer Approved: Real-World Mobile Test
User Validation
Metrics matter, but human perception is everything. Especially on mobile devices.
We validated our technology with a medium-scale global crowdtest using an iOS app simulating a Netflix-like experience across 32 countries.
- 250+ Testers involved in real-world conditions.
- >90% Preference for the Small Pixels enhanced experience.
- Zero Hardware Impact: Verified no increase in device temperature or battery drain.
Related reading:
Perceptual Video Quality: Why Human Perception Is the Ultimate Benchmark
What sets us apart
Independent, Non-Intrusive Integration
SPAIQ runs before encoding: it is fully encoder-agnostic and requires zero infrastructure changes.
Content-Aware Precision
Accepts any input video while automatically selecting specialized models tailored to the footage context, whether that's high-speed sports, nature, or surveillance.
Proprietary Architecture: Built from the ground up
Custom neural networks optimized specifically for video efficiency and real-time performance.
Data-Secure by Design
Our processing is fully real-time and requires zero data storage.
True-to-Source Enhancement
Enhances visual clarity without hallucinations. We restore structure and texture based strictly on the original signal, ensuring no fake details are added.
Fast Deployment
As a pre-processing layer, integration takes minutes, not months.
Flexible Scalability
Deploys on-prem or cloud, running efficiently on CPUs, GPUs, and mobile chipsets.
SUSTAINABILITY
Efficiency with Purpose
Every gigabyte of video data has an environmental cost.
Small Pixels reduces that cost at the source.
By optimizing the video signal before encoding, our technology enables broadcasters and platforms to deliver the same, or higher, perceptual quality at significantly lower bitrates. Less data transmitted means less energy consumed across the entire delivery chain: data centers, networks, and end devices.
Sustainability is not an add-on. It is a direct outcome of how our technology is engineered.
The Impact
–50% Bandwidth, Lower Carbon Footprint
By halving bitrate requirements for high-traffic video streams, Small Pixels significantly reduces CO₂ emissions associated with content delivery infrastructures.
Energy Savings Across the Pipeline
Lower data volumes translate into reduced power consumption across data centers, CDN networks, and user devices.
Low-Power AI, Real-Time Performance
Our lightweight neural networks are designed for efficiency. They run in real time on edge and mobile devices with power consumption below 20 W, without relying on cloud-based inference.
Future-proof design: a living technology
Continuous Evolution
Our AI models are updated every three months with new data, enhanced architectures, and real-world testing.
Each release delivers better performance, higher quality, and greater efficiency, all automatically.
Never obsolete
Because Small Pixels operates before encoding, it is independent from codecs, pipelines, and future formats.
Whether you switch to AV1 or a new device tomorrow, Small Pixels is already compatible.
Lasting ROI
One integration, endless upgrades. Eliminate “rip-and-replace” costs and enjoy a pipeline that appreciates in value over time.
Small Pixels is not static technology. It evolves, keeping every video pipeline at the edge of performance.
FAQ
Is the AI pre-trained or does it learn on my content?
Our models are pre-trained on custom datasets and can be fine-tuned for specific use cases when needed. Customer data is never stored.
How quickly can Small Pixels be integrated?
Integration is typically fast: our pre-processing stage slots into your workflow without modifying your encoding, ingestion, or delivery infrastructure.
How is pre-processing AI different from traditional video enhancement?
Pre-processing strengthens the signal before encoding, making frames more compression-resistant. No filters, no post-processing, and no visual artifacts introduced.
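Structurally, pre-processing is just one extra step between the source and the unmodified encoder. A minimal sketch of that placement, where `enhance` is a hypothetical stand-in for the actual model, not the SPAIQ API:

```python
from typing import Iterable, Iterator

Frame = bytes  # stand-in for a raw video frame buffer

def enhance(frame: Frame) -> Frame:
    """Hypothetical placeholder for the AI pre-processing step.
    The real model would denoise and restore structure here;
    this stub passes the frame through unchanged."""
    return frame

def preprocess(frames: Iterable[Frame]) -> Iterator[Frame]:
    """Sit between the source and the encoder:
    source -> preprocess -> encoder -> delivery."""
    for frame in frames:
        yield enhance(frame)

# The downstream encoder consumes the enhanced frames exactly as it
# would the originals -- no change to the encoding chain itself.
```

Because the stage only transforms frames before they reach the encoder, the rest of the pipeline never needs to know it is there, which is what makes the integration encoder-agnostic.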
Ready to validate these results on your own content?
Experience the SPAIQ engine in your workflow.