Hammerspace for High-Performance Computing

“Hammerspace…allows for immense flexibility, immense cost savings, and covers all cluster workload profiles.”
Exec Director, Research Computing Operations, ACCRE
Vanderbilt Advanced Computing Center
Why Traditional HPC File Systems and Storage Are Evolving
Classical HPC architectures were built for single supercomputers and a few tightly coupled workloads. Today, environments look very different:
- Many workloads
- AI demands
- Many users
- Many locations
- Many storage types
What hasn’t changed is the need to keep GPUs and CPUs fully utilized. This requires solutions that deliver high read/write performance, fast metadata operations, efficient small-file and dataset updates, and low-latency data paths.
Legacy proprietary file system designs force painful trade-offs:
- Performance vs. shareability: Scratch parallel file systems deliver excellent performance near the supercomputer but are typically proprietary and siloed, while scale-out NAS and object storage are shareable but too slow.
- Copy sprawl: Teams create multiple copies of the same dataset across scratch, project storage, AI environments, object stores, and archives, while unused capacity sits siloed and idle on existing arrays and inside GPU servers.
- Fragmented workflows: Checkpointing, simulation, AI training, AI RAG, visualization, collaboration, and long-term retention often run on platforms with different protocols and controls, slowing down research and adding operational risk.
One Platform From Tier 0 to Archive
The fastest storage in any HPC environment is the NVMe already sitting inside the GPU and CPU servers. Historically, that capacity has been limited to node-local scratch or caching, or left idle entirely.
Hammerspace changes that. It:
- Activates local NVMe across GPU/CPU clusters into a single global file system.
- Provides high-performance read/write for checkpointing, hot training sets, and ultra-low-latency workloads.
- Uses automated data orchestration policies to transparently protect and tier data, without changing paths or breaking applications.
- Keeps local NVMe fully utilized, turning the NVMe already in servers into a performance and budget goldmine.
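To make the tiering idea above concrete, here is a minimal shell sketch of the underlying concept. Everything in it is invented for illustration (the directory paths, the 30-day threshold, and the `tier_cold_files` function name); Hammerspace applies placement policies transparently within one namespace, whereas this script simply moves cold files from a fast tier to a capacity tier, preserving relative paths.

```shell
#!/bin/sh
# Conceptual sketch of policy-based tiering, NOT Hammerspace's actual
# mechanism: files in the fast (NVMe) tier that have not been modified
# for more than a given number of days are moved to a capacity tier.
#
# $1 = fast-tier directory, $2 = capacity-tier directory, $3 = age in days
tier_cold_files() {
    fast=$1; capacity=$2; age_days=$3
    mkdir -p "$capacity"
    # Find cold files and move each one, keeping its relative path.
    find "$fast" -type f -mtime +"$age_days" | while read -r f; do
        rel=${f#"$fast"/}
        mkdir -p "$capacity/$(dirname "$rel")"
        mv "$f" "$capacity/$rel"
    done
}
```

Note that `find -mtime +30` matches files untouched for more than 30 full days; a real data orchestration layer would evaluate such objectives continuously and keep the file visible at its original path.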
Hammerspace Solutions for HPC Workloads
Scratch & Simulation
High-performance read/write data placement as close as physically possible to GPUs and CPUs, using server-local NVMe. Ideal for:
- Checkpointing and restart
- Hot training and inference datasets
- Ultra-low-latency workloads
Project & Shared Research Workspaces
Persistent, high-performance storage for the bulk of HPC data:
- Shared team workspaces for research groups and institutes
- Multi-tenant AI and data science environments
- Cross-vendor storage pools unified under one namespace
Home Directories, Collaboration & Visualization
Enterprise-grade data services for user environments:
- Researcher home directories
- Departmental shares and visualization pipelines
- Mixed protocol access (NFS, SMB, S3) with snapshots and quotas
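As an illustration of mixed-protocol access to one namespace, the config fragment below shows how a Linux client might reach the same shared data over NFS and SMB. The host name, export path, mount points, and credentials file are placeholders invented for this sketch, not Hammerspace-specific values.

```shell
# /etc/fstab (hypothetical host, share, and mount points)

# NFSv4 mount of a shared research namespace
anvil.example.edu:/research   /mnt/research      nfs4  defaults,_netdev                   0 0

# SMB mount of the same share for Windows-style tooling
//anvil.example.edu/research  /mnt/research-smb  cifs  credentials=/etc/smb-cred,_netdev  0 0
```

S3 access to the same data would go through an S3-compatible endpoint with object tooling rather than a mount; the exact endpoint and bucket naming depend on the deployment.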
Archive & AI Reuse of Historical Data
Modernized long-term retention that doesn’t strand data on tape:
- Global visibility into archived datasets
- Policy-based recall and re-placement on flash or performance tiers
- Preparation of historical data for future AI workloads