Datacenter, HPC, and AI Cluster Reference Architectures

Clients need to get results as fast as possible. That’s why SourceCode created a series of reference architectures for specific types of workloads.

Each one is the result of hours of engineering, testing, and optimization. They save clients time, letting each engagement focus on customizing the design to meet specific workload and organizational needs rather than redoing the basic elements from scratch.

Each reflects our past work designing clusters to meet the unique demands that high-performance workloads place on hardware, and each is a great starting point for your customized cluster.

Learn more about our cluster reference architectures:

AI Factory Cluster

The Atlas AI Cluster is SourceCode’s latest innovation in collaboration with supercomputing leader Eviden and high-performance storage pioneer VDURA, designed to simplify and accelerate the development of AI factories. This integrated solution combines advanced engineering with best-in-class technologies to reduce cost and complexity while delivering leading performance, efficiency, and seamless scalability.

AI Ignite Cluster

Atlas AI Ignite is SourceCode’s innovative co-designed solution for organizations that want to get started with their AI factory build-out and scale as they go. Built in collaboration with Dell and VDURA, this turnkey system accelerates the crucial early stages of an AI factory and simplifies ongoing scaling, delivering a cost-effective, efficient path to success.

HPC Cluster

In High-Performance Computing (HPC), speed and results matter. However, designing and procuring a balanced, open-source HPC system that delivers both high performance and strong ROI can be complex and time-consuming. The turnkey Luna HPC Cluster removes these challenges, offering an integrated solution that accelerates your time-to-results.

Big Data Cluster

Workloads that extract value from massive data sets with accelerated computing (HPC or AI/ML) are highly desirable, but they can suffer from computing bottlenecks and poor performance. Even all-flash deployments face additional challenges when built on DAS and NAS. The Big Data Cluster removes bottlenecks with a shared pool of NVMe over Fabrics (NVMe-oF) storage that enables jobs to run up to 10x faster, while S3-compliant storage allows you to control costs.

CDI Cluster

Are you managing compute resources in a multi-tenant environment? Do you have some systems sitting idle while another type of system can never meet user demand? Have you found your cloud environment to have unsustainably high operational costs? If so, consider the benefits of our composable disaggregated infrastructure (CDI) Cluster.

CFD Cluster

HPC workloads like computational fluid dynamics (CFD) are proving unsuitable for public cloud computing. The cloud’s pay-per-usage model leads to high operational costs that only grow as your cloud usage deepens. For CFD and similar workloads, it can be far more effective to build a balanced, efficient on-premises cluster to run your jobs.

GigaIO SuperNODE™

It’s just simpler and more efficient! Working in collaboration with GigaIO, SourceCode offers the SuperNODE for the growing set of applications that need many GPUs or FPGAs. A fast interconnect and fabric-centric software deliver near-linear scalability.

GigaIO SuperNODE harnesses the combined computational power of multiple GPUs at ultra-low latency, providing the horsepower necessary for a robust AI infrastructure. It runs various accelerator technologies (such as GPUs and FPGAs) at scale and with efficiency, reducing the component and energy costs associated with multi-CPU systems.