FGI: Fast GNN Inference on Multi-Core Systems

C code implementation of FGI: Fast GNN Inference on Multi-Core Systems (IPDPSW 2025).

This repository is the official implementation of the IPDPS Workshops 2025 paper FGI: Fast GNN Inference on Multi-Core Systems. In this work, we present FGI, a fast GNN inference system for large-scale graph data. FGI employs multiple parallelization strategies to maximize utilization of the multi-level cache hierarchy in multi-core systems. We evaluate the Graph Convolutional Network (GCN) model with FGI on a 128-core AMD EPYC system. FGI achieves up to 2.64x inference speedup over DGL and 3.36x over PyG across five large-scale, high-dimensional graph datasets with different properties.

Recommended Requirements

Software Requirements

  • OS: Linux Ubuntu >= 16.04 or Rocky Linux >= 9.5
  • Software stack dependencies: PyTorch == 2.3.1, DGL == 2.4.0, PyG == 2.6.1, GCC == 11.5.0
  • Parallel computing tool: OpenMP 4.5

Hardware Requirements

  • Multi-core AMD CPUs with multiple Core Complex Dies (CCDs)
  • Main Memory >= 8GB

Code Coming Soon
