C code implementation of FGI: Fast GNN Inference on Multi-Core Systems (IPDPSW 2025).
Binglin Ji, Chenfeng Zhao, Roger D. Chamberlain
This repository is the official implementation of the IPDPS Workshops 2025 paper FGI: Fast GNN Inference on Multi-Core Systems. In this work, we present FGI, a fast GNN inference system for large-scale graph data. FGI employs different parallelization strategies to maximize utilization of the multi-level cache hierarchies in multi-core systems. We evaluate the Graph Convolutional Network (GCN) model with FGI on a 128-core AMD EPYC system. FGI achieves up to 2.64x inference speedup compared to DGL and 3.36x compared to PyG across five large-scale, high-dimensional graph datasets with different properties.
- OS: Linux Ubuntu >= 16.04 or Rocky Linux >= 9.5
- Software stack dependencies: PyTorch == 2.3.1, DGL == 2.4.0, PyG == 2.6.1, GCC == 11.5.0
- Parallel computing tool: OpenMP version 4.5
- Multi-core AMD CPUs with multiple Core Complex Dies (CCDs)
- Main memory >= 8 GB
