KGCompiler

Requires Python 3.10+ and PyTorch 2.3+.

Official resources for the paper "KGCompiler: Deep Learning Compilation Optimization for Knowledge Graph Complex Logical Query Answering".


🔍 Overview

KGCompiler (Knowledge Graph Compiler) is the first knowledge-graph-oriented deep learning compiler, designed to optimize Complex Logical Query Answering (CLQA) tasks. By introducing KG-specific compilation optimizations, it achieves an average 3.71× speedup and a significant memory reduction for state-of-the-art KG models without compromising accuracy.

KGCompiler addresses three key challenges in CLQA:

  1. The semantic gap between logical operators and hardware execution paradigms
  2. Dynamic query structures that defy static optimization
  3. The tight coupling of embedding methods to optimization rules

It tackles these with three core components:

  • Graph Capturer: Converts KG models to computation graphs
  • Pattern Recognizer: Detects FOL operator combinations
  • Operator Fuser: Applies KG-specific fusion strategies
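As a rough sketch of what the Pattern Recognizer stage does, the toy below scans a captured operator sequence for known fusible FOL operator combinations. This is a self-contained illustration, not the repository's API; the pattern names are invented for this example.

```python
# Toy illustration (not the repository's API): pattern recognition reduced to
# scanning a captured operator sequence for fusible adjacent combinations.
# The pattern names below are hypothetical.
FUSIBLE = {
    ("projection", "projection"): "2p-chain",
    ("projection", "intersection"): "pi-funnel",
    ("intersection", "negation"): "negated-intersection",
}

def find_patterns(ops):
    """Return (index, pattern_name) for every fusible adjacent pair of ops."""
    return [(i, FUSIBLE[pair])
            for i in range(len(ops) - 1)
            if (pair := tuple(ops[i:i + 2])) in FUSIBLE]

print(find_patterns(["projection", "projection", "intersection", "negation"]))
# → [(0, '2p-chain'), (1, 'pi-funnel'), (2, 'negated-intersection')]
```

Once such a combination is found, the Operator Fuser can replace the matched operators with a single fused kernel instead of dispatching them one at a time.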

*(Figure: KGCompiler architecture)*


🚀 Quick Start 

Models

KG Data

The KG data (FB15k, FB15k-237, NELL995) mentioned in the BetaE paper and the Query2box paper can be downloaded here.

Installation

```shell
git clone https://github.com/LHY-24/KGCompiler.git
cd KGCompiler
pip install -r requirements.txt
```

Basic Usage

```python
from src.graph_capturer import GraphCapturer
from src.operator_fuser import OperatorFuser

# 1. Convert FOL query to computation graph
query = "∃v: Winner(TuringAward, v) ∧ Citizen(Canada, v) ∧ Graduate(v, ?)"
graph = GraphCapturer().capture(query)

# 2. Apply KGCompiler optimizations
optimized_graph = OperatorFuser().fuse(graph)

# 3. Execute on supported models (e.g., BetaE)
from src.models.betae import BetaE
model = BetaE(dataset="fb15k-237")
results = model.execute(optimized_graph)
```

📊 Performance

Speedup Comparison (Batch Size = 1)

| Model | Avg Speedup | Max Speedup |
| --- | --- | --- |
| BetaE | 7.40× | 22.68× |
| ConE | 6.19× | 17.25× |
| Query2Triple | 1.04× | 19.58× |

*(Figure: performance comparison)*

Memory Reduction

*(Figure: memory usage)*


🧩 Supported Features

Datasets

  • FB15K
  • FB15K-237
  • NELL

CLQA Algorithms

| Algorithm | EPFO | Negation |
| --- | --- | --- |
| GQE | ✓ | |
| Q2B | ✓ | |
| BetaE | ✓ | ✓ |
| LogicE | ✓ | ✓ |
| ConE | ✓ | ✓ |
| Query2Triple | ✓ | ✓ |

Query Types

  • EPFO: 1p, 2p, 3p, 2i, 3i, pi, ip, 2u, up
  • Negation: 2in, 3in, inp, pin, pni
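For illustration, query types like these are often encoded as nested tuples over anchor entities and relations. The sketch below is a hypothetical simplification in the spirit of the BetaE codebase's query structures, not KGCompiler's own format: `'e'` marks an anchor entity, `'r'` a relation projection, `'n'` negation, `'u'` union.

```python
# Hypothetical nested-tuple encoding of a few query types (illustrative only):
# 'e' = anchor entity, 'r' = relation projection, 'n' = negation, 'u' = union.
QUERY_STRUCTURES = {
    "1p": ("e", ("r",)),
    "2p": ("e", ("r", "r")),
    "2i": (("e", ("r",)), ("e", ("r",))),
    "3i": (("e", ("r",)), ("e", ("r",)), ("e", ("r",))),
    "2in": (("e", ("r",)), ("e", ("r", "n"))),
    "2u": (("e", ("r",)), ("e", ("r",)), ("u",)),
}

def count_anchors(structure):
    """Count anchor entities ('e') in a nested query structure."""
    if structure == "e":
        return 1
    if isinstance(structure, tuple):
        return sum(count_anchors(s) for s in structure)
    return 0

# A 3i query intersects three one-hop branches, so it has three anchors.
print(count_anchors(QUERY_STRUCTURES["3i"]))  # → 3
```

The nesting makes the dynamic structure of each query explicit, which is exactly what makes static optimization hard and pattern-based fusion attractive.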

🛠 Customization

Add New Fusion Strategy

```python
from src.operator_fuser import FusionStrategy, OperatorFuser

class CustomFusion(FusionStrategy):
    def match_pattern(self, graph):
        # Return True when the graph contains your target operator combination
        ...

    def fuse(self, graph):
        # Rewrite the matched operators into a fused kernel and
        # return the optimized graph
        return graph

OperatorFuser.register_strategy(CustomFusion())
```
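As a concrete, self-contained illustration of the strategy interface, the toy below operates on a plain list of operator names rather than the repository's graph type. It mirrors the `match_pattern`/`fuse` shape of the skeleton above, collapsing consecutive relation projections (a 2p chain) into one fused operator.

```python
# Standalone toy (does not import the repository): a concrete strategy shaped
# like the FusionStrategy skeleton, fusing consecutive "projection" ops.
class ChainProjectionFusion:
    def match_pattern(self, graph):
        # Fire when any two adjacent ops are both projections.
        return any(a == b == "projection" for a, b in zip(graph, graph[1:]))

    def fuse(self, graph):
        fused, i = [], 0
        while i < len(graph):
            if graph[i:i + 2] == ["projection", "projection"]:
                fused.append("fused_projection")  # one kernel for the 2p chain
                i += 2
            else:
                fused.append(graph[i])
                i += 1
        return fused

strategy = ChainProjectionFusion()
g = ["projection", "projection", "intersection"]
if strategy.match_pattern(g):
    g = strategy.fuse(g)
print(g)  # → ['fused_projection', 'intersection']
```

A real strategy would instead emit a fused kernel node into the captured computation graph, but the match-then-rewrite control flow is the same.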

Extend to New Model

  1. Implement model in src/models/
  2. Add pattern recognition rules in src/pattern_recognizer.py
