FSE2025 Accepted Paper: "Bridging Operator Semantic Inconsistencies: A Source-level Cross-framework Model Conversion Approach" and FSE-Poster 2025 Accepted Paper: "Enhancing Deep Learning Transpilation with LLMs: Automation and Semantic Consistency"
ModelX is a source-level cross-framework model conversion tool designed to bridge operator semantic inconsistencies. Compared with previous IR-level approaches, ModelX works at a much finer granularity to achieve semantic consistency. The repository includes CMC (the core converter, containing 3,500+ LoC) and EVALUATION (the experiments).
- Operating System: 64-bit CentOS 7.9
- Packages: CMake >= 3.15; Python >= 3.10
- Libraries: PyTorch 1.10.0a0; Paddle 2.5.2
- Paths: fill in the custom dependency library paths enclosed by '**' in xxx_CMakeLists_template.txt
All PyPI package dependencies and their appropriate versions are captured in requirements.txt (excluding PyTorch and PaddlePaddle).
pip install -r requirements.txt

python main.py --in_dir <in_file/dir> --out_dir <out_dir> --show_unspport True
# Run an example:
python main.py --in_dir EVALUATION/datasets/models/vision/sourceModels/alexnet.py --out_dir EVALUATION/datasets/models/vision/targetModels --show_unspport True

Output example using GoogleNet:
After conversion, you get a folder containing the target framework model. Run the target framework model using the following command:
# When not including .so files:
python xxxx.py
# When including .so files:
LD_PRELOAD=/path/to/build/libmodified_xxx.so python xxxx.py

Currently, the tool mainly evaluates the conversion task from PyTorch to PaddlePaddle, so --source_framework and --target_framework do not need to be specified; they are set to these defaults.
We designed four experiments for ModelX:
1. evaluation of the equivalence of converted operators,
2. comparison with SOTA tools,
3. evaluation of the performance of LLMs in cross-framework model conversion,
4. evaluation of its robustness in cross-framework model conversion.
In this experiment:
- Test pipeline file.
- We provide "test_operatorEquivalence_pipeline.py", which computes the MAE and RMSE to assess the equivalence of the operators converted by the tool.
- Experiment data.
- We provide "EVALUATION/datasets/operators", which contains 140+ operator test files.
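The equivalence check rests on MAE and RMSE between the outputs of the original and converted operators. A minimal plain-Python sketch of these two metrics (function names are illustrative, not the pipeline's actual API):

```python
import math

def mae(xs, ys):
    """Mean absolute error between two flattened operator outputs."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def rmse(xs, ys):
    """Root mean squared error between two flattened operator outputs."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Outputs of the same operator in the source and target frameworks:
src = [0.10, 0.20, 0.30]
tgt = [0.10, 0.25, 0.30]
print(mae(src, tgt), rmse(src, tgt))  # near-zero values indicate equivalence
```

In the real pipeline the inputs would be the flattened tensors produced by the PyTorch operator and its PaddlePaddle counterpart on the same input.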
In this experiment:
- Generating the computation graph and weight file.
- We provide "computation_graph_conversion_pipline.py", which converts the tested model into an .onnx file and stores the model weights.
- Comparing models converted by different approaches.
- We provide "compareExperiment.py", which evaluates the different approaches on the model conversion task.
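One natural check when comparing models converted by different approaches is a layer-wise weight diff. A hedged plain-Python sketch of that idea (the function name and the dict-of-lists weight format are assumptions for illustration, not the script's actual interface):

```python
def max_layer_diff(src_weights, tgt_weights):
    """Per-layer maximum absolute difference between two weight dicts
    mapping layer name -> flat list of floats."""
    diffs = {}
    for name, w in src_weights.items():
        w2 = tgt_weights.get(name)
        if w2 is None or len(w2) != len(w):
            diffs[name] = float("inf")  # layer missing or reshaped after conversion
        else:
            diffs[name] = max((abs(a - b) for a, b in zip(w, w2)), default=0.0)
    return diffs

src = {"conv1.weight": [0.5, -0.5], "fc.bias": [0.1]}
tgt = {"conv1.weight": [0.5, -0.5], "fc.bias": [0.1000001]}
print(max_layer_diff(src, tgt))
```

A conversion approach that preserves semantics should yield near-zero diffs for every layer; an infinite diff flags a layer the approach dropped or reshaped.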
In this experiment:
- Two types of prompting files.
- We provide "orginal prompting.md" and "chain-of-thought prompting.md", which detail the procedures of the prompts.
- Experiment data.
- We provide "test model.xlsx", which details the conversion results of the tested models.
In this experiment:
- Testing the performance of models in the source framework and the target framework.
- We provide "pytorch_model_pipeline.py" and "paddlepaddle_model_pipeline.py", which test models in several application fields. Before testing, the converter above must be used to convert the specific model into the target framework format.
- Experiment data.
- We provide "test vision models.xlsx", "test text models.xlsx" and "test audio models.xlsx".
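The two pipelines compare model quality before and after conversion; for vision models the usual metric is top-1 accuracy. A minimal sketch of that metric in plain Python (the function name is illustrative, not the pipelines' actual API):

```python
def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label.
    logits: list of per-class score lists; labels: list of class indices."""
    correct = sum(
        1 for row, y in zip(logits, labels)
        if max(range(len(row)), key=row.__getitem__) == y
    )
    return correct / len(labels)

logits = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
labels = [1, 0, 0]
print(top1_accuracy(logits, labels))  # 2 of 3 predictions are correct
```

Running the same metric over the source-framework and converted models on one dataset gives the robustness comparison the experiment reports.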
- ModelX/
- README.md
- main.py
- CMC/
- Matchers/
- importMatcher.py
- basicMatcher.py
- astAnalysis.py
- converter.py
- nodeBase.py
- paddle_CMakeLists_template.txt
- pytorch_to_paddlepaddle.json
- codeGeneration.py
- special_mapping.py
- utils.py
- EVALUATION/
- datasets/
- models/
- operators/
- EffectivenessExperiment/
- EquivalenceExperiment/
- LLMsExperiment/
- RobustnessExperiment/
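The Matchers in CMC/ operate on Python source at the AST level (e.g. importMatcher.py handles framework imports). As a rough illustration of that style of source-level analysis, a sketch that collects PyTorch imports from a model file (this is not ModelX's actual implementation):

```python
import ast

SOURCE = """
import torch
import torch.nn as nn
from torch.nn import functional as F

class Net(nn.Module):
    pass
"""

def find_framework_imports(code, framework="torch"):
    """Collect modules imported from the given source framework."""
    hits = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names
                     if a.name.split(".")[0] == framework]
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == framework:
                hits.append(node.module)
    return hits

print(find_framework_imports(SOURCE))  # ['torch', 'torch.nn', 'torch.nn']
```

A real matcher would go further, mapping each matched import and the operators used under it to their PaddlePaddle counterparts via the pytorch_to_paddlepaddle.json mapping file.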
