Neural networks have become an integral component of modern software systems, with a variety of deep learning frameworks available to support their development. As the AI landscape rapidly evolves, adopting a different framework might be necessary to leverage new features, respond to evolving requirements, or avoid relying on deprecated technologies. However, migrating neural network code across frameworks remains a largely manual and time-consuming process, due to the lack of migration approaches specifically tailored for neural network code.
Model-driven engineering offers a promising way to address this challenge, as it enables the definition of framework-independent abstractions of neural networks that can serve as a basis for automated migration. This abstraction can act as a pivot between frameworks, decoupling the neural network from its original implementation and enabling its translation to any target framework.
In our paper Towards Migrating Neural Network Implementations, accepted at the ACM International Conference on AI-powered Software (AIware 2026), we propose an automated approach to migrate neural network code across deep learning frameworks, leveraging the BESSER neural network metamodel as the intermediary representation driving the migration. While our migration approach is generic, in this work we focus on two popular DL frameworks: PyTorch and TensorFlow.
Our migration approach consists of three main steps, illustrated in Figure 1: AST Extraction, Transformation, and Code Generation.

Figure 1: An overview of the migration approach. TensorFlow and PyTorch are used as the source and target frameworks, respectively. Migration from PyTorch to TensorFlow is also supported.
Step 1: Source Code AST Extraction
The first step consists of extracting the Abstract Syntax Tree (AST) of the neural network source code. The AST is a structured representation of the code that captures its components, such as layers and their attributes, in a tree of nodes. This representation preserves the relationships between the different components of the neural network, making it suitable for the transformations that follow in the next step.
To parse the source code, we rely on Python’s built-in ast library, given that both PyTorch and TensorFlow are primarily used with Python. Our extractor supports both Sequential and Subclassing neural network architectures for the two frameworks.
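To make this step concrete, here is a minimal sketch of AST-based layer extraction using Python's built-in ast library. The TensorFlow snippet and the extraction heuristic (matching calls on a `layers` attribute) are illustrative assumptions, not the paper's actual extractor:

```python
import ast

# A hypothetical TensorFlow Sequential snippet to parse; purely illustrative.
SOURCE = """
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Dense(10),
])
"""

def extract_layers(source):
    """Walk the AST and collect layer class names with their keyword arguments."""
    layers = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            # Match calls like tf.keras.layers.Conv2D(...): the enclosing
            # attribute chain ends in "layers". Positional args are ignored
            # here to keep the sketch short.
            if isinstance(node.func.value, ast.Attribute) and node.func.value.attr == "layers":
                kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
                layers.append((node.func.attr, kwargs))
    return layers

print(extract_layers(SOURCE))
# [('Conv2D', {'kernel_size': 3, 'activation': 'relu'}),
#  ('MaxPooling2D', {'pool_size': 2}), ('Dense', {})]
```

The tree form makes it easy to keep each layer's attributes attached to the layer node, which is exactly what the subsequent transformation step needs.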
Step 2: Transformation
The second step consists of transforming the extracted AST into a BESSER model through Model-to-Model transformations. Each neural network component in the source code, including layers, tensorOps and sub-networks along with their attributes, is mapped to its equivalent representation in the BESSER metamodel. The resulting BESSER model is platform-independent, meaning it is decoupled from the source framework and can therefore serve as a reliable pivot for the migration.
To support both Sequential and Subclassing architectures across PyTorch and TensorFlow, we developed four transformation modules, each tailored to a specific combination of framework and architecture type.
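The core idea of the Model-to-Model transformation can be sketched as mapping framework-specific layer names onto canonical, framework-independent ones. The dataclass and name maps below are a hypothetical, simplified stand-in for the BESSER neural network metamodel, not its real API:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a BESSER-style pivot element: one layer,
# expressed independently of any framework.
@dataclass
class LayerSpec:
    kind: str                      # canonical layer type, e.g. "conv2d"
    params: dict = field(default_factory=dict)

# Illustrative name maps from framework-specific classes to canonical kinds.
TF_TO_CANONICAL = {"Conv2D": "conv2d", "Dense": "linear", "MaxPooling2D": "maxpool2d"}
TORCH_TO_CANONICAL = {"Conv2d": "conv2d", "Linear": "linear", "MaxPool2d": "maxpool2d"}

def to_pivot(layers, name_map):
    """Map (class_name, kwargs) pairs from the AST step to pivot LayerSpecs."""
    return [LayerSpec(kind=name_map[name], params=kwargs) for name, kwargs in layers]

tf_layers = [("Conv2D", {"filters": 32, "kernel_size": 3}), ("Dense", {"units": 10})]
pivot = to_pivot(tf_layers, TF_TO_CANONICAL)
print([layer.kind for layer in pivot])  # ['conv2d', 'linear']
```

Because the pivot uses canonical kinds, the same `LayerSpec` list can be produced from either framework and consumed by any code generator.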
Step 3: Code Generation
The third step takes the BESSER pivot model as input and generates the neural network code in the target framework. We developed code generators for both PyTorch and TensorFlow, covering all combinations of Sequential and Subclassing architectures, to support migration in both directions.
The code generation process relies on Model-to-Text transformations, implemented using Jinja templates, to map neural network concepts to their equivalent code constructs in the target framework.
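A Model-to-Text transformation of this kind can be sketched as template lookup plus rendering. The real generators use Jinja templates; the version below uses plain Python format strings to stay dependency-free, and the template table and layer kinds are illustrative assumptions:

```python
# Hypothetical per-layer templates for a PyTorch target. The real BESSER
# generators use Jinja templates; str.format stands in for rendering here.
PYTORCH_TEMPLATES = {
    "linear": "nn.Linear({in_features}, {out_features})",
    "relu": "nn.ReLU()",
}

def generate_pytorch(pivot_layers):
    """Render (kind, params) pivot layers into an nn.Sequential definition."""
    lines = [PYTORCH_TEMPLATES[kind].format(**params) for kind, params in pivot_layers]
    body = ",\n    ".join(lines)
    return f"model = nn.Sequential(\n    {body}\n)"

code = generate_pytorch([
    ("linear", {"in_features": 784, "out_features": 128}),
    ("relu", {}),
    ("linear", {"in_features": 128, "out_features": 10}),
])
print(code)
```

Swapping the template table is all it takes to retarget the same pivot model at TensorFlow instead, which is why the generation step supports both directions.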
Figure 2 illustrates the three steps of the migration process from TensorFlow to PyTorch.

Figure 2: An illustration of the migration process from TensorFlow to PyTorch.
Migration Challenges
Migrating neural network code across frameworks involves several challenges. For instance, some layer attributes are automatically inferred by TensorFlow at runtime and are therefore not explicitly defined in the source code, yet PyTorch requires them, so they must be recovered before the migration can proceed. Similarly, the two frameworks differ in how activation functions are defined and in their channel ordering conventions (channels-last in TensorFlow versus channels-first in PyTorch), both of which require careful handling to keep the migrated code functionally equivalent to the original.
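The two conventions mentioned above can be illustrated with small helpers. Both functions are hypothetical sketches, not the paper's implementation: one reorders a channels-last shape to channels-first, and one pulls a TensorFlow-style activation kwarg out into a standalone PyTorch layer:

```python
def nhwc_to_nchw(shape):
    """Reorder a channels-last (N, H, W, C) shape to channels-first (N, C, H, W)."""
    n, h, w, c = shape
    return (n, c, h, w)

# Illustrative mapping from TF activation names to standalone PyTorch layers.
ACTIVATION_LAYERS = {"relu": "nn.ReLU()", "sigmoid": "nn.Sigmoid()"}

def split_activation(params):
    """TF often passes activations as a layer kwarg; PyTorch adds a layer.
    Return the remaining params and the extra activation layer, if any."""
    params = dict(params)
    act = params.pop("activation", None)
    return params, ACTIVATION_LAYERS.get(act)

print(nhwc_to_nchw((1, 28, 28, 3)))                         # (1, 3, 28, 28)
print(split_activation({"units": 64, "activation": "relu"}))  # ({'units': 64}, 'nn.ReLU()')
```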
Evaluation
We evaluated our approach on five neural networks from the literature, covering a variety of architectures including convolutional and recurrent networks. All five networks were successfully migrated between PyTorch and TensorFlow in both directions.
To further validate the functional equivalence of the migrated networks, we trained both the original and migrated implementations on benchmark datasets and compared their performance. Our results, presented in Table 1, confirm that the migrated networks are functionally equivalent to the originals.

Table 1: Accuracy of source and migrated NNs on benchmark datasets using PyTorch and TensorFlow.
This work is a step towards simplifying neural network code migration, enabling implementations to be ported across frameworks without the need for manual rewriting. All artefacts are available as part of the BESSER open-source platform.