As ATL – and EMFTVM – are used more in large-scale settings, the need for integration with Continuous Integration (CI) environments rises. These environments typically rely on command-line build tools, such as Ant or Maven, to create reproducible builds. IDE integration, such as for Eclipse or IntelliJ, isn't helpful for creating a production build.
For ATL/EMFTVM, the current situation is that you have to check the binary .emftvm files generated by your Eclipse IDE into your version control system. The CI environment then uses these binaries as-is. This not only locks you into the Eclipse IDE, but also exposes the production build to local developer IDE misconfiguration (e.g. the wrong ATL/EMFTVM version installed).
Other MDE tools, such as Acceleo and Xtext, have already recognised this need, and provide Maven integration, for example. For its upcoming 4.0 release, ATL/EMFTVM will add support for standalone use of its Ant tasks, and will also add an emftvm.compile task to the mix. These Ant tasks can also be used from Maven. A dedicated standalone jar file for the ATL/EMFTVM Ant tasks will be made available for this as part of the ATL build (so it will be updated together with each ATL release).
How to use?
The ATL/EMFTVM Maven example on GitHub demonstrates how to use this standalone jar file. The example contains one ATL transformation module and one UML model.
In the example, Example.atl is first compiled to Example.emftvm bytecode, which is then executed on Example.uml, which yields the same model with an added comment.
From Ant:
To use the new EMFTVM Ant tasks in a standalone Ant script, you must first load them:
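A minimal taskdef declaration could look as follows. Note that this is a sketch: the antlib resource path is an assumption based on the EMFTVM Ant task package naming — check the example build.xml for the exact form.

```xml
<!-- Load the EMFTVM Ant tasks from the standalone jars in target/lib.
     The antlib resource path below is an assumption; verify it against
     the example build.xml on GitHub. -->
<taskdef resource="org/eclipse/m2m/atl/emftvm/ant/antlib.xml">
  <classpath>
    <fileset dir="target/lib" includes="*.jar"/>
  </classpath>
</taskdef>
```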

The above Ant snippet assumes you’ve downloaded all required jars to a directory named target/lib. You can then use any of the following Ant tasks:
- emftvm.compile
- emftvm.registerMetamodel
- emftvm.loadMetamodel
- emftvm.loadModel
- emftvm.newModel
- emftvm.run
- emftvm.saveModel
These Ant tasks are explained in detail on the ATL/EMFTVM wiki page. The emftvm.registerMetamodel task is intended for standalone use only, as normally Eclipse takes care of this already: metamodels are installed as plug-ins, and register themselves. The example build.xml file shows you how to put things together, and also includes a handy get task to download the required jar files into target/lib.
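To give an idea of how these tasks fit together for the example above, here is a sketch of a compile-and-run target. The attribute and nested-element names are illustrative assumptions only; the wiki page documents the exact syntax.

```xml
<!-- Illustrative sketch only: attribute and nested-element names are
     assumptions; see the ATL/EMFTVM wiki for the actual task syntax. -->
<target name="transform">
  <!-- compile Example.atl to Example.emftvm bytecode -->
  <emftvm.compile module="Example" modulepath="${basedir}"
                  outputpath="${basedir}"/>
  <!-- register the UML metamodel (needed when running outside Eclipse) -->
  <emftvm.registerMetamodel metamodel="UML"/>
  <!-- load the input model, run the transformation, save the result -->
  <emftvm.loadModel name="in" path="Example.uml"/>
  <emftvm.run module="Example" modulepath="${basedir}">
    <inoutmodel name="in" as="IN"/>
  </emftvm.run>
  <emftvm.saveModel name="in" path="Example.uml"/>
</target>
```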
From Maven:
In Maven, you can use the maven-antrun-plugin to use the same Ant tasks. Downloading dependencies will be a bit easier, as they are available in Maven repositories. Just add an extra plugin repository, and you’re set:
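The plugin repository declaration goes into your POM like this. The id and URL below are placeholders — take the actual repository URL from the example POM on GitHub.

```xml
<!-- Placeholder id and URL: copy the actual values from the example POM. -->
<pluginRepositories>
  <pluginRepository>
    <id>atl-emftvm</id>
    <url>https://example.org/atl-emftvm-repo</url>
  </pluginRepository>
</pluginRepositories>
```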

Now you can add a plugin section to the build section of your POM file:
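A sketch of such a plugin section is shown below. The maven-antrun-plugin, its run goal, and the maven.plugin.classpath reference are standard antrun features; the chosen phase, the antlib resource path, and the emftvm.compile attributes are assumptions — see the full POM in the example for the real configuration.

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.8</version>
      <executions>
        <execution>
          <id>compile-atl</id>
          <phase>process-resources</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <!-- maven.plugin.classpath is provided by maven-antrun-plugin
                   and contains the plugin dependencies declared below -->
              <taskdef resource="org/eclipse/m2m/atl/emftvm/ant/antlib.xml"
                       classpathref="maven.plugin.classpath"/>
              <!-- attribute names are assumptions; see the full POM -->
              <emftvm.compile module="Example" modulepath="${basedir}"
                              outputpath="${project.build.directory}"/>
            </target>
          </configuration>
        </execution>
      </executions>
      <dependencies>
        <!-- EMFTVM and EMF jars go here (collapsed); see the full POM -->
      </dependencies>
    </plugin>
  </plugins>
</build>
```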

This time, the Ant taskdef task can simply refer to the Maven plugin classpath. Note that the Maven plugin dependencies are collapsed in the snippet; they list the required jar files: see the full POM file for details.
Conclusion
This post has shown how to use ATL/EMFTVM outside its Eclipse IDE cocoon, in the form of Ant tasks and a Maven build step via the maven-antrun-plugin. This can help you loosen the dependency on your Eclipse IDE, at least for build purposes. Also, you no longer need to commit automatically derived resources, such as EMFTVM byte code or generated models, to your version control system.
At HealthConnect/Corilus, we used this for exactly these reasons: people can edit ATL files with their preferred IDE or text editor (even if Eclipse provides the best tooling, of course ;-)). In any case, they no longer need to switch IDEs when making a small edit to an ATL file, amidst all the other Java, XML, YAML, etc. edits.
But the main advantage is in branching and merging (or cherry-picking), where generated artifacts nearly always cause merge conflicts. By not committing these generated artifacts anymore, they can no longer cause merge conflicts. Our Jenkins CI server can now generate the ATL/EMFTVM artifacts – byte code and models – by itself. That being said: committing generated source code can be a good thing, especially if you're the maintainer of the code generator – or you're dealing with some sort of merging code generator on top of hand-written code – as it enables you to review the generator output.
