Object Teams Final 0.7.0
On a day like this (2010/07/07), it should be evident what is better than one or a few stars: a strong team!
But when you look at what happens in your software at runtime, all you see is a soup of individuals (called objects) running around all over the place (remember: standard module systems do not create boundaries around runtime objects!).
In all humbleness: are you surprised to learn that the idea that objects should team up was a German invention back in 2002?
The Object Teams Development Tooling 0.7.0 is released!
(supersedes version 1.4.0 from objectteams.org)
This is the first release after the move of the project from objectteams.org to Eclipse.org, at which point it is time to express a big thank you to all who helped along the way: Mentors, EMO, Legal, the Tools PMC, and – of course – the many Contributors. This is: a team effort!
Now you might say: that’s a pretty scattered team: some people at the Eclipse Foundation in Ottawa, some students in Berlin, people from Austin, and where-not. But that’s actually the point about a team: you start from a set of individuals who initially do not necessarily have any particular relationship. Then you create a team where each individual takes one particular role. This means you further specialize these existing individuals (you don’t want a team of 11 goal keepers, do you? Even with a Manuel Neuer giving a perfect forward pass, it takes a Miroslav Klose to make the goal). And then you unite all members of the team towards a common goal, giving the team a new identity, so that the team acts as one.
Still, each team member brings into the team the strengths of their particular background, meaning: the individuals do not completely disappear, but some properties of the individuals shine through when they play their roles in the team.
Now, what’s that got to do with software?
Suppose you already have the core of an application implemented, and now it’s your task to implement one or two more user stories on top of the existing code. You look at the existing classes and mark those that in some way or other are related to what you need to implement. One user story relates to all classes marked red, another one to those marked greenish, etc. (and do expect overlap):
How do you implement the user story that involves all the red classes, such that the new implementation sits in a nice new module that concentrates on only this one task/user story?
Consider the red entities as plain individuals: they don’t know about the new task they should contribute to. Also keep in mind that not all instances of those red classes will participate in the new user story. What we need to do is: specialize a few of those individuals so they can play particular roles wrt the new task, and unite those roles within a new team.
If you live in flat-land, this is tremendously difficult, but if you’re able to just add one more dimension to the picture …
… the solution is very straightforward:
Now:
- The implementation of the red user story is a strong, cohesive team.
- Its members are roles specialized in their particular sub-tasks.
- Each role relates (↓) to one individual from the application core and specializes what is already given towards what the team requires.
- How exactly each role relates to its base is declared using two kinds of atomic bindings: callout and callin method bindings (see, e.g., this post).
- These roles only affect the system as long as the team context is active, and activation happens per team instance (with options for fine-tuning).
- Other teams may be formed for other purposes / user stories (see the greenish team)
- Even if you start already with this 3-D picture, with Object Teams you can always add one more dimension to your architecture, if needed.
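For readers without OT/J at hand, the core idea behind these points can be approximated in plain Java (all class names here are made up for this sketch; real OT/J provides dedicated language support for the base-to-role binding, lifting and activation): a team is an outer class whose inner role classes each wrap one individual from the application core.

```java
import java.util.HashMap;
import java.util.Map;

// a base class from the imaginary application core
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

/** Plain-Java approximation of a team: roles are inner classes,
 *  each bound to one base object; the team maintains the mapping. */
public class SoccerTeam {
    private final Map<Person, Player> roles = new HashMap<>();

    /** Role class: specializes an existing Person towards the team's purpose. */
    class Player {
        private final Person base; // the individual playing this role
        Player(Person base) { this.base = base; }
        String play() { return base.name + " plays for the team"; }
    }

    /** In OT/J, role lookup/creation ("lifting") is automatic; here we code it by hand. */
    Player getRole(Person p) {
        return roles.computeIfAbsent(p, Player::new);
    }

    public static void main(String[] args) {
        Person klose = new Person("Klose");
        SoccerTeam team = new SoccerTeam();
        System.out.println(team.getRole(klose).play());
        // the same individual yields the same role instance within this team:
        System.out.println(team.getRole(klose) == team.getRole(klose));
    }
}
```

The hand-written `getRole` hints at why language support pays off: in OT/J the role is found or created transparently whenever a base object enters the team context.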
Meta Feedback
Last Friday I received some wonderful meta-feedback. What’s that, you say?
It’s feedback on feedback, or, second order feedback.
First Order Feedback
Initially, I’m thinking of feedback whereby a tool tells its user what s/he’s done wrong and where to go in order to improve. As I mentioned earlier, I’m not interested in a tool that just works when it works, as that might require its users to get everything 100% correct right from the beginning. In our business I’m interested in tools that help the user to get it to work, from initial buggy attempts towards a full working solution.
So, when working on the Object Teams Development Tooling, how can we make the tool speak to the user in really helpful ways?
First, we take great care to give precise error messages and warnings regarding all kinds of situations that look funny, strange or plain bogus. Last time I counted, the OT/J compiler featured 314 messages specific to OT/J. This is excellent for a seasoned OT/J developer, but someone still trying to learn the language might be a little bit puzzled by messages like:
The clue on how to help the puzzled users lies in the suffix “OTJLD 4.3(e)”: that’s exactly the paragraph in the OT/J Language Definition that defines what’s going on here. But what’s a reference like “§4.3(e)” good for? So the next thing we added to the OTDT was a context menu action on any OT/J-related problem in the Problems view:
What do you see:
- At the top you see an editor with an underlined, buggy piece of code
- Next you see the Problems view with the corresponding error message
- Next you see a context menu on the problem with an entry “Go to Language Definition“.
- At the bottom, finally, you see a small extract from the language definition: exactly the paragraph that is violated by the buggy code. If that doesn’t provide sufficient context, there are plenty of hyperlinks and breadcrumbs that help to find the required explanation.
This is our specific feedback system and I think it’s already quite nice, however …
Second Order Feedback
Last Friday I presented Object Teams at the Vienna Helios Demo Camp. When I showed the “Go to Language Definition” action, Peter Kofler gave some excellent feedback on our feedback system. He must have felt the too-much-stuff syndrome you can easily see when looking at the screenshot above. So he requested that the same action be available even without the Problems view. Once back home, I filed bug 318071. In the most recent build you now have two more options:
Use the context menu of the left gutter:
Use the toolbar of the problem hover:
Need I say that adding a button to the problem hover is not normally possible? With the action already in place the following OT/J code is all we need to integrate into the JDT/UI’s implementation of that hover:
```java
/**
 * Add OT-Support to hovers for java problems.
 *
 * @author stephan
 * @since 0.7.0 (Incubation at Eclipse.org)
 */
@SuppressWarnings({ "restriction", "decapsulation" })
public team class HoverAdaptor {

    /** Add the "Go to Language Definition" action to the hover's toolbar. */
    protected class ProblemHoverAdaptor playedBy ProblemInfo {

        void addAction(ToolBarManager manager, Annotation annotation)
            <- after void fillToolBar(ToolBarManager manager, IInformationControl infoControl)
            base when (isOTJProblem(base.annotation))
            with { manager <- manager, annotation <- base.annotation }

        void addAction(ToolBarManager manager, Annotation annotation) {
            manager.add(ShowOTJLDAction.createAction(null/*site*/, annotation.getText()));
        }

        static boolean isOTJProblem(Annotation annotation) {
            if (annotation instanceof IJavaAnnotation) {
                int problemId = ((IJavaAnnotation) annotation).getId();
                return problemId > IProblem.OTJ_RELATED && problemId < IProblem.TypeRelated;
            }
            return false;
        }
    }
}
```
Thanks Peter, I think your RFE made a clear point for usability of the OTDT!
Moving business

Remember the last time you had to cram your whole household into boxes, bags and cases? You may feel excited about your new home etc., but the whole boxing business is quite a drag, ain’t it? There are of course at least two ways of approaching this:
- Don’t look, just shovel everything randomly into boxes
- Look at each single piece, indulge in memories associated with it and sort it with its likes
Obviously, (1) means that on the other side you will unpack a whole mess of junk. OTOH, (2) won’t be finished before the moving truck arrives. Still, deep inside there lives a bit of hope that you’ll move to your new home with only the stuff you actually want, and everything ready to be neatly deployed to its new destination. Moving could free you from all the junk you don’t want any more, right? And even more: after the move, you may want to know where everything is, right?
When moving the Object Teams project to Eclipse I was in the lucky situation that I could indeed use the occasion to sort through some of our stuff. The software engineer might be tempted to even speak of some “quality assurance” along the way, but let’s be careful with our wording for now.
On January 26 this year, the Eclipse Object Teams Project was created and we started to put up signs “Object Teams is moving to Eclipse.org”. Recently, I changed that sign to “Object Teams has moved to Eclipse.org”. So, what exactly happened between then and now?
Learning the infrastructure
At first the newly appointed Eclipse committer and project lead is overwhelmed by all the shiny technology: web server, wiki, version repositories, build servers, download servers, the portal, project metadata, accounts for this and that, bugzilla components, versions and milestones and what-not. “Alles so schön bunt hier!” (“Everything is so nice and colorful here!”)
I won’t indulge in talking about the paperwork needed at this stage; after some four days I had most of my accounts and the project was registered as “incubation – conforming”, so we were ready to go into the parallel IP process.
Initial contribution
On project day #12 I submitted my first CQ and, yes, that submission was already the result of heavy refactoring: I had renamed most (not all!!) occurrences of org.objectteams to org.eclipse.objectteams. A piece of cake? Well, not exactly, if the code piles up to a zip of 35 MBytes (not including .class files and nested archives), and not if your team-support plug-in goes berserk on some of the renamings and if … (see also this post).
Parallel IP process – our version
In fact our version of the parallel IP process looked like this:
On one thread I was chasing after some people and institutions to just provide the necessary signed statements of code provenance. With respect to the individuals this was painless; however, the university and the research institute involved both had their very specific strategies for delaying the project. All in all it took them more than one week per sentence in the final document. Or would you prefer the words-per-day count?

On the other thread, the much-feared IP analysis turned out to be a very constructive collaboration with Sharon Corbett. I was really amazed about the obscure pieces of code (and comments) she brought to light, things that I never knew were in there. So that was helpful information, actually 🙂 . Most of all I was pleased by her quick responsiveness – quite in contrast to the other thread. Thanks Sharon! I should also thank Jan Wloka, who from the outset of the project took care that we’d have copyright headers and that stuff.
The effect was: by the time I got the signatures that cleared us to check our sources into SVN, the IP analysis was already done and complete! And not only that: during that process we’d done some significant cleanup.
Code cleanup triggered by the move:
- Previously, Object Teams used the JMangler framework for launching with our bytecode transformers in place. This was a great thing to have back in the olden Java 1.3 times. But in 2010 our Java 5 based alternative had matured, and we didn’t even have to put JMangler into any of our moving boxes 🙂
- We used to maintain a patched version of BCEL 4.4.1 (developed at Freie Universität Berlin, as the namespace de.fub still announced). I consulted the AspectJ folks, who maintain a patched version, too. But their patch only vaguely resembles the original, and they clearly stated that they saw no chances of these changes being adopted upstream. So I went back to our sources, checked the patches, checked the current version 5.2 of BCEL, and found that the remaining bugs could actually be easily worked around from the outside. That’s when I learned the details of Orbit: since our initial commits to Eclipse we never had to bother about the BCEL version, it just comes right flying from the Orbit. So we got that legacy version cleaned up.
- My heart skipped a tiny single beat when I learned that one of our most central data structures could not be accepted at Eclipse: I had patched class WeakHashMap from the OpenJDK to create a DoublyWeakHashMap with quite unique characteristics concerning garbage collection. We need that! Yet the license (GPL with “classpath exception”) was not accepted. I made a quick experiment with wrapping instead of patching, and guess what I learned (again): while the patched version was created in the pursuit of performance, still, after changing the strategy (to what was destined to be slower) my measurements could not show any performance penalty. So carve that in stone: never optimize without measuring. The new version has the same performance – and no license issues!
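The wrapping approach can be sketched in plain Java (a simplification; the real DoublyWeakHashMap in Object Teams has more specific GC characteristics than this): a map that is weak in both keys and values can be built by wrapping a standard WeakHashMap and storing each value behind a WeakReference.

```java
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;

/** Minimal sketch of a doubly weak map, built by wrapping
 *  java.util.WeakHashMap instead of patching its source. */
public class DoublyWeakMap<K, V> {
    // keys are weak via WeakHashMap; values are weak via WeakReference
    private final WeakHashMap<K, WeakReference<V>> map = new WeakHashMap<>();

    public void put(K key, V value) {
        map.put(key, new WeakReference<>(value));
    }

    public V get(K key) {
        WeakReference<V> ref = map.get(key);
        return ref == null ? null : ref.get(); // null once the value was collected
    }

    public static void main(String[] args) {
        DoublyWeakMap<String, Object> m = new DoublyWeakMap<>();
        Object value = new Object();
        m.put("key", value);
        // while the value is strongly referenced, lookup succeeds:
        System.out.println(m.get("key") == value);
    }
}
```

No patched JDK source, no GPL entanglement – and, as the measurements showed, no measurable performance penalty either.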
Of course, there were more issues like needing to file a new CQ just for using files like xhtml1-strict.dtd, but those caused no grief after all. Enter the next phase.
Getting everything to build and test on eclipse servers
OK, when we opened the boxes at our new home, some of the content was actually broken on the way.
Fixing broken builds
The ugliest part was getting an ancient set of PDE/Build scripts to run on build.eclipse.org. Digging through a 30 MByte build log looking for the cause of a build failure never was fun. The point that dissatisfies me with all the build technology I’ve seen so far: you have a build that works on one machine and with one version of the software. Then one arbitrary piece of the setup changes; let’s take a big change as the example: moving your JDK from 5 to 6. It’s OK that things may break at this point, but the kind of breakage frequently seems to have nothing to do with what you’ve changed, e.g., after moving to a different JDK the compiler can no longer resolve java.lang.Object, whereas everybody knows: that’s not the difference between the two JDKs. The problem is not broken builds, the problem is how few clues the logs give you for finding the root cause of the breakage, or even: telling you how to fix it. A technology that works when it works is one thing. A technology that helps you get it to work is another (and we’re working hard to make Object Teams fall into the second category).
Modernizing the build
OK, enough complaining. The move to Eclipse again gave reason to clean up that monster build and even update to using some of the automatic built-in p2 stuff (yes, finally we use p2.gathering=true), rather than manually invoking the various p2 applications (publisher, director). When it runs, you may even get the impression you know why it does.
The final round in improving our build was adding bundle signing, yeah! Of course, that’s when all the p2 metadata generated during the build doesn’t help you any more, because it includes the checksums of your unsigned jars. So I created a tiny little shell script to automate the steps required after a successful build & test. I ended up with 7 more transformations of our metadata needed at this stage. So we’re back at directly invoking the p2 publisher, the p2 director, doing various XSL transformations etc. Most of these could actually be covered by a PDE/Build – p2 integration, but let’s not expect too much, not now.
Did I mention the almost 50000 tests that successfully run during each build? Well, that’s what we owe our users, right?
It builds – let’s ship it!
OK, let’s move ahead to the success-story-part. Less than a month after the initial commit we had our first milestone. It’s so good to be back in business 🙂 After fixing all those migration-induced regressions I’m sure our code has better quality than before.
User side migration
There was only one burden we had to pass on to users: due to the changed namespace, the configuration files of existing OT/J projects have to be updated. Luckily, it wasn’t too difficult to add some specific build-path problems and quickfixes, which should make the migration pretty smooth for users.
Installing
Right while I was publishing our first milestones, a new cool tool came around the corner: the Marketplace Client. So now, if you download, e.g., the Helios package “Eclipse IDE for Java Developers”, you’ll get the OTDT installed without having to know the download address: just select Help > Eclipse Marketplace, search for “Object Teams”, and you’re ready to hit “Install“:

Interestingly, in order for this to work I had to ask for this feature, which later down the road triggered this blocker security issue. At the end of the day this made me ponder generalizing various things that the user might want to know when installing software. And indeed, Object Teams should play an active role in this discussion: the whole business of OT/Equinox is based on the assumption that the user agrees with what we are doing. We already do our best in treating users as grown-ups who can make their own decisions, if we provide sufficient information, like:

This little screenshot tells you a whole story about this version of the bundle org.eclipse.jdt.core (see last column):
- The icon in the 1st column says: this bundle is signed, but the signed content is going to be woven at load time before the JVM sees it.
- Columns 2 & 3 give the obvious information that this is not the version provided by the JDT team but something from the Object Teams project, which BTW is still in its incubation phase.
- Column 4 finally gives you all the gory details: a sophisticated version number plus the list of OT/Equinox plug-ins that have declared to adapt the current plug-in.
That’s the kind of transparency we show upon request after the software is installed. The mentioned bug 316702 is about providing similar transparency already during install.
So, what’s the plan?
Given that all legal and technical matters have been sorted out to this point, and given that the tool is in even better shape than the final OTDT 1.4.0 from objectteams.org, what’s our plan?
- Just recently I requested a Release Review, tentatively scheduled for July 7, so with only a little delay after Helios we should have an actual, stable release.
- I decided to defer the project graduation some more months to give us time to define which parts of the software are actually API.
Where can I see it in action?
Well, given the current milestone releases (and the ease of installing) and all the documentation we have in the wiki, nothing should stop you from running your first Object Teams demo in do-it-yourself mode 🙂
Otherwise, if you happen to be in Vienna on June 25, just come to this DemoCamp and I’ll help you to get started with Object Teams.
So, indeed, for the Object Teams project the past 6 months were used to turn this:

into this:

Object Teams in Print
Today I received my hard copy of the German Eclipse Magazin, issue 3.10, which features a 4-page article on Object Teams. So if you want to let a colleague know about Object Teams (and if that colleague understands German), give him/her a copy of that Magazin 🙂
The article gives an overview of …
- what deficiencies in OOP are addressed by Object Teams
- fundamentals of OT/J: teams, roles, playedBy, callout and callin
- the Object Teams Development Tooling
- OT/Equinox
- exemplary applications of OT/J
… and summarizes the gains in flexibility and maintainability.
There are a few errata which I’d like to correct here:
- The pictures of figures 1 & 2 are swapped
- Some text in table 1 is unrelated to Object Teams; in the manuscript it reads (translated from German):
  End of 2001: work starts at Technische Universität Berlin
  2003–2006: cooperation of TU Berlin & Fraunhofer FIRST, funded by the BMBF
  2005: first public presentation of the OTDT
  2006: first plug-ins written using OT/Equinox
  March 2007: version 1.0.0 of the OTDT released
  2007–2010: continuous improvements, versions 1.1.0–1.4.0
  January 2010: Eclipse Object Teams project created, the move begins
- The CD claims to contain the Object Teams Development Tooling; however, you’ll only find the jar of the command line compiler. But no problem: simply visit the download page and you’ll find all you need to install the OTDT into Eclipse 3.6 M6 (or earlier versions)
Edit: The magazine has provided an online version with these errata fixed.
Enjoy the read!
How many concepts for modules do we need?
The basic elements of programming are methods. If you have many methods you want to group them in classes. If you have many classes you want to group them in packages. If you have many packages you want to group them in bundles. If you have many bundles you group them in features, but if you have many features …stop!, STOP!!!
Isn’t this insane? Every time we scale up to the next level we need a new programming concept? Like, someone invented the +1 operator and I can trump him by inventing a +2 operator, and you trump me by …? Haven’t we learned the 101 of programming: abstraction?
I guess not many folks in Java-land are aware of a language called gbeta, where classes and methods are unified to “patterns” and no other kinds of modules are needed than patterns. It’s good news that the guy behind gbeta receives one of this year’s prestigious Dahl-Nygaard prizes: Erik Ernst. Object Teams owes much to Erik and I will speak more about his contributions in a sequel post.
Another really smart guy is Gilad Bracha, who after working on Java generics (together with Erik actually) and even JSR 294 decided to do something better, actually doubleplusgood. While he upholds the distinction between methods and classes he makes strong claims that no other modules than classes are needed, if, and here is the rub: if classes can be nested (see “Modules as Objects in Newspeak“).
Modules in Java: man or boy?
Let me briefly check this hypothesis:
The proliferation of module concepts in Java is due to the lack of truly nestable modules.
Which of the above mentioned concepts supports nesting? Features support inclusion, which isn’t exactly nesting, but since features are actually decried by OSGi purists, I don’t want to burn my fingers by promoting features. Bundles cannot be nested, bummer! Some people actually think packages support nesting, with two reasons for believing so: packages are mapped to directories in a file system or in a jar, and directories can be nested; plus: packages have compound names. However, semantically the dot in a package name is just a regular part of the name, it has no special semantics. Speaking of package foo.bar being a “sub-package” of foo is strongly misleading, as the relation between the two is in no way different from the relation between any other pair of packages. All packages are equal. Perhaps you recall the superpackage hero (or the strawman of that hero), which introduced the capability that a public class can actually be seen by classes from specific superpackages only. And: superpackages were designed to support nesting. Now superpackages are “superseded” by Jigsaw modules, which don’t support nesting. Great, they invented the +3 operator. That’s award winning!
Finally, classes: surprisingly, yes, classes can be nested. Unfortunately, still today it shines through that nested classes are an afterthought; the real core of Java 1.0 was designed without them. E.g., serializing nested classes is strongly discouraged. How many O/R mappers support non-static inner classes? The last time I looked: zero!
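A quick plain-Java illustration of why non-static inner classes are awkward for frameworks like O/R mappers (the class names here are made up for the example): every instance of an inner class carries a hidden reference to an enclosing instance, so it cannot be instantiated without one – which defeats the usual reflective no-arg-constructor instantiation such frameworks rely on.

```java
public class Order {
    private String customer = "Ada";

    // non-static inner class: each Item instance belongs to exactly one Order instance
    public class Item {
        String describe() {
            return "item of order for " + customer; // implicit access to Order.this
        }
    }

    public static void main(String[] args) {
        Order order = new Order();
        // an Item can only be created relative to an existing Order instance:
        Order.Item item = order.new Item();
        System.out.println(item.describe());
        // new Order.Item()  // would not compile: no enclosing instance available
    }
}
```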
Nested classes: flaws and remedies (part 1)
I see three conceptual flaws in the design of nested classes in Java:
- scoping (2 problems actually)
- forced representation
- poor integration with inheritance
I discuss the first two in this post; the third, inheritance, deserves a post in its own right.
For each problem I will briefly show how it is resolved in OT/J. First off, I should mention that OT/J maintains compatibility with Java by applying any modified semantics only inside classes with the team modifier.
Scoping
Consider the following Java snippet:
```java
public class City {
    TownHall theTownHall = new TownHall();

    class TownHall {
        class Candidate { }
        Candidate[] candidates;
        void voteForMayor(Candidate candidate) { ... }
    }

    class Citizen {
        void performCivilDuties() {
            ...
            theTownHall.voteForMayor(theTownHall.candidates[n]);
        }
    }
}
```
In this code we can actually make all methods, fields and nested classes private, to the end that external clients see none of these internal details of a City, whereas the classes inside can blithely communicate with each other. Thus we have created a wonderful module City which can use all accomplishments of object-oriented programming for its internal structure – well hidden from the outside. In Java, however, this is flawed in two ways:
- Nested classes cannot protect any members from access by sibling classes, so I (a Citizen) can actually see the wallets of all mayor Candidates (well, maybe that’s not a bug but a feature). It would be much more useful if only inside-out visibility was given, i.e., a Citizen can see all members of his/her City, but not the inverse – the City looking into its glass Citizens.
- Scoping rules in Java are purely static, i.e., permissions are given to classes, not objects. As a result, I could not only vote in my home city, but in any City (and every person can see the wallet of any other person etc.).
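Both flaws can be demonstrated in a few lines of plain Java (a contrived variation of the City example; don’t take the domain modeling seriously):

```java
public class City {
    static class Candidate {
        private int wallet = 100; // declared private, but...
    }

    static class Citizen {
        // flaw 1: a sibling nested class can read Candidate's private field,
        // because in Java privacy effectively ends at the outermost class
        int peek(Candidate c) {
            return c.wallet; // compiles fine
        }
        // flaw 2: permissions are per class, not per instance - a Citizen
        // belonging to one City instance could just as well inspect the
        // Candidates of any other City instance
    }

    public static void main(String[] args) {
        Citizen me = new Citizen();
        Candidate mayor = new Candidate();
        System.out.println(me.peek(mayor)); // the "private" wallet, exposed
    }
}
```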
OT/J solves (1) by reinterpreting the access modifiers. Within a team class a private field Candidate.wallet, e.g., would not be visible outside its class, whereas a Citizen could still access theTownHall if this field were private.
(2) is basically solved by applying different rules to self-references (incl. outer self) than to explicitly qualified expressions, so City.this.theTownHall (OK, uses the enclosing this instance) applies different rules than newYork.theTownHall (not OK if theTownHall is private).
Well, issue (1) is a matter of tedious details, where I see no excuse for Java’s “solution”. Issue (2) has always been a differentiation between good old Eiffel, e.g., and its “successor” Java. This issue stands for a conceptual crossroad: are we interested in code nesting, or are we more interested in expressing how run-time objects are grouped? I personally don’t see much use in making the definition of a Citizen a nested part of the definition of a City, but speaking of many Citizens (instances) forming a City (instance) makes a lot of sense.
Forced Representation
By this I mean the fact that programmers are forced to store all inner classes textually embedded in their enclosing class. After we’ve already seen the discrepancy between the semantics and the representation of a package (flat structure stored in nested directories), now we see the opposite: semantic nesting is forcefully tied to representational nesting. This sounds logical until you start to write significant amounts of code as one big outer class with lots of (deeply) nested classes contained. You probably wouldn’t even think of this, because pretty soon the file becomes unmanageably huge. This is a very mundane reason why class nesting in Java simply doesn’t scale.
The solution in OT/J is pretty trivial. The following structure
```java
// file community/City.java
package community;

public team class City {
    protected class Citizen {}
}
```
is strictly equivalent to these two files:
```java
// file community/City.java
package community;

/**
 * @role Citizen
 */
public team class City {
}
```
```java
// file community/City/Citizen.java
team package community.City;

protected class Citizen {}
```
The trick is: City is both a class and a special package. So semantically the team contains the nested Citizen but physical storage may happen in a separate file stored in a directory community/City/ that corresponds to the enclosing team class.
As a special treat the package explorer of the OTDT provides two display options for team packages: logical vs. physical. In the physical view the 2-files solution looks like this:

Just by selecting the logical view, where the team-package is not shown, the display changes to

Small files in separate directories are good for team support etc. Logical nesting is good for modularity. Why not have the cake and eat it?
These are just little tricks introduced by OT/J and the OTDT. But with these there’s no excuse any longer for not using class nesting even for large structures. And remember: this is real nesting, so you can use the same concept for 1-level, 2-level, … n-level containment. Good news for developers, bad news for technology designers, because now we simply don’t need any “superpackages”, “modules” etc. I actually wonder how nestable classes could be unified with bundles, but that’s still a different story to be told on a different day.
Before that I will discuss how nesting and inheritance produce tremendous synergy (instead of awkwardly standing side by side). From a research perspective this was solved some 9 years ago. I strongly believe that the time is ripe to bring these fruits from the ivory towers into the hands of real-world developers. Stay tuned for part 2.
Re: Eclipse and Academia
This is a response to Chris Aniszczyk’s post “Eclipse and Academia“.
I’m glad he raises the issue of academia producing cool stuff vs. consuming Eclipse in class, which seems to be better supported.
Clearly, Object Teams is one of the projects that have crossed the line between both worlds, so let me try to summarize the experiences I made along the way.
First, it’s important to distinguish two kinds of academic projects: there are individual (PhD) projects, which indeed have quite limited resources. However, there are also funded projects, typically running about 3 years and involving a number of people, perhaps from different institutions. Object Teams is of the second kind.
When I submitted presentations for EclipseCon, the main goal certainly was to increase visibility (both for getting feedback and for eventually setting the grounds for a business). My experience is that an academic project is not very likely to meet the criteria of an EclipseCon program committee: EclipseCon wants presentations about topics that everybody already talks about. By definition, a research project covers topics that nobody has been talking about yet. I once discussed this with Bjorn, and apparently the poster session at ESE is an offspring of discussions like that. However, I think posters are not enough. I had a poster at ESE ’07, to the effect that I could talk to about 4 people. That doesn’t really help a lot.
I would really suggest having a track with regular presentations where the selection criteria are just shifted from already-hyped to potentially-changing-things-in-the-future.
Another thing that would help integrate academic projects that actually move to Eclipse.org would be a track of New Project presentations. When the Object Teams project was in its proposal phase, I submitted four presentations to EclipseCon, none of which was accepted, so on that channel I could not introduce the project to the community. I know of another project that made the same experience.
Sorry if this sounds a bit negative. Summarizing my experience: currently it’s much easier to get involved bottom-up, like in Eclipse Demo Camps (great!), blogging, forums etc. Regarding “top-level” visibility, academic projects seem to be starting with a handicap in Eclipse land.
I also like the idea of collecting Eclipse-related publications. If things go well I’d actually expect a huge number of such publications to show up, which would soon require smart categorization to ensure that the list stays digestible.
Regarding the opposite direction, going to academic conferences: sure, there was a successful series of Eclipse Technology eXchange events at OOPSLA and ECOOP, and I once tried to organize one at ECOOP but failed. My feeling is that by now Eclipse is so pervasive in research that just the fact of using Eclipse for your prototypes isn’t enough of a commonality any more to foster an “academic” workshop. There might be a case for tutorials to jump-start young researchers: “write your first dataflow analysis (refactoring, metrics, Java extension …) in Eclipse“. I’m not sure how big the gap would be between existing articles and what a typical PhD would need.
IDE for your own language embedded in Java? (part 2)
In the first part I demonstrated how Object Teams can be used to extend the JDT compiler for a custom language embedded in Java. I concluded by saying that more substantial features like refactoring might need more rocket science, which I wanted to show next.
The “bad news” is: before I started doing some strong adaptations of the DOM AST etc. to make refactorings work, I just made a few experiments to see how refactorings actually behaved in my hybrid language. To my own surprise a lot of things already worked OK: I could extract a custom syntax expression into a local variable and inline the variable again, and more stuff of that kind. Just look at this example:

Actually this reflects an experience I’ve made more than once: if you reuse some module and perform some adaptations in terms of provided API and extension points etc., more often than not one adaptation entails the next, adding tweaks to workarounds, because you keep scratching at the surface. If, OTOH, you succeed in making your adaptation right at the core where the decisions are made, just one or two cuts and stitches may suffice to get your job done. Clean, effective and consistent. That’s what we see when cleanly inserting a custom AST node into the JDT: if our CustomIntLiteral behaves well, a lot of JDT functionality can just work with this thing without knowing it’s not a genuine Java thing.
Now this means for my next example I had to look for an extra challenge. I decided to enhance the example in two ways:
- The custom syntax should be a bit more realistic, so I chose to create a syntax for money, consisting of a number and the name of a currency
- I wanted source formatting to work for the whole hybrid language
A word of warning: this post uses some bells and whistles of OT/J and applies it to the non-trivial JDT. This might be a bit overwhelming for the novice. If you prefer lower dosage first, you may want to check out our example section in the wiki. It’s still far from complete but I’m working on it.
A syntax for money
The new syntax should allow me to write this:
int getMoney() { return <% 13 euro %>; }
and the stuff should internally be stored as a structured AST node. This is how class CurrencyExpression starts:
public class CurrencyExpression extends Expression {

    public IntLiteral value;
    public String currency;

    final static String[] CURRENCIES = { "euro", "dollar" };

    public CurrencyExpression(int sourceStart, int sourceEnd) { ... }

    public boolean setCurrency(String string) { ... }

    @Override
    public StringBuffer printExpression(int indent, StringBuffer output) { ... }

    ....
}
For creating a CurrencyExpression from source I wrote a little CustomParser – normal boring stuff: 40% of it just reads individual chars and manipulates character positions, another 45% does some error reporting, and only 3 lines are relevant: those that create a new CurrencyExpression, create an IntLiteral for the value part and invoke setCurrency with the currency string.
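As a plain-Java illustration of those three relevant steps (hypothetical class MoneyFragmentParser, not the actual CustomParser), the essential logic – split the fragment between the delimiters into a number and a currency word, and validate both – might look like this:

```java
// Hypothetical sketch, NOT the actual CustomParser: the real class additionally
// tracks character positions and reports errors through the compiler.
public class MoneyFragmentParser {

    static final String[] CURRENCIES = { "euro", "dollar" };

    // Returns a normalized "value|currency" string for a well-formed fragment
    // such as " 13 euro ", or null if the fragment cannot be parsed.
    public static String parse(String fragment) {
        String[] parts = fragment.trim().split("\\s+");
        if (parts.length != 2)
            return null;                       // error reporting would go here
        int value;
        try {
            value = Integer.parseInt(parts[0]); // the IntLiteral part
        } catch (NumberFormatException e) {
            return null;
        }
        for (String c : CURRENCIES)             // setCurrency(..)-style validation
            if (c.equals(parts[1]))
                return value + "|" + c;
        return null;                            // unknown currency word
    }
}
```

The real parser produces a CurrencyExpression AST node instead of a string, but the validation steps are the same.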
In the ScannerAdaptor from the previous post I simply replaced this
Expression replacement = new CustomIntLiteral(source, start, end, start+2, end-2);
with this:
Expression replacement = customParser.parseCurrencyExpression(source, start, end, this.getProblemReporter());
That suffices to make the above little method compile and run just as expected.
Interlude: DOM AST
Well, with this slightly more realistic syntax you’d actually see a number of exceptions in the IDE that can all be fixed by letting the DOM AST know about our addition. For those who don’t regularly program against the JDT API: the DOM AST is the public data structure by which tools outside the JDT core manipulate Java programs. Inside the JDT extending the DOM AST would mean to subclass either org.eclipse.jdt.core.dom.ASTNode or one of its subclasses. Unfortunately, all constructors in this hierarchy are package private, and even with OT/J we respect what the javadoc says: “clients are unable to declare additional subclasses“.
But we can do something similar: instead of subclassing we can use instances of a regular DOM class and attach a role instance to them. As the base I chose org.eclipse.jdt.core.dom.SimpleName which inside the JDT could mean a lot of different things, so for most parts a node of this kind is regarded as a black box, just what we need. This is the role I added to the team SyntaxAdaptor from the previous post:
protected class DomCurrencyLiteral playedBy SimpleName {

    protected String currency;

    void setSourceRange(int sourceStart, int length) -> void setSourceRange(int sourceStart, int length);

    @SuppressWarnings("decapsulation")
    public DomCurrencyLiteral(AST ast, CurrencyExpression expression) {
        base(ast);
        this.currency = expression.currency;
        setSourceRange(expression.sourceStart, expression.sourceEnd - expression.sourceStart + 1);
    }
}
So this almost looks like subclassing, except we use playedBy instead of extends and base() instead of super(). And yes, when creating an instance with “new DomCurrencyLiteral(ast, expr)”, inside the constructor we create a SimpleName from the DOM using the package-private constructor. But by using role playing instead of sub-classing, this access has become part of the aspectBinding relationship, which makes analyzing the state of encapsulation much easier.
So, who actually creates these nodes? Inside the JDT this is the responsibility of the ASTConverter, which takes an AST from the compiler and converts it to the public variant. In order to tell the ASTConverter how to handle our currency nodes I added this role to the existing team SyntaxAdaptor:
@SuppressWarnings("decapsulation")
protected class DomConverterAdaptor playedBy ASTConverter {

    // whenever convert(Expression) is called ...
    org.eclipse.jdt.core.dom.Expression convertCurrencyExpression(CurrencyExpression expression)
        <- replace org.eclipse.jdt.core.dom.Expression convert(Expression expression)
    // ... and when the literal is actually a CurrencyExpression ...
        base when (expression instanceof CurrencyExpression)
    // ... perform the cast we just checked for and feed it into the callin method below.
        with { expression <- (CurrencyExpression)expression }

    /**
     * Convert a CustomIntLiteral from the compiler to its dom counter part.
     * This method uses inferred callouts (OTJLD §3.1(j))
     * which need to be enabled in the OT/J compiler preferences.
     */
    @SuppressWarnings({ "basecall", "inferredcallout" })
    callin org.eclipse.jdt.core.dom.Expression convertCurrencyExpression(CurrencyExpression expression) {
        final DomCurrencyLiteral name = new DomCurrencyLiteral(this.ast, expression);
        if (this.resolveBindings) {
            recordNodes(name, expression);
        }
        return name;
    }
}
I deliberately used some special OT/J syntax worth explaining:
- The callin binding at the top defines a method interception like we’ve seen before.
- The base when clause adds a guard predicate to the binding, saying that this binding should only fire when the argument expression is actually of type CurrencyExpression.
- After passing the guard we know that we can safely cast to CurrencyExpression, so I added a parameter mapping (the with clause) which feeds a casted value into the role method.
- Inside the role method convertCurrencyExpression everything looks normal, but at a closer look this.ast and this.resolveBindings seem to be undefined in the scope of the current class. In fact these fields are defined in the base class ASTConverter and we could use explicit callout accessors as in the previous post. However, this time I chose to let the compiler infer these callouts so that the method would look exactly like existing methods in ASTConverter do (this option has to be enabled in the OT/J compiler preferences).
OK, with this little addition our CurrencyExpressions are converted to something that the JDT can handle and we’re already prepared for doing real AST manipulation including our syntax.
Source Formatting
Inside the JDT source formatting (Ctrl-Shift-F) is essentially performed by class CodeFormatterVisitor. This class is one of many subclasses of the general ASTVisitor. If one wanted to make these visitors aware of our CurrencyExpression we would have to add one visit method to ASTVisitor and each of its sub-classes! That’s certainly not viable, so with plain Java we’re pretty much out of luck.
The situation that needs adaptation can be described as follows:
- A visitor will be created and invoked in order to descend into the AST
- At the point when traversal finds a CurrencyExpression it will invoke its
traverse(ASTVisitor)method.
Of course we could manually inspect the type of the visitor within the traverse method, but that would defy the whole purpose of having visitors: keeping all those add-on functions out of your data structures. Instead I only gave a default implementation to CurrencyExpression.traverse and used OT/J for the cleanest implementation of double dispatch (which is what the visitor pattern painstakingly emulates): we need dispatch that considers both the visitor type and the node type for finding the suitable method implementation.
In green-field development this would be even easier, but even on top of an existing visitor infrastructure it gets pretty concise.
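For readers less familiar with the pattern, here is a minimal plain-Java sketch of classic visitor double dispatch – a hypothetical miniature AST, not the JDT’s – illustrating exactly the limitation discussed above: every new node type forces a change to the visitor interface and to each of its implementations:

```java
// Hypothetical miniature AST, not JDT classes: shows why classic visitors need
// one visit method per node type in every visitor implementation.
interface Visitor {
    String visit(IntNode n);
    String visit(MoneyNode n);   // adding MoneyNode forced a change here ...
}

abstract class Node {
    abstract String accept(Visitor v);   // first dispatch: on the node type
}

class IntNode extends Node {
    int value;
    IntNode(int value) { this.value = value; }
    String accept(Visitor v) { return v.visit(this); }  // second dispatch: on the visitor type
}

class MoneyNode extends Node {
    int amount;
    String currency;
    MoneyNode(int amount, String currency) { this.amount = amount; this.currency = currency; }
    String accept(Visitor v) { return v.visit(this); }
}

class Formatter implements Visitor {     // ... and here, and in every other visitor
    public String visit(IntNode n)   { return Integer.toString(n.value); }
    public String visit(MoneyNode n) { return "<% " + n.amount + " " + n.currency + " %>"; }
}

public class DoubleDispatchDemo {
    public static String format(Node n) {
        return n.accept(new Formatter());
    }
    public static void main(String[] args) {
        System.out.println(format(new MoneyNode(3, "euro")));
    }
}
```

With OT/J the second dispatch can instead be contributed from the outside via a callin binding, so neither ASTVisitor nor its subclasses need to be touched.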
Visitor adaptation – version 1
My first version looks like this (explanations follow below):
public team class VisitorsAdaptor {

    protected team class AstFormatting playedBy CodeFormatterVisitor {

        // whenever visiting something that could contain an expression
        // activate this team to enable callins of the inner role
        callin void visiting() {
            within(this) {
                base.visiting();
            }
        }
        @SuppressWarnings("decapsulation")
        void visiting() <- replace
            boolean visit(Block block, BlockScope scope),
            boolean visit(FieldDeclaration fieldDeclaration, MethodScope scope),
            void formatStatements(BlockScope scope, final Statement[] statements, boolean insertNewLineAfterLastStatement);

        Scribe getScribe() -> get Scribe scribe;

        /** This role implements formating of our custom ast: */
        protected class CustomAst playedBy CurrencyExpression {

            void traverse() <- replace void traverse(ASTVisitor visitor, BlockScope scope);

            @SuppressWarnings({ "inferredcallout", "basecall" })
            callin void traverse() {
                Scribe scribe = getScribe();
                Scanner scanner = scribe.scanner;
                // format this AST node into a StringBuffer:
                StringBuffer replacement = new StringBuffer();
                replacement.append("<% ");
                this.value.printExpression(0, replacement);
                replacement.append(' ');
                replacement.append(this.currency);
                replacement.append(" %>");
                // feed the formatted string into the Scribe:
                int start = this.sourceStart();
                int end = this.sourceEnd();
                scribe.addReplaceEdit(start, end, replacement.toString());
                // advance the scanner:
                scanner.resetTo(end + 1, scribe.scannerEndPosition - 1);
                scribe.pendingSpace = false;
            }
        }
    }
}
The key trick in this example is nesting:
- Role AstFormatting is responsible for detecting when a CodeFormatterVisitor is visiting any subtree that may contain expressions. This is done using a callin binding that lists three relevant base methods, all of which should be intercepted by the same role method visiting.
- Inside role AstFormatting (which is also marked as a team) an inner role CustomAst will only be triggered if a CodeFormatterVisitor calls the traverse method of a CurrencyExpression (see the callin binding for traverse).
- The connection between both levels is wired in method AstFormatting.visiting: the block statement within() { } temporarily and locally activates the given team instance, here denoted by this. Only during this block is the nested team AstFormatting active – meaning that only during this block will the callin binding in role CustomAst fire.
- Within role CustomAst we can naturally access the CodeFormatterVisitor via the enclosing instance of AstFormatting. No instanceof and casting needed, because all this only happens in the context of a CodeFormatterVisitor.
The body of method traverse contains only domain logic: pretty-printing the current node into a string buffer and interacting with the underlying infrastructure (Scanner, Scribe) that drives the formatting.
That’s it: with these classes in place, we can write this method:
int getMoney() {
    int myMoney = <% 3 euro %> ;
    System .out.println("myMoney ="+myMoney);
    return myMoney;
}
then hit Ctrl-Shift-F et voilà:
private static int getMoney() {
    int myMoney = <% 3 euro %>;
    System.out.println("myMoney =" + myMoney);
    return myMoney;
}
How’s that? 🙂
The formatter smoothly operates on the full hybrid language, not just skipping over our nodes but handling them as well.
Generalizing visitor adaptations
After success with respect to both challenges I’d like to clean up even more and prepare for further adaptations of other visitors. Given how many subclasses of ASTVisitor are used within the JDT, we wouldn’t want to write the infrastructure for double dispatch over and over again. So let’s generalize, that is: extract a common super-class, by moving everything re-usable out of class AstFormatting:
public team class VisitorsAdaptor {

    protected abstract team class AstVisiting playedBy ASTVisitor {

        // whenever visiting something that could contain an expression
        // activate this team to enable callins of the inner role
        callin void visiting() {
            within(this)
                base.visiting();
        }
        void visiting() <- replace
            boolean visit(Block block, BlockScope scope),
            boolean visit(FieldDeclaration fieldDeclaration, MethodScope scope);

        protected abstract class CustomAst playedBy CurrencyExpression {

            // variant of traversal that should be used when the enclosing team is active:
            // (implement in subclasses)
            abstract callin void traverse();

            void traverse() <- replace void traverse(ASTVisitor visitor, BlockScope scope);
        }

        // Insert more roles for binding more AST nodes...
    }

    protected team class AstFormatting extends AstVisiting playedBy CodeFormatterVisitor {

        // one more trigger that should activate the team:
        @SuppressWarnings("decapsulation")
        visiting <- replace formatStatements;

        Scribe getScribe() -> get Scribe scribe;

        /** This role implements formating of our custom ast: */
        @Override
        protected class CustomAst {
            @SuppressWarnings({ "inferredcallout", "basecall" })
            callin void traverse() {
                // method body as before
            }
        }
    }

    protected team class OtherVisitorAdaptor extends AstVisiting playedBy XYVisitor {
        @Override
        protected class CustomAst {
            callin void traverse() {
                // domain logic
            }
        }
        // Insert more roles for actually handling more AST nodes ...
    }
}
Now team class AstVisiting contains the part that is common for all visitors. At this level several things are still abstract: method traverse, role class CustomAst and even the whole team AstVisiting.
Team class AstFormatting extends the abstract team and defines everything specific to formatting. We have one more trigger for visiting, one callout binding to a field of class CodeFormatterVisitor and then we only refine the previously abstract role class CustomAst. At this level it is no longer abstract because we give an implementation for traverse.
I’ve also sketched another nested team showing a minimal specialization of AstVisiting for adapting some other visitor and adding another implementation for CustomAst.traverse plus potentially more roles for more node types.
Conclusion
For those who don’t work in the compiler business on a day-to-day basis this is probably pretty tough stuff, but let me summarize what we’ve just achieved:
- Embedded a custom syntax into Java, showing how a custom parser can be plugged in to create custom AST from a region of the Java source.
- Adapted the conversion between two different AST structures (internal -> DOM) to also handle custom nodes.
- Adapted the code formatter so that hybrid sources can be formatted with a single command.
- Prepared the infrastructure for adapting other visitors, too, so that new visitor adaptations only need to add their specific implementation with close to zero scaffolding.
- Cleanly separated each implemented concern into one module.
- Kept each module at a scale of only tens of lines of code.
- Yet implemented significant steps towards a production quality IDE for our custom hybrid language.
Maybe I shouldn’t have told you how easy these things can be – if your tools are sharp – maybe.
But professional carvers know: if your knife is sharp, it’s actually easy to handle. Only if it is blunt are you in real danger of hurting yourself – because you need to apply disproportionate force to cut your wood. So:
Spare your fingers, sharpen your knife!
PS: Here’s the archive of all sources, ready to be imported into the OTDT.
IDE for your own language embedded in Java? (part 1)
Have you ever thought of making Java a bit smarter? Perhaps for some task you would prefer a custom syntax, and snippets using that syntax should then be embedded into Java? Sure, many never seriously think about this because of the prohibitively high effort of creating the compiler for such a hybrid language. And even if you are a compiler guru, knowing your toolkits so that translation wouldn’t be a problem for you, you’ll probably surrender at the mere thought of how to create a mature IDE that would allow efficient, productive work with your hybrid language.
You shouldn’t give up. Think: if you build your own IDE you’ll never be able to really compete with the JDT, right? Still, anything falling short of the quality of the JDT won’t raise your productivity but will stand in your way at the most common tasks during development, right?
What does this tell you? Give up? No. If you can’t beat us, join us. Don’t write a new IDE for any Java-based language. Join the JDT. Well, but the JDT doesn’t provide an extension point for embedding a different syntax, does it? Sure they don’t, but it’s actually not their job to do so because every embedded language will probably have slightly different requirements so designing such an extension point would be a battle you can never win.
I have developed a tiny extension to Java and integrated it into the JDT with a mere 204 lines of code, including comments and a plugin.xml. As some may guess, the only trick needed is to use Object Teams. In this post I will explain how Object Teams can be used for extending the JDT in this way. And I will also argue against the most common fear in this context: “Is that solution maintainable?” From my very own experience this design is not just barely manageable; from all I’ve seen it is the most maintainable solution for this kind of task – but I’m getting ahead of myself.
In order not to distract from the interesting design issues I’ll be using the simplest language extension: I want to be able to write integer constants in natural language, and while I’m at it, I want it to work in a multilingual setting. So, this should, e.g., be a legal program:
public class EmbeddingTest {
    private static int foo() {
        return <% one %>;
    }
    public static void main(String[] args) {
        System.out.println(foo());
    }
}
I’m using <% and %> tokens to switch between Java syntax and custom syntax.
The first step can be achieved in plain Java: creating a class of ASTNode representing my custom int literals within the compiler. If you really want you may inspect class CustomIntLiteral, but it’s actually pretty boring old Java. Its main job is to look up a given string in an array of known number words and thus translate the word into an int. It even detects the language used and remembers this for later use. The behaviour is hooked into the JDT compiler by overriding method TypeBinding resolveType(BlockScope scope) — just normal Java practice.
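To give an idea of that boring old Java (the names below are illustrative, not the actual CustomIntLiteral code), the word-to-int lookup with language detection could be as simple as:

```java
// Illustrative sketch only: translate a number word into an int and detect
// which language it came from, along the lines of what CustomIntLiteral does.
import java.util.Arrays;

public class NumberWords {

    // the index of a word in its language's array is the int value it denotes
    static final String[][] WORDS = {
        { "zero", "one",  "two",  "three" },   // English
        { "null", "eins", "zwei", "drei"  },   // German
    };
    static final String[] LANGUAGES = { "en", "de" };

    // translate a known number word into its int value, -1 if unknown
    // (the real implementation would report a compile error instead)
    public static int valueOf(String word) {
        for (String[] lang : WORDS) {
            int idx = Arrays.asList(lang).indexOf(word);
            if (idx >= 0)
                return idx;
        }
        return -1;
    }

    // remember which language the word belongs to, for later use
    public static String languageOf(String word) {
        for (int i = 0; i < WORDS.length; i++)
            if (Arrays.asList(WORDS[i]).contains(word))
                return LANGUAGES[i];
        return null;
    }
}
```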
Drilling down into the example
Here’s an overview of the module that does all the rest:
package embedding.jdt;

import org.eclipse.jdt.core.compiler.CharOperation;
import org.eclipse.jdt.core.compiler.InvalidInputException;
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTNode;
import org.eclipse.jdt.internal.compiler.ast.Expression;
import org.eclipse.jdt.internal.compiler.ast.IntLiteral;
import org.eclipse.jdt.internal.compiler.parser.TerminalTokens;

import embedding.custom.ast.CustomIntLiteral;

import base org.eclipse.jdt.core.dom.ASTConverter;
import base org.eclipse.jdt.core.dom.NumberLiteral;
import base org.eclipse.jdt.internal.compiler.parser.Parser;
import base org.eclipse.jdt.internal.compiler.parser.Scanner;

public team class SyntaxAdaptor {

    /**
     * <h3>Part 1 of the adaptation:</h3>
     * Wait until '<' is seen and check if it actually is a special string enclosed in '<%' and '%>'.
     */
    protected class ScannerAdaptor playedBy Scanner { ... }

    /**
     * <h3>Part 2 of the adaptation:</h3>
     * If the ScannerAdaptor found a match intercept creation of the faked null expression
     * and replace it with a custom AST.
     *
     * This is a team with a nested role so that we can control activation separately.
     *
     * This team should be activated for the current thread only to ensure that
     * concurrent compilations don't interfere: By using thread activation any state of
     * this team is automatically local to that thread.
     */
    protected team class InnerCompilerAdaptor {
        /** This inner role does the real work of the InnerCompilerAdaptor. */
        ...
    }

    /**
     * Dom representation of CustomIntLiteral.
     * Since the constructor of NumberLiteral is package private we cannot subclass, so use a role instead.
     */
    protected class DomCustomIntLiteral playedBy NumberLiteral
        // don't adapt plain NumberLiterals, just those that already have a DomCustomIntLiteral role:
        base when (SyntaxAdaptor.this.hasRole(base, DomCustomIntLiteral.class))
    { ... }

    /**
     * <h3>Part 3 of the adaptation:</h3>
     * This adaptor role helps the ASTConverter to convert CustomIntLiterals, too.
     */
    @SuppressWarnings("decapsulation")
    protected class DomConverterAdaptor playedBy ASTConverter { ... }
}
Imports
Why am I showing you boring import declarations to begin with? Well, with OT/J there’s a fine distinction that is worth looking at: all imports starting with import base indicate that these classes are imported for attaching a role to them. So just from these lines you see that the given module adds roles to classes from org.eclipse.jdt.internal.compiler.parser and org.eclipse.jdt.core.dom (2 classes each). All other imports are plain Java imports and won’t let you apply any OT/J tricks.
Teams and Roles
The header public team class SyntaxAdaptor above tells you that this class is actually a team. Teams are used for grouping a set of roles – nested classes of a team. Using the playedBy keyword a role declares that it adapts the specified base class (which are the same classes we base-imported above). The purpose of these roles should be roughly clear from the doc comments.
So, role ScannerAdaptor will be responsible for switching between both syntaxes.
Role ParserAdaptor will be responsible for creating our AST node (CustomIntLiteral). But wait, what’s that: the role is nested within an intermediate team, InnerCompilerAdaptor. This team shows how to define a role that is only effective in specific situations; here, the ParserAdaptor should only be effective after the ScannerAdaptor has detected a syntax switch. Details follow below.
The other two roles will do advanced stuff so I’ll discuss them later.
Role implementation (1)
Here is the full(!) code of role ScannerAdaptor:
protected class ScannerAdaptor playedBy Scanner {

    // access fields from Scanner ("callout bindings"):
    int getCurrentPosition() -> get int currentPosition;
    void setCurrentPosition(int currentPosition) -> set int currentPosition;
    char[] getSource() -> get char[] source;

    // intercept this method from Scanner ("callin binding"):
    int getNextToken() <- replace int getNextToken();

    callin int getNextToken() throws InvalidInputException {
        // invoke the original method:
        int token = base.getNextToken();
        if (token == TerminalTokens.TokenNameLESS) {
            char[] source = getSource();
            int pos = getCurrentPosition();
            if (source[pos++] == '%') {                       // detecting the opening "<%" ?
                int start = pos;                              // inner start, just behind "<%"
                try {
                    while (source[pos++] != '%' || source[pos++] != '>') // detecting the closing "%>" ?
                        ;                                     // empty body
                } catch (ArrayIndexOutOfBoundsException aioobe) {
                    // not found, proceed as normal
                    return token;
                }
                setCurrentPosition(pos);                      // tell the scanner what we have consumed (pointing one past '>')
                int end = pos-2;                              // position of "%>"
                char[] fragment = CharOperation.subarray(source, start, end); // extract the custom string (excluding <% and %>)
                // prepare an inner adaptor to intercept the expected parser action
                new InnerCompilerAdaptor(fragment, start-2, end+1).activate(); // positions include <% and %>
                return TerminalTokens.TokenNamenull;          // pretend we saw a valid expression token ('null')
            }
        }
        return token;
    }
}
Comments describing the logic are in the right column. Inline comments describe the usage of OT/J:
- The callout bindings at the top define accessors for two fields of the base class Scanner.
- The callin binding declares that calls to method getNextToken() should be intercepted by our version of this method.
- The role method is marked as callin, which is the pre-requisite for invoking the original method from Scanner via base.getNextToken().
- Where the InnerCompilerAdaptor is created we are in the situation that we have detected a region delimited by <% and %>. We have extracted the text fragment between the delimiters, and we know the start and end positions within the source file. Only now do we create an instance of InnerCompilerAdaptor and immediately activate it for the current thread (activate()).
At this point the ScannerAdaptor is done and now an InnerCompilerAdaptor is watching what comes next.
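The delimiter scan inside getNextToken() can also be tried in isolation. This standalone sketch (hypothetical class DelimiterScan, not part of the actual adaptation) reproduces the same loop idiom outside the Scanner:

```java
// Standalone sketch of the delimiter scan performed inside getNextToken():
// after a '<', check for '%', then search for the closing "%>".
public class DelimiterScan {

    // Returns the fragment between "<%" and "%>" starting at 'pos' (the index
    // of the '<'), or null if no well-formed region is found.
    public static String fragmentAt(char[] source, int pos) {
        if (pos + 1 >= source.length || source[pos] != '<' || source[pos + 1] != '%')
            return null;
        int start = pos + 2;                    // inner start, just behind "<%"
        int p = start;
        try {
            while (source[p++] != '%' || source[p++] != '>') // detecting the closing "%>"
                ;                               // empty body, same idiom as in the role
        } catch (ArrayIndexOutOfBoundsException aioobe) {
            return null;                        // closing "%>" not found
        }
        int end = p - 2;                        // position of the '%' in "%>"
        return new String(source, start, end - start);
    }
}
```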
Here’s the nested team InnerCompilerAdaptor with its role ParserAdaptor:
protected team class InnerCompilerAdaptor {

    char[] source;
    int start, end;

    protected InnerCompilerAdaptor(char[] source, int start, int end) {
        this.source = source;
        this.start = start;
        this.end = end;
    }

    /** This inner role does the real work of the InnerCompilerAdaptor. */
    protected class ParserAdaptor playedBy Parser {

        // import methods from Parser ("callout bindings"):
        @SuppressWarnings("decapsulation")
        void pushOnExpressionStack(Expression expr) -> void pushOnExpressionStack(Expression expr);

        // intercept this method from Parser ("callin binding"):
        void consumeToken(int type) <- replace void consumeToken(int type);

        @SuppressWarnings("basecall")
        callin void consumeToken(int type) {
            if (type == TerminalTokens.TokenNamenull) {
                // 'null' token is the faked element pushed by the SyntaxAdaptor
                // this inner adaptor has done its job, no longer intercept
                InnerCompilerAdaptor.this.deactivate();
                // TODO analyse source to find what AST should be created
                Expression replacement = new CustomIntLiteral(source, start, end, start+2, end-2);
                this.pushOnExpressionStack(replacement); // feed custom AST into the parser
                return;
            }
            // shouldn't happen: only activated when scanner returns TokenNamenull
            base.consumeToken(type);
        }
    }
}
- The fields source, start and end define state of the nested team, which is used for passing the information collected by the ScannerAdaptor down the pipe.
- The callout binding provides access to a protected method from Parser. By @SuppressWarnings("decapsulation") we document that this access inserts a tiny little hole into the encapsulation of Parser.
- The callin binding for consumeToken is defined as we have seen it before.
- The call to deactivate() immediately deactivates the enclosing InnerCompilerAdaptor, ensuring this is a one-shot adaptation, only.
- The last two statements perform the payload: feed a CustomIntLiteral node into the parser.
Coming to life
Wow, if you’ve read this far, you’ve seen a lot of OT/J in just a few lines of code. Let’s wire things together by throwing the code into an Object Teams Plug-in Project and declaring one extension:

I have defined one aspectBinding between the existing plugin org.eclipse.jdt.core and my team classes SyntaxAdaptor and InnerCompilerAdaptor (there’s a man behind the curtain pushing an ugly __OT__ prefix into the declaration; please ignore him – he’ll be gone in the next release of the tool).
Please note that for team SyntaxAdaptor I have set the activation to ALL_THREADS which means that at application launch an instance of this team will be created and activated globally. Without this flag the whole thing would actually have no effect at all.
That’s all the wiring needed, so kick up a runtime workbench, create a Java project and class, insert the code for class EmbeddingTest from the top of this post and boldly select Run As > Java Application. In the console we see a result:
1
Oops, the compiler for our little language extension already works? Did you see me writing a compiler?
Well, beginner’s luck, let’s assume. But, oops, watch this: When I mistype the return type of foo and ask the JDT for help, this is what I see:

The problem view tells me it knows that <% one %> has type int, which doesn’t match the declared return type boolean. Next I positioned the cursor on “one” (the element that’s definitely not Java) and hit Ctrl-1, and the standard JDT quickfix knows that I should change the return type of foo to int.
Did you watch me implementing a quickfix??
Summary so far
Here’re the stats:
- 204 lines of code including plugin.xml
- roles adapting two base classes from org.eclipse.jdt.core.
- callout bindings to two fields and one method
- callin bindings to two methods
- all adaptation is cleanly encapsulated in one team class. If you wish you could even deactivate this one team in a running workbench and thus disable all our adaptations with a single click.
- one plain Java class to implement the semantics of our extension
As for maintainability: the only dependencies are the items mentioned above: two classes, two fields and three methods. Only if one of these is modified during evolution does my adaptation have to be updated accordingly – and if this happens I will definitely be told by the compiler, because one of the bindings will break. If it doesn’t break, there’s no need to worry.
With this implementation the compiler seamlessly works with our new syntax and even UI features that operate on the compiler AST can handle our extension, too.
What’s next?
I’m sure some think that the above is probably a forged example. You might challenge me to do something real, like refactoring. If you do so, you’ve actually got me (mumble, mumble) – with the above implementation refactoring does not work with our custom syntax. Now that you’ve seen the start: how much additional rocket science do you expect it takes to add minimal refactoring support? (to be continued)
Compare Object Teams to AOP?
In response to our “Hello Eclipse” I was asked about “the distinction between OT, AOP and delegation”
and also Wayne suspected some overlap. So here’s an attempt at answering.
What OT/J is not
If the only problems you see with pure Java are of the kind of non-invasively adding
tracing/logging to a system, then you’re probably fine with AspectJ, and OT/J does not
compete for a better solution in this discipline. This is because AspectJ is specialized
in defining sophisticated pointcuts: use powerful patterns and wildcards to capture
a large set of joinpoints that shall trigger your aspect.
I’m personally not enthusiastic about targeting problems of this category because
the focus is too narrow for my taste. So in this league where AspectJ performs best
OT/J is not applying as a replacement for AspectJ.
Goals for OT/J
Positively speaking, let me name five top-level goals for OT/J:
- powerful modules
- powerful ways of connecting modules
- maximum support for re-use
- evolvable architectures
- intuitive metaphors
Most of these goals are so broad and commonplace, that we’ll soon agree that
we all scream for ice cream. For (1) classes, objects and bundles are a pretty good
starting point. Not much need to improve. For (3) & (4) the proof of the pudding is
only in the eating. You can’t directly boil them down to specific language features.
(5) is what makes a language suitable for manipulation by humans, it’s the least
technical goal in the list and thus ‘difficult’ to discuss among geeks 🙂
The issue of connecting modules (2) is, however, extremely interesting and creates
the backbone for anything we can say about an architecture. And this is where OT/J
excels in my view, based on three kinds of relations:
- OT/J takes inheritance to the extreme
- OT/J introduces a real meaningful containment relation
- OT/J introduces the role playing relation
Mentioning inheritance lets me add that we pay very close attention to not
side-step object orientation, but rather to put object orientation on steroids.
As I’m zooming in I will leave inheritance and containment aside for now
so as to focus on the role playing relation.
Role playing
The way role playing is defined in OT/J, it is actually very similar to inheritance,
with three decisive differences:
- role playing is a dynamic thing, happening among runtime instances
- role playing separates two sides of inheritance: acquisition and overriding
- control is more fine-grained, as individual methods (and fields) can be acquired
and overridden selectively
This dynamism is one of the strongest points of OT/J: roles can be added to specialise
existing instances at any point during runtime, and multiple roles can specialise the same
base instance simultaneously. Neither is possible if inheritance is defined between
classes rather than instances. Yet OT/J is not careless about possible runtime effects:
in order for a role instance to be attached to a base instance, the role’s class must
statically declare a playedBy relation to the corresponding base class. This ensures that
possible runtime effects are analysable from the source code.
Ingredients to role playing
Separating acquisition from overriding yields the following pictures of possible
communications between a role and its base.
In all pictures, assume a role class with this header:
public class ARole playedBy ABase { ...
Here we go:
Here class ABase implements baseMethod(), which ARole would like to “inherit”.
It does so with this little callout declaration:
void roleMethod() -> void baseMethod(); // make baseMethod known under a new name
Now when a client sends a roleMethod() call to the :ARole instance, this is
automatically forwarded to the associated base instance, invoking its baseMethod().
Great, so a role may acquire individual methods from its base using callout.
No big deal so far.
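For readers without the OT/J tooling at hand, here is a rough plain-Java sketch of the effect of that callout binding. The forwarding is hand-written here purely for illustration – in real OT/J the compiler generates this wiring – and the String return type is my addition to make the effect observable:

```java
// Plain-Java sketch of the callout binding
//   void roleMethod() -> void baseMethod();
// (hand-written forwarding; the OT/J compiler generates this for you)
class ABase {
    String baseMethod() { return "base behaviour"; }
}

class ARole {
    private final ABase base;              // the base instance this role is played by
    ARole(ABase base) { this.base = base; }

    // callout: the role "inherits" baseMethod under the new name roleMethod
    String roleMethod() { return base.baseMethod(); }
}
```

A client calling roleMethod() on the role thus transparently reaches baseMethod() on the associated base instance.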
Here’s the opposite direction:

This time the client talks to the :ABase instance saying “baseMethod()”.
Assume that the role has defined this callin binding:
void roleMethod() <- replace void baseMethod();
Now the original method call is intercepted and redirected to the role.
This has the same effect as overriding has in traditional inheritance.
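Again a hedged plain-Java approximation, this time of the callin side: the role field and the explicit check stand in for the interception machinery that OT/J weaves in behind the scenes (in real OT/J the team manages role attachment, and the String return types are again only there to make the redirection visible):

```java
// Plain-Java sketch of the callin binding
//   void roleMethod() <- replace void baseMethod();
class ARole {
    String roleMethod() { return "role behaviour"; }
}

class ABase {
    ARole role;                            // attached role, if any (managed by the team in real OT/J)

    String baseMethod() {
        if (role != null)                  // call intercepted: redirect to the role
            return role.roleMethod();
        return "base behaviour";           // no role attached: original behaviour
    }
}
```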
The full glory only shines when both directions are involved in the same control flow:

This picture shows the role version of the template-and-hook pattern: :ARole inherits the
template method baseMethod2(), which issues a self call to the hook method baseMethod1().
Even during this self call, method dispatch may be aware of the overridingM() in the role,
which intercepts the self call.
This situation is what is widely termed delegation in the literature:
forwarding with the option to still override methods called within this control flow.
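The template-and-hook scenario can be sketched in plain Java the same way. The helper dispatchBaseMethod1() is an invented name standing in for OT/J’s role-aware dispatch of self calls, which the runtime performs for you:

```java
// Plain-Java sketch of the template-and-hook picture:
// baseMethod2() is the template, baseMethod1() the default hook,
// and an attached role may intercept even the self call to the hook.
class ARole {
    String overridingM() { return "role hook"; }   // intercepts the self call
}

class ABase {
    ARole role;                                    // attached role, if any

    String baseMethod2() {                         // template method
        return "template(" + dispatchBaseMethod1() + ")";
    }

    String baseMethod1() { return "base hook"; }   // default hook

    // invented helper: even self calls go through role-aware dispatch
    String dispatchBaseMethod1() {
        return (role != null) ? role.overridingM() : baseMethod1();
    }
}
```

Without a role attached, the template uses the base hook; once a role is attached, the very same self call reaches the role’s overriding method instead.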
Comparison
Now that I have elaborated on the role playing relation in OT/J, how does it compare?
To AOP? To delegation?
Role playing vs. delegation
Role playing supports full delegation with overriding. In OT/J, delegation is configured
selectively for individual methods, while the declarative style of method bindings
keeps the effort at a minimum.
Furthermore, the effect of callin bindings can be controlled by several mechanisms
which I haven’t shown here (“team activation”, “guard predicates”), which means you
have a free choice between the weaker forwarding and the stronger delegation.
Delegation usually doesn’t imply overriding when the base instance is addressed directly,
as in the second picture. In OT/J you can freely choose whether or not overriding
is effective in this situation.
Additionally, OT/J takes away the burden to manually manage the additional instances
involved in delegation. That’s what a team as the container for roles does for you.
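As a sketch of what that bookkeeping amounts to (class and method names are invented here; in real OT/J the base-to-role mapping is called “lifting” and is hidden entirely): a team essentially keeps a per-base cache of role instances.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Sketch of the bookkeeping a team performs: one role per base instance,
// created on demand and cached, so clients never manage roles by hand.
class ABase { }

class ARole {
    final ABase base;                      // the base instance this role is played by
    ARole(ABase base) { this.base = base; }
}

class ATeam {
    // weak keys: a cached role's lifetime follows its base instance
    private final Map<ABase, ARole> roles = new WeakHashMap<>();

    ARole getRole(ABase base) {            // "lifting": base -> role, on demand
        return roles.computeIfAbsent(base, ARole::new);
    }
}
```

Asking the team twice for the same base yields the same role instance; distinct bases get distinct roles.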
Role playing vs. AOP
My explanation didn’t sound much like AOP, did it? The only connection here is in
the term “interception” – that’s the core mechanism used in both approaches.
Other than that I see little similarity.
In the same way as all languages providing dynamic method dispatch can solve a
similar set of design issues, all languages providing method call interception
can solve similar issues. In OT/J we blend interception into the general concept
of dynamic dispatch as best we can, so that it doesn’t stick out from other
concepts of object-oriented programming. So, instead of featuring three new
concepts (“join point”, “pointcut”, “advice”), OT/J has only callin bindings to declare
method call interception.
Two examples, for those who like the details, of what I mean by
“blend with other concepts”: “advice” in AspectJ is an oddish animal –
it is, e.g., impossible to override inherited advice.
Callin bindings refer to methods, which can be overridden as normal.
Also, “aspects” are limited regarding inheritance: it is illegal to extend a
non-abstract aspect. Roles in OT/J have no such restriction.
All the rest
OK, role playing is key for re-using (and adapting) existing things.
But remember: role playing is only one of three strong ways in OT/J to specify connections
between modules. The enhanced inheritance and the strong containment relation
are both unrelated to AOP and delegation, yet add even more value, as they help to
create evolvable architectures.
It’s a major contribution of OT/J that these three ways of connecting things are not added
as isolated language features but in a way that creates the best synergy among them.
I even think that roles and teams are great metaphors, representing the mechanisms
at hand in an intuitive way – but before you have actually tried eating the pudding,
you may perhaps not feel this way. Yet 🙂









