Injecting configuration values using CDI’s InjectionPoint
Dependency injection is a great technology for organizing class dependencies. All class instances your current class needs are provided at runtime by the DI container. But what about your configuration?
Of course, you can create a “Configuration” class, inject it everywhere you need it, and retrieve the necessary value(s) from it. But CDI lets you handle this in an even more fine-grained way using the InjectionPoint concept.
If you write a @Produces method, you can let your CDI container also inject some information about the code location the newly created/produced value is injected into. A complete list of the available methods can be found in the InjectionPoint Javadoc. The interesting point is that you can query this class for all the annotations the current injection point carries:
Annotated annotated = injectionPoint.getAnnotated();
ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);
As the example code above shows, we can introduce a simple @Qualifier annotation that marks all injection points where a specific configuration value is needed. In this blog post we just use strings as configuration values, but the whole concept can of course be extended to other data types as well. The already mentioned @Qualifier annotation looks like this:
@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface ConfigurationValue {
    @Nonbinding ConfigurationKey key();
}

public enum ConfigurationKey {
    DefaultDirectory, Version, BuildTimestamp, Producer
}
The annotation has the retention policy RUNTIME because the CDI container has to evaluate it while the application is running. It can be used on fields and methods. Beyond that, we create a key attribute, which is backed by the enum ConfigurationKey. Here we can introduce all the configuration values we need: in our example, a configuration value for a default directory, one for the version of the program, and so on. We mark this attribute as @Nonbinding to prevent the CDI container from using its value to choose the correct producer method. If we did not use @Nonbinding, we would have to write a @Produces method for each value of the enum. But here we want to handle all keys within one method.
The @Produces method for strings annotated with @ConfigurationValue is shown in the following code example:
@Produces
@ConfigurationValue(key = ConfigurationKey.Producer)
public String produceConfigurationValue(InjectionPoint injectionPoint) {
    Annotated annotated = injectionPoint.getAnnotated();
    ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);
    if (annotation != null) {
        ConfigurationKey key = annotation.key();
        if (key != null) {
            switch (key) {
                case DefaultDirectory:
                    return System.getProperty("user.dir");
                case Version:
                    return JB5n.createInstance(Configuration.class).version();
                case BuildTimestamp:
                    return JB5n.createInstance(Configuration.class).timestamp();
            }
        }
    }
    throw new IllegalStateException("No key for injection point: " + injectionPoint);
}
The @Produces method gets the InjectionPoint injected as a parameter so that we can inspect it. As we are interested in the annotations of the injection point, we check whether the current injection point is annotated with @ConfigurationValue. If this is the case, we look at the @ConfigurationValue’s key attribute and decide which value to return. That’s it. In a more complex application we could of course load the configuration from files or some other kind of data store, but the concept remains the same.
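The file-backed variant mentioned above can be sketched without any CDI machinery. The class below is a hypothetical helper (not part of the original example) that a producer method could delegate to, looking values up under the enum constants’ names:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

// Hypothetical sketch: a file-backed configuration source a @Produces method
// could delegate to; property keys are expected to match the ConfigurationKey names.
public class PropertiesConfigSource {

    private final Properties properties = new Properties();

    public PropertiesConfigSource(Reader source) throws IOException {
        properties.load(source);
    }

    // Returns the value stored under the given key name, e.g. "Version"
    public String valueFor(String keyName) {
        String value = properties.getProperty(keyName);
        if (value == null) {
            throw new IllegalStateException("No configuration value for key: " + keyName);
        }
        return value;
    }
}
```

Inside produceConfigurationValue() one could then simply return configSource.valueFor(key.name()) instead of the switch statement.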
Now we can easily let the CDI container inject the configuration values we need simply with these two lines of code:
@Inject @ConfigurationValue(key = ConfigurationKey.DefaultDirectory)
private String defaultDirectory;
Conclusion: Making a set of configuration values accessible throughout the whole application has never been easier.
Implementing dynamic proxies – a comparison
Sometimes there is the need to intercept certain method calls in order to execute your own logic every time the intercepted method is called. If you are not within Java EE’s CDI world and don’t want to use AOP frameworks like AspectJ, there is a simple and similarly effective alternative.
Since version 1.3 the JDK comes with the class java.lang.reflect.Proxy, which allows you to create a dynamic proxy for a given interface. The InvocationHandler that sits behind the dynamically created class is called every time the application invokes a method on the proxy. Hence you can control dynamically what code is executed before the code of some framework or library is called.
Next to the JDK’s Proxy implementation, bytecode frameworks like javassist or cglib offer similar functionality. Here you can even subclass an existing class and decide which methods you want to forward to the superclass’s implementation and which you want to intercept. This of course comes with the burden of another library your project depends on, one that may have to be updated from time to time, whereas the JDK’s Proxy implementation is already included in the runtime environment.
So let’s take a closer look and try these three alternatives out. In order to compare javassist’s and cglib’s proxies with the JDK implementation, we need an interface that is implemented by a simple class, because the JDK mechanism only supports interfaces, not subclassing:
public interface IExample {
    void setName(String name);
}

public class Example implements IExample {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
In order to delegate the method calls on the proxy to a real object, we create an instance of the Example class above and call it within the InvocationHandler via a variable declared final:
final Example example = new Example();
InvocationHandler invocationHandler = new InvocationHandler() {
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        return method.invoke(example, args);
    }
};
return (IExample) Proxy.newProxyInstance(JavaProxy.class.getClassLoader(),
        new Class[]{IExample.class}, invocationHandler);
As you can see from the code sample, the creation of a proxy is rather simple: call the static method newProxyInstance() and provide a ClassLoader, an array of interfaces that should be implemented by the proxy, and an instance of the InvocationHandler interface. For the sake of demonstration, our implementation only forwards the call to the instance of Example we created before. But in real life you can of course perform more advanced operations that evaluate, for example, the method’s name or its arguments.
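As a sketch of such a more advanced handler (the Greeter interface and the call log are hypothetical, introduced only for illustration), an InvocationHandler can inspect the method name and arguments before delegating:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TracingProxyDemo {

    public interface Greeter {
        String greet(String name);
    }

    public static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Creates a proxy that records every method name and its arguments
    // in the given log before forwarding the call to the target.
    public static Greeter tracing(final Greeter target, final StringBuilder log) {
        InvocationHandler handler = new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                log.append(method.getName());
                if (args != null) {
                    for (Object arg : args) {
                        log.append(' ').append(arg);
                    }
                }
                return method.invoke(target, args);
            }
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class[]{Greeter.class},
                handler);
    }
}
```

The same pattern could, for example, measure per-method execution times or veto calls based on their arguments.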
Now we take a look at the way the same is done using javassist:
ProxyFactory factory = new ProxyFactory();
factory.setSuperclass(Example.class);
Class aClass = factory.createClass();
final IExample newInstance = (IExample) aClass.newInstance();
MethodHandler methodHandler = new MethodHandler() {
    @Override
    public Object invoke(Object self, Method overridden, Method proceed, Object[] args) throws Throwable {
        return proceed.invoke(newInstance, args);
    }
};
((ProxyObject) newInstance).setHandler(methodHandler);
return newInstance;
Here we have a ProxyFactory that needs to know for which class it should create a subclass. Then we let the ProxyFactory create a whole class that can be reused as many times as necessary. The MethodHandler is analogous to the InvocationHandler: it gets called for each method invocation on the instance. Here again we just forward the call, this time to the superclass’s implementation via the proceed method.
Last but not least let’s take a look at cglib’s proxy:
final Example example = new Example();
IExample exampleProxy = (IExample) Enhancer.create(IExample.class, new MethodInterceptor() {
    @Override
    public Object intercept(Object object, Method method, Object[] args, MethodProxy methodProxy) throws Throwable {
        return method.invoke(example, args);
    }
});
return exampleProxy;
In the cglib world we have an Enhancer class that we can use to implement a given interface with a MethodInterceptor instance. The implementation of the callback method looks very similar to the one in the javassist example: we just forward the method call via the reflection API to the already existing instance of Example.
Now that we have seen three different implementations, we also want to evaluate their runtime behavior. Therefore we write a simple unit test that measures the execution time of each implementation:
@Test
public void testPerformance() {
    final IExample example = JavaProxy.createExample();
    long measure = TimeMeasurement.measure(new TimeMeasurement.Execution() {
        @Override
        public void execute() {
            for (long i = 0; i < JavassistProxyTest.NUMBER_OF_ITERATIONS; i++) {
                example.setName("name");
            }
        }
    });
    System.out.println("Proxy: " + measure + " ms");
}
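The TimeMeasurement helper used in the test is not shown in the post; a minimal sketch of what its measure() method might look like, assuming it simply returns the wall-clock duration in milliseconds:

```java
// Hypothetical sketch of the TimeMeasurement helper used in the test above;
// the original post does not show its implementation.
public class TimeMeasurement {

    public interface Execution {
        void execute();
    }

    // Runs the given block and returns its wall-clock duration in milliseconds.
    public static long measure(Execution execution) {
        long start = System.currentTimeMillis();
        execution.execute();
        return System.currentTimeMillis() - start;
    }
}
```

For serious benchmarks, System.nanoTime() plus dedicated warm-up iterations would be more appropriate, but this suffices for a rough comparison.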
We choose a huge number of iterations in order to stress the JVM and to let the HotSpot compiler produce native code for the frequently executed passages. The following chart shows the average runtime of the three implementations:
To show the impact of using a proxy at all, the chart also shows the execution times for the standard invocation of the method on the Example object (“No proxy”). First of all we note that the proxy implementations are about 10 times slower than the plain invocation of the method itself. But we also notice differences between the three proxy solutions. The JDK’s Proxy class is surprisingly nearly as fast as the cglib implementation. Only javassist falls behind, with about twice the execution time of cglib.
Conclusion: Runtime proxies are easy to use, and you have different ways of creating them. The JDK’s Proxy only supports proxies for interfaces, whereas javassist and cglib allow you to subclass existing classes. A proxied method invocation is about 10 times slower than a standard one, and the three solutions also differ among themselves in terms of runtime.
Securing a JSF application with Java EE security and JBoss AS 7.x
A common requirement for enterprise applications is to have all JSF pages protected behind a login page. Sometimes you even want to have protected areas inside the application that are only accessible to users that own a specific role. The Java EE standards come with all the means you need to implement a web application that is protected by some security constraints. In this blog post we want to develop a simple application that demonstrates the usage of these means and shows how you can build a complete JSF application for two different roles. While the solution might look straightforward at first glance, there are a few pitfalls one has to pay attention to.
The first point we have to care about is the folder layout of our application. We have three different kinds of pages:
- The login page and the error page for the login should be accessible by all users.
- We have a home page that should only be accessible for authenticated users.
- We have a protected page that should only be visible for users of the role protected-role.
These three types of pages are therefore put into three different folders: the root folder src/main/webapp, the folder src/main/webapp/pages and the protected page resides in src/main/webapp/pages/protected:
The web.xml file is the place to define which roles we want to use and how to map the accessibility of some URL pattern to these roles:
<security-constraint>
  <web-resource-collection>
    <web-resource-name>pages</web-resource-name>
    <url-pattern>/pages/*</url-pattern>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>security-role</role-name>
    <role-name>protected-role</role-name>
  </auth-constraint>
</security-constraint>
<security-constraint>
  <web-resource-collection>
    <web-resource-name>protected</web-resource-name>
    <url-pattern>/pages/protected/*</url-pattern>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>protected-role</role-name>
  </auth-constraint>
</security-constraint>
<security-role>
  <role-name>security-role</role-name>
</security-role>
<security-role>
  <role-name>protected-role</role-name>
</security-role>
As you can see we define two roles: security-role and protected-role. URLs matching the pattern /pages/* are accessible to users that own the role security-role or protected-role, whereas the pages under /pages/protected/* are restricted to users with the role protected-role.
Another point you may stumble upon is the welcome page. At first guess you would want to specify the login page as welcome page. But this does not work, as the login module of the servlet container automatically redirects all unauthorized accesses to the login page. Therefore we specify the home page of our application as welcome page. This is already a protected page, but the user will be redirected to the login page automatically when he calls its URL directly.
<welcome-file-list>
  <welcome-file>pages/home.xhtml</welcome-file>
</welcome-file-list>
Now we are nearly done with the web.xml file. All we have to do is define the authentication method as well as the login page and the error page, which is shown in case the user enters invalid credentials. One has to pay attention that neither page includes protected URLs (e.g. CSS or JavaScript files), otherwise even the access to these two pages is forbidden and the user gets an application-server-specific error page to see.
<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.xhtml</form-login-page>
    <form-error-page>/error.xhtml</form-error-page>
  </form-login-config>
</login-config>
As we are going to deploy the application to the JBoss Application Server, we provide a file named jboss-web.xml that connects our application to a security-domain:
<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/other</security-domain>
</jboss-web>
The “other” security-domain is configured inside the standalone.xml. The default configuration requires that the user passes the “RealmUsersRoles” login module, which gets its user and role definitions from the two files application-users.properties and application-roles.properties inside the configuration folder. You can use the provided add-user script to add a new user to this realm:
What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): b

Enter the details of the new user to add.
Realm (ApplicationRealm) :
Username : bart
Password :
Re-enter Password :
What roles do you want this user to belong to? (Please enter a comma separated list, or leave blank for none) : security-role,protected-role
About to add user 'bart' for realm 'ApplicationRealm'
Is this correct yes/no? yes
Here it is important to choose the correct realm (ApplicationRealm), as this realm is configured by default in the standalone.xml for the “other” login module. This is also the place where you provide the roles the user possesses as a comma separated list.
The next step is to implement a simple login form that submits its data to the login module. Note the names of the input fields (j_username and j_password) as well as the action of the form: this way the form is posted to the login module, which extracts the entered username and password from the request. JSF developers may wonder why we use a standard HTML form instead of the <h:form> element. The reason is that the JSF form element spans its own namespace, so the names of its input fields are prefixed with the id of the form (and this form id cannot be empty); a form with id "loginForm", for example, would render the username field as loginForm:j_username, which the login module would not recognize.

<form method="POST" action="j_security_check" id="">
  <h:panelGrid id="panel" columns="2" border="1" cellpadding="4" cellspacing="4">
    <h:outputLabel for="j_username" value="Username:" />
    <input type="text" name="j_username"/>
    <h:outputLabel for="j_password" value="Password:" />
    <input type="password" name="j_password"/>
    <h:panelGroup>
      <input type="submit" value="Login"/>
    </h:panelGroup>
  </h:panelGrid>
</form>
If the user has passed the login form, we present him a home page. But the link to the protected page should only be accessible for users that own the role protected-role. This can be accomplished by the following rendered condition:
<h:link value="Protected page" outcome="protected/protected" rendered="#{facesContext.externalContext.isUserInRole('protected-role')}"/>
Last but not least we need the logout functionality. For this we implement a simple backing bean like the following one, which invalidates the user’s session and redirects him back to the login page:
@Named(value = "login")
public class Login {

    public String logout() {
        FacesContext.getCurrentInstance().getExternalContext().invalidateSession();
        return "/login";
    }
}
As usual the complete source code can be found on github.
Building and testing a websocket server with undertow
The upcoming version of the JBoss Application Server will no longer use Tomcat as its integrated webserver, but will replace it with undertow. The architecture of undertow is based on handlers that can be added dynamically to the server via a Builder API. This approach is similar to the way a webserver is constructed in Node.js and allows developers to embed the undertow webserver easily into their applications. As features are added via the Builder API, you only add the features your application really requires. Beyond that, undertow supports WebSockets and the Servlet API in version 3.1. It can be run as a blocking or non-blocking server, and first tests are said to show that undertow is the fastest webserver written in Java.
All of this sounds very promising, so let’s try to set up a simple websocket server. As usual we start by creating a simple Java project and add the undertow maven dependency:
<dependency>
  <groupId>io.undertow</groupId>
  <artifactId>undertow-core</artifactId>
  <version>1.0.0.Beta20</version>
</dependency>
With undertow’s Builder API our buildAndStartServer() method looks like this:
public void buildAndStartServer(int port, String host) {
    server = Undertow.builder()
            .addListener(port, host)
            .setHandler(getWebSocketHandler())
            .build();
    server.start();
}
We just add a listener that specifies the port and host to listen on for incoming connections and then set a websocket handler. As the websocket handler code is a little more comprehensive, I have put it into its own method:
private PathHandler getWebSocketHandler() {
    return path().addPath("/websocket", websocket(new WebSocketConnectionCallback() {
        @Override
        public void onConnect(WebSocketHttpExchange exchange, WebSocketChannel channel) {
            channel.getReceiveSetter().set(new AbstractReceiveListener() {
                @Override
                protected void onFullTextMessage(WebSocketChannel channel, BufferedTextMessage message) {
                    String data = message.getData();
                    lastReceivedMessage = data;
                    LOGGER.info("Received data: " + data);
                    WebSockets.sendText(data, channel, null);
                }
            });
            channel.resumeReceives();
        }
    }))
    .addPath("/", resource(new ClassPathResourceManager(WebSocketServer.class.getClassLoader(),
            WebSocketServer.class.getPackage()))
            .addWelcomeFiles("index.html"));
}
Let’s go through this code snippet line by line. First of all we add a new path: /websocket. The second argument of the addPath() method lets us specify what kind of protocol we want to use for this path. In our case we create a new WebSocket. The anonymous implementation has an onConnect() method in which we set an implementation of AbstractReceiveListener. Here we have a convenient method onFullTextMessage() that is called when a client has sent us a text message. A call to getData() fetches the actual message we have received. In this simple example we just echo this string back to the client to validate that the roundtrip from the client to the server and back works.
To perform some simple manual tests we also add a second resource under the path / that serves some static HTML and JavaScript files. The directory that contains these files is given as an instance of ClassPathResourceManager. The call to addWelcomeFiles() tells undertow which file to serve when the client asks for the path /.
The index.html looks like this:
<html>
  <head><title>Web Socket Test</title></head>
  <body>
    <script src="jquery-2.0.3.min.js"></script>
    <script src="jquery.gracefulWebSocket.js"></script>
    <script src="websocket.js"></script>
    <form onsubmit="return false;">
      <input type="text" name="message" value="Hello, World!"/>
      <input type="button" value="Send Web Socket Data" onclick="send(this.form.message.value)"/>
    </form>
    <div id="output"></div>
  </body>
</html>
Our JavaScript code is kept in the file websocket.js. We use jQuery and the jQuery plugin gracefulWebSocket to ease the client-side development:
var ws = $.gracefulWebSocket("ws://127.0.0.1:8080/websocket");

ws.onmessage = function (event) {
    var messageFromServer = event.data;
    $('#output').append('Received: ' + messageFromServer);
};

function send(message) {
    ws.send(message);
}
After having created a WebSocket object by calling $.gracefulWebSocket(), we register a callback function for incoming messages. In this function we simply append the message string to the DOM of the page. The send() function is just a call to gracefulWebSocket’s send() method.
When we now start our application and open the URL http://127.0.0.1:8080/ in our webbrowser we see the following page:

Entering some string and hitting the “Send Web Socket Data” button sends the message to the server, which in response echoes it back to the client.
Now that we know that everything works as expected, we want to protect our code against regressions with a JUnit test case. As a websocket client I have chosen the library jetty-websocket:
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-websocket</artifactId>
  <version>8.1.0.RC5</version>
  <scope>test</scope>
</dependency>
In the test case we build and start the websocket server and open a new connection to the websocket port. The WebSocket implementation of jetty-websocket allows us to implement callback methods for the open and close events. Within the open callback we send the test message to the server. The rest of the code waits for the connection to be established, closes it, and asserts that the server has received the message:
@Test
public void testStartAndBuild() throws Exception {
    subject = new WebSocketServer();
    subject.buildAndStartServer(8080, "127.0.0.1");
    WebSocketClient client = new WebSocketClient();
    Future<WebSocket.Connection> connectionFuture = client.open(
            new URI("ws://localhost:8080/websocket"), new WebSocket() {
        @Override
        public void onOpen(Connection connection) {
            LOGGER.info("onOpen");
            try {
                connection.sendMessage("TestMessage");
            } catch (IOException e) {
                LOGGER.error("Failed to send message: " + e.getMessage(), e);
            }
        }

        @Override
        public void onClose(int i, String s) {
            LOGGER.info("onClose");
        }
    });
    WebSocket.Connection connection = connectionFuture.get(2, TimeUnit.SECONDS);
    assertThat(connection, is(notNullValue()));
    connection.close();
    subject.stopServer();
    Thread.sleep(1000);
    assertThat(subject.lastReceivedMessage, is("TestMessage"));
}
As usual you can find the source code on github.
Conclusion: Undertow’s Builder API makes it easy to construct a websocket server and, in general, an embedded webserver that fits your needs. This also eases automated testing, as you do not need any specific maven plugin that starts and stops your server before and after your integration tests. Beyond that, the jQuery plugin jquery-graceful-websocket lets you send and receive messages over websockets with only a few lines of code.
Testing HTML5 canvas applications with sikuli and arquillian
HTML5 introduces a great new element that can be used to draw arbitrary content on a pane: the canvas element. What has been a standard feature for fat client applications for decades is now introduced to the world of web applications. Web developers no longer need to use proprietary plugins to draw images or charts in their applications.
But when it comes to testing, this new feature imposes new challenges on the web development community. How do you test that the canvas element is in the appropriate state at some point in time? Standard technologies like selenium focus on the markup generated by the web server, not on the pixels drawn on the canvas.
More promising in this field are technologies that use image processing to verify that an application renders its data correctly. One of these frameworks is sikuli, an open-source research project that was started at MIT and is now maintained by Raimund Hocke.
To give a more practical introduction, let’s assume we have a simple web application that uses the HTML5 canvas element to implement some simple image processing functionality like a grayscale, a brighten and a threshold filter as well as an undo button (the code for this application can be found as usual on github):

The installation of sikuli is (of course) platform dependent. The installer, which can be downloaded from the sikuli download page, is a Java Swing application that asks about your typical usage pattern. As we do not want to use the Python IDE, we choose option 4 from the list of options. The actual jar file is then downloaded and prepared for our OS. After the installation process has finished, we find an OS-dependent jar file within the installation directory. As our example project uses maven as its build system, we introduce a system-scope dependency after copying the library into the lib folder:
<dependency>
  <groupId>org.sikuli</groupId>
  <artifactId>sikuli</artifactId>
  <version>1.0</version>
  <scope>system</scope>
  <systemPath>${basedir}/lib/sikuli-java.jar</systemPath>
</dependency>
When sikuli is used for the first time, it extracts some native libraries into a new folder, here in our example into ${basedir}/lib/libs. This folder has to be added to the user’s path environment variable.
Now that we have installed sikuli, let’s set up arquillian so that we can write our first unit test. How to set up arquillian is described for example here. As I don’t want to repeat everything, in the following you will find only the unit test class:
@RunWith(Arquillian.class)
public class FilterTest {

    public static final String WEBAPP_SRC = "src/main/webapp";

    @ArquillianResource
    URL deploymentURL;

    private Screen screen;

    @Before
    public void before() throws URISyntaxException, IOException {
        screen = new Screen();
        if (Desktop.isDesktopSupported()) {
            Desktop.getDesktop().browse(deploymentURL.toURI());
        } else {
            fail();
        }
    }

    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "html5-sikuli-webapp.war")
                .addClasses(HomeBackingBean.class)
                .addAsWebResource(new File(WEBAPP_SRC, "home.xhtml"))
                .addAsWebResource(new File(WEBAPP_SRC, "resources/css/style.css"), "resources/css/style.css")
                .addAsWebResource(new File(WEBAPP_SRC, "resources/images/rom.jpg"), "resources/images/rom.jpg")
                .addAsWebResource(new File(WEBAPP_SRC, "resources/js/html5Sikuli.js"), "resources/js/html5Sikuli.js")
                .addAsWebResource(new File(WEBAPP_SRC, "resources/js/jquery-2.0.3.js"), "resources/js/jquery-2.0.3.js")
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .setWebXML(new File(WEBAPP_SRC, "WEB-INF/web.xml"));
    }
The createDeployment() method sets up the war archive, which arquillian deploys to JBoss AS 7.1.1.Final (see the arquillian.xml file). In our @Before method we use the JDK class Desktop to open the default browser and point it to the deployment URL. Here we also create an instance of sikuli’s Screen class, which provides all the methods needed to interact with our application. Let’s look at this in more detail:
@Test
@RunAsClient
public void testGrayScale() throws FindFailed {
    screen.wait(getFullPath("originalImage.png"));
    screen.find(getFullPath("btnUndo_disabled.png"));
    screen.click(getFullPath("btnGrayscale.png"));
    screen.find(getPattern("grayscaleImage.png", 0.9f));
    screen.click(getFullPath("btnUndo_enabled.png"));
    screen.click(getPattern("originalImage.png", 0.9f));
}

private Pattern getPattern(String path, float similarity) {
    Pattern p = new Pattern(getFullPath(path));
    return p.similar(similarity);
}

private String getFullPath(String path) {
    return "src/test/resources/img/" + path;
}
As sikuli is based on image processing, we can define where to click and what to verify using screenshots we have taken before. In this simple example I have stored all screenshots as PNG files within the src/test/resources/img folder of the project. More advanced projects may need a more sophisticated folder hierarchy. As you can see, we first wait for the application to show up. Once sikuli has found the first screenshot, we verify that the “Undo” button is disabled. This is done by calling the method find() with an image of the disabled button. Now we can click the “Grayscale” button (again specified by an image of the button) and then verify that the grayscale version of the image is found on the screen.
Sikuli does not only compare both images pixel by pixel; if you like, it computes the similarity of the found screen region to the requested region. This helps when you need to be more tolerant (e.g. if you want to test the application in different browsers that render the buttons slightly differently). The default value for the similarity attribute is 0.7f; if you increase it to 1.0f, you get a simple pixel-by-pixel comparison.
But this is not all. With sikuli you can do nearly everything a human interactor could:
- Type characters using screen.type()
- Double click with screen.doubleClick()
- Perform drag and drop operations with screen.dragDrop()
- Use the mouse wheel
- …
Conclusion: Sikuli is a powerful and easy to use tool to perform integration tests for web applications that heavily rely on HTML5’s canvas object. The same is of course true for standard fat client applications (Swing, JavaFX). Together with arquillian you can setup comprehensive test suites that cover a lot of “real” use cases.
Efficiently delete data with JPA and Hibernate
You may come into the situation where you have to perform a bulk deletion of a huge number of datasets stored in a relational database. If you use JPA with Hibernate as the underlying OR mapper, you might try to call the remove() method of the EntityManager like this:
public void removeById(long id) {
    RootEntity rootEntity = entityManager.getReference(RootEntity.class, id);
    entityManager.remove(rootEntity);
}
First of all, we load a reference representation of the entity we want to delete and then pass this reference to the EntityManager. Let’s assume the RootEntity from above has a child relation to a class called ChildEntity:
@OneToMany(mappedBy = "rootEntity", fetch = FetchType.EAGER, cascade = CascadeType.ALL)
private Set<ChildEntity> childEntities = new HashSet<ChildEntity>(0);
If we now turn on Hibernate’s show_sql property, we will be surprised by the SQL statements that are issued:
select
rootentity0_.id as id5_1_,
rootentity0_.field1 as field2_5_1_,
rootentity0_.field2 as field3_5_1_,
childentit1_.PARENT as PARENT5_3_,
childentit1_.id as id3_,
childentit1_.id as id4_0_,
childentit1_.field1 as field2_4_0_,
childentit1_.field2 as field3_4_0_,
childentit1_.PARENT as PARENT4_0_
from
ROOT_ENTITY rootentity0_
left outer join
CHILD_ENTITY childentit1_
on rootentity0_.id=childentit1_.PARENT
where
rootentity0_.id=?
delete
from
CHILD_ENTITY
where
id=?
delete
from
ROOT_ENTITY
where
id=?
Why does Hibernate first load all data into memory only to delete it immediately afterwards? The reason is that JPA’s lifecycle requires the object to be in the “managed” state before it can be deleted. Only in this state is all lifecycle functionality, like interceptors, available (see here). Therefore Hibernate issues a SELECT query before the deletion in order to transfer both RootEntity and ChildEntity to the “managed” state.
But what can we do if we just want to delete RootEntity and ChildEntity and already know the id of the RootEntity? The answer is to use a simple DELETE query like the following one. Due to the integrity constraint on the child table, however, we first have to delete all dependent child entities. The following code demonstrates how:
List<Long> childIds = entityManager
    .createQuery("select c.id from ChildEntity c where c.rootEntity.id = :pid")
    .setParameter("pid", id)
    .getResultList();
for(Long childId : childIds) {
entityManager.createQuery("delete from ChildEntity c where c.id = :id").setParameter("id", childId).executeUpdate();
}
entityManager.createQuery("delete from RootEntity r where r.id = :id").setParameter("id", id).executeUpdate();
The code above results in the three SQL statements we would have expected from calling remove(). Now you may argue that this way of deleting is more complicated than just calling the EntityManager’s remove() method. It also ignores annotations like @OneToMany and @ManyToOne that we have placed in the two entity classes.
So why not write some code that uses the knowledge about the two entities that is already present in the two class files? First of all, we look for @OneToMany annotations in the RootEntity class using reflection, extract the type of the child entity and then look for its back-relation field annotated with @ManyToOne. Having done this, we can easily write the three SQL statements in a more generic way:
public void delete(EntityManager entityManager, Class<?> parentClass, Object parentId) {
    Field idField = getIdField(parentClass);
    if (idField != null) {
        List<Field> oneToManyFields = getOneToManyFields(parentClass);
        for (Field field : oneToManyFields) {
            Class<?> childClass = getFirstActualTypeArgument(field);
            if (childClass != null) {
                Field manyToOneField = getManyToOneField(childClass, parentClass);
                Field childClassIdField = getIdField(childClass);
                if (manyToOneField != null && childClassIdField != null) {
                    List<Long> childIds = entityManager
                        .createQuery(String.format("select c.%s from %s c where c.%s.%s = :pid",
                            childClassIdField.getName(), childClass.getSimpleName(),
                            manyToOneField.getName(), idField.getName()))
                        .setParameter("pid", parentId)
                        .getResultList();
                    for (Long childId : childIds) {
                        entityManager
                            .createQuery(String.format("delete from %s c where c.%s = :id",
                                childClass.getSimpleName(), childClassIdField.getName()))
                            .setParameter("id", childId)
                            .executeUpdate();
                    }
                }
            }
        }
        entityManager
            .createQuery(String.format("delete from %s e where e.%s = :id",
                parentClass.getSimpleName(), idField.getName()))
            .setParameter("id", parentId)
            .executeUpdate();
    }
}
The methods getFirstActualTypeArgument(), getManyToOneField(), getIdField() and getOneToManyFields() used in the code above are not shown here, but do what their names suggest. Once they are implemented, we can easily delete all entities, beginning with the root of the tree.
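For illustration, three of these helpers could be sketched roughly as follows (getManyToOneField() works analogously). To keep the sketch self-contained and runnable, minimal stand-in annotations replace the real javax.persistence @Id and @OneToMany, and the Root/Child classes are purely hypothetical examples:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Stand-ins for javax.persistence.Id / OneToMany so the sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD) @interface Id {}
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD) @interface OneToMany {}

public class ReflectionHelpers {

    // Returns the field annotated with @Id, or null if none exists.
    static Field getIdField(Class<?> clazz) {
        for (Field field : clazz.getDeclaredFields()) {
            if (field.isAnnotationPresent(Id.class)) {
                return field;
            }
        }
        return null;
    }

    // Collects all fields annotated with @OneToMany.
    static List<Field> getOneToManyFields(Class<?> clazz) {
        List<Field> result = new ArrayList<Field>();
        for (Field field : clazz.getDeclaredFields()) {
            if (field.isAnnotationPresent(OneToMany.class)) {
                result.add(field);
            }
        }
        return result;
    }

    // Extracts the element type of a generic collection field,
    // e.g. ChildEntity from Set<ChildEntity>.
    static Class<?> getFirstActualTypeArgument(Field field) {
        if (field.getGenericType() instanceof ParameterizedType) {
            ParameterizedType type = (ParameterizedType) field.getGenericType();
            Object arg = type.getActualTypeArguments()[0];
            if (arg instanceof Class) {
                return (Class<?>) arg;
            }
        }
        return null;
    }

    // Hypothetical example entities for demonstration.
    static class Child { @Id Long id; }
    static class Root { @Id Long id; @OneToMany Set<Child> children = new HashSet<Child>(); }

    public static void main(String[] args) {
        System.out.println(getIdField(Root.class).getName());        // id
        List<Field> fields = getOneToManyFields(Root.class);
        System.out.println(fields.size());                           // 1
        System.out.println(
            getFirstActualTypeArgument(fields.get(0)).getSimpleName()); // Child
    }
}
```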
A simple example application that can be used to examine the behavior and solution described above, can be found on github.
Passing SFSBs as an argument to a custom JSF component?
Have you ever developed a custom JSF component that renders an img tag? If so, you will know that such a component consists of at least two parts: the first part is the implementation of UIComponent that actually renders the img tag; the second part is the servlet filter or phase listener that answers the asynchronously incoming request for the image URL. But does it work to use a stateful session bean (SFSB) to load the image data from the database?
First of all we start with the UIComponent. As the base class we choose UIOutput, which is sufficient for our case:
@FacesComponent("EjbComponent")
public class EjbComponent extends UIOutput {
private static final Logger logger = LoggerFactory.getLogger(EjbComponent.class);
private static final String COMPONENT_FAMILY = "martins.developer.world.jsf.component.helloWorld";
private enum PropertyKeys {
statefulEjb
};
@Override
public String getFamily() {
return COMPONENT_FAMILY;
}
...
}
We override the encodeBegin() method to render the img tag. We retrieve the stateful session bean, which is passed as an attribute to the component, using JSF’s StateHelper:
@Override
public void encodeBegin(FacesContext facesContext) throws IOException {
ResponseWriter writer = facesContext.getResponseWriter();
String imageSrc = createImageUrl(facesContext);
writer.startElement("img", this);
writer.writeAttribute("src", imageSrc, "");
writer.endElement("img");
Map<String,Object> sessionMap = FacesContext.getCurrentInstance().getExternalContext().getSessionMap();
sessionMap.put("ejbComponent", getStatefulEjb());
}
public StatefulEjbLocal getStatefulEjb() {
return (StatefulEjbLocal) getStateHelper().eval(PropertyKeys.statefulEjb);
}
public void setStatefulEjb(StatefulEjbLocal statefulEjb) {
getStateHelper().put(PropertyKeys.statefulEjb, statefulEjb);
}
The createImageUrl() method adds a parameter to the HTTP request, such that we can detect this specific request later on in the PhaseListener:
private String createImageUrl(FacesContext context) {
StringBuilder builder = new StringBuilder(context.getExternalContext().getRequestContextPath());
if (builder.indexOf("?") == -1) {
builder.append('?');
} else {
builder.append('&');
}
builder.append("ejbComponent").append("=").append("ejbComponent");
return builder.toString();
}
The PhaseListener has the task of detecting the incoming request issued by the img tag that was rendered by the component above:
public class EjbComponentPhaseListener implements PhaseListener {
private static final Logger logger = LoggerFactory.getLogger(EjbComponentPhaseListener.class);
@Override
public void beforePhase(PhaseEvent phaseEvent) {
FacesContext facesContext = phaseEvent.getFacesContext();
ExternalContext externalContext = facesContext.getExternalContext();
String ejbComponentParameter = externalContext.getRequestParameterMap().get("ejbComponent");
if (ejbComponentParameter != null) {
Map<String, Object> sessionMap = externalContext.getSessionMap();
StatefulEjbLocal ejbComponent = (StatefulEjbLocal) sessionMap.get("ejbComponent");
if (ejbComponent != null) {
byte[] image = ejbComponent.getImage();
HttpServletResponse response = (HttpServletResponse) externalContext.getResponse();
ServletOutputStream outputStream = null;
try {
outputStream = response.getOutputStream();
IOUtils.copy(new ByteArrayInputStream(image), outputStream);
outputStream.flush();
} catch (Exception e) {
logger.error(e.getMessage(), e);
} finally {
IOUtils.closeQuietly(outputStream);
facesContext.responseComplete();
}
} else {
logger.debug("Could not retrieve ejbComponent from session.");
}
} else {
logger.debug("Request parameter not found");
}
}
@Override
public void afterPhase(PhaseEvent phaseEvent) {
logger.debug("afterPhase()");
}
@Override
public PhaseId getPhaseId() {
return PhaseId.RENDER_RESPONSE;
}
}
As you can see, we check whether the currently handled request has a parameter named “ejbComponent”. If this is the case, we look up the SFSB from the session map, as we have put it there before (see the code for the JSF component). Now that we have a reference to the SFSB, we can use it to load the image data and pass it to the browser.
In our web application we now create a simple backing bean that holds a reference to the SFSB, which later on is passed to the component on the JSF page:
@Named
public class HelloWorldBackingBean {
@EJB
private StatefulEjbLocal statefulEjb;
...
}
Finally we pass the SFSB from the backing bean to the component on our JSF page:
<mdw:ejbComponent statefulEjb="#{helloWorldBackingBean.statefulEjb}"/>
If we compile and deploy the application, the JSF page will render the image as expected. But there is one caveat: if the same SFSB is used to render more than one image within the same web session, concurrent access to the SFSB is serialized, as the EJB 3.1 specification demands in section 4.3.14. This means that each invocation locks/blocks the other threads until a timeout occurs. This timeout can be configured with the @AccessTimeout annotation; the default in the JBoss AS 7.x container is 5 seconds. If the waiting threads have to wait too long, a ConcurrentAccessException is thrown. This may lead to sporadic failures that only happen when the system is under load.
Sample code can be found in my github repository.
Setting up your application server with maven
In many cases you cannot deploy an application without setting up your application server first. In JBoss AS 7.x you may want to configure, for example, your database connection. Or you have to configure a security realm. Maybe you also want to adjust the SLSB pool… In any of these cases all developers in the team have to share a common or at least a similar configuration.
Often this information can be found in sporadically sent emails or on some wiki page. But what happens some time after a release, when you have to check out a branch to fix a bug or add a new feature? You will have to reconstruct the configuration that was valid for this branch. So why not add the configuration files to your version control system and, together with them, a Maven configuration that sets up the whole application server?
Let’s try to keep it simple and use only publicly available and commonly used plugins. First of all, let’s add all the versions we will need to the properties section of the pom.xml:
<properties>
<jboss.install.dir>${project.build.directory}/jboss</jboss.install.dir>
<jboss.version>7.2.0.Final</jboss.version>
<app.version>${project.version}</app.version>
<ojdbc.version>11.2.0.1.0</ojdbc.version>
</properties>
We also define here the installation directory of the JBoss AS. This way we can change it, if we want, using the command line option -D. Now we add a new profile, such that we have to explicitly switch the setup procedure on and that it is not part of the normal build:
<profile>
    <id>setupAs</id>
    <build>
        <plugins>
            ...
        </plugins>
    </build>
</profile>
If we have the current JBoss version deployed as a Maven artifact in our Maven repository, we can use the maven-dependency-plugin to download and unpack JBoss to the installation directory given above:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.8</version>
<executions>
<execution>
<id>unpack-jboss</id>
<phase>package</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>org.jboss</groupId>
<artifactId>jboss-as</artifactId>
<version>${jboss.version}</version>
<type>zip</type>
<outputDirectory>${project.build.directory}/jboss</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
Now that the application server is unpacked, we have to add the JDBC driver as well as our application (or anything else you need). We set this up by adding another execution block to the maven-dependency-plugin:
<execution>
<id>copy</id>
<phase>package</phase>
<goals>
<goal>copy</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>our-company</groupId>
<artifactId>our-application-ear</artifactId>
<version>${app.version}</version>
<type>ear</type>
<outputDirectory>${jboss.install.dir}/jboss-as-${jboss.version}/standalone/deployments</outputDirectory>
</artifactItem>
<artifactItem>
<groupId>com.oracle</groupId>
<artifactId>ojdbc6</artifactId>
<version>${ojdbc.version}</version>
<outputDirectory>${jboss.install.dir}/jboss-as-${jboss.version}/standalone/deployments</outputDirectory>
<destFileName>ojdbc6.jar</destFileName>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
Last but not least, we also want to adjust the standard configuration files to our needs. We can use the maven-resources-plugin to substitute variable values within each file. For this purpose we add templates for these files to the resources folder of our JBoss module and call the goal copy-resources:
<plugin>
<artifactId>maven-resources-plugin</artifactId>
<version>2.6</version>
<executions>
<execution>
<id>copy-jboss-configuration</id>
<phase>package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${jboss.install.dir}/jboss-as-${jboss.version}/standalone/configuration</outputDirectory>
<resources>
<resource>
<directory>src/main/resources/jboss/standalone/configuration</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>copy-jboss-bin</id>
<phase>package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${jboss.install.dir}/jboss-as-${jboss.version}/bin</outputDirectory>
<resources>
<resource>
<directory>src/main/resources/jboss/bin</directory>
<filtering>true</filtering>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
The values for the filtering can be given on the command line with the -D option. If the team has more than a few members, it is also possible to create a properties file for each user that contains his/her specific configuration values. If we use the OS user as the filename, we can easily choose the file by the name of the currently logged-in user. This way each team member can easily set up his/her own completely configured application server instance by simply running:
mvn clean install -PsetupAs
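One possible way to wire such per-user files into the filtering is Maven’s `filters` element together with the built-in `user.name` system property. The path `src/main/filters` used here is only an assumption about the project layout; adjust it to wherever the per-user files actually live:

```xml
<build>
    <filters>
        <!-- picks up e.g. src/main/filters/jdoe.properties for OS user "jdoe" -->
        <filter>src/main/filters/${user.name}.properties</filter>
    </filters>
</build>
```

The copy-resources goal shown above honors the project-level filters by default, so the substituted values end up in the generated configuration files.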
In order to prevent the newly configured server from being deleted with the next clean invocation, we disable the maven-clean-plugin for the normal build:
<plugin>
    <artifactId>maven-clean-plugin</artifactId>
    <version>2.5</version>
    <configuration>
        <skip>true</skip>
    </configuration>
</plugin>
Within the setupAs profile created above we of course have to enable it again (by setting skip back to false), such that we can delete the whole installation just by calling “mvn clean -PsetupAs”. Now switching to an older branch is easy, as we don’t lose any time searching for the right configuration…
Developing your own maven plugin to verify the bytecode of your artifact
In this article the goal is to develop your own Maven plugin that accesses the artifact of your project at build time and verifies its class files. I used this concept for my library jb5n to verify that for each MessageResource interface an appropriate key/value pair is available in the underlying resource bundle.
The first step is of course to create a new maven module or project using the mojo archetype:
mvn archetype:generate \
    -DgroupId=sample.plugin \
    -DartifactId=hello-maven-plugin \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-plugin
This will create a ready-to-run maven project of type maven-plugin for you:
<groupId>sample.plugin</groupId>
<artifactId>hello-maven-plugin</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>maven-plugin</packaging>
All necessary dependencies to develop your own maven plugin are already available. The archetype also creates a simple Mojo class:
/**
* @goal verify
* @phase verify
*/
public class MyMojo extends AbstractMojo {
/**
* @parameter default-value="${project}"
*/
private MavenProject mavenProject;
The Maven goal as well as the default lifecycle phase the plugin runs in are given by the javadoc tags @goal and @phase. Newer versions of Maven let you define these values with annotations. To access the outcome of the current build process, we let our plugin run in the verify phase. We can access the file of our artifact through the member variable mavenProject, which is injected by Maven into our plugin thanks to the above definition:
File artifactFile = mavenProject.getArtifact().getFile();
The File object from the above snippet will point to the jar file (in case we have chosen jar packaging for our artifact). Hence we can open the jar file using the JDK classes JarFile and JarEntry:
JarFile jarFile = new JarFile(artifactFile);
Enumeration<JarEntry> entries = jarFile.entries();
while(entries.hasMoreElements()) {
JarEntry jarEntry = entries.nextElement();
String jarEntryName = jarEntry.getName();
if(jarEntryName != null && jarEntryName.endsWith(".class")) {
getLog().debug(String.format("Processing jar file entry '%s'.", jarEntryName));
...
}
}
Now we can use a library like javassist or Java’s reflection API to analyze each class file within the artifact.
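What happens inside the loop depends on your verification logic. As a minimal, self-contained illustration (not taken from jb5n), the following sketch builds a tiny jar on the fly and checks each .class entry for the class-file magic number 0xCAFEBABE:

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class ClassFileScanner {

    // Reads the raw bytes of a jar entry and checks the class-file magic number.
    static boolean isValidClassFile(JarFile jarFile, JarEntry entry) throws IOException {
        try (DataInputStream in = new DataInputStream(jarFile.getInputStream(entry))) {
            return in.readInt() == 0xCAFEBABE;
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny jar so the sketch is self-contained: one entry holding
        // the class file of this very class, read from the classpath.
        File jar = File.createTempFile("sample", ".jar");
        String resource = "ClassFileScanner.class";
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry(resource));
            try (InputStream in = ClassFileScanner.class.getResourceAsStream(resource)) {
                in.transferTo(out);
            }
            out.closeEntry();
        }
        // The same iteration pattern as in the Mojo above.
        try (JarFile jarFile = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jarFile.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.getName().endsWith(".class")) {
                    System.out.println(entry.getName() + " valid=" + isValidClassFile(jarFile, entry));
                }
            }
        }
    }
}
```

In a real Mojo you would replace the magic-number check with your actual verification, e.g. loading the class via an URLClassLoader or inspecting it with javassist.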
Compiler aware access to properties files in Java
The standard way in the Java world to provide internationalization for your application is the use of properties files. You create a simple text file in which you store your key/value pairs. Access to these files is normally done through the JDK class ResourceBundle:
ResourceBundle myResources = ResourceBundle.getBundle("MyResources", currentLocale);
myResources.getString("OkKey");
So far, so good. But what if your project grows? How do you keep track of the link between the Java code, i.e. all the lines that access a particular key, and the properties files? You have to find a way to cope with this, because otherwise it becomes difficult to answer questions like: “Can I remove this key/value pair from my properties file, or is it still referenced by some Java code?”. Or what if you want to rename a key? You’ll have to find all occurrences of this string in the Java code. Hopefully you find all of them; otherwise ResourceBundle will throw a MissingResourceException.
A simple concept to overcome this problem is to route all access to your properties files to Java interfaces. Let a proxy implementation of this interface fetch the required key from the properties file. This way your IDE can help you to find all lines in your Java code where a certain property is accessed:
MyMessageResource myMessageResource = JB5n.createInstance(MyMessageResource.class);
String ok = myMessageResource.ok();
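The proxy idea itself can be sketched with java.lang.reflect.Proxy. The following self-contained demo is illustrative only (the names and the backing Map are not JB5n’s actual API): it resolves each interface method to a message by the method’s name:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class MessageProxyDemo {

    // Hypothetical message interface, as in the example above.
    interface MyMessageResource {
        String ok();
        String cancel();
    }

    // Creates a proxy that resolves each method call to a message by method name.
    @SuppressWarnings("unchecked")
    static <T> T createInstance(Class<T> iface, Map<String, String> messages) {
        InvocationHandler handler = (proxy, method, args) -> {
            String value = messages.get(method.getName());
            if (value == null) {
                // Fail fast, comparable to ResourceBundle's MissingResourceException.
                throw new IllegalStateException("Missing message for key: " + method.getName());
            }
            return value;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        // A plain Map stands in for the properties file here.
        Map<String, String> messages = new HashMap<>();
        messages.put("ok", "OK");
        messages.put("cancel", "Cancel");
        MyMessageResource res = createInstance(MyMessageResource.class, messages);
        System.out.println(res.ok());      // OK
        System.out.println(res.cancel());  // Cancel
    }
}
```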
The Google Web Toolkit (GWT) introduced this mechanism as Messages. As I needed this kind of functionality quite often, I have implemented a library that realizes this idea. In contrast to other implementations I wanted it to be backward compatible, so that you can upgrade an existing application step by step. For this I have added an annotation to the interface methods that lets you define the key used to access the properties file. By default the name of the method is used as the key.
@Message(key = "no.default.key")
String noDefaultKey();
Beyond that, the library should also stay extensible. If your requirements change and your customer wants to be able to change the translations without recompilation, you could easily implement your own InvocationHandler that loads the messages e.g. from a database or some other kind of storage:
@MessageResource(invocationHandler=MyDatabaseMessageResource.class)
private interface MyInvocationHandler {
String ok();
}
Like Google’s GWT my implementation can of course also handle arguments to the message, using Java’s MessageFormat:
public interface MyMessageResource {
String youHaveNREtries(int numberOfRetries); // "You have {0} retries."
}
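Under the hood such a call boils down to java.text.MessageFormat. A minimal, self-contained illustration of the pattern shown in the comment above:

```java
import java.text.MessageFormat;

public class MessageFormatDemo {
    public static void main(String[] args) {
        // The {0} placeholder is replaced by the first argument.
        String pattern = "You have {0} retries.";
        System.out.println(MessageFormat.format(pattern, 3)); // You have 3 retries.
    }
}
```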
And last but not least, the jb5n library also allows you to use inheritance to group the messages/translations: you can distribute the methods over different interfaces and thereby the messages over different files:
public interface MyMessageResource {
String ok();
}
public interface MySpecificMessageResource extends MyMessageResource {
String specificMessage();
}
But the library is not yet finished. A Maven plugin as well as an Ant task are on my todo list. This way you can check during the build process that interface and properties file are in sync.
The source code can be found on github: https://github.com/siom79/jb5n.