Log rotation for WebLogic Server (and friends)

If you have a number of WebLogic Server instances running, or applications that write a lot of information to the logs, you might find that your log files and stdout files start to eat up a lot of space very quickly.

This can be easily managed with log rolling utilities like logrotate for Linux or logadm for Solaris. These automate the removal of old log entries by moving the log file contents out through a series of files. This effectively caps the amount of space used for logs, and sets a time period that logs are kept online before being archived or deleted.

Let’s look at an example to understand how it works:

[Diagram: log rotation over time]

In this diagram we are looking at the log growth over time, with time being the vertical axis. So at the top we have a single log file – AdminServer.log – and it grows over time, as indicated by the blue bars at the top.

Then, when the first log rotation occurs (the top red dotted line), the content of AdminServer.log is moved into AdminServer.log.0, as indicated by the purple arrow, and AdminServer.log is emptied out, so as WebLogic Server continues to write into this file, we have a much smaller file now (the green one).

This process then repeats. At the next log rotation, the lower red dotted line, we get the contents of AdminServer.log.0 moved into AdminServer.log.1, the contents of AdminServer.log moved into AdminServer.log.0, and AdminServer.log emptied out again.

This continues until we get up to eight files; then the oldest one is deleted and the other seven each move one position along.
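The cascade just described can be sketched in a few lines of shell. This is an illustration only – the utilities covered below do all of this for you:

```shell
#!/bin/sh
# Minimal sketch of the copy-truncate cascade described above.
# rotate LOGFILE KEEP -- keeps up to KEEP numbered copies (.0 is newest).
rotate() {
  log="$1"; keep="$2"
  i=$((keep - 1))
  # shift the numbered files one position along; the oldest falls off the end
  while [ "$i" -gt 0 ]; do
    prev=$((i - 1))
    [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$i"
    i=$prev
  done
  cp "$log" "$log.0"   # copy the current contents out...
  : > "$log"           # ...then truncate the live file in place
}
```

The important detail is the last two lines: the live file is truncated in place rather than renamed, so the JVM can keep writing to the same open file.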

Here’s how to set it up:

First, you need to be starting the WebLogic Server instance in a way that appends stdout to a file (as opposed to just writing to the file). To do this, you need to make sure you use the >> shell redirection, not >. To get stdout and stderr in the same file, you would use &>> (a bash extension; the portable equivalent is >> file 2>&1).

Here’s an example:

/your/fmwhome/user_projects/domains/base_domain/startWebLogic.sh &>> /your/logs/adminserver.out &

This assumes that you have WebLogic installed in the Oracle Home /your/fmwhome and that you are storing your stdout/stderr log files in /your/logs.

Note: It is important that you use the append redirection (>>); otherwise your log files will not actually shrink in size after rotation – the process keeps writing at its old offset in the truncated file – so they will just keep growing, which defeats the purpose of rotation in the first place.

Note: If you use the Node Manager to start WebLogic Server, it will automatically open the log files in append mode.

Next, you need to set up your log rolling utility – logrotate (on Linux) or logadm (on Solaris). Let’s look at each of these in turn.

logrotate (Linux)

The configuration for logrotate is kept in /etc/logrotate.conf. You need to add a stanza in there for each of the log files you want to rotate. Here is an example for the stdout file from above, and the server log file:

/your/fmwhome/user_projects/domains/base_domain/servers/AdminServer/logs/AdminServer.log {
  copytruncate
  daily
  rotate 8
}
/your/logs/adminserver.out {
  copytruncate
  daily
  rotate 8
}

Let’s explore these. The first line names the log file to rotate. Rotation is essentially going to move that log to a different file and create a new empty log. The line ‘rotate 8’ tells logrotate to keep up to eight log files. ‘daily’ means to roll them once a day. ‘copytruncate’ tells it which method to use – in this case to copy the file’s contents into a new file, then truncate the original in place. The alternative method – rename the file and create a new one – will not work with WebLogic Server or other JVM applications, because the JVM keeps writing to the old, renamed file.

You can also force logrotate to run immediately, which is useful for checking you have everything set up correctly. This is done (as root) by issuing the command:

logrotate -f /etc/logrotate.conf

logrotate has many other options that allow you to specify different time periods, actions to take before and after log rotation, and size limits for when rotation should occur, to name a few. You should take a look at the logrotate documentation to see how to use it to best suit your scenario.
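For example, a stanza combining a few of these extra options might look like the one below. The extra options here are illustrative – check the logrotate man page for your version:

```
/your/logs/adminserver.out {
  copytruncate
  daily
  rotate 8
  compress       # gzip rotated files to save space
  missingok      # do not complain if the file is absent
  notifempty     # skip rotation when the file is empty
}
```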

logadm (Solaris)

logadm provides essentially the same capabilities as logrotate, but the configuration is slightly different. To set up the same example as we saw above with logadm, we need to issue the following commands (as root):

logadm -w /your/fmwhome/user_projects/domains/base_domain/servers/AdminServer/logs/AdminServer.log -P 1d -c
logadm -w /your/logs/adminserver.out -P 1d -c

These commands will update the logadm configuration file (/etc/logadm.conf) with the necessary entries – the -w option means ‘write to configuration file’. To prevent errors, it is recommended to update the file with logadm -w rather than editing it directly.

After the -w, we see the name of the file to rotate, then -P 1d, which means period of one day – i.e. daily – and -c which tells logadm to use the copy and truncate method.

To force logadm to run immediately, you need to do two things. First, you need to tell it to assume the last run was some time in the past; this is done by issuing the same command (as root) with a timestamp on it, e.g.:

logadm -w /your/logs/adminserver.out -P 1d -c -p 'Mon Feb 25 02:00:00 2013'

This tells logadm to assume the last time it ran for this file was on that date, which is more than a day ago.  The last run timestamps are stored in /var/logadm/timestamps if you want to take a look.

You can then issue the following command (as root) to force logadm to run immediately:

logadm

Just like logrotate, logadm has a bunch of other options that let you control how often rotation is done, size limits, pre/post actions, etc. Take a look at the documentation to see what you need for your scenario.

Enjoy!


BPM 11g Performance Tuning Whitepaper published

I am happy to announce our new BPM 11g Performance Tuning whitepaper is now available on OTN (here).  This white paper captures real world best practices from actual performance tuning exercises across many real BPM implementations – that’s ‘best practices’ in the sense that these are the things that we have found over time and over many engagements to give the best results.

This whitepaper has been under development for quite a while now, and has been through a heap of reviews and revisions.  So it is great to finally get it out there, and hopefully you will find it useful!

Many people have contributed to this whitepaper – from reporting on tuning experiences, to writing, reviewing, and testing.  I would like to thank the following folks:

Vikas Anand, Deepak Arora, Partricio Barletta, Heidi Buelow, Christopher Karl Chan, Manoj Das, Andrew Dorman, Pete Farkas, Mark Foster, Simone Geib, Kim LiChong, Ralf Mueller, Bhagat Nainani, Sabha Parameswaran, Robert Patrick, David Read, Derek Sharpe, Sushil Shukla, Kavitha Srinivasan, Meera Srinivasan, Will Stallard and Shumin Zhao.

I sincerely hope that I have not forgotten anyone, but if I have, the error is entirely mine.

This whitepaper is meant to complement the Performance and Tuning Guide in the Fusion Middleware documentation.  Readers should also consult the excellent whitepaper on purging SOA/BPM 11g databases by Michael Bousamra with Deepak Arora and Sai Sudarsan Pogaru, which is available on OTN (here).

For those with an interest in BPM 10g, I remind you of our previously published BPM 10g Performance Tuning whitepaper, which continues to be available on OTN (here).


Thinking of getting certified for SOA Suite?

If you are thinking of getting your Oracle SOA Suite certification, you may like to check out the new beta release of certification exam 1Z1-478 for the Oracle SOA Suite 11g Certified Implementation Specialist certification.  I was lucky enough to be able to write some of the questions for this exam.

You can find the beta exam here.


Collecting diagnostic information for BPM

From time to time, you may experience some kind of issue in your BPM environment. Issues could be caused by a wide variety of reasons – changes to the environment, the pattern of load on the environment, product defects, bad process design, insufficient resources allocated to the environment, network instability – just to name a few!

When something goes wrong, it is important to know how to collect the diagnostic information that will be needed to analyse the problem, work out the root cause, and come up with a resolution. In some cases, you may be able to do this analysis yourself. In other cases you may need to involve specialists like network engineers, directory administrators, or Oracle Support, for example.

Let’s take a look at the kinds of diagnostic information that may be needed. Of course, it may not be necessary to collect all of these for any given issue. If you are unsure, then it is a good idea to collect them anyway, just in case you need them.

Note: The purpose of this article is to tell you how to collect the data, not how to analyse it. Sometimes that analysis requires specialist skills and experience, but even then, those specialists rely on having access to the data.

BPM server/cluster configuration files

The first thing that you will want to collect is the configuration files for your environment. There are many different types of configurations that are possible, and these files contain the information necessary for someone to understand exactly how your particular environment is configured.

These files are located inside your WebLogic domain’s home directory, in the config directory. You will see files and directories like this:

config
|-- config.xml
|-- configCache
|-- deployments
|-- diagnostics
|-- fmwconfig
|-- jdbc
|-- jms
|-- nodemanager
|-- security
`-- startup

You can just zip up this whole directory to collect the files. You might use a command like this for example:

tar czvf bpm_config.tar.gz /home/oracle/fmwhome/user_projects/domains/base_domain/config

Note: All of the examples in this post show the Oracle Middleware home as /home/oracle/fmwhome and the WebLogic domain name as base_domain. You will need to adjust these to suit your own environment.

The next useful piece of information to capture is a list of which patches (if any) you have installed in your environment. The best way to collect this information is to capture the output of the opatch lsinventory command. You should run this twice, first with ORACLE_HOME set to the Oracle_SOA1 directory under your install directory, and second with it set to the oracle_common directory under your install directory.

The example below shows running the opatch lsinventory command for ORACLE_HOME=/home/oracle/fmwhome/Oracle_SOA1 and the output, which in this case shows that no patches have been installed. In this example, you would also run it again with ORACLE_HOME=/home/oracle/fmwhome/oracle_common.

[oracle@ps5 Oracle_SOA1]$ export ORACLE_HOME=/home/oracle/fmwhome/Oracle_SOA1
[oracle@ps5 Oracle_SOA1]$ export PATH=$ORACLE_HOME/OPatch:$PATH
[oracle@ps5 Oracle_SOA1]$ opatch lsinventory
Oracle Interim Patch Installer version 11.1.0.9.0
Copyright (c) 2011, Oracle Corporation.  All rights reserved.

Oracle Home       : /home/oracle/fmwhome/Oracle_SOA1
Central Inventory : /home/oracle/oraInventory
   from           : /home/oracle/fmwhome/Oracle_SOA1/oraInst.loc
OPatch version    : 11.1.0.9.0
OUI version       : 11.1.0.9.0
OUI location      : /home/oracle/fmwhome/Oracle_SOA1/oui
Log file location : /home/oracle/fmwhome/Oracle_SOA1/cfgtoollogs/opatch/opatch2012-12-20_11-18-33AM_1.log

Patch history file: /home/oracle/fmwhome/Oracle_SOA1/cfgtoollogs/opatch/opatch_history.txt

OPatch detects the Middleware Home as "/home/oracle/fmwhome"

Lsinventory Output file location : /home/oracle/fmwhome/Oracle_SOA1/cfgtoollogs/opatch/lsinv/lsinventory2012-12-20_11-18-33AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle SOA Suite 11g                                                 11.1.1.6.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------

OPatch succeeded.

BPM log files

The information we have collected already is generic in nature and is used to ensure the domain configuration is correct and there are no obvious problems. From this point on, we are looking at information that is used to analyse a specific problem.

The server log and ‘out’ files are often the very first place we will look when there is a problem. These files will usually contain error messages that will give some information about the cause of the problem.

You can use a command like this to collect the logs. Remember to collect the logs from your AdminServer and each of your managed servers.

tar czvf soa_server1_logs.tar.gz /home/oracle/fmwhome/user_projects/domains/base_domain/servers/soa_server1/logs

This will also collect the diagnostic_images if there are any available. These provide additional information about certain problems.

It is important to understand that a problem may occur only on one server, or on a number of servers. This is why it is important to collect the logs from all of the servers. Sometimes it is necessary to analyse data from several sources in order to understand what was happening in the environment.
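A small script like the following archives the logs directory of every server in the domain in one pass. This is a sketch – the domain path in the usage comment is an assumption, so adjust it to your environment:

```shell
#!/bin/sh
# Archive the logs directory of each server under the given domain home.
# Produces one <server>_logs.tar.gz per server in the current directory.
collect_server_logs() {
  domain_home="$1"
  for server in "$domain_home"/servers/*; do
    [ -d "$server/logs" ] || continue
    name=$(basename "$server")
    # c = create an archive (not x = extract), z = gzip, f = file name
    tar czf "${name}_logs.tar.gz" -C "$server" logs
  done
}
# collect_server_logs /home/oracle/fmwhome/user_projects/domains/base_domain
```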

Sometimes, during the analysis of a problem, you may be asked to turn on some debug/trace settings and attempt to recreate the problem. If this happens, the output from those traces almost always ends up in these logs.

Incident logs

WebLogic collects some data by default when various ‘incidents’ occur, for example when a ‘stuck thread’ is encountered. The data collected depends on the incident, but it usually contains things like thread dumps, logs, and error messages.

These data are stored inside the server directories in your domain directory. To collect them, you could use a command like the example below. Remember to collect the incident logs for your AdminServer and each of your managed servers.

tar czvf incident_logs.tar.gz /home/oracle/fmwhome/user_projects/domains/base_domain/servers/soa_server1/adr/diag/ofm/base_domain/soa_server1/incident

Thread dumps

A thread dump is a snapshot of what is happening in the server at a particular point in time. It allows us to see what each thread in the server process is doing. This information is helpful to understand how the server is behaving and what it is doing.

You can take a thread dump in a variety of ways; which one to use depends on your operating system, how you started the server (e.g. from a command line or via the Node Manager), and whether the server has become unresponsive.

Here are some of the common ways to take a thread dump:

  • Press Ctrl-Break on Windows, or Ctrl-\ on Linux/Solaris/etc., in the window running the WebLogic process (in the foreground).
  • Send signal 3 (SIGQUIT) to the process: kill -3 PID.
  • Connect to the process with a utility like jvisualvm and press the Thread Dump button.
  • Request a thread dump in the WebLogic Server console by navigating to the server, then the Monitoring tab and the Threads sub-tab, and pressing the Dump Thread Stacks button.
  • Use jstack PID (or jrcmd PID print_threads for JRockit).

Most of the time, more than one thread dump will be required. A series of thread dumps over some time period is needed in order to understand how the server is behaving over time. For example, a thread dump might show that a particular thread is ‘stuck’. Another (later) thread dump will be needed to see whether that thread becomes unstuck by itself (as commonly happens) or not. Thus the two thread dumps together would be necessary to determine whether the stuck thread was a problem.

It is also important to take thread dumps on all of the servers that are (or could possibly be) affected by or contributing to the problem. If in doubt, take thread dumps on all of the servers.

As a general rule of thumb, you should take five thread dumps over a period of time. How do you work out a suitable period of time? If you have a specific problem, for example you see some error message and then a minute later all of your servers become unresponsive, then the time period is that minute. Take a thread dump when you first see the error message appear, then one every 20 seconds (or so). If you don’t have any way to guess the suitable time period, just take them a minute apart.
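Putting that together, a loop along these lines will take a series of dumps at a fixed interval. This is a sketch – find the JVM’s PID first, e.g. with jps -l or ps, and note the dumps land in the server’s stdout file:

```shell
#!/bin/sh
# dump_threads PID [COUNT] [INTERVAL]
# Sends SIGQUIT to the JVM COUNT times, INTERVAL seconds apart.
# The JVM prints each thread dump to its stdout file and keeps running.
dump_threads() {
  pid="$1"; count="${2:-5}"; interval="${3:-60}"
  i=1
  while [ "$i" -le "$count" ]; do
    kill -3 "$pid" || return 1   # fails if the process is gone
    echo "thread dump $i requested at $(date)"
    [ "$i" -lt "$count" ] && sleep "$interval"
    i=$((i + 1))
  done
}
# dump_threads 12345 5 20   # five dumps, 20 seconds apart
```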

The example below shows what the output from the thread dump looks like. Note that many lines have been removed from this output.

2012-12-31 10:26:12
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.10-b01 mixed mode):

"JMX server connection timeout 48" daemon prio=10 tid=0x00007fabf8006800 nid=0x232f in Object.wait() [0x00007fac330b4000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000f61ad8c0> (a [I)
	at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:150)
	- locked <0x00000000f61ad8c0> (a [I)
	at java.lang.Thread.run(Thread.java:662)

(many lines deleted)

"main" prio=10 tid=0x00007facc4008800 nid=0x21db in Object.wait() [0x00007facc9f38000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000e0bb22a0> (a weblogic.t3.srvr.T3Srvr)
	at java.lang.Object.wait(Object.java:485)
	at weblogic.t3.srvr.T3Srvr.waitForDeath(T3Srvr.java:981)
	- locked <0x00000000e0bb22a0> (a weblogic.t3.srvr.T3Srvr)
	at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:490)
	at weblogic.Server.main(Server.java:71)

"VM Thread" prio=10 tid=0x00007facc406e000 nid=0x21e4 runnable 

"GC task thread#0 (ParallelGC)" prio=10 tid=0x00007facc401b800 nid=0x21dc runnable 

"GC task thread#1 (ParallelGC)" prio=10 tid=0x00007facc401d800 nid=0x21dd runnable 

"VM Periodic Task Thread" prio=10 tid=0x00007facc40ad000 nid=0x21eb waiting on condition 

JNI global references: 1601

Heap
 PSYoungGen      total 111744K, used 71412K [0x00000000f5560000, 0x00000000fdaa0000, 0x0000000100000000)
  eden space 89856K, 76% used [0x00000000f5560000,0x00000000f985f160,0x00000000fad20000)
  from space 21888K, 12% used [0x00000000fc540000,0x00000000fc7fe198,0x00000000fdaa0000)
  to   space 23296K, 0% used [0x00000000fad20000,0x00000000fad20000,0x00000000fc3e0000)
 PSOldGen        total 174784K, used 65947K [0x00000000e0000000, 0x00000000eaab0000, 0x00000000f5560000)
  object space 174784K, 37% used [0x00000000e0000000,0x00000000e4066c60,0x00000000eaab0000)
 PSPermGen       total 131072K, used 125060K [0x00000000d0000000, 0x00000000d8000000, 0x00000000e0000000)
  object space 131072K, 95% used [0x00000000d0000000,0x00000000d7a211b8,0x00000000d8000000)

Heap dumps

Another kind of dump that may be required for some problems is a heap dump. A heap dump is essentially a copy of everything that the JVM has in memory (in the heap) at a particular point in time. These are usually going to be pretty big files – they will be at least as big as the amount of used heap. So if you are running your BPM managed server with an 8GB heap, and it is 75% in use when you take the heap dump, then the heap dump is going to be about 6GB in size.

Heap dumps are used to look at the contents of the JVM’s memory in detail. They allow us to look at every object in the JVM and see the state of those objects.

Heap dumps are often used to diagnose a class of problems called ‘memory leaks’. While a single heap dump can lead us to suspect a memory leak, two heap dumps (from the same JVM at different times) are needed to confirm that a memory leak actually exists.

Heap dumps are also useful for other kinds of problems, where we need to look at the contents of various objects to understand what the server is doing.

It is a good practice to collect heap dumps when problems occur, but you should not send them to Oracle unless they are requested. Since they are so large, you may also wish to compress them and delete them after the problem they relate to has been resolved.

You can generate a heap dump from a tool like jvisualvm (by pressing the Heap Dump button) as shown below:

[Screenshot: the Heap Dump button in jvisualvm]

You can also collect a heap dump using jmap using a command like the one below:

jmap -dump:format=b,file=heap_dump_1.bin pid

If the problem is suspected to be a memory leak, you may be asked to carry out the following steps:

  • allow the server to come to a steady state after startup,
  • perform six full garbage collections (by pressing the Perform GC button, next to the Heap Dump button, six times),
  • take a heap dump,
  • attempt to reproduce the issue, i.e. do whatever it is you do to make the problem occur,
  • take another heap dump.
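From the command line, the two dumps in that procedure might be taken with a helper like the one below. This is a sketch – the file names are illustrative, and finding the PID (e.g. with jps -l) is left to you:

```shell
#!/bin/sh
# take_heap_dump PID OUTFILE -- dump the JVM's heap with jmap.
take_heap_dump() {
  pid="$1"; out="$2"
  if command -v jmap >/dev/null 2>&1; then
    jmap -dump:format=b,file="$out" "$pid"
  else
    echo "jmap not found on PATH" >&2
    return 1
  fi
}
# take_heap_dump 12345 heap_before.bin   # after warm-up and the forced GCs
# ...reproduce the issue...
# take_heap_dump 12345 heap_after.bin    # then compare the two dumps
```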

Another good practice is to ensure that you have configured WebLogic to automatically take a heap dump if it runs out of memory. This is done by adding the following parameter to the JVM:

-XX:+HeapDumpOnOutOfMemoryError

This setting often saves a lot of pain – if your server crashes because it ran out of memory, then this setting is pretty likely to capture the information needed to work out what went wrong. If you do not have this setting, you would need to add it and wait for the problem to happen again. It is safe to have this setting on all of your production servers. Note that it takes some time to write a heap dump (how long depends on the size of the heap and the speed of your disks), so there is a trade-off here: after an out-of-memory crash, your server restart will take a bit longer, as you will have to wait for the heap dump to finish before you restart the server(s).

Garbage Collection logs

Garbage collection logs are very useful for analysing memory related issues. The JVM will not produce these logs by default; you need to tell it to produce them.

These three settings will cause the JVM to print out more detailed information about garbage collection and to produce a log (called gc.log in this example) that contains garbage collection statistics and information that is very useful when trying to do some JVM tuning:

    -XX:+PrintGCTimeStamps
    -XX:+PrintGCDetails
    -Xloggc:gc.log

And, as mentioned in the previous section, it is also a good idea to turn on this setting:

    -XX:+HeapDumpOnOutOfMemoryError

These settings are safe to leave on all the time in your production environment.
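One common place to add them on Linux is the domain’s setDomainEnv.sh. This is an illustration – the exact file, variable, and log path depend on how you start your servers:

```
# Append the diagnostics flags to the server JVM's options (illustrative):
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:/your/logs/gc.log"
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+HeapDumpOnOutOfMemoryError"
export JAVA_OPTIONS
```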

Database information – AWR reports

Many performance related issues may have to do with the underlying database. For this reason, it is important to capture some information about the database performance as well. You should collect the AWR reports for the same period during which you observed the problem in BPM. To be on the safe side, start a little earlier and end a little later. For example, if the problem occurred from 10am until noon, you might collect AWR reports from 9am to 1pm.

You can find more information about what AWR reports are and how to collect them in this post.

HTTP Server logs

For some kinds of problems, it is useful to see the logs from the HTTP Server (if any) which is in front of your BPM server or cluster. These are often useful if you are getting refused connections for example.

You should gather the following logs:

  access.log  
  error.log

If you are using Oracle Web Tier (or Oracle HTTP Server), these logs will be located in the following directory, assuming your Oracle Web Tier Home is /home/oracle/httphome and you used the default names for the instance:

/home/oracle/httphome/Oracle_WT1/instances/instance1/diagnostics/logs/OHS/ohs1

Debug logs for the WebLogic plugin may also be useful if you are seeing nodes being evicted from the cluster or if you suspect that the cluster is unbalanced – e.g. you can see a different number of sessions on each node in the cluster.

To obtain these, you need to set the plug-in’s Debug parameter to ALL in the file that contains your WebLogic plug-in configuration (httpd-vhosts.conf in this example). This will produce a log called wlproxy.log.
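With Oracle HTTP Server, the plug-in parameters might be set like this. The block is an illustration – the exact file, module name, and log path depend on your configuration:

```
<IfModule weblogic_module>
  Debug ALL
  WLLogFile /your/logs/wlproxy.log
</IfModule>
```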

Operating system level information

Sometimes performance information from the operating system level can be helpful as well. You might want to consider using tools like top or prstat (with thread/‘lightweight process’ support), sar, vmstat, mpstat, iostat, and netstat. If you have a possibly network-related issue, for example loss of communications between cluster members, then tcpdump may also capture useful information.

Remember, if you are running a cluster, you would need to collect these on all nodes in the cluster at the same time.
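A small wrapper can capture a window of statistics on one node; run it on every node over the same time window. This is a sketch – which tools are present varies by platform, so it simply samples whichever it finds:

```shell
#!/bin/sh
# collect_os_stats OUTDIR [INTERVAL] [COUNT]
# Samples whichever of the usual tools are present on this node.
collect_os_stats() {
  outdir="$1"; interval="${2:-5}"; count="${3:-12}"   # default: one minute of samples
  mkdir -p "$outdir" || return 1
  for tool in vmstat iostat mpstat; do
    if command -v "$tool" >/dev/null 2>&1; then
      "$tool" "$interval" "$count" > "$outdir/$tool.txt" 2>&1 &
    fi
  done
  command -v netstat >/dev/null 2>&1 && netstat -an > "$outdir/netstat.txt" 2>&1
  wait   # wait for the sampling tools to finish
}
# collect_os_stats "osstats_$(hostname)_$(date +%Y%m%d%H%M%S)"
```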

Java information

There are also several Java tools that can help you to collect additional information. If you are not familiar with these, it might be a good idea to explore what they can do for you. I would suggest looking at jps, jstat, jinfo, jstack, jmap, and jtop.

How to send information to Oracle Support

If you need help with the problem, you should contact Oracle Support and open a Service Request (SR). The Oracle Support system will allow you to upload attachments to the SR so that you can provide the information you have collected. If the files are large, like a heap dump for example, then you should upload them to Oracle Support’s FTP server instead. Support will give you instructions on how to access the FTP server and where to put your files.


BPMN process editor problems in 11.1.1.6 (update)

I wrote some time ago (in this post) about a patch for some issues with the layout in the BPMN process editor in 11.1.1.6.  I know that a lot of folks have contacted Support to ask for the patch that I mentioned in that post, and I know that some of you were told by Support that there was no patch available.

We have worked with Support to fix this problem, and I am happy to say that the patch is available to download from Oracle Support now.  I hope you did not have too much inconvenience.

The Patch number is 13088538: NPE IN O.BPM.UI.LAYOUT.MIGLAYOUT:114.


A review of Oracle SOA Suite 11g Administrator’s Handbook

Highly recommended, a tour de force.

Packt’s new Oracle SOA Suite 11g Administrator’s Handbook by Ahmed Aboulnaga and Arun Pareek is packed full of essential information for the Oracle SOA administrator; in fact, I would go so far as to say that it should be required reading for administrators who are new to the Oracle SOA Suite platform.  I think that reading it would greatly shorten the learning curve and help new administrators avoid many common problems or points of confusion.

More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment.

It is clear and to the point: it presents just the information that you need, and that information is easy to find.  It is not cluttered up with a whole bunch of extra information you don’t need.  It is detailed and technical – providing information that you can use.  I think the book is not only a great introduction for a new administrator who needs to get a feel for Oracle SOA Suite, but also a great reference volume to keep on hand, even for experienced administrators.

It is obvious when reading the book that the authors have extensive experience and that they know what is important to their audience.  I have been working with Oracle SOA Suite for several years now, since 10g days, and I am one of the authors of the official Oracle SOA Suite Certification question base, and even I learned things from this book that I did not know.

The book covers topics like managing the SOA infrastructure, managing composite applications, monitoring SOA Suite, tuning, configuration and administration, troubleshooting, security policies, managing MDS and the dehydrations store and backup and recovery.

The bonus online chapter covers important issues like patching, upgrading from 10g, cluster configuration and silent (scripted) installation.

I for one will be keeping this book on my book shelf and I highly recommend it to anyone interested in or working with Oracle SOA Suite, in an administration capacity, or who just wants to know more about the product in general.

Packt Publishing provides reviewers with a free copy of the e-book.

New ADF Mobile released

Oracle has just released the new Oracle ADF Mobile which allows you to build native applications that will install and run on both iOS and Android devices from the same ADF source code.

Development is done with JDeveloper and ADF and leverages Java and HTML 5 technologies, while keeping the same visual and declarative approach ADF is known for.

You can read more about the Oracle ADF Mobile release here and learn more on its OTN page here.


Oracle releases ADF Essentials

In case you missed it, Oracle has released a new free version of ADF called ADF Essentials.  You can find more information in the press release or the online demo.


Reading Oracle SOA Suite 11g Administrator’s Handbook

I am reading the new Oracle SOA Suite 11g Administrator’s Handbook by Ahmed Aboulnaga and Arun Pareek.  I am half way through it and I have to say – it is just great!  Will post some detailed comments soon!


Packt celebrating 1,000th title

As many of you will know from my last post, we are very happy to have released our first book with Packt – quite an achievement for us.  But Packt is celebrating an achievement of their own – they are just about to publish their 1,000th title!

To celebrate, Packt are inviting anyone already registered at www.packtpub.com, or who registers before 30th September 2012, to download any one of their eBooks for free. Packt is also opening its online library for a week for free to members, offering customers an easy way to research their choice of free eBook.

Further details of the event can be found in the Press Release.

If you think a free eBook sounds like a good idea, head over to this link: http://www.packtpub.com/login.
