What BPM adds to SOA Suite

Oracle has just released Oracle SOA Suite and Oracle BPM Suite 11.1.1.4 (often referred to as ‘Patch Set 3’), the second release that includes comprehensive support for both Business Process Modeling Notation (BPMN) and Business Process Execution Language (BPEL) for modeling and executing business processes.

Organisations who have been using Oracle SOA Suite (and BPEL) for several years now sometimes ask us what extra value Oracle BPM Suite adds to the already rich SOA platform they are used to. And process analysts and integration developers often ask about the relative strengths of BPEL and BPMN – which to use when, and how they complement each other.

It turns out that Oracle BPM Suite adds a lot of extra value – it is essentially a superset of Oracle SOA Suite, yet it fits seamlessly into an existing SOA Suite environment and uses the same development tools, deployment and build processes, management and monitoring infrastructure and the same programming model – Service Component Architecture (SCA).

Oracle BPM Suite sits right on top of the solid foundation provided by Oracle SOA Suite.  Because of this, it inherits significant integration capabilities.  It really is a ‘best of both worlds’ – providing excellent feature sets and capabilities for both business and technical people working in the business process management space.  The strong BPM capabilities really complement the SOA foundation.  It’s hard to ‘do BPM’ well without SOA, and you could argue that SOA lacks a real purpose without BPM.  A lot of people who have tried to justify an investment in SOA have found it very difficult to build a successful business case without tying SOA to business-driven BPM initiatives.

Organisations with a significant investment in Oracle SOA Suite should see Oracle BPM Suite as an upgrade which provides additional value – and they won’t need to retrain staff, replace existing infrastructure or migrate existing artifacts to realise that value.

So, on the occasion of the second release of Oracle BPM Suite in the 11g release stream, let’s take a deep dive into the value it brings to the table and also look at how well it is integrated with Oracle SOA Suite.

The right tool for the right job

BPEL and BPMN are both ‘languages’ or ‘notations’ for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle however has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process by process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.)

So why support both? Well it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course there is a pretty significant overlap where either will do a great job.

There are different ways of looking at which language is more suited for various kinds of processes.  Let’s look at two common approaches – these both provide high level guidance and are not meant to be exhaustive or mutually exclusive.  Nor do they replace the need to do your own research and possibly a small ‘proof of concept’ modeling activity to validate which is right in your environment with your people and skills.

The ‘who is the audience’ approach

This approach looks at who is going to be doing the process modeling and whether models are going to be shared with ‘business’ people.

  • If the process models are going to be shared with business people, e.g. process participants, process owners or sponsors, I would tend to use BPMN,
  • If the people who are doing the process modeling are coming from a business background, e.g. process analysts or business analysts, I would tend to use BPMN,
  • If they are coming from an IT background, e.g. developers or architects, I would tend to use BPEL,
  • If the people who are going to be doing the modeling have extensive skills and experience in one language, I would probably be inclined to use that language, unless there was a good reason to introduce the other.

The ‘type of process’ approach

This approach uses a simple rule of thumb: If the process involves ‘people’ or ‘paper,’ I would lean towards BPMN. If it involves systems or applications integration, I would lean towards BPEL. That is a pretty high level and generic rule of thumb, so there are also some other things I would consider:

  • Generally speaking, I would tend to use BPMN for higher level, more ‘business’-oriented processes and BPEL for lower level, more ‘system’-oriented processes,
  • If the ‘process’ is really an ‘integration’ or a ‘service,’ I would tend to use BPEL.

Layers of business process

The natural result of both of these approaches tends to be a pattern where the higher level processes – the ones that business users interact with – are modeled in BPMN and these in turn call other processes that are also modeled in BPMN which in turn call ‘services’ that are implemented in BPEL. In fact, if you take a look at the Oracle Application Integration Architecture Reference Process Models, you will see that they follow this same pattern (with even higher level models in Value Added Chain diagrams.)

Structure

BPEL is a ‘structured’ language – much like Java – which means it has ‘control structures’ like sequences (one activity follows another), decisions (called switches), looping (using a ‘while’ loop) and ‘scopes’ which set boundaries for exception handling. Exceptions are handled in a ‘try/catch’ style like many modern programming languages. A scope in BPEL can ‘throw’ an exception to its parent scope, where it may be handled or ‘rethrown’ to a higher scope still.

As a result of this, BPEL feels very natural to people from a programming background. It has the same kind of logic and control structures that they are used to, and lets them think about problems the way they are accustomed to thinking.

BPMN on the other hand is a ‘directed graph.’ This means that it allows you to move arbitrarily around the process. We often find that real world business processes can be modeled directly as directed graphs; that is, we don’t need to do a lot of analysis to work out how to structure the process in such a way as to make it ‘fit’ into the language.

Now of course there is a healthy overlap where many of the processes that you could model in BPEL could also be modeled in BPMN and vice versa. However, there are some processes that can be modeled very simply in BPMN which are quite difficult to model in BPEL. Take for example the following hypothetical ‘flight booking’ process. For whatever reason (probably the way the ‘legacy’ system works) there are only certain points where the customer can go back to an earlier step and, depending on where they are in the process, it is a different point they can return to.

This process can be modeled very simply in BPMN, as shown below, however it would be quite difficult to model in BPEL. It could be done, of course, but it would be necessary to sit down and work out the logic. We would probably need to introduce some kind of ‘state’ variables and use them as ‘guards’ in some large switch construct inside a while loop. It could be done, but we might lose a lot of the clarity that the BPMN model (below) has – that is, it might be harder for us to just look at the model and understand the process logic.

[Image: the ‘flight booking’ process modeled in BPMN]

So this is an example of the type of process that is easier to model in BPMN due to its directed graph nature. Many Oracle SOA Suite (BPEL) users may have come across processes that required a bit of work to model in BPEL, so here is one benefit we can see of ‘upgrading’ to Oracle BPM Suite.
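To make that contrast concrete, here is a minimal sketch of the ‘state variable plus switch inside a while loop’ workaround described above. It is written in Python purely for illustration, and the step names (‘search,’ ‘select,’ ‘pay’) are hypothetical – this is not BPEL, just the same structured logic expressed in code.

```python
def run_process(handlers, start):
    """Drive a directed-graph style process using only structured
    constructs: a 'state' variable guards a switch inside a loop."""
    state = start
    trace = []
    while state is not None:       # the 'while' loop
        trace.append(state)
        step = handlers[state]     # the 'switch' on the state variable
        state = step()             # each step returns the next state
    return trace

# A toy flight-booking flow where 'select' jumps back to 'search' once:
attempts = {"selects": 0}

def search():
    return "select"

def select():
    attempts["selects"] += 1
    return "search" if attempts["selects"] == 1 else "pay"

def pay():
    return None                    # end of the process

trace = run_process({"search": search, "select": select, "pay": pay},
                    "search")
# trace == ["search", "select", "search", "select", "pay"]
```

Notice that the routing logic – which step may return to which earlier step – is buried inside the handlers and the guard variable, which is exactly the loss of clarity that the BPMN diagram avoids.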

Sub-processes

BPMN includes an ‘embedded sub-process’ activity that allows for looping, parallel execution and iterating over members of collections (like arrays.) The embedded sub-process runs in the same instance, so it does not incur the overhead of starting a new process instance.

Embedded sub-processes can be nested and you can choose to execute the iterations sequentially or in parallel. This allows for very elegant modeling of processes that involve looping through collections (and nested collections). The example below shows a BPMN process that processes a set of pathology test series in parallel, each of which may contain multiple individual tests which are processed sequentially, before consolidating the results for review and the possible repeating of some or all tests.

[Image: BPMN process using nested embedded sub-processes to process pathology test series in parallel]

Interruption

Often we have a part of a process that will take some time to execute but which may be cancelled during that time. For example, while fulfilling an order (picking, packing, shipping, etc.), we may receive an order cancellation from the customer. BPMN includes a concept called a ‘boundary event’ which can be used to model this kind of situation.

The example below demonstrates such a process. The ‘Fulfill Order’ activity is actually an embedded sub-process (shown in its ‘minimised’ form to reduce clutter.) The sub-process has a ‘message boundary event’ attached to it. If the matching message is received at any time while the sub-process is still executing, the sub-process will be interrupted and the exception path (to ‘Cancel Order’) will be followed immediately.

[Image: order fulfillment process with a message boundary event attached to the ‘Fulfill Order’ sub-process]

Boundary events can also be attached to individual activities (not just sub-processes) and can handle messages, time-related events and catch errors.

Conditional Flows

In a BPMN process, each activity must have exactly one default flow coming out of it (except for the ‘end’ event), but activities can also have zero or more conditional flows. A conditional flow is one that will be followed if and only if the condition attached to it evaluates to true.

Conditions may be expressed using a simple visual editor/expression language, in XPath, or may even be a set of rules that are evaluated by the embedded rules engine. Conditional paths can also be named and documented. The name appears on the process model, making it easy for non-technical users to understand the process model without needing to learn how to read conditions.

BPMN also provides a rich set of ‘gateways’ that allow for modeling of different kinds of decisions in a process. These include the ability to follow exactly one path, some paths, or all paths and then to join the paths back together when one or all are completed.

Import models from Visio and other tools

Many organisations have created some process documentation using Microsoft Visio and want to be able to reuse that investment.  With the release of Oracle BPM Suite 11.1.1.4, Oracle has added the ability to import process diagrams from Visio, or other tools that can export in XPDL format, into BPM.

Many of these tools allow you to include multiple BPMN ‘pools’ on the same diagram.  The import facility gives you the option of importing the pools as separate process models or combining them into a single process model.  Multi-tabbed diagrams can also be imported, with each tab becoming a separate process model.

The import is tested with a variety of open source and commercial modeling tools which support XPDL export.

Business Catalog

BPM includes a ‘business catalog’ which contains shared artifacts like services, data definitions, business exceptions, event definitions and rules.  The business catalog promotes reuse and collaboration between integration developers and process modelers. It allows you to easily adopt a top-down (starting with the process flow) or bottom-up (starting with the services, data, interface definitions) approach to process modeling.

Process Templates

Both BPEL and BPMN have a ‘template’ mechanism which allows you to define a base process and a number of variations. These mechanisms work slightly differently but both provide a similar kind of capability. The template mechanism for BPMN is more geared to allow ‘business users’ to participate in the definition of variations.

The BPM Project Template mechanism allows business users to customise processes in the Process Composer (web-based modeling environment, more on this later) within certain constraints.  The constraints help to promote communication, governance and control of process customisations.

Templates include selected components like human tasks, services, business objects and of course the process flow.   Business/process analysts can reuse templates to create new processes or to modify existing processes, and can even deploy their customisations directly to the runtime environment without ever touching JDeveloper.

Of course, if you want to enable this capability, you should be careful to ensure that the customisations allowed will not require any additional integration developer work to implement them – that is, you should probably be using a ‘bottom up’ approach where the process analysts create the models from a set of well tested services and other components.

Simulation

Oracle BPM Suite, specifically the ‘design time’ environment in JDeveloper (sometimes called ‘BPM Studio,’) adds the ability to simulate a process before actually implementing and deploying it.

Simulation is the use of a mathematical model to predict how the process will behave in terms of time, cost and resource utilisation. Comparison of simulations with different parameters allows us to make some informed decisions about the design of the process and things like appropriate staffing levels for human tasks which are involved in the process.

When we define a simulation, we provide various parameters including the ‘arrival rate’ (or ‘creation rate’) for new instances, the ‘service time’ for each activity, which resources are required to perform each activity, the capacity of each class of resource (e.g. how many people we have in that role), the probability that each path out of a decision point will be followed, and so on. Generally, these parameters can be provided as a ‘scalar’ value or a statistical distribution with the appropriate parameters. For example, we may define the arrival rate as a normal distribution with mean 50 and standard deviation 3.
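To give a feel for what a simulation engine computes from such parameters, here is a small sketch (in Python, purely illustrative – it is not Oracle’s implementation) that models a single resource serving process instances, with exponentially distributed arrival gaps and service times, and reports the average time instances spend queued:

```python
import random

def simulate(num_instances, mean_arrival_gap, mean_service_time, seed=42):
    """Toy discrete-event simulation: one resource serving instances
    FIFO. Returns the average wait before service starts. Parameter
    names and distributions are illustrative assumptions."""
    rng = random.Random(seed)
    clock = 0.0          # time of the most recent arrival
    free_at = 0.0        # when the single resource next becomes free
    total_wait = 0.0
    for _ in range(num_instances):
        clock += rng.expovariate(1.0 / mean_arrival_gap)  # next arrival
        start = max(clock, free_at)                       # queue if busy
        total_wait += start - clock
        free_at = start + rng.expovariate(1.0 / mean_service_time)
    return total_wait / num_instances

# With service time close to the arrival gap (high utilisation), the
# average wait is far larger than at low utilisation:
# simulate(2000, 1.0, 0.9)  vs  simulate(2000, 1.0, 0.2)
```

Running it at high utilisation produces a much larger average wait than at low utilisation – the same effect the animated simulation makes visible as queues building up before activities.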

The simulations can be animated on screen (as shown below). The animation shows each packet of work moving through the process. It also makes queuing (bottlenecks) obvious by showing queues develop before activities as instances wait for service. Queues often occur when there are not enough resources available (free) to process the amount of work arriving. Simulation animation provides a simple and effective way to clearly demonstrate bottlenecks in a process to business people like sponsors. It also provides a convenient and simple way to demonstrate the impact of changing something in the process, e.g. adding some more resources or changing the order of some activities in the process.

[Image: animated simulation showing queues forming before activities in a process]

All of the raw data produced by the simulation engine can be saved and exported for use in other analysis tools, like Excel for example, and to make charts and tables for documents like business cases.  Simulation is another benefit of ‘upgrading’ to Oracle BPM Suite.

Business-friendly process modeling and discovery

Oracle BPM Suite includes a web-based process modeling capability called ‘Process Composer.’ Process Composer allows business users to easily access and review BPMN process models from a web browser without the need to install any special software. In addition to viewing the models, users with appropriate privileges are also able to change models and create new models.

[Image: the Process Composer web-based modeling environment]

Models can be easily synchronised between the web-based, business user-friendly modeling environment and the design time tools used by integration developers who complete the implementation of the process models and prepare them for deployment.

‘Process discovery’ is a vitally important aspect of a successful BPM project.  Often, people may assume that process discovery means detailed workflow modeling of a process.  While the detailed workflow is important, it is just one part of process discovery – and not even the most essential one.

The most important things you need to understand about your processes during discovery are the key activities, milestones, responsibilities, resource requirements, problems affecting performance and the key goals and measures of the process.

Every time I have sat down with a group of business people and modeled a business process it has been abundantly apparent that the stakeholders do not have a common, agreed understanding of the process.  Usually I find that people like senior management, executives and process owners have a better understanding of the goals and measures, and how the process interacts with other processes in other parts of the business.  However, it is the process participants, the people who actually carry out the process on a day to day basis, who have a much better understanding of how things are actually done, often why they are done that way, and what the problems are.

In order for a BPM project to be successful, it is essential that you consult all the relevant stakeholders and that you drive towards a consensus.  This is the essence of what we call ‘process discovery’ and the business-friendly web-based modeling capability provided in Process Composer is a key enabler of the clear communication necessary to make this a reality.

Remember that automating garbage just gives you automated garbage.  The process models that are produced through discovery and consensus are not just documentation.  They are the actual requirements that get handed over to integration developers.  They represent well considered, agreed and tested (through simulation) requirements for a business process.  Process discovery not only helps to get better quality requirements, but it also reduces rework (removing some of the dependency on interpretation) and improves communication across the board.

Activity Guides

For many human-centric processes, conventional ‘work lists’ and BPMN diagrams are not the most intuitive way to present tasks and progress through the process to business users.  The ‘work list’ metaphor can make it difficult for users to understand where they are in the overall end-to-end flow of the process.  Without this context it can be difficult for them to give customers or constituents advice about the progress, next steps and expected completion time for the process.

To address this, Oracle created the notion of ‘guided business processes,’ in which process designers define milestones in the BPMN process model and users interact with the process through an alternative user interface called an ‘activity guide’ that tracks progress against those milestones.

[Image: an activity guide for a ‘new hire’ process]

The diagram above shows an example of an activity guide for a ‘new hire’ process.  Activity guides can have quite rich user interfaces and provide a lot more context to the user.

Process-oriented collaboration

Oracle BPM Suite provides the ability for business users to easily create a team space to facilitate collaboration around either a process or even a specific instance of a process. These ‘process spaces’ can be created with the click of a button, in just a few moments, without any need for assistance from IT staff.

The self-service provisioned process spaces are built from templates which can be easily customized to suit your needs and give business users access to information about the process/instance and collaboration tools like presence awareness, instant messaging, email, shared document libraries, threaded discussion forums, lists and shared calendars.

Process spaces, like the example shown below, are a simple and cost effective way of facilitating collaboration amongst communities of interest or project teams. Because they are all stored on the central server, they are easy to manage, backup and search. And the environment will integrate easily with existing directories like LDAP and Active Directory.

[Image: an example process space]

Process Analytics

Oracle BPM Suite includes additional support for automatically generated analytics and dashboards, above and beyond the ‘Monitor Express’ dashboards you may be familiar with. Business users can easily create these dashboards. They can add various charts to display information about the performance of processes they care about.

Business/process analysts can include ‘business indicators’ in their process model.  These can be used to count how often an activity occurs, take note of the value of some instance data, or measure the time between points in the process.

From the business indicators that process modelers include in their models, BPM will automatically create ‘process cubes’ which are star schemas containing ‘real time’ data about the process performance and support OLAP-style reporting and business intelligence using the defined dimensions and measures.

BPM provides a rich set of pre-defined ‘out of the box’ dashboards that can be automatically generated with just a couple of clicks.  The diagram below highlights a business indicator on the process model and a dashboard.  Additionally, you can easily use a more comprehensive and powerful business intelligence tool, like Oracle Business Intelligence or a third party tool, against the process cubes.

[Image: a business indicator highlighted on a process model, alongside a generated dashboard]

Of course, because BPMN processes are part of the SCA composite, just like BPEL processes, you can also send data out to Oracle Business Activity Monitoring if you have a need to monitor larger numbers of processes, include data from other sources as well, and/or create more complex dashboards for larger user communities.  Pre-defined dashboards are also available in BAM.  We will look at BAM in more detail later in this article.

Organisation modeling

BPMN processes are modeled in ‘swim lanes.’ Swim lanes represent participants’ roles in the process. They provide a clear visual representation of who carries out each activity in the process. The roles you define in your process model can then be easily mapped to users or groups in your corporate directory using either static or dynamic membership rules.

‘Business calendars’ can also be defined so that the process engine can understand when people in various roles will be unavailable due to holidays and operating hours. This allows expiration and escalation times specified on activities to be measured in ‘business hours’ rather than arbitrary ‘wall clock’ time, which may produce incorrect results around holidays and weekends. It also allows for handling of process participants in different time zones or shift workers.
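The difference between ‘business hours’ and wall-clock time is easy to demonstrate with a small sketch (Python, purely illustrative – a real business calendar also handles holidays, time zones and shifts):

```python
from datetime import datetime, timedelta

def add_business_hours(start, hours, open_hour=9, close_hour=17):
    """Advance a deadline by working hours, counting only minutes that
    fall on weekdays between open_hour and close_hour. Minute-by-minute
    stepping is inefficient but keeps the sketch obvious."""
    remaining = int(hours * 60)            # business minutes to consume
    current = start
    while remaining > 0:
        current += timedelta(minutes=1)
        if current.weekday() < 5 and open_hour <= current.hour < close_hour:
            remaining -= 1
    return current

# A 2-hour expiration set at 4pm on a Friday (2011-01-07)...
deadline = add_business_hours(datetime(2011, 1, 7, 16, 0), 2)
# ...lands at 10am on Monday 2011-01-10, not 6pm on Friday.
```

In wall-clock terms the same two-hour expiration would fire over the weekend, when nobody is available to act on the escalation.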

Built on a solid foundation

Oracle BPM Suite can be thought of as a layer on top of Oracle SOA Suite – it adds new capabilities, including those discussed already, but it also makes extensive use of the same core components that you would use when building a BPEL process. In fact, there is only actually one process engine which can run both BPEL and BPMN processes.

A key strength of Oracle BPM Suite is the extensive integration capabilities that it inherits from the very solid foundation of Oracle SOA Suite. Let’s take a tour through some of the other similarities to discover the depth of integration.

Oracle SOA Suite, and by extension Oracle BPM Suite, is based on the Service Component Architecture (SCA) standard which provides a language independent way of assembling ‘service components’ to create a ‘composite application.’ The composite is the unit which can be built, deployed, tested and managed. It is built using an assembly diagram like the one shown in the diagram below.

[Image: an SCA composite assembly diagram]

The ‘service components’ can have various implementation styles. They may be BPEL processes, BPMN processes, rules, mediators, human tasks and so on. You can also see references to external components on the right hand side of the composite diagram. These are the various services that are used (consumed) by this composite. They are often provided by JCA adapters or are web services. The lines (called ‘wires’) between the service components indicate usage, not sequence.

Test suites can be defined at the composite level. Test suites are made up of test cases. Test cases can provide simulated inputs and check for the outcome of the composite’s processing. Services can be simulated if necessary, for example if they do not exist yet or if they do not have dedicated accounts or instances available for testing.

Within a composite you can freely mix and match processes that are modeled in BPMN and BPEL. Each can call the other as a sub-process or service.

BPEL and BPMN processes are both first-class citizens in a composite. Both can be exposed as a web service or using other binding styles, both can create and consume human tasks, both can call (consume) business rules, both can use JCA adapters to integrate with external systems, both have synchronous and asynchronous invocation styles.

Both are monitored and managed in exactly the same way. Both use the ‘execution context ID (ECID)’ for instance tracking. This allows you to view details of an instance of a composite and drill down through the instance to see all of the service components involved, regardless of implementation style. You can even drill right down to view the messages sent between them and variables updated in each activity in a process. The diagram below shows an example of drilling down into an instance of a composite and then into a service component in that composite that happens to be a BPMN process. You can see the green highlighting on the process model that tells us where the execution of the process instance is currently.

[Image: drilling down into a composite instance to a BPMN service component, with the current activity highlighted in green]

Both BPEL and BPMN processes can be secured and have logging and auditing policies applied to them using Oracle Web Services Manager, the component of Oracle SOA Suite that is responsible for policy-based management and security of composites.

Oracle Business Activity Monitoring (BAM) is a component of Oracle SOA Suite (and therefore Oracle BPM Suite also) that allows you to create comprehensive dashboards for reporting which are updated in ‘real time.’

BAM is different to the process analytics mentioned earlier in a few key aspects:

  • BAM dashboards can take input from many sources, not just performance metrics attached to processes,
  • They are automatically updated in ‘real time’ by a ‘push’-based update mechanism, i.e. the user does not need to ‘refresh’ them,
  • They can show consolidated metrics across a number of processes, services or other data sources,
  • You can define thresholds and alerts, and
  • You can display data using time series.

[Image: a BAM dashboard]

Oracle SOA Suite also includes a business to business engine called Oracle B2B that supports many common B2B protocols like AS2, EDI and RosettaNet for example. It handles issues like authentication, guaranteed delivery and non-repudiation in the business to business messaging context. B2B integrations manifest as adapter references in a composite and can be wired to BPEL and BPMN processes equally.

JDeveloper provides more and less technical views of process diagrams for both BPEL and BPMN. The less technical view is called a ‘blueprint.’ These can be used to facilitate exchange of models with other process modeling tools.

Footnote

If you are reading this in early 2011, just after the release of 11.1.1.4, then there are a couple of things that may currently be slightly easier to model in BPEL. If you have a process that needs these kinds of capabilities, you might want to consider modeling it in BPEL.

The first is compensation. Compensation is the issuing of ‘reversal’ transactions to undo work that was previously done and committed. A business process can run for a long time (hours, days, even weeks) – far too long to hold a transaction open. BPEL has excellent support for compensation built in to the language and it is easy to model compensation in your processes. This also means that the process engine will know when it is running forwards through a process and when it is compensating.

It is of course possible to build compensation logic into a BPMN process, though the directed graph nature of BPMN can make compensation a little more complicated to define because there are potentially many more cases that you need to cater for.  It is perhaps better to model those parts of your overall business process that may need to be compensated in BPEL and call those BPEL processes from your overall BPMN business level process.

Correlation is another consideration. Correlation is the ability for a process which calls a service asynchronously to identify the corresponding response (‘callback’) from that service. This is especially important in loops or parallel execution, or when many instances of a process will be running concurrently. BPEL provides native correlation set support in the language, which allows you to define the keys used to identify the correct response. BPMN provides correlation through its support of WS-Addressing.

Update: These considerations are moot now that the BPM 11.1.1.5 ‘Feature Pack’ has been released.  See this post for more information.

Acknowledgements

A big thank you to Manoj Das, Robert Patrick, Dave Shaffer and Meera Srinivasan for their time and their most helpful input and suggestions.

Claudio Ivaldi created a presentation using information in this post, and it is available from here.


Improving XQuery performance

XQuery transformations are often used in pipelines in Oracle Service Bus to perform data transformation.  Performance of XQuery transformations is a key area to focus on when performance tuning an Oracle Service Bus environment.

During performance testing of your project, you should use the activity timing metrics to identify any XQuery performance issues.  The recommended approach is to measure the average time taken to run each query, then multiply this by the number of times you expect that query to be used over some defined time period, e.g. one hour.  Sort the results from largest to smallest, start at the top of the list, and optimize as many of the queries as necessary to obtain the required performance – but only where a significant gain is likely.  Optimization is usually a matter of looking for opportunities to reorder or modify statements in order to obtain faster execution.
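The prioritisation step described above is simple arithmetic. As a sketch (in Python, with made-up query names and measurements), ranking queries by projected total cost per hour might look like this:

```python
def rank_queries(metrics):
    """Rank transformations by projected total time per hour.

    metrics: iterable of (name, avg_millis, calls_per_hour) tuples, as
    you would collect them from the activity timing metrics. Returns
    (name, total_millis) pairs, largest first - the top entries are
    the optimisation candidates.
    """
    totals = [(name, avg * calls) for name, avg, calls in metrics]
    return sorted(totals, key=lambda t: t[1], reverse=True)

# Hypothetical measurements: a cheap query called very often can
# outrank an expensive query called rarely.
ranked = rank_queries([
    ("OrderToInvoice", 50.0, 120),     # 6,000 ms/hour
    ("AddressLookup", 2.0, 40000),     # 80,000 ms/hour
    ("AuditSnapshot", 400.0, 4),       # 1,600 ms/hour
])
# ranked[0][0] == "AddressLookup"
```

The point of weighting by call volume is that optimising the frequently-executed `AddressLookup` buys far more total throughput than optimising the slower but rarely-run `AuditSnapshot`.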

XQuery performance should be tested with large payloads whenever possible, or at least with many invocations of the same transformation and results averaged.

Some general guidelines for improving the performance of XQuery transformations are as follows:

  • Avoid the use of double slashes (“//”) at the beginning of XPath expressions.  They should only be used if the exact location of a node is not known at design time.  Use of “//” will force the entire payload to be read and parsed.
  • Index XPath expressions where applicable.  For example, if you know that there is only one “Order” and only one “Address” then using an XPath expression like “$body/Order[1]/Address[1]” instead of “$body/Order/Address” will minimize the amount of the payload that needs to be parsed.  Do not use this approach if the expected return value is a list of nodes.
  • Extract frequently used parts of a large XML document as intermediate variables.  This will consume more memory, but will reduce redundant XPath processing.  For example:
    let $customer := $body/Order[1]/CustomerInfo[1]
    return ($customer/ID, $customer/Status)

Visualising Garbage Collection in the JVM

Recently, I have been working with a number of customers on JVM tuning exercises.  It seems that there is not widespread knowledge amongst developers and administrators about how garbage collection works, and how the JVM uses memory.  So, I decided to write a very basic introduction and an example that will let you see it happening in real time!  This post does not try to cover everything about garbage collection or JVM tuning – that is a huge area, and there are some great resources on the web already, only a Google away.

This post is about the HotSpot JVM – that’s the ‘normal’ JVM from Oracle (previously Sun).  It is the one you would most likely use on Windows.  If you are using a Linux variant that errs on the side of free software (like Ubuntu), you might have an open source JVM.  Or if your JVM came with another product, like WebLogic, you may even have the JRockit JVM from Oracle (formerly BEA).  And then there are other JVMs from IBM, Apple and others.  Most of these other JVMs work in a similar way to HotSpot, with the notable exception of JRockit, which handles memory differently, and does not have a separate Permanent Generation (see below) for example.

First, let’s take a look at the way the JVM uses memory.  There are two main areas of memory in the JVM – the ‘Heap’ and the ‘Permanent Generation.’  In the diagram below, the permanent generation is shown in green.  The remainder (to the left) is the heap.

The Permanent Generation

The permanent generation is used only by the JVM itself, to keep data that it requires; you cannot place application data there directly.  One of the things the JVM uses this space for is keeping metadata about the classes you load and the objects you create.  So the more classes your application loads and the more objects it creates, the more room you need in the permanent generation.

The size of the permanent generation is controlled by two JVM parameters: -XX:PermSize sets the minimum, or initial, size of the permanent generation, and -XX:MaxPermSize sets the maximum size.  When running large Java applications, we often set these two to the same value, so that the permanent generation is created at its maximum size up front.  This can improve performance because resizing the permanent generation is an expensive (time consuming) operation.  Setting both parameters to the same size saves the JVM the work of deciding whether to resize the permanent generation, and of actually resizing it.

The Heap

The heap is the main area of memory.  This is where all of your objects will be stored.  The heap is further divided into the ‘Old Generation’ and the ‘New Generation.’  The new generation in turn is divided into ‘Eden’ and two ‘Survivor’ spaces.

The size of the heap is also controlled by JVM parameters.  You can see on the diagram above that the heap size is -Xms at minimum and -Xmx at maximum.  Additional parameters control the sizes of the various parts of the heap.  We will see one of those later on; the others are beyond the scope of this post.
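Incidentally, you can see these spaces from inside a running program using the standard java.lang.management API.  This sketch just lists each memory pool; the exact pool names (e.g. ‘Par Eden Space’ or ‘PS Old Gen’) depend on which collector the JVM has chosen:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPools {
    public static void main(String[] args) {
        // One MemoryPoolMXBean per space: eden, the survivor spaces,
        // the old generation and the permanent generation.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-20s used=%,d max=%,d%n",
                pool.getName(), usage.getUsed(), usage.getMax());
        }
    }
}
```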

When you create an object, e.g. when you say byte[] data = new byte[1024], that object is created in the area called Eden – new objects are always allocated in Eden first.  In addition to the data for the byte array, there will also be a reference (pointer) for ‘data.’

The following explanation has been simplified for the purposes of this post.  When you want to create a new object, and there is not enough room left in eden, the JVM will perform ‘garbage collection.’  This means that it will look for any objects in memory that are no longer needed and get rid of them.

Garbage collection is great!  If you have ever programmed in a language like C or Objective-C, you will know that managing memory yourself is somewhat tedious and error prone.  Having the JVM automatically find unused objects and get rid of them for you makes writing code much simpler and saves a lot of time debugging.  If you have never used a language that does not have garbage collection – you might want to go write a C program – it will certainly help you to appreciate what you are getting from your language for free!

There are in fact a number of different algorithms that the JVM may use to do garbage collection.  You can control which algorithms are used by changing the JVM parameters.
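One way to see which collectors your JVM actually picked is the management API again – each collector registers a GarbageCollectorMXBean.  The names reported (e.g. ‘ParNew’ or ‘ConcurrentMarkSweep’) vary with the flags and JVM version:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowCollectors {
    public static void main(String[] args) {
        // Typically one bean for the new generation collector
        // and one for the old generation collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                + ": collections=" + gc.getCollectionCount()
                + ", time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```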

Let’s take a look at an example.  Suppose we do the following:

String a = "hello";
String b = "apple";
String c = "banana";
String d = "apricot";
String e = "pear";
//
// do some other things
//
a = null;
b = null;
c = null;
e = null;

This will cause five objects to be created, or ‘allocated,’ in eden, as shown by the five yellow boxes in the diagram below.  After we have done ‘some other things,’ we free a, b, c and e – by setting the references to null.  Assuming there are no other references to these objects, they will now be unused.  They are shown in red in the second diagram.  We are still using String d, it is shown in green.

If we try to allocate another object, the JVM will find that eden is full, and that it needs to perform garbage collection.  The simplest garbage collection algorithm is called ‘Copy Collection.’  It works as shown in the diagram above.  In the first phase (‘Mark’) it will mark (illustrated by red colour) the unused objects.  In the second phase (‘Copy’) it will copy the objects we still need (i.e. d) into a ‘survivor’ space – the little box on the right.  There are two survivor spaces, and they are smaller than eden.  Now that all the objects we want to keep are safe in the survivor space, it can simply delete everything in eden, and it is done.

This kind of garbage collection creates something known as a ‘stop the world’ pause.  While the garbage collection is running, all other threads in the JVM are paused.  This is necessary so that no thread tries to change memory after we have copied it, which would cause us to lose the change.  This is not a big problem in a small application, but if we have a large application, say with an 8GB heap, then it could actually take a significant amount of time to run this algorithm – seconds or even minutes.  Having your application stop for a few minutes every now and then is not acceptable for many applications.  That is why other garbage collection algorithms exist and are often used.  Copy Collection works well when there is a relatively large amount of garbage and a small number of live objects.

In this post, we will just discuss two of the commonly used algorithms.  For those who are interested, there is plenty of information available online and several good books if you want to know more!

The second garbage collection algorithm we will look at is called ‘Mark-Sweep-Compact Collection.’  This algorithm uses three phases.  In the first phase (‘Mark’), it marks the unused objects, shown below in red.  In the second phase (‘Sweep’), it deletes those objects from memory.  Notice the empty slots in the diagram below.  Then in the final phase (‘Compact’), it moves objects to ‘fill up the gaps,’ thus leaving the largest amount of contiguous memory available in case a large object is created.
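To make those three phases concrete, here is a toy model of mark-sweep-compact over an array standing in for the heap.  This is only an illustration of the idea – a real collector works on object graphs and raw memory, not arrays of strings:

```java
import java.util.Arrays;

public class MarkSweepCompactDemo {

    public static void main(String[] args) {
        // A toy "heap" of object slots; null means the slot is free.
        String[] heap = { "a", "b", "c", "d", "e" };
        // Mark phase result: which slots are still reachable (only "d" here).
        boolean[] marked = { false, false, false, true, false };

        // Sweep: delete every unmarked object, leaving gaps in the heap.
        for (int i = 0; i < heap.length; i++) {
            if (!marked[i]) heap[i] = null;
        }

        // Compact: slide survivors to the front so free space is contiguous.
        int next = 0;
        for (int i = 0; i < heap.length; i++) {
            if (heap[i] != null) {
                heap[next++] = heap[i];
                if (next - 1 != i) heap[i] = null;
            }
        }

        System.out.println(Arrays.toString(heap)); // [d, null, null, null, null]
    }
}
```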

So far this is all theoretical – let’s take a look at how this actually works with a real application.  Fortunately, the JDK includes a nice visual tool for watching the behaviour of the JVM in ‘real time.’  This tool is called jvisualvm.  You should find it right there in the bin directory of your JDK installation.  We will use that a little later, but first, let’s create an application to test.

I used Maven to create the application and manage the builds and dependencies and so on.  You don’t need to use Maven to follow this example.  You can go ahead and type in the commands to compile and run the application if you prefer.

I created a new project using the Maven archetype generate goal:

mvn archetype:generate
  -DarchetypeGroupId=org.apache.maven.archetypes
  -DgroupId=com.redstack
  -DartifactId=memoryTool

I took type 98 – for a simple JAR – and the defaults for everything else.  Next, I changed into my memoryTool directory and edited my pom.xml as shown below.  I just added the part shown in red.  That will allow me to run my application directly from Maven, passing in some memory configuration and garbage collection logging parameters.

<project xmlns="http://maven.apache.org/POM/4.0.0" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.redstack</groupId>
  <artifactId>memoryTool</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>memoryTool</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <configuration>
          <executable>java</executable>
          <arguments>
            <argument>-Xms512m</argument>
            <argument>-Xmx512m</argument>
            <argument>-XX:NewRatio=3</argument>
            <argument>-XX:+PrintGCTimeStamps</argument>
            <argument>-XX:+PrintGCDetails</argument>
            <argument>-Xloggc:gc.log</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>com.redstack.App</argument>
          </arguments>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

If you prefer not to use Maven, you can start the application using the following command:

java -Xms512m -Xmx512m -XX:NewRatio=3 
  -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
  -Xloggc:gc.log -classpath <whatever>
  com.redstack.App

The switches are telling the JVM the following:

  • -Xms sets the initial/minimum heap size to 512 MB
  • -Xmx sets the maximum heap size to 512 MB
  • -XX:NewRatio sets the size of the old generation to three times the size of the new generation
  • -XX:+PrintGCTimeStamps, -XX:+PrintGCDetails and -Xloggc:gc.log cause the JVM to print additional information about garbage collection into a file called gc.log
  • -classpath tells the JVM where to look for your program
  • com.redstack.App is the name of the main class to execute

I have chosen these options so that you can see pretty clearly what is going on and you won’t need to spend all day creating objects to make something happen!

Here is the code in that main class.  This is a simple program that will allow us to create objects and throw them away easily, so we can understand how much memory we are using, and watch what the JVM does with it.

package com.redstack;

import java.io.*;
import java.util.*;

public class App {

  private static List<byte[]> objects = new ArrayList<byte[]>();
  private static boolean cont = true;
  private static String input;
  private static BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

  public static void main(String[] args) throws Exception {
    System.out.println("Welcome to Memory Tool!");

    while (cont) {
      System.out.println(
        "\n\nI have " + objects.size() + " objects in use, about " +
        (objects.size() * 10) + " MB." +
        "\nWhat would you like me to do?\n" +
        "1. Create some objects\n" +
        "2. Remove some objects\n" +
        "0. Quit");
      input = in.readLine();
      if ((input != null) && (input.length() >= 1)) {
        if (input.startsWith("0")) cont = false;
        if (input.startsWith("1")) createObjects();
        if (input.startsWith("2")) removeObjects();
      }
    }

    System.out.println("Bye!");
  }

  private static void createObjects() {
    System.out.println("Creating objects...");
    for (int i = 0; i < 2; i++) {
      objects.add(new byte[10 * 1024 * 1024]);
    }
  }

  private static void removeObjects() {
    System.out.println("Removing objects...");
    int start = objects.size() - 1;
    int end = start - 2;
    for (int i = start; ((i >= 0) && (i > end)); i--) {
      objects.remove(i);
    }
  }
}

If you are using Maven, you can build, package and execute this code using the following command:

mvn package exec:exec

Once you have this compiled and ready to go, start it up, and fire up jvisualvm as well.  You might like to arrange your screen so you can see both, as shown in the image below.  If you have never used JVisualVM before, you will need to install the Visual GC plugin: select Plugins from the Tools menu, open the Available Plugins tab, place a tick next to the entry for Visual GC, then click on the Install button.  You may need to restart JVisualVM afterwards.

Back in the main panel, you should see a list of JVM processes.  Double click on the one running your application, com.redstack.App in this example, and then open the Visual GC tab.  You should see something like what is shown below.

Notice that you can visually see the permanent generation, the old generation and eden and the two survivor spaces (S0 and S1).  The coloured bars indicate memory in use.  On the right hand side, you can also see a historical view that shows you when the JVM spent time performing garbage collections, and the amount of memory used in each space over time.

In your application window, start creating some objects (by selecting option 1).  Watch what happens in Visual GC.  Notice how the new objects always get created in eden.  Now throw away some objects (option 2).  You will probably not see anything happen in Visual GC.  That is because the JVM will not clean up that space until a garbage collection is performed.

To make it do a garbage collection, create some more objects until eden is full.  Notice what happens when you do this.  If there is a lot of garbage in eden, you should see the objects in eden move to a survivor space.  However, if eden had little garbage, you will see the objects in eden move to the old generation.  This happens when the objects you need to keep are bigger than the survivor space.

Notice as well that the permanent generation grows slowly as you create new objects.

Try almost filling eden – don’t fill it completely – then throw away almost all of your objects, keeping just 20MB.  This will mean that eden is mostly full of garbage.  Then create some more objects.  This time you should see the surviving objects in eden move into the survivor space.

Now, let’s see what happens when we run out of memory.  Keep creating objects until you have around 460MB.  Notice that both eden and the old generation are nearly full.  Create a few more objects.  When there is no more space left, your application will crash with a java.lang.OutOfMemoryError.  You might have seen these before and wondered what causes them – especially if you have a lot more physical memory on your machine, you may have wondered how you could possibly be ‘out of memory’ – now you know!  If you happen to fill up your permanent generation (which will be pretty difficult to do in this example) you would get a different OutOfMemoryError telling you that PermGen space was full.
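If you want to keep an eye on how close the heap is to that point from inside your own code, the standard Runtime API gives you rough numbers.  A small sketch (the 90% threshold is an arbitrary choice for illustration):

```java
public class HeapWatcher {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                    // roughly the -Xmx value
        long used = rt.totalMemory() - rt.freeMemory();
        double percentUsed = 100.0 * used / max;

        System.out.printf("Heap: %,d of %,d bytes used (%.1f%%)%n",
            used, max, percentUsed);

        // An arbitrary warning threshold for this example.
        if (percentUsed > 90.0) {
            System.out.println("Warning: heap nearly full");
        }
    }
}
```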

Finally, another way to look at this data is in that garbage collection log we asked for.  Here are the first few lines from one run on my machine:

13.373: [GC 13.373: [ParNew: 96871K->11646K(118016K), 0.1215535 secs] 96871K->73088K(511232K), 0.1216535 secs] [Times: user=0.11 sys=0.07, real=0.12 secs]
16.267: [GC 16.267: [ParNew: 111290K->11461K(118016K), 0.1581621 secs] 172732K->166597K(511232K), 0.1582428 secs] [Times: user=0.16 sys=0.08, real=0.16 secs]
19.177: [GC 19.177: [ParNew: 107162K->10546K(118016K), 0.1494799 secs] 262297K->257845K(511232K), 0.1495659 secs] [Times: user=0.15 sys=0.07, real=0.15 secs]
19.331: [GC [1 CMS-initial-mark: 247299K(393216K)] 268085K(511232K), 0.0007000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.332: [CMS-concurrent-mark-start]
19.355: [CMS-concurrent-mark: 0.023/0.023 secs] [Times: user=0.01 sys=0.01, real=0.02 secs]
19.355: [CMS-concurrent-preclean-start]
19.356: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.356: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 24.417: [CMS-concurrent-abortable-preclean: 0.050/5.061 secs] [Times: user=0.10 sys=0.01, real=5.06 secs]
24.417: [GC[YG occupancy: 23579 K (118016 K)]24.417: [Rescan (parallel) , 0.0015049 secs]24.419: [weak refs processing, 0.0000064 secs] [1 CMS-remark: 247299K(393216K)] 270878K(511232K), 0.0016149 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.419: [CMS-concurrent-sweep-start]
24.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.420: [CMS-concurrent-reset-start]
24.422: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.711: [GC [1 CMS-initial-mark: 247298K(393216K)] 291358K(511232K), 0.0017944 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
24.713: [CMS-concurrent-mark-start]
24.755: [CMS-concurrent-mark: 0.040/0.043 secs] [Times: user=0.08 sys=0.00, real=0.04 secs]
24.755: [CMS-concurrent-preclean-start]
24.756: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.756: [CMS-concurrent-abortable-preclean-start]
25.882: [GC 25.882: [ParNew: 105499K->10319K(118016K), 0.1209086 secs] 352798K->329314K(511232K), 0.1209842 secs] [Times: user=0.12 sys=0.06, real=0.12 secs]
26.711: [CMS-concurrent-abortable-preclean: 0.018/1.955 secs] [Times: user=0.22 sys=0.06, real=1.95 secs]
26.711: [GC[YG occupancy: 72983 K (118016 K)]26.711: [Rescan (parallel) , 0.0008802 secs]26.712: [weak refs processing, 0.0000046 secs] [1 CMS-remark: 318994K(393216K)] 391978K(511232K), 0.0009480 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]

You can see from this log what was happening in the JVM.  Notice it shows that the Concurrent Mark-Sweep (CMS) collector was being used.  You can see when the different phases ran.  Also, near the bottom, notice it is showing us the ‘YG’ (young generation) occupancy.

You can leave those same three settings on in production environments to produce this log.  There are even some tools available that will read these logs and show you what was happening visually.
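If you just want a quick summary without a full tool, the minor collection lines are regular enough to parse with a small program.  This sketch matches the ParNew entries shown above; GC log formats differ between JVM versions and collectors, so treat the regular expression as an assumption to adapt:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogParser {

    // Matches entries like:
    // "[ParNew: 96871K->11646K(118016K), 0.1215535 secs]"
    private static final Pattern PAR_NEW = Pattern.compile(
        "\\[ParNew: (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs\\]");

    public static void main(String[] args) {
        String line = "13.373: [GC 13.373: [ParNew: 96871K->11646K(118016K), "
                    + "0.1215535 secs] 96871K->73088K(511232K), 0.1216535 secs]";
        Matcher m = PAR_NEW.matcher(line);
        if (m.find()) {
            long beforeK = Long.parseLong(m.group(1));  // young gen before GC
            long afterK = Long.parseLong(m.group(2));   // young gen after GC
            double pauseSecs = Double.parseDouble(m.group(4));
            System.out.printf("young gen %dK -> %dK, pause %.4fs%n",
                beforeK, afterK, pauseSecs);
        }
    }
}
```

In practice you would read gc.log line by line and accumulate totals, e.g. total pause time per hour.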

Well, that was a short, and by no means exhaustive, introduction to some of the basic theory and practice of JVM garbage collection.  Hopefully the example application helped you to clearly visualise what happens inside the JVM as your applications run.

Thanks to Rupesh Ramachandran who taught me many of the things I know about JVM tuning and garbage collection.


Simple JMS client in Scala

In a small departure from our normal Java oriented examples, this post shows how to send a JMS message from Scala.  It is basically a port of the simple JMS client found in this post.

This example is written to run against WebLogic Server 10.3.2.  Details of the WebLogic Server configuration necessary to run this code can be found in that same post referred to above.  You can also find a C# JMS client here.

Update: You can grab this code from our Subversion repository:
svn checkout https://www.samplecode.oracle.com/svn/jmsclients/trunk

Here is the code:

  1 import java.util.{Hashtable => JHashtable}
  2 import javax.naming._
  3 import javax.jms._
  4
  5 object SimpleJMSClient {
  6
  7   val DEFAULT_QCF_NAME = "jms/MarksConnectionFactory"
  8   val DEFAULT_QUEUE_NAME = "jms/MarksQueue"
  9   val DEFAULT_URL = "t3://localhost:7101"
 10   val DEFAULT_USER = "weblogic"
 11   val DEFAULT_PASSWORD =  "weblogic"
 12
 13   def sendMessage(theMessage: String) {
 14     sendMessage(
 15       url = DEFAULT_URL,
 16       user = DEFAULT_USER,
 17       password = DEFAULT_PASSWORD,
 18       cf = DEFAULT_QCF_NAME,
 19       queue = DEFAULT_QUEUE_NAME,
 20       messageText = theMessage)
 21   }
 22
 23   def sendMessage(url : String, user : String, password : String,
 24                   cf : String, queue : String, messageText : String) {
 25     // create InitialContext
 26     val properties = new JHashtable[String, String]
 27     properties.put(Context.INITIAL_CONTEXT_FACTORY,
 28                    "weblogic.jndi.WLInitialContextFactory")
 29     properties.put(Context.PROVIDER_URL, url)
 30     properties.put(Context.SECURITY_PRINCIPAL, user)
 31     properties.put(Context.SECURITY_CREDENTIALS, password)
 32
 33     try {
 34       val ctx = new InitialContext(properties)
 35       println("Got InitialContext " + ctx.toString())
 36
 37       // create QueueConnectionFactory
 38       val qcf = (ctx.lookup(cf)).asInstanceOf[QueueConnectionFactory]
 39       println("Got QueueConnectionFactory " + qcf.toString())
 40
 41       // create QueueConnection
 42       val qc = qcf.createQueueConnection()
 43       println("Got QueueConnection " + qc.toString())
 44
 45       // create QueueSession
 46       val qsess = qc.createQueueSession(false, 0)
 47       println("Got QueueSession " + qsess.toString())
 48
 49       // lookup Queue
 50       val q = (ctx.lookup(queue)).asInstanceOf[Queue]
 51       println("Got Queue " + q.toString())
 52
 53       // create QueueSender
 54       val qsndr = qsess.createSender(q)
 55       println("Got QueueSender " + qsndr.toString())
 56
 57       // create TextMessage
 58       val message = qsess.createTextMessage()
 59       println("Got TextMessage " + message.toString())
 60
 61       // set message text in TextMessage
 62       message.setText(messageText)
 63       println("Set text in TextMessage " + message.toString())
 64
 65       // send message
 66       qsndr.send(message)
 67       println("Sent message ")
 68
 69     } catch {
 70       case ne : NamingException =>
 71         ne.printStackTrace(System.err)
 72         System.exit(0)
 73       case jmse : JMSException =>
 74         jmse.printStackTrace(System.err)
 75         System.exit(0)
 76       case _ =>
 77         println("Got other/unexpected exception")
 78         System.exit(0)
 79     }
 80   }
 81
 82   def main(args: Array[String]) = {
 83     sendMessage(
 84       theMessage = "hello from Scala sendMessage() with 1 arg"
 85     )
 86     sendMessage(
 87       url =        "t3://localhost:7101",
 88       user =       "weblogic",
 89       password =   "weblogic",
 90       cf =         "jms/MarksConnectionFactory",
 91       queue =      "jms/MarksQueue",
 92       messageText = "hello from Scala sendMessage() with 6 args"
 93     )
 94   }
 95
 96 }

Note that on line 26, we need to say JHashtable[String, String] to make this (old) Java API work in Scala.

To compile and run this code, you will need to put the wlthint3client.jar on your classpath, as shown below:

scala -classpath ~/Oracle/Middleware/wlserver_10.3/server/lib/wlthint3client.jar:. SimpleJMSClient

The output should look a little like this (if everything works!):

Got InitialContext javax.naming.InitialContext@435db13f
Got QueueConnectionFactory weblogic.jms.client.JMSConnectionFactory@21ed5459
Got QueueConnection weblogic.jms.client.WLConnectionImpl@41759d12
Got QueueSession weblogic.jms.client.WLSessionImpl@491cc367
Got Queue Module1!MarksQueue
Got QueueSender weblogic.jms.client.WLProducerImpl@72813bc1
Got TextMessage TextMessage[null, null]
Set text in TextMessage TextMessage[null, hello from sendMessage() with 1 arg]
Sent message
Got InitialContext javax.naming.InitialContext@5c2bfdff
Got QueueConnectionFactory weblogic.jms.client.JMSConnectionFactory@465ff916
Got QueueConnection weblogic.jms.client.WLConnectionImpl@5374a6e2
Got QueueSession weblogic.jms.client.WLSessionImpl@f786a3c
Got Queue Module1!MarksQueue
Got QueueSender weblogic.jms.client.WLProducerImpl@689e8c34
Got TextMessage TextMessage[null, null]
Set text in TextMessage TextMessage[null, hello from sendMessage() with 6 args]
Sent message

And, as shown in the referenced post, you can go into the WebLogic Server console and read the messages!

This code shows how we can create a nice, simple API to send a JMS message.  Combined with the native XML support in Scala, this could provide particularly clean and compact code for sending XML messages with changing fields, as we might want to do when writing load simulators.

This would be useful for Business Activity Monitoring or Oracle Service Bus, where we may want to send similar messages repeatedly, with only small changes to some fields, e.g. incrementing order numbers or updating status fields.
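The templating idea is simple enough to sketch in plain Java as well.  The message shape and field names below are invented for illustration – in a real load simulator you would hand the generated text to the JMS sender shown earlier:

```java
public class OrderMessageTemplate {

    // A hypothetical order message; only the order number and status
    // change from message to message.
    private static final String TEMPLATE =
        "<order><number>%d</number><status>%s</status></order>";

    public static String nextMessage(int orderNumber, String status) {
        return String.format(TEMPLATE, orderNumber, status);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 3; i++) {
            String body = nextMessage(1000 + i, "NEW");
            System.out.println(body);
            // In a real client: queueSender.send(session.createTextMessage(body));
        }
    }
}
```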

Note: This content is also posted here in a slightly different treatment, for a different audience.


Configuring Maven to run your Java application

Recently I was working on a project using Maven, and I really wanted to be able to run the project easily without needing to worry about all the classpath entries.

Turns out it is relatively easy to set up Maven to run your project for you and to automatically handle providing the right classpath for your code and all the dependencies.  Here’s how:

I created a simple Maven JAR project using an archetype as shown below:

$ mvn archetype:create 
  -DarchetypeGroupId=org.apache.maven.archetypes 
  -DgroupId=com.redstack 
  -DartifactId=myproject

Then I edited the pom.xml file in the myproject directory.  I added the entries shown in red.  These tell Maven to compile the project using Java 1.6, and the name of the main class for the project.

<project xmlns="http://maven.apache.org/POM/4.0.0" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.redstack</groupId>
  <artifactId>myproject</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>myproject</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <configuration>
          <mainClass>com.redstack.App</mainClass>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Having done this, I can then build and then run the project by simply typing these two commands:

$ mvn package
$ mvn exec:java
...
Hello World!
...

There will be a bunch of Maven messages too, but in the middle there you can see the output from the project – “Hello World!” in this case.  This example is just running the App.java that was generated by Maven.  In your project, this class might start up a user interface, or run any number of tasks.  It probably does a little more than print “Hello World!”

[Updated Jan 7, 2011] This approach will run your application in the same process that Maven is running in, which may or may not be acceptable.  If you want to run it in a different process, you can modify the plugin configuration section to something more like this:

     <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <configuration>
          <executable>java</executable>
          <arguments>
            <argument>-Xms512m</argument>
            <argument>-Xmx512m</argument>
            <argument>-XX:NewRatio=3</argument>
            <argument>-XX:+PrintGCTimeStamps</argument>
            <argument>-XX:+PrintGCDetails</argument>
            <argument>-Xloggc:gc.log</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>com.redstack.App</argument>
          </arguments>
        </configuration>
      </plugin>

Then use the goal:

mvn exec:exec

This will execute the JVM in a new process and allow you to pass in whatever arguments you like.  Notice the empty classpath tag.  This will insert the correct runtime classpath for you based on the dependencies in the pom.xml.


Changing the port that WebLogic listens on (using WLST)

Occasionally I need to change the port that my WebLogic AdminServer is listening on, without actually starting up the server.  Usually this is because I have just created a new domain, and I forgot to change the port in the Domain Configuration Wizard (oops!), and now I can’t start the server up because there is already another one using that port.

One way to get around this is to use the WebLogic Scripting Tool (WLST) to change the listening port.

Start WLST by running:

/home/mark/Oracle/Middleware/wlserver_10.3/common/bin/wlst.sh

When you start up WLST, it scans all of the JAR files on its classpath, so it can take a few moments to start up.

The commands below are used to navigate to and change the listening port setting, in this case to 7777.

wls:/offline> readDomain ('/home/mark/Oracle/Middleware/user_projects/domains/base_domain')
wls:/offline/base_domain> cd ('Server')
wls:/offline/base_domain/Server> ls ()
drw- AdminServer
wls:/offline/base_domain/Server> cd ('AdminServer')
wls:/offline/base_domain/Server/AdminServer> ls ()
...
-rw- ListenPort 7001
...
wls:/offline/base_domain/Server/AdminServer> set ('ListenPort',7777)
wls:/offline/base_domain/Server/AdminServer> updateDomain ()
wls:/offline/base_domain/Server/AdminServer> exit ()

You can now start up your AdminServer and it will be listening on port 7777.

This process actually just updates the config.xml in the domain’s config directory.  It adds a <listenPort>7777</listenPort> entry in the AdminServer configuration.  Take a look at the file before and after to see what it does.

You can, of course, just go ahead and edit the file directly.  That is certainly faster, but not quite as instructive!  In WLST, using the ls (list) command, you can see all of the other available settings.  Many of these are not in the config.xml file (as they are defaults).  Looking around in WLST can help you discover a lot of the options that are available for configuration.


Using WebLogic as a Load Balancer

Recently, I was working with a customer who was developing an application on Windows (developer) machines and was planning to deploy to a (production) cluster of WebLogic Servers running on Solaris, with a hardware load balancer.  In order to do some functional, load and availability testing of the application before deploying it into production, they needed to set up a cluster in their test environment, but they did not have access to a hardware load balancer in the test environment.

This is a common scenario for many development projects.  There are a number of good options available to set up a software load balancer in the test environment.  In this post, we will explore one such option – using the HTTP Cluster Servlet that is included with WebLogic Server.

In this post, we are using 64-bit WebLogic Server 10.3.3 running on 64-bit JRockit 1.6.0 on 64-bit Ubuntu 10.10.  We also use Maven 2.2.1 in this post.  The use of Maven is incidental to the purpose of the post, but since we use it often, we have included it here.  If you don’t use Maven, you can just create the necessary files in the correct directory structure and manually create your WAR files.

Thanks to Sushil Shukla and Robert Patrick for assistance in preparing this post.

We start with a freshly installed WebLogic Server 10.3.3.  Our first step is to create a WebLogic domain.  Our domain is going to contain four servers.

  • Firstly, the AdminServer which is used to run the administration console and to manage deployments and so on.
  • Secondly, we will run two managed servers that will run our web application.  These two servers will be clustered.
  • Thirdly, we will run another managed server which will be our load balancer.  This managed server will not be part of the cluster.
  • Finally, we will use the Node Manager to start and stop all of our servers.

Let’s use the WebLogic Domain Configuration tool to create our domain:

$ cd ~/Oracle/Middleware/wlserver_10.3/common/bin
$ ./config.sh

After a few moments, the Configuration Wizard will appear.  Select the option to Create a new WebLogic domain and click on Next.

For this example, we don’t need to select any of the options on the Select Domain Source page, we can just click on Next to continue.

On the Specify Domain Name and Location screen, we provide a name for our domain.  I took the default, base_domain, and clicked on Next.

Now, we provide the password for the weblogic administrative user.  Then click on Next.  You will need to remember this password.

For our example, we can leave the servers in Development Mode.  Choose your JDK (we use and recommend JRockit for 64-bit Linux environments) and click on Next.

On the Select Optional Configuration page, we need to tick the checkbox for Managed Servers, Clusters and Machines.  This will cause the configuration wizard to display some extra screens so we can set up our servers and clusters.  Click on Next to continue.

On the Configure Managed Servers page, click on the Add button three times to create three managed servers.  Enter names for the servers, as shown below.  I called mine server1, server2 and loadBalancer.  Note the listen ports that are assigned to each server.  We will need to know these later on.  Click on Next when you are ready to continue.

On the Configure Clusters page, click on the Add button to add a cluster, and give it a name.  I called mine cluster1.  Then click on Next to continue.

On the Assign Servers to Clusters page, highlight server1 and server2 and add each of them to cluster1 using the right arrow button.  When you are done, your screen should look like the image below.  Note that the loadBalancer server has not been added to the cluster.  Click Next when you are ready.

In this example, we are going to just click Next on the Create HTTP Proxy Applications screen.

On the Configure Machines page, click on the Add button to add a new machine and give it a name.  I called mine machine1.  The “machine” in WebLogic terms represents a single physical (or virtual) machine where one or more managed servers will run.  There is a special process, called the Node Manager, that runs on each machine and allows us to start and stop managed servers from the administration console without needing to log on to the actual machine.  Click on Next to continue.

On the Assign Servers to Machines page, add all four servers to machine1.  In our example, we have everything running on the same machine.  It is also possible to use multiple machines if you have them available.  In that case, just define however many machines you have, run the Node Manager on each one (we will come to this later), and assign your managed servers to whichever machine you want them to run on.  Click on Next to continue.

The Configuration Summary is displayed.  Click on Create to create your domain.

While the domain is being created, you can watch the progress.

After a few moments (or minutes, depending on your machine) the domain creation will be completed.  Click on Done to close the wizard.

Now we can start up the Node Manager, which we will use to start and stop our managed servers.

$ cd ~/Oracle/Middleware/wlserver_10.3/server/bin
$ ./startNodeManager.sh

After a few moments, the log will show some messages to let us know that the Node Manager is running:

<20/12/2010 8:26:15 AM> <INFO> <Secure socket listener started on port 5556>
20/12/2010 8:26:15 AM weblogic.nodemanager.server.SSLListener run
INFO: Secure socket listener started on port 5556

Now we can start the AdminServer.  In another terminal window, execute these commands:

$ cd ~/Oracle/Middleware/user_projects/domains/base_domain
$ ./startWebLogic.sh

After a minute or two, the log will show us some messages to let us know that the AdminServer has started:

<20/12/2010 8:28:23 AM EST> <Notice> <WebLogicServer> <BEA-000331> <Started WebLogic Admin Server "AdminServer" for domain "base_domain" running in Development Mode>
<20/12/2010 8:28:23 AM EST> <Warning> <Server> <BEA-002611> <Hostname "mubuntu", maps to multiple IP addresses: 127.0.1.1, 172.16.95.131, 0:0:0:0:0:0:0:1>
<20/12/2010 8:28:23 AM EST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING>
<20/12/2010 8:28:23 AM EST> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>

We can now log on to the WebLogic Administration Console to start the other servers.  Point your browser to http://yourserver:7001/console and log in using the weblogic user and the password you specified during the domain creation wizard a few minutes ago.

Here we see the WebLogic Console.  Click on the Servers link in the Environment section, or alternatively, you can expand the Environment tree in the Domain Structure on the left and click on Servers in the tree.

You will see a list of the managed servers we just created.  Note that they are all on machine1 and that server1 and server2 are in cluster1.  Click on the Control tab (at the top) to switch to control (not configuration) mode.

Select the three managed servers by ticking their checkboxes (as shown below) and then click on the Start button to start them up.

You can click on the little circular arrows icon just above the table of servers so that the table is refreshed automatically.  The first time you start the managed servers, it does take a little longer.  Depending on how fast your machine is, this may take a couple of minutes, or perhaps as long as 5-10 minutes.

When you see that all of the servers are running (as below) we are ready to move on.

We are going to need an application to test our solution with, so let’s develop a very simple web application that will simply display the name of the server it is running on.  This will help us know that our load balancer is actually sending some requests to each of the managed servers.

I used Maven 2.2.1 to create and package my web application.  If you don’t use Maven in your environment (you might want to check it out) then you can just create the necessary files and put them in the right directories and then manually build the war file.

First, we create a project for our web application.  Here we are telling Maven to use the webapp “archetype” which is essentially a template to create the application and a bunch of rules about how to compile it and build and package it and so on.  There is plenty of good material on the web about Maven.  If you are not familiar with it, you might want to read some of the introductory material.  Sonatype have some free online books here that are a good start.

Enter this command all on one line.

$ mvn archetype:create -DgroupId=com.wordpress.redstack 
-DartifactId=mywebapp -DarchetypeArtifactId=maven-archetype-webapp

Let’s take a look inside the project directory to see what was created:

$ cd mywebapp
$ find .
.
./src
./src/main
./src/main/resources
./src/main/webapp
./src/main/webapp/WEB-INF
./src/main/webapp/WEB-INF/web.xml
./src/main/webapp/index.jsp
./pom.xml

So we see that we have a src/main directory which contains a webapp directory for our web application’s source files and a resources directory for resources (we won’t be using this).  We have a standard Java web.xml deployment descriptor, and an index.jsp for our home page.  There is also a pom.xml which is used by Maven to describe the project.

Again, we won’t drill into the details of Maven here; there is plenty of information about that topic available already.

To create our simple web application, we are just going to edit the src/main/webapp/index.jsp to contain the following code:

<html>
<body>
<h2>Hello World!</h2>
<p>I am running on <%= System.getProperty("weblogic.Name") %>.</p>
</body>
</html>

It will simply print a “Hello World!” message and then tell us which managed server it is running on.  You can obtain the name of the managed server that your code is running on by getting the weblogic.Name property as shown above.  Thanks to Robert Patrick for this tip.

Next, let’s create a WebLogic deployment descriptor, src/main/webapp/WEB-INF/weblogic.xml to set the context root (URL) for our web application.  Put the following code in your weblogic.xml:

<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 9.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <context-root>/mywebapp</context-root>
</weblogic-web-app>

Now we are ready to build and deploy our application.  To compile and package our application, issue the following command:

$ mvn package

If you are not using Maven, you will need to manually compile your code and build a WAR file at this point.  If you are using Maven, you should find a WAR file was created for you:

$ find . -name \*.war
./target/mywebapp.war

Now we can deploy our application.  We will do this using the WebLogic console.  In the Domain Structure on the left, click on the Deployments option.  Then click on the Install button to install a new application.

Navigate to the folder where your WAR file is located.  Click on the radio button next to your WAR file, as shown below, and then click on the Next button.

Choose the option to Install this deployment as an application and click on Next.

Now comes the important bit!  We need to install this application on the cluster, as opposed to on an individual server.  Tick the checkbox for cluster1.  Note that this will automatically select the Part of this cluster option too.  You can change it to All servers in the cluster.  Click on Next to continue.

On the next page, we can just click on Next.

And then Finish.

After a few moments, the deployment will be complete, and you will see the settings screen for your web application, as shown below:

Click on the Deployments option on the left again, tick the checkbox beside your web application, then click on the Start button and choose Servicing all requests to start the web application.  This will make it run on both of the servers in the cluster.

You will get a message (in green at the top) to let you know the application has been started, and the State will change to Active.

Now, let’s test our application to make sure it is working the way we expect.  First, point your browser directly at your server1 managed server.  You will need to know the port number to do this.  If you followed the example, it is probably 7003.  If you don’t remember, you can get it from the Environment -> Servers page in the WebLogic console.

So the URL will be http://yourserver:7003/mywebapp/.  Substitute in the correct port number if yours is different.  You should see your index.jsp page display as shown below.  Note that it says it is running on server1.

Now try the other managed server.  The URL will be http://yourserver:7004/mywebapp/ and the output should indicate it is running on server2.  Again, substitute in the correct port number if yours is different.

So we can see that our web application is in fact working and that it tells us which managed server it is running on.

Now, let’s set up the load balancer.

We will use Maven again to configure the load balancer.  We do this with a simple web application.  Make sure you change to a directory outside of your previous web application, and then use the command below (type it all on one line) to create a new web application:

$ mvn archetype:create -DgroupId=com.wordpress.redstack 
-DartifactId=myloadbal -DarchetypeArtifactId=maven-archetype-webapp

Move into your new web application’s directory:

$ cd myloadbal

If you care to check, you will note that the files and structure created by Maven are just as before.  Our first step is to edit the web.xml that Maven created.  You need to place the following code in it:

<!DOCTYPE web-app PUBLIC
"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/dtd/web-app_2_3.dtd" >
<web-app>
  <display-name>Archetype Created Web Application</display-name>
  <servlet>
    <servlet-name>HttpClusterServlet</servlet-name>
    <servlet-class>
      weblogic.servlet.proxy.HttpClusterServlet
    </servlet-class>
    <init-param>
      <param-name>WebLogicCluster</param-name>
      <param-value>localhost:7003|localhost:7004</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.jsp</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.htm</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.html</url-pattern>
  </servlet-mapping>
</web-app>

You can just copy this as is, with the exception of the WebLogicCluster parameter value.  This is a (pipe-separated) list of the managed servers that make up the cluster, in the format hostname:port.  You need to make sure this list matches your environment.  You can use localhost as in the example, or if you have a proper DNS name, you can use that instead.  This web.xml just contains standard Servlet definitions that point to the HttpClusterServlet that is part of WebLogic Server and will act as our load balancer.  You can find a lot more details here, including additional settings that you can use for security and other options provided by the Servlet.
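For example, if your managed servers were running on two separate hosts with proper DNS names, the parameter might look like this (the hostnames here are hypothetical):

```
<init-param>
  <param-name>WebLogicCluster</param-name>
  <param-value>app1.example.com:7003|app2.example.com:7003</param-value>
</init-param>
```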

Note: The example configuration given above will load balance requests to URLs ending with *.htm, *.html and *.jsp only.  You might want to add some more patterns, or make these ones more generic, to suit your own applications.  Thanks to James Bayer for pointing this out to me!

The next step is to create the WebLogic deployment descriptor, weblogic.xml, in the same directory as the web.xml, as we did before.  The weblogic.xml needs to contain the following code:

<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 9.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <context-root>/</context-root>
</weblogic-web-app>

Now we are ready to package and deploy our web application, as we did before.  We will use Maven to compile and package the WAR file:

$ mvn package
$ find . -name \*.war
./target/myloadbal.war

We will use the WebLogic console to deploy the web application, as we did earlier.  Again, select Deployments on the left, then Install, then navigate to and select your WAR file, as shown below, and click on Next.

Next again.

This time, be sure to target this application to the loadBalancer managed server only.  This tells WebLogic that the application will only run on that one managed server.  Then click on Next.

Next again.

And Finish.

You will see the settings screen after the deployment has completed.

Click on Deployments on the left, select the new web application (myloadbal) and then Start -> Service all requests.

Now we can test our load balancer!

Point your browser at the web application path, but use the load balancer’s port.  If you followed the example, this will be 7005.  You can check it in the WebLogic console as mentioned earlier.  So for us, the URL is http://mubuntu:7005/mywebapp.  Note that the last part of the URL is the context root for the actual web application (mywebapp) that we want to run, not for the load balancer web application (myloadbal).  In the example below we can see that the load balancer has sent this request to server2.

Now, close your browser (or delete the cookies) to make sure that the session is destroyed, then open a new browser and go to that same URL again.  You will need to do this a few times.  You should find that you get some responses from server1 as shown below, and some from server2 as shown above.
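If you prefer the command line, you can run much the same check with curl (a sketch, assuming curl is installed and your load balancer is listening on port 7005; curl does not keep cookies between invocations by default, so each request starts a fresh session):

```
$ for i in 1 2 3 4 5; do curl -s http://localhost:7005/mywebapp/ | grep "running on"; done
```

You should see a mix of server1 and server2 in the responses.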

So we can see that our load balancer is indeed distributing our requests across the two managed servers.

This now gives us a nice easy software load balancer to use when testing our applications in a cluster.  The load balancer web application we created here will work for any application that is deployed on the cluster, because we set the context root to “/” in the weblogic.xml and in the servlet mappings in our web.xml.

Enjoy!

Posted in Uncategorized | Tagged , , , | 2 Comments

Purging old instance data from SOA/BPEL 10g

My colleague Deepak Arora (see his blog here) has written an excellent white paper on purging old instance data from SOA Suite/BPEL 10g and also a great blog post on automating the deletion of old partitions.  If you are interested in this topic, I encourage you to check them out!

Posted in Uncategorized | Tagged , , , | Leave a comment

Extracting Garbage Collection messages from a WebLogic Server log file

Recently, I was doing some work on tuning Garbage Collection in a HotSpot JVM (i.e. “the Sun JVM”) underneath WebLogic Server 10.3.3.  In order to do this, I wanted to look at the Garbage Collection logs.  The JVM will produce these logs for you if you pass in the following parameters:

  • -XX:+PrintGCTimeStamps
  • -XX:+PrintGCDetails
  • -Xloggc:gc.log

In this particular case though, only the first two had been specified.  The third one produces a nice clean Garbage Collection log file that can be used with various tools to help with tuning, but unfortunately I did not get that file.  All I had was the WebLogic Server log file, with GC messages spread all through it.  This log file was several hundreds of thousands of lines in size, so manually editing it was not an option.  Re-running this application with the extra setting to capture a nice, clean log file was not a viable option either.
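As a sketch of how the missing flag could be added next time (assuming a standard domain layout; setDomainEnv.sh picks up the EXTRA_JAVA_PROPERTIES environment variable and appends it to the JVM arguments):

```
$ cd ~/Oracle/Middleware/user_projects/domains/base_domain
$ export EXTRA_JAVA_PROPERTIES="-XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.log"
$ ./startWebLogic.sh
```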

Solution: I wrote a small Java program to go through the WebLogic Server logs and strip out just the information I needed.  Will I ever need to use this again?  Maybe not, but I thought I would post it here for posterity anyway.  You never know when you are going to need something like this.
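As an aside, on a Unix system much the same extraction can be sketched with a single awk range pattern, assuming each GC block begins with a line starting with ‘{’ and ends with a line containing only ‘}’ (which matches the format of the sample output later in this post).  The snippet below builds a tiny hypothetical sample file to demonstrate against:

```shell
# Build a tiny sample log (hypothetical content) to demonstrate against
printf 'some server noise\n{Heap before GC invocations=0 (full 0):\n eden space 3723648K, 100%% used\n}\nmore server noise\n' > sample.log

# Print every line from one starting with '{' through the next bare '}'
awk '/^\{/,/^}$/' sample.log
```

Running this prints just the three lines of the GC block and drops the surrounding noise.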

Here is the Java code for my CleanGCLog class:

import java.io.FileReader;

public class CleanGCLog {

    public static void main(String[] args) throws Exception {
        int currentChar;
        boolean print = false;
        FileReader inputStream = null;

        // Check that the right number of arguments was passed in
        if (args.length != 1) {
            System.out.println("\nCleanGCLog\n----------\n" +
                "This program will read through the input file and print out\n" +
                "any GC messages found to stdout.  You need to provide the\n" +
                "filename as an argument.\n" +
                "   e.g. java CleanGCLog mylogfile.log\n\n");
            System.exit(1);
        }

        // Try to open the file
        try {
            inputStream = new FileReader(args[0]);
        } catch (Exception e) {
            System.out.println("Could not open the input file.");
            e.printStackTrace();
            System.exit(1);
        }

        // Read through the file a character at a time until end of file (-1)
        while ((currentChar = inputStream.read()) != -1) {
            // Look for the start of a GC message
            if (currentChar == '{') {
                print = true;
            }
            if (print) {
                System.out.print((char) currentChar);
            }
            // Look for the end of a GC message
            if (print && currentChar == '}') {
                System.out.println();
                print = false;
            }
        }

        inputStream.close();
    }
}

This is compiled with javac CleanGCLog.java and produces a single class file, CleanGCLog.class, which can be run as shown below.  If you run it without any arguments, it will print its usage information.

In the sample below, I have just kept the first two GC messages to show you what they look like.  This file actually had several thousand of them.  The output is just written to stdout, so you can easily redirect it to a file or pipe it to another command if desired.

mark$ java CleanGCLog ../gc.log.112710091630.log 
{Heap before GC invocations=0 (full 0):
 par new generation   total 3909824K, used 3723648K [0x00002aaaae1f0000, 0x00002aaba81f0000, 0x00002aaba81f0000)
  eden space 3723648K, 100% used [0x00002aaaae1f0000, 0x00002aab91650000, 0x00002aab91650000)
  from space 186176K,   0% used [0x00002aab91650000, 0x00002aab91650000, 0x00002aab9cc20000)
  to   space 186176K,   0% used [0x00002aab9cc20000, 0x00002aab9cc20000, 0x00002aaba81f0000)
 concurrent mark-sweep generation total 14385152K, used 0K [0x00002aaba81f0000, 0x00002aaf161f0000, 0x00002aaf161f0000)
 concurrent-mark-sweep perm gen total 524288K, used 158566K [0x00002aaf161f0000, 0x00002aaf361f0000, 0x00002aaf361f0000)
2010-11-27T09:17:17.951+1100: 47.077: [GC 47.077: [ParNew
Desired survivor size 95322112 bytes, new threshold 1 (max 15)
- age   1:  109823768 bytes,  109823768 total
: 3723648K->108098K(3909824K), 0.3358110 secs] 3723648K->108098K(18294976K), 0.3360230 secs] [Times: user=0.71 sys=0.54, real=0.33 secs] 
Heap after GC invocations=1 (full 0):
 par new generation   total 3909824K, used 108098K [0x00002aaaae1f0000, 0x00002aaba81f0000, 0x00002aaba81f0000)
  eden space 3723648K,   0% used [0x00002aaaae1f0000, 0x00002aaaae1f0000, 0x00002aab91650000)
  from space 186176K,  58% used [0x00002aab9cc20000, 0x00002aaba35b08e8, 0x00002aaba81f0000)
  to   space 186176K,   0% used [0x00002aab91650000, 0x00002aab91650000, 0x00002aab9cc20000)
 concurrent mark-sweep generation total 14385152K, used 0K [0x00002aaba81f0000, 0x00002aaf161f0000, 0x00002aaf161f0000)
 concurrent-mark-sweep perm gen total 524288K, used 158566K [0x00002aaf161f0000, 0x00002aaf361f0000, 0x00002aaf361f0000)
}
{Heap before GC invocations=1 (full 0):
 par new generation   total 3909824K, used 3831746K [0x00002aaaae1f0000, 0x00002aaba81f0000, 0x00002aaba81f0000)
  eden space 3723648K, 100% used [0x00002aaaae1f0000, 0x00002aab91650000, 0x00002aab91650000)
  from space 186176K,  58% used [0x00002aab9cc20000, 0x00002aaba35b08e8, 0x00002aaba81f0000)
  to   space 186176K,   0% used [0x00002aab91650000, 0x00002aab91650000, 0x00002aab9cc20000)
 concurrent mark-sweep generation total 14385152K, used 0K [0x00002aaba81f0000, 0x00002aaf161f0000, 0x00002aaf161f0000)
 concurrent-mark-sweep perm gen total 524288K, used 229038K [0x00002aaf161f0000, 0x00002aaf361f0000, 0x00002aaf361f0000)
2010-11-27T09:50:55.024+1100: 2064.149: [GC 2064.149: [ParNew
Desired survivor size 95322112 bytes, new threshold 1 (max 15)
- age   1:  145857248 bytes,  145857248 total
: 3831746K->186176K(3909824K), 2.3871680 secs] 3831746K->310053K(18294976K), 2.3873730 secs] [Times: user=3.95 sys=3.11, real=2.39 secs] 
Heap after GC invocations=2 (full 0):
 par new generation   total 3909824K, used 186176K [0x00002aaaae1f0000, 0x00002aaba81f0000, 0x00002aaba81f0000)
  eden space 3723648K,   0% used [0x00002aaaae1f0000, 0x00002aaaae1f0000, 0x00002aab91650000)
  from space 186176K, 100% used [0x00002aab91650000, 0x00002aab9cc20000, 0x00002aab9cc20000)
  to   space 186176K,   0% used [0x00002aab9cc20000, 0x00002aab9cc20000, 0x00002aaba81f0000)
 concurrent mark-sweep generation total 14385152K, used 123877K [0x00002aaba81f0000, 0x00002aaf161f0000, 0x00002aaf161f0000)
 concurrent-mark-sweep perm gen total 524288K, used 229038K [0x00002aaf161f0000, 0x00002aaf361f0000, 0x00002aaf361f0000)
}
Posted in Uncategorized | Tagged , , , | Leave a comment

Increasing swap size on Solaris (using ZFS)

Today, I was installing Oracle Database 11g R2 on a Solaris system, but it failed a prerequisite check during the installation: it did not have enough swap space available.  I had installed this particular system with ZFS.  It turns out that adding extra swap space on a ZFS system is slightly different from what you might be used to.  I am sure I am going to want to do this again some time, and I guess other folks will too, so here are the details.

Firstly, check where your swap file is (it will be a ZFS volume created during the Solaris installation):

bash-3.00# swap -l
swapfile             dev  swaplo blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 4194288 4194288

Then you will need to remove it from use:

bash-3.00# swap -d /dev/zvol/dsk/rpool/swap

You should verify that it has been removed:

bash-3.00# swap -l
No swap devices configured

Then you can resize the ZFS volume (just give it the pool name and the volume name):

bash-3.00# zfs set volsize=16G rpool/swap

And then add it back into your swap space:

bash-3.00# swap -a /dev/zvol/dsk/rpool/swap

And now we see the swap space is back online and larger than before:

bash-3.00# swap -l
swapfile             dev  swaplo blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 33554416 33554416
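As a side note, swap -l reports sizes in 512-byte blocks, so you can sanity-check the new size with a little shell arithmetic (the block count below is taken from the output above):

```shell
# swap -l reports sizes in 512-byte blocks; convert the new size to megabytes
blocks=33554416
echo "$(( blocks * 512 / 1024 / 1024 )) MB"   # prints "16383 MB"
```

That is just under the 16 GB we set; the small shortfall corresponds to the 16-block swaplo offset shown in the output.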

Pretty easy once you know how!

Posted in Uncategorized | Tagged , , , | 1 Comment