Sciology = Science + Technology

Commonsense in Technology

Posts Tagged ‘Java’

Oracle + Java = Harmony

Posted by sureshkrishna on April 20, 2009

Everyone in Silicon Valley is talking about it. Some are more excited than others. However, this is not a shock or surprise to anyone; sooner or later it had to happen. Oracle has been a VERY STRONG player in middleware, and with its acquisition marathon it built a “near to complete” business vertical empire. Oracle has a complete stack of enterprise software and is a REAL “Information Company”.

With Oracle agreeing to buy Sun, there is a plethora of possibilities for synergy. Many view Oracle as an enterprise software company, and until now its databases and products have been tuned for a number of server platforms such as Sun Solaris, Linux, HP and Windows. With this acquisition, there will be greater and tighter integration, along with performance tuning of the Oracle products for Solaris OS + Sun hardware. This also launches Oracle into the hardware race with IBM and HP. Oh ya… Cisco has just started, but Oracle is already in it (with this acquisition).

Oracle will now be able to steer the JCP along with IBM and the other major players. Java should see a major boost and a new direction, with a focus on enterprise software. I am especially interested to see whether there will be more development (and innovation) in the JVM and in the languages based on it, such as JRuby, Scala, etc.

Sun has been very keen on technology innovation, and the result is a full stack: Web Services, the JVM, the JRE, GlassFish, JavaFX and, last but not least, NetBeans. Of course, Sun entered the IDE market in its early stages but could not make NetBeans as great an IDE as Eclipse. Along with JDeveloper, Oracle has contributed great plugins to Eclipse and has been a long-term supporter of Eclipse. This is one area where I am curious to see what will happen to the 3 IDEs (JDeveloper, Eclipse, NetBeans).

The next big thing Oracle will definitely gain, in my perspective, is Cloud Computing. Cloud Computing is relatively new and has great potential to be the next wave in Infrastructure + Software + Internet technology. With Oracle's stack of enterprise products + cloud computing, Oracle and Sun could have great synergy in this area.

I am not sure what will happen to the MySQL database. It is a free, open source database that is also available in an Enterprise flavor. The good thing about MySQL is that it has a small ecosystem of developers and tool vendors. Unlike Oracle, MySQL is targeted towards small/medium-scale applications and enterprises.

Finally, Oracle acquiring Sun is definitely good for the struggling Sun. Oracle is very good at business and has a very good sales team. This, combined with Sun’s technology, would be good news for customers. It is too early to predict what will happen to the other technologies at Sun, but customers are surely going to benefit from this acquisition.

Disclaimer: This is my personal opinion ONLY. None of these ideas or statements correspond to, reflect, or represent any of my current or previous employers.

Posted in Technology | Tagged: , , , , | Leave a Comment »

Beware of writing regex and string functions

Posted by sureshkrishna on August 6, 2008

Recently I was involved in an issue that took a week to root-cause. In the end it was an eye opener for many who do not give importance to string functions and regexes. “Regular expressions and string functions are quite powerful in any language; however, utmost care should be given to such code.”

The issue is very simple. A set of Java files needs to be processed to extract some annotations and other proprietary information, and also to separate the main class names from the inner class names. The customer created Business Entities, which may contain inner classes and are passed through a pre-processor. The problem occurs in a particular case: when the file name is “BlaSomeClassName_Bla.java” and it contains an inner class “SomeClass”.

–>Inner class name is SIMILAR to main class name.

Let's look at the following code, especially line 6. This line tries to match the class names given by qdox (a Java source parser) against the Java source file that is currently being processed.

1     if (classes.length == 1) {
2         _javaClass = classes[0];
3     } else {
4         for (int i = 0; i < classes.length; i++) {
5             JavaClass aClass = classes[i];
6             if (aSourceFile.getName().matches(".*" + aClass.getName() + ".*")) {
7                 _javaClass = classes[i];
8                 break;
9             }
10        }
11    }

This is the regular expression that took up my days and nights; it rarely had any consistency in execution. In the above example, the source being processed is “BlaSomeClassName_Bla.java”, and the class names you get from qdox will be “BlaSomeClassName_Bla” and “SomeClass”. Now you have probably guessed it: in the “classes” array, if “SomeClass” comes as the first element, you are screwed. The regular expression “.*SomeClass.*” matches “BlaSomeClassName_Bla.java” (since “SomeClass” is a substring of the file name), and the processing class is taken as “SomeClass”, whereas the right processing class is “BlaSomeClassName_Bla”.
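A defensive rewrite of that check could compare the file's base name to the class name exactly, instead of using a substring regex. This is a sketch of my own, not the original pre-processor code, and the helper name is made up:

```java
public class ClassNameMatch {

    // Compare the file's base name to the class name exactly, instead of the
    // substring regex ".*" + name + ".*", which also matches inner-class names
    // that happen to occur inside the file name.
    static boolean isSourceFileFor(String fileName, String className) {
        String baseName = fileName.endsWith(".java")
                ? fileName.substring(0, fileName.length() - ".java".length())
                : fileName;
        return baseName.equals(className);
    }

    public static void main(String[] args) {
        System.out.println(isSourceFileFor("BlaSomeClassName_Bla.java", "SomeClass"));            // false
        System.out.println(isSourceFileFor("BlaSomeClassName_Bla.java", "BlaSomeClassName_Bla")); // true
    }
}
```

If a regex really is needed, the class name should at least be wrapped with Pattern.quote(...) so that characters like $ are not interpreted as regex metacharacters.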

This issue took quite a few days to really understand and get to the bottom of. Many thanks to Eclipse, which enables very cool debugging. Conditional breakpoints are very useful in such scenarios, where you do not want to wait a long time for the special case to occur: you specify the right condition, and Eclipse takes care of the rest. This is what makes Eclipse my favorite IDE.

Do you have any such experiences with strings and regexes?

Posted in Uncategorized | Tagged: , , , | 9 Comments »

Java XML Libraries – Quick Reference

Posted by sureshkrishna on June 13, 2008

Reader Level : Basic

Recently I have been involved in a project that uses XML heavily, which gave me the opportunity to look into many Java- and XML-related technologies/libraries/parsers. I would like to share some of the interesting libraries that I dealt with. Interestingly, I have seen that very few developers know what each term (like “Reader”, “Parser”, “Builder” and “Factory”) means in the XML world. The idea of this article is to introduce the basic terms and some resources for an in-depth dissection.

XML Parser Technology / Types :

Many refer to “XML parsers” as “XML APIs”. Whatever you call them, in the end everyone wants to read, process and build XML in some way or another. Though it is quite possible to treat an XML file as a sequence of characters and write a custom parser, that is not the recommended way if one needs to do the job in an “easy” manner. In the XML world we often find two widely used parser types: SAX (Simple API for XML) and DOM (Document Object Model). I am limiting the discussion to SAX and DOM only.

SAX : SAX is an event-based parsing mechanism. As the “SAX Parser” parses the XML input stream, events like startDocument, endDocument, startElement, endElement, etc. are encountered and the client program gets callbacks. As this parser type does not load the XML document into memory, it is relatively low on resources. SAX is a READ-ONLY API (i.e. one cannot change any content of the XML file). The client can only traverse the document sequentially. The SAX2 specification incorporates namespaces, filter chains, and querying. SAX parsers are sometimes also referred to as push-parsers, as the parser pushes recognized tokens to the client.
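To make the callback model concrete, here is a minimal sketch of my own using the JDK's SAX API; the helper method and sample XML are made up for illustration:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxEventDemo {

    // Collects element names in document order by listening to SAX callbacks.
    static List<String> elementNames(String xml) throws Exception {
        final List<String> names = new ArrayList<String>();
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                names.add(qName);   // called once per opening tag as the parser streams along
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), handler);
        return names;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<plugin><runtime><library name='a.jar'/></runtime></plugin>";
        System.out.println(elementNames(xml)); // [plugin, runtime, library]
    }
}
```

Note that the handler only ever sees events; there is no tree to navigate back into, which is exactly why SAX stays light on memory.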

DOM : DOM is a comprehensive API for XML documents. It lets clients navigate, retrieve, add, modify or delete content from the source XML. As opposed to SAX, DOM stores the entire content of the XML file in memory. As one can imagine, storing the XML document requires some sort of object representation for the Node, Element, Attribute, ProcessingInstruction, Comment and Text types, so it is relatively heavy on memory; the memory consumption is commonly estimated at about 5x the XML size. DOM enables clients to access data randomly in the in-memory document. Before we go any further, it is important to note that the current discussion is limited to Java technology. So, let us look a little at the most frequently used package from the SDK.
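Before moving on, here is a minimal DOM sketch (my own illustration, using the JDK's javax.xml.parsers API) showing the in-memory tree and the random-access modification that SAX cannot do:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class DomDemo {

    // Parses an XML string into an in-memory org.w3c.dom.Document tree.
    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        Document doc = parse("<plugin id='com.example'><runtime/></plugin>");
        Element root = doc.getDocumentElement();
        System.out.println(root.getTagName() + " : " + root.getAttribute("id")); // plugin : com.example

        // Unlike SAX, the tree supports random access and in-place modification.
        root.appendChild(doc.createElement("requires"));
        System.out.println(root.getElementsByTagName("requires").getLength()); // 1
    }
}
```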

JAXP :

The Java API for XML Processing (JAXP) enables applications to parse, transform, validate and query XML documents using an API that is independent of a particular XML processor implementation. JAXP provides a pluggability layer to enable vendors to provide their own implementations without introducing dependencies in application code. Using this software, application and tool developers can build fully-functional XML-enabled Java applications for e-commerce, application integration, and web publishing.

JAXP is a standard component of the Java platform. An implementation of JAXP 1.3 is included in J2SE 5.0, and an implementation of JAXP 1.4 is in Java SE 6. JAXP 1.4 is a maintenance release of JAXP 1.3 with support for the Streaming API for XML (StAX). JAXP 1.3 contained five JAR files: jaxp-api.jar, sax.jar, dom.jar, xercesImpl.jar, and xalan.jar. The packaging reflected the technologies covered, as well as the sources used in JAXP 1.3. In JAXP 1.4, these technologies and the newly added StAX package have been tightly integrated into the JAXP RI.
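A small illustration of the pluggability layer (my own sketch): application code only ever asks the abstract factories for an instance, and JAXP resolves the concrete implementation at run time via a system property, the services mechanism, or the built-in default.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;

public class JaxpPluggability {
    public static void main(String[] args) {
        // newInstance() resolves the implementation at run time, so application
        // code stays vendor-neutral; the printed class names depend on which
        // implementations are on the classpath.
        System.out.println(SAXParserFactory.newInstance().getClass().getName());
        System.out.println(DocumentBuilderFactory.newInstance().getClass().getName());
        System.out.println(TransformerFactory.newInstance().getClass().getName());
    }
}
```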

Parser Implementations :

Xerces-J : The Xerces Java Parser 1.4.4 supports the XML 1.0 recommendation and contains advanced parser functionality, such as support for the W3C’s XML Schema recommendation version 1.0, DOM Level 2 version 1.0, and SAX Version 2, in addition to supporting the industry-standard DOM Level 1 and SAX version 1 APIs. This release includes full support for the W3C XML Schema Recommendation, except for limitations as described on their website.

In order to take advantage of the fact that this parser is very often used in conjunction with other XML technologies, such as XSLT processors, which also rely on standard APIs like DOM and SAX, xerces.jar was split into two jar files:

  • xml-apis.jar contains the DOM level 3, SAX 2.0.2 and the JAXP 1.3 APIs;
  • xercesImpl.jar contains the implementation of these APIs as well as the XNI API.

XPath Implementations :

Jaxen : Jaxen is an open source XPath library written in Java. It is adaptable to many different object models, including DOM, XOM, dom4j, and JDOM. It is also possible to write adapters that treat non-XML trees, such as compiled Java byte code or Java beans, as XML, thus enabling you to query these trees with XPath too.

Saxon : Saxon is a full featured library for the XSLT 2.0, XQuery 1.0, and XPath 2.0 Recommendations. Saxon comes in two packages: Saxon-B implements the “basic” conformance level for XSLT 2.0 and XQuery, while Saxon-SA is a schema-aware XSLT and XQuery processor. Both packages are available on both platforms (Java and .NET). Saxon-B is an open source product available from this site; Saxon-SA is a commercial product available from Saxonica Limited. A free 30-day evaluation license is available.

Xalan : Xalan-Java fully implements XSL Transformations (XSLT) Version 1.0 and the XML Path Language (XPath) Version 1.0. XSLT is the first part of the XSL stylesheet language for XML; it includes the XSL Transformation vocabulary and XPath, a language for addressing parts of XML documents. Xalan-Java implements the Java API for XML Processing (JAXP) 1.3, including its XPath API, and builds on SAX 2 and DOM Level 3. It may be configured to work with any XML parser, such as Xerces-Java, that implements JAXP 1.3.
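Note that since JAXP 1.3 the JDK itself also exposes an XPath API (javax.xml.xpath) that delegates to whichever implementation is configured. Here is a small sketch of my own:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathDemo {

    // Evaluates an XPath expression and returns the string value of each matched node.
    static List<String> select(String xml, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODESET);
        List<String> values = new ArrayList<String>();
        for (int i = 0; i < nodes.getLength(); i++) {
            values.add(nodes.item(i).getNodeValue());
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<runtime><library name='a.jar'/><library name='b.jar'/></runtime>";
        System.out.println(select(xml, "//library/@name")); // [a.jar, b.jar]
    }
}
```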

Java XML Document Builders :

Do NOT confuse builders with parsers. Builders basically use the default/underlying parser, get the org.w3c.dom.Document and convert it to a specific Document type (e.g. org.dom4j.Document or org.jdom.Document). DOM4J seems to be quite advanced in terms of the functionality it offers a Java developer; the JDOM API seems to be the simpler one to use.

JDOM : JDOM is, quite simply, a Java representation of an XML document. JDOM provides a way to represent that document for easy and efficient reading, manipulation, and writing. It has a straightforward API, is lightweight and fast, and is optimized for the Java programmer. It is an alternative to DOM and SAX, although it integrates well with both. Most importantly, it uses the Java Collections API. I hope it is easy for a Java programmer 🙂 .
As I understand it, JDOM relies on Jaxen as the default XPath library, but we can also use any XPath library of our choice, like Xalan.

DOM4J : dom4j is an easy to use, open source library for working with XML, XPath and XSLT on the Java platform using the Java Collections Framework and with full support for DOM, SAX and JAXP.

Posted in Uncategorized | Tagged: , , , , , , , , , | 11 Comments »

JDOM Quick Reference

Posted by sureshkrishna on June 9, 2008

JDOM: [www.jdom.org]

JDOM is a full-featured Java API for SAX and DOM access. Collections are used heavily for results and queries to make the Java programmer's life easier. The SAX and DOM parsers are the underlying default parsers: JAXP is checked first if it exists, then the Apache parser, then finally the hard-coded internal parser. JDOM also provides adapters to many other parsers, like the Oracle parser, the IBM parser and the Apache Xerces DOM.

Main Classes [JDOM Java Docs] :

SAXBuilder : Builds a JDOM document from files, streams, readers, URLs, or a SAX InputSource instance using a SAX parser. The builder uses a third-party SAX parser (chosen by JAXP by default, or you can choose manually) to handle the parsing duties and simply listens to the SAX events to construct a document.
SAXHandler : This will create a new SAXHandler that listens to SAX events and creates a JDOM Document. The objects will be constructed using the default factory.
SAXOutputter : Outputs a JDOM document as a stream of SAX2 events.

DOMBuilder : Builds a JDOM org.jdom.Document from a pre-existing DOM org.w3c.dom.Document. Also handy for testing builds from files to sanity check SAXBuilder.
DOMOutputter : Outputs a JDOM org.jdom.Document as a DOM org.w3c.dom.Document.

XSLTransformer : A convenience class to handle simple transformations. The JAXP TrAX classes have more bells and whistles and can be used with JDOMSource and JDOMResult for advanced uses. This class handles the common case and presents a simple interface. XSLTransformer is thread safe and may be used from multiple threads.

XSLTransformer transformer = new XSLTransformer("file.xsl");

Document x2 = transformer.transform(x); // x is a Document
Document y2 = transformer.transform(y); // y is a Document

JDOM relies on TrAX to perform the transformation. The “javax.xml.transform.TransformerFactory” Java system property determines which XSLT engine TrAX uses. Its value should be the fully qualified name of the implementation of the abstract javax.xml.transform.TransformerFactory class. Values of this property for popular XSLT processors include:

* Saxon 6.x: com.icl.saxon.TransformerFactoryImpl
* Saxon 7.x: net.sf.saxon.TransformerFactoryImpl
* Xalan: org.apache.xalan.processor.TransformerFactoryImpl
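As a tiny illustration of setting that property (the Xalan class name is taken from the list above; the class must actually be on the classpath before TransformerFactory.newInstance() is next called, otherwise instantiation fails with a configuration error):

```java
public class PickXsltEngine {
    public static void main(String[] args) {
        // Tell TrAX which XSLT engine to use for subsequent newInstance() calls.
        System.setProperty("javax.xml.transform.TransformerFactory",
                "org.apache.xalan.processor.TransformerFactoryImpl");
        System.out.println(System.getProperty("javax.xml.transform.TransformerFactory"));
    }
}
```

The same property can also be supplied on the command line with -D, which avoids hard-coding the engine choice.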

JDOMSource : A holder for an XML Transformation source: a Document, Element, or list of nodes.
public static List transform(Document doc, String stylesheet) throws JDOMException {
    try {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(stylesheet));
        JDOMSource in = new JDOMSource(doc);
        JDOMResult out = new JDOMResult();
        transformer.transform(in, out);
        return out.getResult();
    }
    catch (TransformerException e) {
        throw new JDOMException("XSLT Transformation failed", e);
    }
}

JDOMResult : A holder for an XSL Transformation result, generally a list of nodes although it can be a JDOM Document also. As stated by the XSLT 1.0 specification, the result tree generated by an XSL transformation is not required to be a well-formed XML document. The result tree may have “any sequence of nodes as children that would be possible for an element node”.

Sample programs :

All the examples use the sample file “plugin.xml” in the “c:\” directory.

#1 Create a document and output it via the XMLOutputter class.

package com.suresh.xml.jdom;

import java.io.File;
import java.io.IOException;

import org.jdom.Document;
import org.jdom.JDOMException;
import org.jdom.input.SAXBuilder;
import org.jdom.output.XMLOutputter;

public class TestJDOMOutputter {

    public static void main(String[] args) {
        try {
            // This SAXBuilder looks for the default SAXParsers, parses and builds the XML.
            // The default behavior is to (1) use the saxDriverClass, if it has been
            // set, (2) try to obtain a parser from JAXP, if it is available, and
            // (3) if all else fails, use a hard-coded default parser (currently
            // the Xerces parser).
            // SaxBuilder -> JAXPParserFactory -> SAXParserFactory ->
            // com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl or
            // org.apache.xerces.jaxp.SAXParserFactoryImpl

            SAXBuilder builder = new SAXBuilder();
            Document document = builder.build(new File("c:\\plugin.xml"));
            XMLOutputter outputter = new XMLOutputter();
            outputter.output(document, System.out);
        } catch (JDOMException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

#2 Traverse the entire tree and print the node statistics

package com.suresh.xml.jdom;

import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.jdom.Comment;
import org.jdom.DocType;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.JDOMException;
import org.jdom.ProcessingInstruction;
import org.jdom.Text;
import org.jdom.input.SAXBuilder;

public class TestJDOMTraverseTree {

    public static void main(String[] args) {
        SAXBuilder builder = new SAXBuilder();
        try {
            Document doc = builder.build(new File("c:\\plugin.xml"));
            traverseXMLTree(doc.getContent());

            Element rootElement = doc.getRootElement();
            String baseURI = doc.getBaseURI();
            DocType docType = doc.getDocType();
            // this gives the file path "file:/c:/plugin.xml"
            System.out.println("base URI : " + baseURI);
            // this is "plugin" element
            System.out.println("RootElement : " + rootElement.getName());
            // if it has a doctype then get the info
            if (docType != null) {
                System.out.println("DocType : " + docType.getElementName() + " : " + docType.getPublicID() +
                        " : " + docType.getSystemID() + " : " + docType.getValue());
            }
        } catch (JDOMException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static void traverseXMLTree(List<Object> contentList) {
        Iterator<Object> contentIter = contentList.iterator();
        while (contentIter.hasNext()) {
            Object obj = contentIter.next();
            if (obj instanceof Element) {
                Element element = (Element) obj;
                System.out.println("Element Name[" + element.getContentSize() + "] : " + element.getName());
                traverseXMLTree(element.getContent());
            } else if (obj instanceof ProcessingInstruction) {
                ProcessingInstruction pi = (ProcessingInstruction) obj;
                System.out.println("PI as seen in Doc : <?" +  pi.getTarget() + " " + pi.getData() + "?>");
            } else if (obj instanceof Text) {
                Text text = (Text) obj;
                if (text != null && text.getText() != null && text.getTextTrim().length() > 0) {
                    System.out.println("Text : " + text.getTextTrim());
                }
            } else if (obj instanceof Comment) {
                Comment comment = (Comment) obj;
                System.out.println("Comment : " + comment.getValue());
            }
        }
    }

}

Example plugin.xml file used in the above examples :

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin
   id="com.example.eclipseTools"
   name="Tools Plug-in"
   version="1.0.0"
   provider-name="Example"
   class="com.example.eclipse.EclipseToolsPlugin">
   <runtime>
      <library name="tools/tools-2.2.0-SNAPSHOT.jar">
         <export name="*"/>
      </library>
      <library name="tools/lib/velocity-1.4.jar">
         <export name="*"/>
      </library>
      <library name="shared/shared-2.2.0-SNAPSHOT.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/c3p0-0.9.0.4.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/cglib-2.1.3.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/commons-beanutils-1.6.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/commons-collections-2.1.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/commons-logging-1.0.4.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/dom4j-1.6.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/ehcache-1.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/hibernate-3.1.3.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/junit-3.8.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/log4j-1.2.11.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/xstream-1.2.1.jar">
         <export name="*"/>
      </library>
      <library name="jta.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/asm-1.5.3.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/asm-attrs-1.5.3.jar">
         <export name="*"/>
      </library>
      <library name="eclipseTools.jar">
         <export name="*"/>
      </library>
      <library name="tools/lib/qdox-1.6.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/antlr-2.7.6.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/commons-lang-2.2.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/jaxen-1.1.1.jar">
         <export name="*"/>
      </library>
      <library name="shared/lib/ojdbc14-10.2.0.3.jar">
         <export name="*"/>
      </library>
   </runtime>
   <requires>
      <import plugin="org.eclipse.ui"/>
      <import plugin="org.eclipse.core.resources"/>
      <import plugin="org.eclipse.core.runtime"/>
      <import plugin="org.eclipse.jdt.ui"/>
      <import plugin="org.eclipse.jdt.core"/>
   </requires>
   <extension
         point="org.eclipse.ui.preferencePages">
      <page
            class="com.example.eclipse.preferences.SPLPreferencePage"
            name="Example Preferences"
            id="com.example"/>
      <page
            category="com.example"
            class="com.example.eclipse.database.preferences.PreferencePage"
            id="com.example.eclipseTools.database.preferences.PreferencePage"
            name="Database Connection Preferences"/>
   </extension>
   <extension
         point="org.eclipse.ui.propertyPages">
      <page
            class="com.example.eclipse.properties.ProjectPropertyPage"
            id="com.example.eclipse.properties.ProjectPropertyPage"
            name="Example Database Properties"
            objectClass="org.eclipse.jdt.core.IJavaProject"/>
   </extension>
   <!-- This is a Comment -->
   </plugin>

Posted in Uncategorized | Tagged: , , , , | 3 Comments »

UI Designers dilemma !!!

Posted by sureshkrishna on December 7, 2007

The Eclipse Visual Editor, the NetBeans Matisse project and Instantiations SWT Designer are the wonderful WYSIWYG editors I have used till now. In the few projects I have done, though, I have always hand-coded the UIs. And of course I have done all my UI development on Eclipse, so all the perspectives, views, editors, wizards, preference pages and property pages are hand-coded and I am quite comfortable with that. I do agree that hand-coding probably takes me more time, but I feel “personally” satisfied.

When I discuss this with a few of my colleagues, there are arguments for and against UI designers. For me and many others, layouts and adjusting the controls on the screen are the challenge. I do spend a lot of time adjusting the controls in a layout.

Why they like UI Designers…

  • UI can be built very, very fast, without knowing what is in the code
  • With advanced UI designers, it is easy to adjust/auto-adjust layouts
  • The easy-to-use drag-and-drop paradigm makes it quick to visualize the UI
  • Properties are set in the palette and reflected dynamically in the UI preview
  • The control and widget hierarchy is what many appreciate: I know precisely which controls are in the UI

Why they want to handcode…

  • I can code. I am a developer. I am used to it.
  • The code generated by UI designers is not optimized. I want my code to be optimized.
  • Many a time, the generated code is not readable enough to customize.
  • It might make me less creative and leave me insecure, as I do not know what is happening at the code level.

I am sure many of us share some of the above views, and you may well have great experience building UIs. It would be really interesting to know whether there are truly more developers who would use UI designers. For now I will stick to hand-coding. Do share your experiences with UI designers…

Posted in Eclipse, Java, Net Beans 6.0, SWT | Tagged: , , , | 7 Comments »

Application Performance : Part II

Posted by sureshkrishna on October 28, 2007

I have been dealing with many Java applications for years, and in the recent past I have been finding all the areas that affect performance. I have been reading a lot of books and articles, and I thought a summary would help everyone. In the past 2 months I have been reading the book Java Platform Performance. Thanks to the authors; they have really given valuable information in this book. As always, “It is not enough to read the book; one has to consciously write applications that take care of memory management and performance”.

Many developers and IT folks complain about the huge footprint of applications, and sometimes it is difficult to know what causes it. Applications become memory hogs, and instead of looking into the root cause we tweak JVM settings, increase the virtual memory, increase the PermGen size, or customize class loading. I am sure any of you would agree that we do “all sorts of things” for application performance. I have written some notes on Garbage Collection (which is Application Performance – Part I) in my previous blog. The current “notes” concentrate on the causes of the RAM footprint and the different aspects we need to be aware of.

The memory footprint of a program is tricky to determine. Many developers look into the Task Manager to see the RAM usage of the application. This definitely gives an idea of how much memory your application requires, and when this memory increases over a period of time, we suspect memory leaks and take corrective actions.

Programmatic Memory Usage : Certain information can be derived from the java.lang.Runtime class, which can report on the heap size of the JVM. Two methods, Runtime.totalMemory() and Runtime.freeMemory(), give the size information (in bytes) that many of us want. Heap memory can only give the size of the objects, but the actual size of the application is a combination of the objects, classes, threads, native data structures and native code.

App Runtime Size = function of (Objects + Classes + Threads + Native Data Structures + Native Code)
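The Runtime readings above can be sketched as follows (my own example, not from the book):

```java
public class HeapSnapshot {

    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the heap currently reserved by the JVM;
        // freeMemory() is the unused portion of that reserved heap.
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("used heap (bytes): " + usedHeapBytes());
        // Note: this covers objects on the heap only; classes, threads and
        // native code also contribute to the real process footprint.
    }
}
```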

Depending on the OS, the JVM and the application, the actual memory consumption changes, so any one of the above parameters could be the number one memory consumer for a particular application. In general, memory consumption depends on the following items (but is not limited to them).

  • usage of native code
  • usage of the Java core libraries
  • the bulk of the frameworks used in the application
  • the number of classes loaded versus the objects used

Most developers have a great deal of control over the objects and their sizes. It helps if the developer knows the approximate size contributed by the objects at run time, to see where the optimization opportunity is; i.e. if object memory is only a small % of the total size, then perhaps we should concentrate on the classes or native libraries instead.

When classes are loaded into memory, there are a few more dependent entities that contribute to the RAM footprint. Bytecode is the intermediate format that a Java class file is compiled to, and it must be loaded into RAM; all the related contents are parsed, and reflective data structures are created for methods and fields. A constant pool is defined for every class; e.g. all the String literals are present in this constant pool, along with all the class, method and field names. Threads are another important item that can cause a large memory footprint; it is necessary to see what kind of computation is done in a thread and what data structures it uses. Many UI-level libraries/frameworks like SWT and AWT depend on some sort of native libraries, and it is difficult to know which classes of these frameworks directly access them.

Knowing some of the entities that increase the memory footprint will help many developers. If we cannot completely avoid large memory footprints, we can at least be aware of them and work towards conscious usage of resources.

Posted in Java | Tagged: , , , , | 2 Comments »

Do you care about Garbage Collection ?

Posted by sureshkrishna on October 23, 2007

Recently there have been a few instances where our RCP application crashed with an OutOfMemory error, and it took a lot of time to find the reasons behind the crash. An easy solution was to increase the PermGen from 64mb (the default) to 256mb (recommended). Well, this works for the application and I am happy. Since then, I have explored a few books and articles to look into how memory allocation, object creation and garbage collection can affect an application and its memory needs.

Garbage Collection is often publicized as “the JVM does automatic GC”. But in reality this is not as cool as it sounds. Developers should not completely rely on the JVM to take care of their application's memory needs and garbage collection. A solid understanding of how GC works is essential to write robust, high-performance applications.

It is necessary to understand the complete life-cycle of an object and how it moves through its states, from declaration until garbage collection. We will look into each of these states in detail.

  • Created
  • In Use
  • Invisible
  • Unreachable
  • Collected
  • Finalized
  • Deallocated

Created: Creating an object makes many things happen: space is allocated for the object; the object's constructor is called; if there is a superclass, its constructor is called; and instance variables are initialized. In the end, it is important to realize that object construction does take time, and this depends on the JVM implementation.

In Use : Once an object has a strong reference, it is said to be In Use. It is normal for an object to stay in the In Use state relatively longer than in any other state.

  1. public class InUseClass {
  2.     static List someList = new ArrayList();
  3.     static void doSomething() {
  4.         Customer customer = new Customer();
  5.         someList.add(customer);
  6.     }
  7.     public static void main(String[] args) {
  8.         doSomething();
  9.     }
  10. }

In the above code you can see that while doSomething() is executing, there are two strong references to the Customer object: the local variable on line 4 and the list declared on line 2. Once doSomething() returns, only the reference held by the list (line 2) remains, which keeps the Customer object strongly reachable. You can imagine that many objects remain strongly referenced like this long after they have been used.
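A consequence of this is that references parked in long-lived collections must be released explicitly once the objects are no longer needed. A minimal sketch (a hypothetical Customer class is included so it compiles standalone):

```java
import java.util.ArrayList;
import java.util.List;

// Objects held by a long-lived collection stay strongly reachable
// until they are removed, so release them explicitly.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

public class ReleaseReferences {
    static final List<Customer> someList = new ArrayList<>();

    public static void main(String[] args) {
        someList.add(new Customer("A"));
        System.out.println(someList.size()); // 1 - still strongly reachable
        someList.clear();                    // drop the last strong reference
        System.out.println(someList.size()); // 0 - now eligible for collection
    }
}
```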

Invisible: When an object is no longer strongly referenced by the program but a reference to it still exists, it is said to be in the Invisible state. Not every object goes through this state. In the following code snippet you can see that the strong references go out of scope at the end of each loop iteration, yet the stack frame may still hold them until the something() method actually returns.

public void something() {
    for (int i = 0; i < 5; i++) {
        Customer customer = new Customer();
        customer.printName();
    }
}

This scenario is dangerous and is one of the main causes of memory leaks. In this code block the references become invisible, and it is recommended to set such references to null explicitly.
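The explicit nulling recommended above looks like this in practice. This is only a sketch with illustrative names; whether it actually helps depends on how long the enclosing stack frame stays alive after the reference is last used.

```java
// Dropping a stack-frame reference explicitly so the object does not
// linger invisibly while the rest of the method keeps running.
public class NullOutReference {
    public static void main(String[] args) {
        StringBuilder buffer = new StringBuilder("large temporary data");
        System.out.println("length: " + buffer.length());

        buffer = null; // the object is now eligible for collection even if
                       // this method continues running for a long time
        System.out.println("cleared: " + (buffer == null));
    }
}
```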

Unreachable: An object is in the Unreachable state when there are no strong references to it; it will then be marked for collection. Of course, "marked for collection" does not mean that the JVM performs the GC immediately. The JVM is free to delay collection until the application actually needs the memory.
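Unreachability can be observed indirectly with a WeakReference, which does not keep its referent alive: the collector clears it once no strong references remain. A sketch, keeping in mind that System.gc() is only a hint, hence the retry loop:

```java
import java.lang.ref.WeakReference;

// Observes the gap between "unreachable" and "collected":
// the WeakReference is cleared only when the GC actually runs.
public class UnreachableDemo {
    public static void main(String[] args) throws InterruptedException {
        Object payload = new Object();
        WeakReference<Object> ref = new WeakReference<>(payload);
        System.out.println("reachable: " + (ref.get() != null)); // true

        payload = null; // now unreachable, but not necessarily collected yet
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();      // a hint, not a command
            Thread.sleep(10);
        }
        System.out.println("after GC: " + ref.get()); // typically null
    }
}
```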

Collected: When an object is unreachable, the JVM readies it for finalization. If the instance has a finalize method, it is marked for finalization; if it does not, it moves directly to the Finalized state. It is very important to note that when an instance has a finalize method, the deallocation process is delayed.

Finalized: Once the finalize method has run and the object is still unreachable, it is in the Finalized state, waiting for deallocation. An object's life is certainly prolonged when a finalize method is attached to it, so it is not recommended to add a finalize method to short-lived classes.
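The extra hop that a finalize method introduces can be seen with a small experiment. This is only a sketch: finalization timing is JVM-dependent, System.gc() and System.runFinalization() are hints, and finalize() has been deprecated in modern Java.

```java
// Shows that finalize() runs some time AFTER the object becomes
// unreachable, on the finalizer thread, delaying deallocation.
public class FinalizeDemo {
    static volatile boolean finalized = false;

    @Override
    protected void finalize() {
        finalized = true; // the object reaches the Finalized state only now
    }

    public static void main(String[] args) throws InterruptedException {
        new FinalizeDemo(); // immediately unreachable, but not yet finalized
        for (int i = 0; i < 50 && !finalized; i++) {
            System.gc();              // request collection (a hint)
            System.runFinalization(); // request pending finalizers to run
            Thread.sleep(10);
        }
        System.out.println("finalize ran: " + finalized);
    }
}
```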

Deallocated: After all the above steps, if the object is still unreachable, it is ready for deallocation. Exactly when the JVM deallocates the memory is not certain; it depends on the implementation of the GC algorithms.

In the end, GC is not really what many of us perceive it to be. Extra care must be taken to write applications carefully, with a special eye on object references, freeing memory whenever possible.

“Nothing comes for free and so is GC”.

Posted in GC, Java | Tagged: , , | 5 Comments »

Persist your EMF Objects with Teneo

Posted by sureshkrishna on October 9, 2007

JMatter, NakedObjects and EMF are a few technologies that I have been interested in recently. The fact that all of them let you create a model, generate the UI code, and finally persist the state in object-relational databases is what excites me about them. Without the help of these frameworks, business use cases that need the domain model/metadata persisted in a database require a lot of hand-coding and often long months of implementation. JMatter and NakedObjects generate the OR mapping from the domain model directly to database tables with the help of Hibernate mappings.

For EMF and Eclipse lovers it would be difficult to move their applications to JMatter or NakedObjects. One reason I do not want to do it right now is that these frameworks are yet to support SWT and to interoperate with EMF models. For object-relational persistence of metadata and domain models, I found the Teneo project from Eclipse to be quite promising.

What is Teneo: Teneo is an Eclipse sub-project under EMFT which aims at providing a database persistence solution for EMF using Hibernate or JPOX/JDO 2.0. It supports automatic creation of EMF-to-relational mappings and the related database schemas. EMF objects can be queried and stored using query languages like HQL and JDOQL.

Why use Teneo

  • Teneo allows you to start with your model (UML or XML Schema) and automatically generate the Java source code and object-relational mappings.
  • Teneo takes over much (or even all) of the manual work of creating relational mapping schemes.
  • Teneo supports JPA annotations at the model level; this keeps your Java code clean of persistence-specific constructs.
  • The integration with EMF allows you to generate Eclipse RCP editors which persist automatically to a relational database.

And More: Teneo automatically maps the EMF model to a Hibernate OR mapping. The automatic mapping can be done in-memory when your application initializes, or a separate Hibernate mapping file can be generated. The generated hbm file can be adapted manually and used in the runtime layer. To handle EMF resource management together with the Hibernate mappings, a special runtime layer called the EMF-Hibernate runtime layer was developed. Teneo takes care of instantiating EMF objects and of the getters/setters for the EFeatures of EMF objects from the database.

This project is definitely a great relief to EMF and Eclipse developers. Now everyone can persist their models in databases with a clean Hibernate implementation.


Posted in Eclipse, EMF, Java, JMatter, Naked Objects, Plug-ins, Plugin | Tagged: , , , , , , | 4 Comments »

 