Sunday, December 9, 2012

java.sql.SQLException: No suitable driver Exception

Just a quick note, if you are seeing

 java.sql.SQLException: No suitable driver ...
although you are pretty sure you have :-
  • Got the jdbc connection information correct
    • db username
    • db password
    • db url
    • db driver class
  • Got the driver jar file in the classpath of your webapp
  • You are using connection pooling such as c3p0 or commons-pool
It might be time to try sticking that JDBC driver jar file into the servlet container's global classpath. This approach works well in my case. I haven't dug into the connection pooling code, but this is my guess: when the connection pool starts up, it tries to register the JDBC driver using perhaps
  Class.forName("... jdbc driver....");
and for some reason seems to load it through the system classloader. Hence, if our JDBC driver jar file is at the webapp level, it gets missed, causing this exception.
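A quick way to check what's actually visible: the sketch below (the class name is made up) lists the drivers that DriverManager has registered from the current classloader. If your driver isn't in the list, "No suitable driver" is exactly the error you'll get.

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Collections;

public class DriverCheck {

    public static void main(String[] args) {
        // Print every JDBC driver DriverManager can see from this
        // classloader; an empty list is consistent with the
        // "No suitable driver" exception.
        for (Driver driver : Collections.list(DriverManager.getDrivers())) {
            System.out.println(driver.getClass().getName());
        }

        System.out.println("driver check done");
    }
}
```

Run this from inside the webapp (e.g. a scratch servlet) to see what that classloader can resolve.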

Hopefully, this helps whoever hits this issue.


Wednesday, November 21, 2012

Sitemesh, my favourite web development companion

I've always liked Sitemesh, even in those days before I was introduced to WebWork; I guess after that I just liked it even more. The funny thing is that not many people seem to be using Sitemesh out there. More people are familiar with Tiles, thanks to Struts, I guess.
This is what I know about Sitemesh and what works for me when setting it up.

#1 Declare sitemesh filters in web.xml

<filter>
    <filter-name>sitemesh</filter-name>
    <filter-class>com.opensymphony.module.sitemesh.filter.PageFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>sitemesh</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

#2 Add sitemesh.xml and decorators.xml in the '/WEB-INF/' directory

sitemesh.xml by default exists in the sitemesh jar file, and that copy will be used if one cannot be found in the /WEB-INF directory. It's really useful to have a customizable copy of sitemesh.xml lying in your /WEB-INF directory so you can tweak it and perhaps add custom parsers, mappers etc. The following is a typical sitemesh.xml file
  <sitemesh>
    <property name="decorators-file" value="/WEB-INF/decorators.xml"/>
    <excludes file="${decorators-file}"/>

    <page-parsers>
      <parser class="com.opensymphony.module.sitemesh.parser.HTMLPageParser" content-type="text/html"/>
    </page-parsers>

    <decorator-mappers>
      <mapper class="com.opensymphony.module.sitemesh.mapper.PageDecoratorMapper">
        <param name="property.1" value="meta.decorator" />
        <param name="property.2" value="decorator" />
      </mapper>
      <mapper class="com.opensymphony.module.sitemesh.mapper.FrameSetDecoratorMapper"/>
      <mapper class="com.opensymphony.module.sitemesh.mapper.PrintableDecoratorMapper">
        <param name="decorator" value="printable" />
        <param name="parameter.name" value="printable" />
        <param name="parameter.value" value="true" />
      </mapper>
      <mapper class="com.opensymphony.module.sitemesh.mapper.FileDecoratorMapper"/>
      <mapper class="com.opensymphony.module.sitemesh.mapper.ConfigDecoratorMapper">
        <param name="config" value="${decorators-file}" />
      </mapper>
    </decorator-mappers>
  </sitemesh>
A couple of important things :-


<parser> tag

These are the Sitemesh parsers that parse HTML into Sitemesh's internal Page object, to be used by mappers, decorators etc. We'd rarely ever need to change this.

<property> tag

These define properties that you can later refer to through the ${...} syntax.

<excludes> tag

Excludes files from being parsed. We want to exclude ${decorators-file}, which points to /WEB-INF/decorators.xml, because we have a mapper (ConfigDecoratorMapper) that specifically parses this file later.

<mapper> tag

Mappers map an incoming request to a decorator, which decorates the response ultimately returned to the browser. They are executed in order, and ConfigDecoratorMapper should always be last because it is a 'catch all' mapper. The following is a typical copy of decorators.xml
  <decorators>
    <excludes>
      <pattern>/bootstrap/*</pattern>
    </excludes>

    <decorator name="error_page" page="/WEB-INF/decorators/error_page_layout.jsp">
      <pattern>/</pattern>
    </decorator>

    <decorator name="information_panel" page="/WEB-INF/decorators/information_panel_layout.jsp"/>
  </decorators>

With the above example
  • '/bootstrap/*' will not be decorated
  • '/' will be decorated by '/WEB-INF/decorators/error_page_layout.jsp'
In the layout jsp, we could use the following tags :-
  <%@ taglib prefix="sitemesh-decorator" uri="http://www.opensymphony.com/sitemesh/decorator" %>
  <%@ taglib prefix="sitemesh-page" uri="http://www.opensymphony.com/sitemesh/page" %>

  <sitemesh-decorator:title />
  <sitemesh-decorator:head />
  <sitemesh-decorator:body />
  <sitemesh-decorator:getProperty property="..." default="[default value if no property found]" writeEntireProperty="[yes/no]" />
  <sitemesh-decorator:usePage id="..." />

  <sitemesh-page:applyDecorator name="..." title="..." page="..."/>

<sitemesh-decorator:title/> tag

Gets the content inside the html <title> tag of our original html page.

<sitemesh-decorator:head/> tag

Sticks in the <head> tag content of our original html page (only the content, not the enclosing tags).

<sitemesh-decorator:body/> tag

Stick in the <body> content of our original html page, everything in it.

<sitemesh-decorator:usePage id="..." />

Sticks the Sitemesh Page object into request scope under the variable name given by the 'id' attribute.

<sitemesh-page:applyDecorator page="..."/>

Apply the decorator given by 'page' attribute against the body of this tag. In other words, the body of this tag is going to be decorated by the decorator given in page attribute.
   Apply the decorator named 'myDecorator' on the body of this tag.
 <sitemesh-page:applyDecorator name="myDecorator">
     ...
 </sitemesh-page:applyDecorator>

     Apply the decorator located at '/WEB-INF/decorators/myDecorator' on the body
     of this tag, overriding the title (if one exists) in the body of this tag.
  <sitemesh-page:applyDecorator page="/WEB-INF/decorators/myDecorator" title="...">
      ...
  </sitemesh-page:applyDecorator>

<sitemesh-decorator:getProperty /> tag

Sticks in bits and pieces of the parsed original html page as properties:
  • <html> tag attributes are stuck in as properties without any prefix
  • <title> tag content is stuck in as 'title'
  • <body> tag attributes are stuck in with the prefix 'body'
  • <meta> tag 'content' attributes are stuck in with the prefix 'meta'
Eg. with :-
<html myAttribute="test">
  <head>
    <title>my title</title>
    <meta name="meta-name" content="meta-content" />
  </head>
  <body onload="alert('ready');">
    ...
  </body>
</html>

  This gives us "test" from the <html> tag's attribute
<sitemesh-decorator:getProperty property="myAttribute" />

  This gives us 'my title' from the content of the <title> tag
<sitemesh-decorator:getProperty property="title" />

  This gives us 'meta-content' from the <meta> tag with name 'meta-name'
<sitemesh-decorator:getProperty property="meta.meta-name" />

  This gives us "alert('ready');" from the onload attribute of the <body> tag
<sitemesh-decorator:getProperty property="body.onload" />

Ciao ^_^

Tuesday, November 6, 2012

Do we need equals(...) and hashCode(...) overrides in JPA domain objects?

The idea of overriding equals(...) and hashCode(...) is that an object can be uniquely identified through some more meaningful semantics instead of the default semantics, which basically say that "objects of the same class are equal only if they are the same instance in memory".
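As a minimal illustration of those default semantics (the class and field names here are made up):

```java
public class DefaultEqualityDemo {

    static class MyDomainObject {
        private final String name;

        MyDomainObject(String name) {
            this.name = name;
        }
    }

    public static void main(String[] args) {
        MyDomainObject a = new MyDomainObject("jack");
        MyDomainObject b = new MyDomainObject("jack");

        // Same state, different instances: not equal by default
        System.out.println(a.equals(b)); // false

        // An instance is always equal to itself
        System.out.println(a.equals(a)); // true
    }
}
```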

With JPA implementations, the concerns we have when leaving equals(...) and hashCode(...) as defaults are :-

  1. Composite primary Key will not work
  2. Issues with detaching and merging of domain objects
  3. Multiple copies of objects that are semantically similar can exist in our collection object
  4. entityManager.persist(...)

#1 Composite Key will not work

If we are not using composite keys at all, then we should be fine. Whether a composite key is a good thing to use, compared to, say, a running number generated off a sequence by the database itself, is another topic in itself. I suppose if we are dealing with an existing database with tons of data already populated and a natural composite key already in place, we'll just have to live with it. But if it's a green field project, do we really want to use natural keys as our composite key?
  • Natural composite primary key takes up more indexing space compared to just incremental number as primary key
  • The database might take more time when storing a natural composite key, assuming the index is a BTREE, since it needs to find the slot to stick the composite key into. Incremental numbers are just more predictable to the db in this case, and I guess most dbs will be coded to take advantage of this.
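For reference, if we did go with a composite key, the id class (what JPA would have us mark as @Embeddable or use with @IdClass) is exactly where equals(...) and hashCode(...) become mandatory. A sketch, with made-up field names and without the JPA annotations so it stays self-contained:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Sketch of a composite-key class. JPA requires id classes to override
// equals and hashCode so the provider can identify entities by key.
public class OrderLineId {

    private final long orderId;
    private final int lineNo;

    public OrderLineId(long orderId, int lineNo) {
        this.orderId = orderId;
        this.lineNo = lineNo;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }

        if (!(o instanceof OrderLineId)) {
            return false;
        }

        OrderLineId that = (OrderLineId)o;

        return (orderId == that.orderId) && (lineNo == that.lineNo);
    }

    @Override
    public int hashCode() {
        return Objects.hash(orderId, lineNo);
    }

    public static void main(String[] args) {
        Set<OrderLineId> ids = new HashSet<OrderLineId>();

        ids.add(new OrderLineId(1, 1));
        ids.add(new OrderLineId(1, 1)); // duplicate key collapses

        System.out.println(ids.size()); // 1
    }
}
```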

#2 Issues with detaching and merging

This is a major concern. I guess the question is: do we really need to use the merging feature of JPA? The following are some bits we want to take into account when doing a merge.
  • lazy-loaded relationships aren't going to be merged, even if CascadeType is MERGE, unless they are triggered before detach
  • merging across a relationship that is being removed will cause an exception
  • merging across a relationship that doesn't exist in the persistence context will have undefined consequences, except if the CascadeType across that relationship is MERGE
  • accidentally 'null'ing out a detached non-lazily-loaded (or a triggered lazily-loaded) relationship will null out its counterpart in the persistence context when merging

It's much more convenient, in my humble opinion, to just do a DIY merge rather than relying on JPA's merging mechanism (this comes from a web developer's perspective). Say we have forms on web pages that upon submit get to a Spring controller, where the values from the form are populated into a command object. We could just do a DIY merge into the domain object we get from 'entityManager.find(...)'.

 @PersistenceContext
 private EntityManager em;

 public String submission(Command command, BindingResult bindingResult, Model model) {
      MyDomainObject domainObject = em.find(MyDomainObject.class, command.getId());

      // DIY merge: copy the submitted values from 'command' onto the
      // managed 'domainObject' here (accessor names are illustrative)
      ...
 }
But doesn't that mean I'll lose my optimistic locking check if I have a '@Version' property in my entity?

Unfortunately, I guess to a certain extent, yes. So we'd probably want to do that bit of logic ourselves.

#3 Multiple copies of objects that are semantically similar can exist in our collection object

If we do
   Set<MyDomainObject> set = new HashSet<>();
   set.add(new MyDomainObject("jack"));
   set.add(new MyDomainObject("jack"));
both will be treated as different entities, resulting in 2 elements in the set. However, this is arguably controllable in our application; surely we'll have some sort of validation, be it in the controller or the service, that restricts additions that don't make any business sense.

#4 entityManager.persist(...)

So if we have an auto-generated primary key eg.
  @Id
  @GeneratedValue
  private long id;
the id will only be valid after entityManager.persist(...) is invoked; before that it'd be zero (the default value for the long primitive type). Since we did not override equals(...) and hashCode(...) to base equality on 'id', we are safe to use our domain object after entityManager.persist(...) is called.

Just my 2 cents. If you have a natural business key for your domain objects that uniquely identifies them, feel free to override equals(...) and hashCode(...). If you decide not to, and you are OK with working around some restrictions, then you should be fine as well.


Thursday, August 23, 2012

Finding a file from a bunch of jar files

Just had a need to quickly find a file, eg. the super pom in Maven, which was suspected to be in one of the jar files located within Maven's lib directory ($MAVEN_HOME/lib).

The following command seems to work for me. It took me some time messing around with commands to get this, so I guess it would be beneficial to put it down here, both to remind myself and hopefully to help others googling for the same stuff.

Eg. to find the super pom located in one of the jars within Maven's lib directory we could do

find . -iname '*.jar' | xargs -n 1 jar -tvf | grep -i '\.xml'
This gives us something like
  1523 Thu Feb 24 12:34:04 EST 2011 META-INF/maven/org.sonatype.aether/aether-api/pom.xml
  3908 Mon Feb 28 18:28:26 EST 2011 META-INF/maven/org.apache.maven/maven-embedder/pom.xml
  2222 Mon Feb 28 18:27:48 EST 2011 META-INF/maven/org.apache.maven/maven-settings-builder/pom.xml
  1931 Mon Feb 28 18:28:12 EST 2011 META-INF/maven/org.apache.maven/maven-repository-metadata/pom.xml
  2248 Mon Feb 28 18:27:52 EST 2011 META-INF/maven/org.apache.maven/maven-model-builder/pom.xml
But it doesn't tell us which jar file each entry is in. The simplest fix I found was to add '--verbose' to xargs, which echoes each command before running it:
find . -iname '*.jar' | xargs --verbose -n 1 jar -tvf | grep -i '\.xml'
jar -tvf ./aether-api-1.11.jar 
  1523 Thu Feb 24 12:34:04 EST 2011 META-INF/maven/org.sonatype.aether/aether-api/pom.xml
jar -tvf ./maven-embedder-3.0.3.jar 
  3908 Mon Feb 28 18:28:26 EST 2011 META-INF/maven/org.apache.maven/maven-embedder/pom.xml
jar -tvf ./maven-settings-builder-3.0.3.jar 
jar -tvf ./maven-model-builder-3.0.3.jar 
 13206 Mon Feb 28 18:30:32 EST 2011 META-INF/plexus/components.xml
  4840 Mon Feb 28 18:30:30 EST 2011 org/apache/maven/model/pom-4.0.0.xml
  2248 Mon Feb 28 18:27:52 EST 2011 META-INF/maven/org.apache.maven/maven-model-builder/pom.xml
Now we can tell that Maven's super pom, pom-4.0.0.xml, is in the maven-model-builder-3.0.3.jar file, under the org/apache/maven/model/ directory.
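The same search can also be scripted in plain Java with java.util.jar. A sketch (the class name is made up, and the demo builds a throwaway jar in a temp directory so it has something to find):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class FindInJars {

    // Scan every *.jar under 'dir' for entries ending with 'suffix',
    // printing "<jar>: <entry>" (same idea as the find/xargs/grep pipe).
    static void scan(File dir, String suffix) throws Exception {
        File[] files = dir.listFiles();

        if (files == null) {
            return;
        }

        for (File f : files) {
            if (f.isDirectory()) {
                scan(f, suffix);
            }
            else if (f.getName().endsWith(".jar")) {
                try (JarFile jar = new JarFile(f)) {
                    Enumeration<JarEntry> entries = jar.entries();

                    while (entries.hasMoreElements()) {
                        JarEntry entry = entries.nextElement();

                        if (entry.getName().endsWith(suffix)) {
                            System.out.println(f.getName() + ": " + entry.getName());
                        }
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo setup: build a tiny jar containing a pom.xml entry
        File dir = new File(System.getProperty("java.io.tmpdir"), "findinjars-demo");
        dir.mkdirs();

        File jarFile = new File(dir, "demo.jar");

        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jarFile))) {
            out.putNextEntry(new JarEntry("META-INF/maven/demo/pom.xml"));
            out.write("<project/>".getBytes("UTF-8"));
            out.closeEntry();
        }

        scan(dir, "pom.xml");
    }
}
```

To use it for real, point scan(...) at $MAVEN_HOME/lib instead of the demo directory.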

Wednesday, May 30, 2012

GZip Content-Encoding and Chunked Transfer-Encoding with Liferay 6.0.x

Liferay has a GZipFilter which is turned on by default and gzips responses for performance reasons. Performance as in bandwidth savings and shorter download times for slow clients, eg. those on dial-up. This doesn't necessarily mean more CPU efficiency, as gzipping and the related processing take up CPU cycles.

GZipFilter is basically a javax.servlet.Filter that wraps up the HttpServletResponse with GZipResponse

  protected void processFilter(HttpServletRequest request,
                               HttpServletResponse response,
                               FilterChain filterChain) {

     if (isCompress(request) && !isInclude(request) &&
         BrowserSnifferUtil.acceptsGzip(request) &&
         !isAlreadyFiltered(request)) {

         GZipResponse gZipResponse = new GZipResponse(response);

         processFilter(GZipFilter.class, request, gZipResponse, filterChain);

         ...
     }
     ...
  }

GZipResponse is just a wrapper that overrides the OutputStream- and Writer-related methods so that it delegates the output streaming and writing to GZipStream (at least in Liferay 6.0.x it does).

    public class GZipResponse extends HttpServletResponseWrapper {

            public GZipResponse(HttpServletResponse response) {
                    super(response);

                    _response = response;
            }

            public void finishResponse() {
                    try {
                            if (_writer != null) {
                                    _writer.close();
                            }
                            else if (_stream != null) {
                                    _stream.close();
                            }
                    }
                    catch (IOException e) {
                    }
            }

            public void flushBuffer() throws IOException {
                    if (_stream != null) {
                            _stream.flush();
                    }
            }

            public ServletOutputStream getOutputStream() throws IOException {
                    if (_writer != null) {
                            throw new IllegalStateException();
                    }

                    if (_stream == null) {
                            _stream = _createOutputStream();
                    }

                    return _stream;
            }

            public PrintWriter getWriter() throws IOException {
                    if (_writer != null) {
                            return _writer;
                    }

                    if (_stream != null) {
                            throw new IllegalStateException();
                    }

                    _stream = _createOutputStream();

                    _writer = new UnsyncPrintWriter(new OutputStreamWriter(
                            //_stream, _res.getCharacterEncoding()));
                            _stream, StringPool.UTF8));

                    return _writer;
            }

            private ServletOutputStream _createOutputStream() throws IOException {
                    return new GZipStream(_response);
            }
    }

GZipStream upon close() just decides to write the content-length header to the response.

   public class GZipStream extends ServletOutputStream {

           public void close() throws IOException {
                   if (_closed) {
                           throw new IOException();
                   }

                   int contentLength = _unsyncByteArrayOutputStream.size();

                   _response.addHeader(HttpHeaders.CONTENT_ENCODING, _GZIP);

                   try {
                           _response.setContentLength(contentLength);
                   }
                   catch (IllegalStateException ise) {
                           // response may already be committed
                   }

                   _closed = true;
           }

           ...
   }
What this means is that if you want Content-Encoding to be 'gzip' and Transfer-Encoding to be 'chunked' in Liferay 6.0.x, the answer is: not possible, unless you disable GZipFilter and cook up one of your own. You might want to consider Tomcat's connector instead, which does 'gzip' content encoding as well as automatically switching the transfer-encoding to chunked when the buffer size is exceeded, if I've got my facts right. It's a bit disappointing, as I was hoping Liferay would handle this better. Chunked transfer-encoding is good when you have huge content that you don't want to buffer up, because buffering uses up too much unnecessary memory.
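For comparison, Tomcat's HTTP connector can gzip at the container level. A sketch of the server.xml attributes (values are illustrative; check the connector documentation for your Tomcat version, as attribute names have varied across releases):

```xml
<Connector port="8080" protocol="HTTP/1.1"
    compression="on"
    compressionMinSize="2048"
    compressableMimeType="text/html,text/xml,text/css,application/javascript"/>
```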

Friday, April 6, 2012

Must read article if you are doing anything serious using GIT

Git reset, explained in detail in a very easy to understand manner. Even if you think you know about git reset already, I'm pretty sure you'll be surprised by the subtle insights you get after reading this article by the author of Pro Git.

Friday, March 30, 2012

Sharing Liferay's Portal ClassLoader

It is possible to share Liferay Portal's ClassLoader with Liferay plugins (eg. portlet WARs and also independent servlets).

Say I have a servlet called Servlet1, packaged in a separate WAR, that I'd like to use Liferay Portal's classloader. I'd need to declare it as a PortalClassLoaderServlet through the following configuration in its web.xml
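Something along these lines (a sketch: the 'servlet-class' init-param name and the com.example.Servlet1 class are assumptions, so check PortalClassLoaderServlet's source for the exact parameter names in your Liferay version):

```xml
<servlet>
    <servlet-name>Servlet1</servlet-name>
    <servlet-class>com.liferay.portal.kernel.servlet.PortalClassLoaderServlet</servlet-class>
    <init-param>
        <param-name>servlet-class</param-name>
        <param-value>com.example.Servlet1</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>Servlet1</servlet-name>
    <url-pattern>/servlet1/*</url-pattern>
</servlet-mapping>
```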


With this declaration, and with Liferay deployed as a separate WAR file, Servlet1 will be registered when Liferay starts up and is aware of Liferay Portal's lifecycle: it will get initialized and destroyed (through the Servlet's init(ServletConfig) and destroy() methods respectively) when Liferay Portal starts up and shuts down. When a request is made to Servlet1, PortalClassLoaderServlet acts as a proxy that swaps the current request thread's context ClassLoader with the ClassLoader that Liferay Portal was started with, so you get all the classes that come with Liferay Portal.

Technically, I think it is possible to take this further and have PortalServlet itself loaded using the same classloader as Liferay Portal, through a similar web.xml configuration


But care must be taken to make sure the classes used also exist in the Liferay Portal WAR or in, say, Tomcat's shared library classpath. Messing with ClassLoaders is always hazardous and best avoided if possible. That's my 2 cents.

Friday, March 23, 2012

Liferay's 3 Servlet Musketeers (PortalActionServlet, PortalServlet and MainServlet)


MainServlet

This is the main servlet in Liferay. It handles all requests directed to the portal itself, mapped to '/c/*' in Liferay's ROOT web.xml. Together with the various FriendlyURLServlet mappings in web.xml, they handle almost all the happenings in Liferay. FriendlyURLServlet normally analyzes friendly URLs and does a redirect after regenerating them back into the complex form that Liferay understands, through PortalUtil.


PortalServlet

This is the servlet that gets autogenerated during deployment of a Portlet Plugin WAR file. It is really just a simple proxy pumping resources into your portlet. Check out PortletAutoDeployer to see how it is written into your Portlet Plugin WAR's web.xml during hot deployment. Remember that the whole Portlet framework is, after all, based on the Servlet specs; this servlet gives Liferay a way to push resources through to a specific portlet.


PortletActionServlet

In short, forget about this servlet; it is a legacy servlet as far as my understanding goes. Why? All it does is make sure a copy of PortletRequestProcessor (sounds familiar? yes, it's an extension of Struts's RequestProcessor) is available in the ServletContext under WebKeys.PORTLET_STRUTS_PROCESSOR. This is now (Liferay 6.1.x) taken care of in the MainServlet.checkPortletRequestProcessor method. Think of it this way: where does your portlet live? Inside a portal container, of course. Any request to your portlet has to go through the portal container first, and that means going through MainServlet, which will make sure a copy of PortletRequestProcessor is available before the request eventually hits your portlet. I humbly think this is a nicer way than having each Portlet WAR file declare a PortletActionServlet, because the following is how PortletActionServlet sticks a copy of PortletRequestProcessor into the ServletContext

                ServletContext servletContext = getServletContext();

                ModuleConfig moduleConfig = ...; // looked up Struts's way

                PortletRequestProcessor portletRequestProcessor =
                        PortletRequestProcessor.getInstance(this, moduleConfig);

                servletContext.setAttribute(
                        WebKeys.PORTLET_STRUTS_PROCESSOR, portletRequestProcessor);
It couples Struts's specific way of looking up ModuleConfig into its code, which arguably isn't going to change much in the near future anyway, but still ... I think it's tight coupling.

This is how it's done in MainServlet.checkPortletRequestProcessor(...)

                ServletContext servletContext = getServletContext();

                PortletRequestProcessor portletReqProcessor =
                        (PortletRequestProcessor)servletContext.getAttribute(
                                WebKeys.PORTLET_STRUTS_PROCESSOR);

                if (portletReqProcessor == null) {
                        ModuleConfig moduleConfig = getModuleConfig(request);

                        portletReqProcessor =
                              PortletRequestProcessor.getInstance(this, moduleConfig);

                        servletContext.setAttribute(
                              WebKeys.PORTLET_STRUTS_PROCESSOR, portletReqProcessor);
                }

which does it through the getModuleConfig(request) method, relying on Struts's ActionServlet to provide the implementation, hence abstracting it away from potential future changes in implementation.

These are really just my understandings from briefly digging into Liferay's source, occasionally through vi on a terminal over a glass of red or a coffee, in spare time over lunch breaks or weekends. If you disagree in any way, I'd love to be corrected.

Anyway, Liferay looks like a promising portal solution and I will definitely be exploring it further.

Monday, March 12, 2012

Ever wonder how Liferay keeps track of its WAR'ed portlets

Liferay is a portlet container. With the distribution that comes with Tomcat bundled in, it deploys its main portal into Tomcat's ROOT webapp by default. This main portal has all the bits and pieces to manage portlets deployed subsequently as WAR files, which, without much surprise, get deployed into separate webapps, each with its own application context defaulting to the name of the WAR file dumped into Liferay's deploy directory (conveniently located one level above the Tomcat home within the Liferay distribution). To manage those portlets living in different webapps, Liferay needs some way to tie each portlet's servlet context path to its ServletContext.

Ever wonder how Liferay does this?

I had the opportunity to dig around a bit in Liferay's code, and guess what, this is how Liferay does it. It has a ServletContextPool, which is basically a static factory singleton: a bunch of public static methods accessing a single instance of itself. The servlet corresponding to a particular portlet, when initialized, just uses those static methods to stick its context path and ServletContext into a map.
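The shape of it is roughly this (a simplified sketch, not Liferay's actual code; the real ServletContextPool maps context names to javax.servlet.ServletContext instances, but String values are used here to keep the example self-contained and runnable):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of a ServletContextPool-style static registry.
public class ServletContextPool {

    private static final Map<String, String> _pool =
        new ConcurrentHashMap<String, String>();

    public static void put(String servletContextName, String servletContext) {
        _pool.put(servletContextName, servletContext);
    }

    public static String get(String servletContextName) {
        return _pool.get(servletContextName);
    }

    public static void main(String[] args) {
        // Each portlet webapp registers itself on startup...
        ServletContextPool.put("my-portlet", "my-portlet's ServletContext");

        // ...and the portal looks it up later. This only works across
        // webapps when both sides load this class from the same
        // (shared) classloader, as the surrounding text explains.
        System.out.println(ServletContextPool.get("my-portlet"));
    }
}
```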

You might wonder how this is possible when each webapp is deployed with its own ClassLoader. That would mean webapp1 has its own copy of ServletContextPool, a separate instance from webapp2's: even though both are calling static methods, they'd still be operating on separate instances, because webapp1 and webapp2 have separate classloaders that do not allow them to see each other's classes.

But ClassLoaders are hierarchical, and in most app servers they eventually inherit from a global ClassLoader; Tomcat is no different. What Liferay does is package ServletContextPool into portal-service.jar, which is unsurprisingly located in Tomcat's lib/ext folder, which happens to be Tomcat's global classpath. Jars located there are visible to all web applications.

Interesting. Pretty simple and straightforward way of getting it done. Does this kind of implementation sound familiar? Ring any bells? I'm guessing this is somewhat similar to the way SLF4J (the Simple Logging Facade for Java) decides which logging implementation to hook up at runtime.

Friday, March 9, 2012

Ouch!! ... mvn liferay:build-service hurts ...

I've been trying to generate bunch of services using

    mvn liferay:build-service -P liferay-plugins-development
where the liferay-plugins-development profile contains property settings for my Liferay version and deployment directory, and was bumped with the following exception:
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.172s
[INFO] Finished at: Sat Mar 10 16:50:43 EST 2012
[INFO] Final Memory: 4M/15M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.liferay.maven.plugins:liferay-maven-plugin:6.1.0:build-service (default-cli) on project myliferay-portlet: Execution default-cli of goal com.liferay.maven.plugins:liferay-maven-plugin:6.1.0:build-service failed: An API incompatibility was encountered while executing com.liferay.maven.plugins:liferay-maven-plugin:6.1.0:build-service: java.lang.AbstractMethodError:
[ERROR] -----------------------------------------------------
[ERROR] realm =    plugin>com.liferay.maven.plugins:liferay-maven-plugin:6.1.0
[ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
[ERROR] urls[0] = file:/...../maven_repo/com/liferay/maven/plugins/liferay-maven-plugin/6.1.0/liferay-maven-plugin-6.1.0.jar
[ERROR] urls[1] = file:/...../maven_repo/com/liferay/portal/portal-impl/6.1.0/portal-impl-6.1.0.jar
I suspect this has something to do with a different version of SLF4J that is not compatible with the one that Maven resolved when running liferay-maven-plugin, so I went ahead and forced the SLF4J version that the plugin was using to 1.5.5.
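Forcing a dependency version onto a plugin can be done by declaring it inside the plugin element of the pom. A sketch (the SLF4J artifact coordinates here are an assumption; 1.5.5 is the version mentioned above):

```xml
<plugin>
    <groupId>com.liferay.maven.plugins</groupId>
    <artifactId>liferay-maven-plugin</artifactId>
    <version>6.1.0</version>
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.5.5</version>
        </dependency>
    </dependencies>
</plugin>
```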
That seems to fix the issue, but now I'm bumped with another NullPointerException, coming from the liferay-maven-plugin source itself. Logged a defect with the Liferay team (MAVEN-15). Will have to wait and see how this pans out.

liferay-maven-plugin turns out to be quite a pain in the neck for me ... :-( Guess I should be evaluating Liferay IDE instead to see if it suits me better. It does require the whole plugins project to be under a Liferay Plugins SDK directory, though.

Monday, March 5, 2012

Liferay's Maven Archetypes

I've been googling a bit for the Liferay Maven archetypes available (typically for 6.1.0, as pre-6.1.0 does not have any Maven archetypes in public repositories), and they aren't listed on just one web page. I'm putting them all down here in one place so it's easier for whoever is googling for this piece of information. The official Liferay blog lists all the available Liferay Maven archetypes, which are :-

  • liferay-ext-archetype
  • liferay-hook-archetype
  • liferay-layouttpl-archetype
  • liferay-portlet-archetype
  • liferay-theme-archetype
  • liferay-web-archetype
with the required properties set through my profile settings in ~/.m2/settings.xml


with the mvn command to invoke them as follows
mvn archetype:generate 
and the following mvn command to install the resulting artifacts using the 'liferay-plugins-development' profile
mvn install -P liferay-plugins-development

Tuesday, February 28, 2012

Bash commands that save my ass over and over again

The following Linux commands are really no-brainer stuff that every elementary course would have taught us, and they have saved my butt many times over.

   find . -iname <some file name>
To find files recursively

   find . -iname <some file name> | xargs
To find files recursively and list them out horizontally, separated by spaces, with escaping where necessary

  find . -iname <some file name> | xargs <some other bash commands>
To find files recursively and pass them as arguments to another command, such as rm -f (force delete) for example

  grep -ir 'some text' .
To search all files recursively for 'some text', starting from the current directory

We could do interesting stuff like

  find . -iname '*.xml' | xargs grep -i "testing"
To find all xml files recursively, starting from the current directory, that contain the text 'testing'.

Bash commands ROCK! If you are using a Windows machine, it's always worthwhile to install Cygwin; it comes with bash by default, so you can dig up code segments quickly rather than relying on the pathetic, dodgy search that comes with Windows. Just my 2 cents!