Thursday, July 9, 2009

Python's Lazy Evaluation on Exception Handling

I tried a simple piece of code like this:

[sourcecode lang="python"]
try:
    while True:
        print('yipee')
except KeyboardInterrupt:
    print('w00t')
[/sourcecode]

It runs fine in the command-line interface and keeps printing "yipee" until the user presses Ctrl-C. After that I made a typo, mistyping "KeyboardInterrupt" as "KeybaordInterrupt":

[sourcecode lang="python"]
try:
    while True:
        print('yipee')
except KeybaordInterrupt:
    print('w00t')
[/sourcecode]

To my surprise, it still ran fine and kept printing "yipee" to the screen. It's just that when I pressed Ctrl-C, Python threw this error:

[sourcecode lang="shell"]
Traceback (most recent call last):
File '<stdin>', line 3, in <module>
KeyboardInterrupt
[/sourcecode]

During handling of the above exception, another exception occurred:

[sourcecode lang="shell"]
Traceback (most recent call last):
File '<stdin>', line 4, in <module>
NameError: name 'KeybaordInterrupt' is not defined
[/sourcecode]

I looked at the Python 3.1 documentation, which says:
The try statement works as follows.

First, the try clause (the statement(s) between the try and except keywords) is executed.

  • If no exception occurs, the except clause is skipped and execution of the try statement is finished.

  • If an exception occurs during execution of the try clause, the rest of the clause is skipped. Then if its type matches the exception named after the except keyword, the except clause is executed, and then execution continues after the try statement.

  • If an exception occurs which does not match the exception named in the except clause, it is passed on to outer try statements; if no handler is found, it is an unhandled exception and execution stops with a message as shown above.


Exception handling in Python works in a lazy manner: the names in the except clauses are only resolved when an exception actually has to be handled!

Wednesday, July 1, 2009

Adding Embedded ActiveMQ to Spring Container (on Maven 2 Project)

Today we needed to add a queue to our application to better manage long-running transactions in our web application. The choice fell on using embedded ActiveMQ. We plan to use the latest version available (currently ActiveMQ 5.2.0). Previous versions of our application used our own proprietary code, which I believe was an example of reinventing the wheel. For the latest version of the application, we try to use open standards and open source technologies as much as possible. ActiveMQ is under the Apache Software Foundation umbrella, which suits us well.

First I added the dependency for the latest version of ActiveMQ (5.2.0):

[sourcecode lang="xml"]
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-core</artifactId>
    <version>5.2.0</version>
    <optional>false</optional>
</dependency>
[/sourcecode]

Now it's time to add something to the Spring application context configuration file.
As suggested by the documentation, I added a new amq namespace to my XML, pointing to the "http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd" schema location.

[sourcecode lang="xml"]
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
[/sourcecode]

I ran a JUnit test just to make sure that the new configuration worked well. There were a couple of errors, which turned out to be caused by a stale schema location. I figured it out by opening the URL http://activemq.apache.org/schema/core and seeing what's there. So I needed to modify it a bit: change "http://activemq.apache.org/schema/core/activemq-core.xsd" to "http://activemq.apache.org/schema/core/activemq-core-5.2.0.xsd".

So here is the correct version that works:

[sourcecode lang="xml"]
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.2.0.xsd">
[/sourcecode]

Now the next beast came in when I tried to use the new namespace by adding this inside the <beans/> element:

[sourcecode lang="xml"]
<amq:broker useJmx="false" persistent="false">
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0" />
    </amq:transportConnectors>
</amq:broker>
[/sourcecode]

It produced this error:

[sourcecode lang="shell"]

java.lang.ExceptionInInitializerError
at com.triquesta.tcm.core.services.MessagingServiceTest.init(MessagingServiceTest.java:17)
[...deleted...]
Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from class path resource [tcm_configs.xml]; nested exception is org.springframework.beans.FatalBeanException: NamespaceHandler class [org.apache.xbean.spring.context.v2.XBeanNamespaceHandler] for namespace [http://activemq.apache.org/schema/core] not found; nested exception is java.lang.ClassNotFoundException: org.apache.xbean.spring.context.v2.XBeanNamespaceHandler
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:420)
[...deleted...]
Caused by: org.springframework.beans.FatalBeanException: NamespaceHandler class [org.apache.xbean.spring.context.v2.XBeanNamespaceHandler] for namespace [http://activemq.apache.org/schema/core] not found; nested exception is java.lang.ClassNotFoundException: org.apache.xbean.spring.context.v2.XBeanNamespaceHandler
at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.resolve(DefaultNamespaceHandlerResolver.java:134)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1292)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1287)
[...deleted...]
Caused by: java.lang.ClassNotFoundException: org.apache.xbean.spring.context.v2.XBeanNamespaceHandler
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at org.springframework.util.ClassUtils.forName(ClassUtils.java:211)
at org.springframework.beans.factory.xml.DefaultNamespaceHandlerResolver.resolve(DefaultNamespaceHandlerResolver.java:123)
... 37 more
[/sourcecode]
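
The NamespaceHandler class that Spring cannot find lives in Apache XBean, which apparently was not on the test classpath. A likely fix (just a sketch, assuming the class comes from the org.apache.xbean:xbean-spring artifact; the version below is only illustrative) is to add that dependency to the pom as well:

[sourcecode lang="xml"]
<!-- Assumed fix: xbean-spring provides org.apache.xbean.spring.context.v2.XBeanNamespaceHandler,
     which Spring needs to resolve the amq namespace. The version here is illustrative. -->
<dependency>
    <groupId>org.apache.xbean</groupId>
    <artifactId>xbean-spring</artifactId>
    <version>3.4</version>
</dependency>
[/sourcecode]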

Tuesday, June 30, 2009

Eclipse Update Site for Subclipse has been Updated, Now Works with Galileo

Three days ago I tried installing Subclipse on Eclipse 3.5/Galileo without luck (the 1.4.x update site had previously worked with Eclipse 3.4 Ganymede); the update site just didn't work. Today I gave it another try.
The page at tigris.org has just been updated to incorporate Subversion 1.6.x; the new release uses the Subversion 1.6.x client and working copy format. It wasn't there three days ago.

Installing it is now mostly mechanical: open Help - Install New Software..., click the [Add...] button at the top right of the dialog, type "Subclipse" in the Name field and "http://subclipse.tigris.org/update_1.6.x" in the Location field, then click the [OK] button.

Now, the update site works as expected.

Saturday, June 27, 2009

Eclipse 3.5 Galileo has been Released

On June 25th, 2009, Eclipse released version 3.5, codenamed Galileo.

I'm exploring this new release a bit to see what's new.


http://www.eclipse.org/galileo


From my personal experience, I have always been excited to see how fast Eclipse has incorporated features that were missing in the pre-Europa days. Before the Europa release, I hated having to use it. But since Europa, it has become my IDE of choice (also thanks to its licensing model).


Today I downloaded the eclipse-jee-galileo-win32.exe file, as it is the most suitable for our team's needs. If everything seems acceptable, we will soon be moving our development environment over to make use of this new release.


What I am waiting to see is how far IAM has progressed. On Ganymede SR1, some of the links were still not consistent.


The most notable change is the application icon. It's no longer the shaded purple planet satellite with the double white equatorial stripes. The icon's color is less saturated now, and there is a halo around it that looks like a gear.


Another notable change is in the way Eclipse updates its features as OSGi components. This is one of the most confusing parts of the Eclipse IDE. Before Ganymede, you had to add your own update site, give the site a name and URL, and then Eclipse would load the component data and you selected the checkboxes you wanted. On Ganymede you didn't have to specify a name, just the URL. On Galileo, they now separate updating Eclipse itself from installing third-party components.


Eclipse has been notoriously problematic when updating components, where updating some Eclipse components could break others. You can't always just update a component and expect it to work. Most of the time you need to reinstall Eclipse and rebuild your setup from a fresh release; for example, when updating from Ganymede to Ganymede SR1, it is safer and faster to start from a clean Ganymede JEE SR1 installer and re-add your custom components one by one than to risk updating the existing Ganymede JEE installation.


It seems that Eclipse now separates updating Eclipse itself from other components: there are separate "Check for Updates" and "Install New Software..." menu items.


I tried to use Subversive, and it complained about a missing JavaHL. I haven't had any luck finding a way to add it. I tried to get Subclipse instead, but the update URL provided (http://subclipse.tigris.org/update_1.4.x), which was working before, still doesn't support Galileo.


I guess I have to wait some more before moving our development to the new version. Fallback for now.


(I'm still exploring it.)

Thursday, June 25, 2009

Float and Byte Array Conversion

One of my friends raised this question: how do you convert a sequence of bytes into a float in Java?

He or she bumped into this problem when porting a C application that parses UDP packets to Java. It sounds like Java lacks the flexibility of C, where you can simply reinterpret a block of memory through a pointer of any type (including a struct).

While in C/C++ it is legal to do this:
[sourcecode lang="c"]
#include <stdio.h>

char cbuf[] = { 0, 0, 96, 64 }; /* represents the single-precision float 3.5f (little-endian) */

int main(int argc, char **argv) {
    int i;
    for (i = 0; i < 4; i++) {
        printf("cbuf[%d] = %d\n", i, cbuf[i]);
    }

    float *fp = (float *) cbuf;
    printf("*fp = %3.3f\n\n", *fp);
    return 0;
}
[/sourcecode]

Which prints the result:
[sourcecode lang="shell"]
cbuf[0] = 0
cbuf[1] = 0
cbuf[2] = 96
cbuf[3] = 64
*fp = 3.500

[/sourcecode]

You just can't do that in Java (on the plus side, you won't get mystifying errors from memory stomping either).
NOTE: because we are using C, the result may differ from platform to platform due to the little-endian/big-endian (byte order) issue.

To achieve the conversion in Java, you have to assemble the value from the byte array (the equivalent of the char[] in the C example above) yourself.

I constructed a Java utility class named FloatByteArrayUtil:

[sourcecode lang="java"]
public class FloatByteArrayUtil {
    private static final int MASK = 0xff;

    /**
     * Convert a byte array (of size 4) to a float.
     * @param bytes the 4 bytes of the IEEE 754 bit pattern, most significant byte first
     * @return the corresponding float value
     */
    public static float byteArrayToFloat(byte[] bytes) {
        int bits = 0;
        int i = 0;
        for (int shifter = 3; shifter >= 0; shifter--) {
            bits |= ((int) bytes[i] & MASK) << (shifter * 8);
            i++;
        }
        return Float.intBitsToFloat(bits);
    }

    /**
     * Convert a float to a byte array (of size 4).
     * @param f the float value
     * @return the 4 bytes of the IEEE 754 bit pattern, most significant byte first
     */
    public static byte[] floatToByteArray(float f) {
        int i = Float.floatToRawIntBits(f);
        return intToByteArray(i);
    }

    /**
     * Convert an int to a byte array (of size 4).
     * @param param the int value
     * @return the 4 bytes, most significant byte first
     */
    public static byte[] intToByteArray(int param) {
        byte[] result = new byte[4];
        for (int i = 0; i < 4; i++) {
            int offset = (result.length - 1 - i) * 8;
            result[i] = (byte) ((param >>> offset) & MASK);
        }
        return result;
    }

    /**
     * Convert a byte array to a readable String such as "[64, 45, -8, 84]".
     * @param byteArray the bytes to print
     * @return the String representation
     */
    public static String byteArrayToString(byte[] byteArray) {
        if (byteArray == null) {
            throw new IllegalArgumentException("byteArray must not be null");
        }
        int arrayLen = byteArray.length;
        if (arrayLen == 0) {
            return "[]";
        }
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < arrayLen; i++) {
            sb.append(byteArray[i]);
            if (i == arrayLen - 1) {
                sb.append("]");
            } else {
                sb.append(", ");
            }
        }
        return sb.toString();
    }
}

[/sourcecode]

One good thing about Java is that we don't have the big-endian/little-endian issue here: Float.floatToRawIntBits and the shifts above behave the same on every platform.

Here is some sample code that shows how the utility works.

[sourcecode lang="java"]

public class SampleConversion {
    public static void main(String[] args) {
        float source = (float) Math.exp(1);
        System.out.println("source=" + source);
        byte[] second = FloatByteArrayUtil.floatToByteArray(source);
        System.out.println("temporary byte array=" + FloatByteArrayUtil.byteArrayToString(second));
        float third = FloatByteArrayUtil.byteArrayToFloat(second);
        System.out.println("result=" + third);
    }
}
[/sourcecode]

Which prints:
[sourcecode lang="shell"]
source=2.7182817
temporary byte array=[64, 45, -8, 84]
result=2.7182817
[/sourcecode]

Of course, you still need to take the little-endian/big-endian issue into consideration when parsing bytes passed from a non-Java platform or from native binaries (I think JNI handles the conversion seamlessly).
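
If you would rather not do the bit shifting by hand, java.nio.ByteBuffer (a standard JDK class) does the same job and lets you state the byte order explicitly. Here is a minimal sketch; the choice of byte order is only an illustration for data coming from a little-endian sender:

[sourcecode lang="java"]
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteBufferConversion {
    public static void main(String[] args) {
        // float -> 4 bytes, using the JVM's default big-endian order
        byte[] bytes = ByteBuffer.allocate(4).putFloat(3.5f).array();

        // 4 bytes -> float; switch to LITTLE_ENDIAN when the sender is a little-endian C program
        float f = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN).getFloat();
        System.out.println("f = " + f);
    }
}
[/sourcecode]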

Thursday, June 18, 2009

I run my Tomcat, instead I get this "ORACLE DATABASE 10g EXPRESS..." message

You try to run Apache Tomcat, but instead you get the message "ORACLE DATABASE 10g EXPRESS EDITION LICENSE AGREEMENT".
If you bump into this problem, it simply means that an Oracle Database 10g Express Edition (Oracle XE 10g) installation is sitting on the same port as Tomcat's default port (TCP port 8080). Since TCP doesn't allow more than one process to listen on the same port, Tomcat fails to start. You might not even be aware that Tomcat failed to start until you run into the message above.
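
A quick way to confirm what is actually listening on port 8080 (this is on Windows, where Oracle XE is typically installed as a service; the PID shown is just an example):

[sourcecode lang="shell"]
REM which process owns port 8080?
C:\> netstat -ano | findstr :8080
  TCP    0.0.0.0:8080    0.0.0.0:0    LISTENING    2412

REM look the PID up to see whether it is Tomcat or Oracle
C:\> tasklist /FI "PID eq 2412"
[/sourcecode]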

There are two cures for this symptom: the first is to let Oracle XE keep the port and move the Tomcat installation to another port; the second is to set Oracle XE to use another port.

To achieve the first one, you need to modify your $CATALINA_HOME/conf/server.xml file. Find the portion of the file that contains this snippet (assuming you are using the standard Tomcat installation):

[sourcecode lang="xml"]
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
[/sourcecode]

Change the port 8080 to something else, e.g. 8484.

[sourcecode lang="xml"]
<Connector port="8484" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
[/sourcecode]

By now you should be able to run your application.
Don't forget that it is now served on port 8484: if the application you deploy is someapp.war, the URL you should aim at is http://localhost:8484/someapp/ instead of http://localhost:8080/someapp/.

OK, now if you want to take the second cure, connect to your Oracle XE 10g instance using sqlplus. Replace mypassword with your SYS user password (as specified during the installation process). Note that the numbers 2, 3, 4 on the left side are generated by the sqlplus prompt (you don't need to type them in).

[sourcecode lang="sql"]
C:\> sqlplus sys/mypassword@xe as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 18 17:53:47 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production

SQL> begin
2 dbms_xdb.sethttpport('8484');
3 end;
4 /
[/sourcecode]

Oracle XE should reply:

[sourcecode lang="sql"]
PL/SQL procedure successfully completed.
[/sourcecode]

After that, check to ensure that the configuration has been changed properly:

[sourcecode lang="sql"]
SQL> select dbms_xdb.gethttpport as "HTTP-Port is " from dual;
[/sourcecode]

You should get a message like this:

[sourcecode lang="sql"]
HTTP-Port is
------------
8484
[/sourcecode]

Exit from sqlplus by typing "exit" at the prompt.

If everything is as expected, you can now start the Tomcat server and keep port 8080 for Tomcat. To access the Oracle XE 10g database web console, go to http://localhost:8484/apex

HTTP Error 404 on Your Java Web App?

Somebody on the JUG-ID mailing list was asking where an HTTP 404 error comes from. He had run the application successfully before, but now gets this error when hitting the page.

For me, I would run through this diagnostic process:

  1. Because the browser returned HTTP status 404, an application server is there serving HTTP requests; make sure the server answering is actually the Tomcat/JBoss/WebLogic/WebSphere you intend to reach. A typical mistake is trying to reach your application server but actually hitting an Oracle XE web console running on the same port, or something else entirely.

  2. Check whether your application has been deployed. The error could occur simply because your application has not actually been deployed to the application server. Use the Tomcat Manager for Apache Tomcat, the WebLogic console, the GlassFish console, or whatever console your application server provides.

  3. Check whether there was a failure in the deployment process. In this case you or your IDE deployed the application, yet some failure caused the deployment to stop. Typically, an error in your Spring context configuration or your Hibernate configuration will stop the application from deploying; it is also common for the application server to refuse to deploy when something is wrong in your web.xml file. Check what errors you get in your console and in the application server's log.

  4. If you find that the application has been deployed successfully, check whether the URL is actually available: check whether your web application uses a filter (look in the web.xml file), whether the JSP file exists (if the URL hits a JSP directly), and whether the action request is configured properly (in an action-based MVC framework such as Struts 1.x, WebWork, Struts 2, Spring MVC, etc.). See the web.xml sketch after this list for the kind of mapping to look for.

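As an illustration of point 4, here is a minimal, hypothetical web.xml mapping to compare your own against; the servlet name, class, and URL pattern are made up for the example:

[sourcecode lang="xml"]
<!-- Hypothetical example: a 404 often just means that no servlet (or filter) mapping
     matches the URL you typed. -->
<servlet>
    <servlet-name>someServlet</servlet-name>
    <servlet-class>com.example.SomeServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>someServlet</servlet-name>
    <!-- requests to http://localhost:8080/someapp/do/* reach this servlet; other paths may 404 -->
    <url-pattern>/do/*</url-pattern>
</servlet-mapping>
[/sourcecode]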

Better luck next time.

Wednesday, June 17, 2009

Syntax Highlighting in Wordpress.com blog

I have to admit that I am a late adopter. While my peers in the JUG community have long been using a syntax highlighter in their blog posts, I have only just discovered how to do it.

To do it, you wrap your snippet in the [sourcecode lang="<language>"] tag:

[sourcecode language="xml"]
<menu>
    <item>burger</item>
</menu>
[/sourcecode]

The code above uses lang="xml".

For Java code snippets, you can highlight them just as easily using lang="java" with the same tag.
[sourcecode language="java"]
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
[/sourcecode]

I am really excited about how well it works. I think Alex Gorbatchev has done a great job on SyntaxHighlighter!

Tuesday, June 16, 2009

I have moved most of my blogging activity to this site:

Please go there for the more vibrant activity.

Friday, June 12, 2009

Spring Framework 2.5.6 Security Release (2.5.6.SEC01)

Today, when I wanted to add spring-context-support, I noticed that there is a newer version in the Maven 2 repository. The version number is a bit unusual: "2.5.6.SEC01". I googled a bit and found the announcement made by SpringSource in their security advisory section.

http://www.springsource.com/securityadvisory

According to SpringSource, this ad-hoc release is due to a bug in JDK 5 (not in the Spring Framework itself) that causes the compilation of certain java.util.regex.Pattern instances to take an unusually long time. This is a potential problem, as it could be used in a Denial of Service (DoS) attack.

Friday, May 29, 2009

Artifactory Sent Error 401 to Maven 2 Build

Yesterday our build system had a problem. It was caused by our Maven 2 process receiving response code 401. I looked up the meaning of 401; Google showed me that it is an authentication error.

[sourcecode lang="shell"]
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error deploying artifact: Failed to transfer file: http://localhost:9999/artifactory/libs-snapshots-local/com/companyname/modulename/artifactname-1.2.0-20090528.051102-1.jar. Return code is: 401
[/sourcecode]

We use Apache Maven 2 for our standard build cycle. Some components depend on snapshots of other components. All software components are stored in our local Maven 2 repository, served by an Artifactory server running in the same application server as Hudson. The problem I mentioned was caused by Artifactory not accepting the authentication sent by the Maven client process.

I dug around some more and eventually found out that we were somehow missing the settings.xml file that used to be there. It lives in the $USER_HOME/.m2 folder. So I copied the file back and restarted the application server, and it worked again like magic.
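
For reference, the part of settings.xml that matters here is the <servers> section, which holds the credentials Maven sends when deploying. A minimal sketch, where the server id must match the repository id in the pom's distributionManagement, and the id, username, and password below are placeholders:

[sourcecode lang="xml"]
<settings>
    <servers>
        <!-- the id must match the <id> of the repository in distributionManagement;
             "libs-snapshots-local" and the credentials are placeholders -->
        <server>
            <id>libs-snapshots-local</id>
            <username>deployer</username>
            <password>secret</password>
        </server>
    </servers>
</settings>
[/sourcecode]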

Monday, March 23, 2009

Get 'This project needs to migrate WTP metadata' error on Eclipse/Maven 2

Today I took a look at an old source code base that I had not maintained for a long time. It is a mock of a web service: I wrote an XML-RPC client for a vendor-specific API, and I needed a mock servlet to test the code.
I decided to mavenize the project, as I now believe that's the best way to maintain my source code base. Because the servlet is a web application, I created the corresponding Maven source tree.

Oh, by the way, I'm using these:
Eclipse 3.4 (Ganymede)
Apache Maven 2.0.9
Eclipse IAM, a.k.a. the Maven Integration for Eclipse plugin, a.k.a. the Q4E plugin (I still prefer to call it Q4E)

After creating the source tree, I generated the Eclipse configuration files using Maven's Eclipse plugin.
I just needed to invoke this Maven goal:

[sourcecode lang="shell"]
mvn eclipse:eclipse
[/sourcecode]

Then I opened the project in Eclipse and found that it had caught chicken pox (parts of the project got the red rashes - errors). I tried to rebuild and got this message:

 This project needs to migrate WTP metadata


I knew I had missed something, but it had been two months since I last configured a Maven 2 web application, and I had totally forgotten the exact steps.

Later I realised that I needed to specify the WTP version, so here's the solution:

1) Clean up the generated project files by running the Maven Eclipse plugin's clean goal:

[sourcecode lang="shell"]
mvn eclipse:clean
[/sourcecode]

2) Create the Eclipse configuration, now specifying the WTP version

[sourcecode lang="shell"]
mvn eclipse:eclipse -Dwtpversion=2.0
[/sourcecode]

3) Open the Maven 2 project in the Eclipse IDE and refresh it so that changes made on the file system are reflected in the Eclipse project as well.

4) Remove the library references that contain M2_REPO

Because I'm using the Q4E plugin, and the Maven Eclipse plugin seems to be geared toward other plugins (Mevenide or m2eclipse), it generates classpath entries based on the M2_REPO variable, which you don't need when using Q4E.

Just right-click on the project, select Properties - Java Build Path, and remove the libraries that contain M2_REPO.

5) Right-click on the project and select Maven 2 - Use Maven Dependency Management

If it's already checked, uncheck it, refresh the project, and then check it again.
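
If you do this often, the WTP version can also be pinned in the pom so you don't have to pass -Dwtpversion on every invocation. A sketch, assuming the standard maven-eclipse-plugin (the plugin version shown is only an example):

[sourcecode lang="xml"]
<build>
    <plugins>
        <!-- lets "mvn eclipse:eclipse" generate WTP 2.0 metadata without the -Dwtpversion flag -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-eclipse-plugin</artifactId>
            <version>2.5.1</version>
            <configuration>
                <wtpversion>2.0</wtpversion>
            </configuration>
        </plugin>
    </plugins>
</build>
[/sourcecode]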

Wednesday, March 18, 2009

Hudson Build Stuck on the Build Queue

These past few days we have had a problem with our Hudson server. Builds seemed to get stuck in the build queue even though executors were available (I had set up two executors). I suspected some thread starvation was going on, or a lack of worker threads in the application server.
We use Apache Tomcat 6.0 to run the Hudson server. We also run Artifactory, for our local Maven 2 repository, on the same application server.
For the past few days the problem could be fixed temporarily by restarting the application server. I did that several times, but this seems to be a chronic problem that keeps recurring every 10-20 builds.

This morning I figured out something new: Hudson was complaining that the executors were frozen (both executors showed the status "Offline"). Taking a closer look, I discovered that there is an executor status page when you click on [Build executor status] at the bottom of the left menu.
From there I could see the problem clearly:
Hudson displays the names of the executor nodes (as it supports builds on multiple nodes), the response time of each node, free swap space, free disk space, and some other details.
On my machine, Hudson showed a warning that disk space was low (less than 1 GB). This is why Hudson froze the executors: it prevents itself from bringing down the machine by hogging the disk space.

Then I tried to find a solution. I know that on our Windows server Hudson runs as the default user, with its data in that user's home directory. So there was a good chance of fixing the problem by moving Hudson to another user, or by moving its data somewhere other than the C: drive.

The simplest working solution turned out to be the following (an example of the Tomcat service settings follows the list):

  • specify the Hudson home (other than the default $USER_HOME/.hudson) in the Apache Tomcat service configuration

  • specify the Artifactory home (other than the default $USER_HOME/.artifactory) in the Apache Tomcat service configuration

  • copy the contents of $USER_HOME/.hudson to the new Hudson home

  • copy the contents of $USER_HOME/.artifactory to the new Artifactory home

  • restart the Apache Tomcat service

Tuesday, March 17, 2009

Cannot install WebLogic Connector on Eclipse 3.4 Ganymede...

Today I tried to install the Eclipse connector for WebLogic 10.3.

The connector is the software component responsible for deploying, undeploying, getting the status of, and configuring the application server from within Eclipse.

I used the old way: 

1) Right click on the Servers view, New - Server

2) Click "Download additional server adapters".

Eclipse will traverse several sites to get the connectors for application servers, such as http://www.webtide.com/eclipse for the Jetty connector, and then display a list of servers (Geronimo, Jetty Generic Server Adaptor, Oracle WebLogic Server Tools, WASCE, etc.). 

3) I chose the "Oracle WebLogic Server Tools" (v1.1.0.200903091141).

4) Click the Next button

5) Choose "Accept" radio button, then click "Finish" button

Eclipse displayed: "Support for Oracle WebLogic Server Tools will now be downloaded and installed. You will be prompted to restart once the installation is complete."

6) Click "OK"

Then nothing happened. It looks like Eclipse tried to display some error dialog, but it disappeared very quickly.

I had used the previous version, the WebLogic Server 10.3 Tech Preview, on Eclipse 3.3 (Europa), and the online download of server connectors/adapters worked fine there. But now, on my Eclipse 3.4 Ganymede, it has a problem (at least on my laptop). Back then WebLogic Server was still with BEA Systems, prior to the acquisition by Oracle Corp.

I searched the Net to see whether other people have the same problem.
It turns out that I needed this software component:
Oracle Enterprise Pack for Eclipse 1.0

You need to update from this update site for Eclipse 3.4 (Ganymede) as follows:

1) On the menu, choose: Help - Software Updates...

2) Select Available Software tab, click "Add Site" button

3) Enter this url: "http://download.oracle.com/otn_software/oepe/ganymede"

For the previous version, Eclipse 3.3 (Europa), use this site instead:
http://download.oracle.com/otn_software/oepe/europa

4) When done adding the site, check the box to the left of "Oracle Enterprise Pack for Eclipse Update Site"

5) Click "Install" button

Eclipse will display which components will be installed. 

6) Select all components, then click "Next"

7) Accept the term/license

8) Click "Finish" button

9) When finished, restart Eclipse. Here are some details:

After the installation, it displays a dialog box asking you to participate in some User Experience program.

After that I got a warning that if I run Eclipse under JDK 6, I need to add an option along the lines of -Dsun.lang.ClassLoader.allowArraySyntax=true to my eclipse.ini file. The installer will do it for you and then asks whether you want to restart. LOL.

There you go: now I have the WebLogic connector in my Eclipse.

Monday, February 23, 2009

Batch Process in Clustered Environment

This week I have had to answer some questions about running batch processes in a clustered environment. Our prospective client, a big corporation with sophisticated infrastructure, wants to make sure that the application to be deployed runs well in their clustered environment. Having a nicely clustered server for high availability with failover fault tolerance is really nice, but a single point of failure in the batch process would surely spoil the fun.

It seems they have encountered problems in the past with batch processing cluttering the production application. That's why they need to make sure every application to be installed is robust enough to withstand some disturbance and provides the same level of resilience as their software components running on the JEE application servers.

When talking about a clustered environment, we usually aim for high availability and failover. The data we are processing should stay consistent, and the application should not be crippled when one of the clustered machines goes down, whether intentionally or not.

When talking about batch processing, the points we are trying to achieve are:

  • restartability

  • idempotence or rerunability


Since batch processes are typically long-running, it is usually very expensive if they have to start the whole thing over from the beginning.

For example, if you have a batch process that calculates the billing for a public utility company, the billing generation itself could easily take 10 hours. If after 9 hours the machine crashes, the power cuts out, et cetera, it would be too expensive to restart the whole billing generation from scratch.

In some way this kind of long-running process should persist its state at certain checkpoints, so that it can pick up from the last checkpoint and continue.

I remember the time when we had to generate 400 MB of dummy data to test our system. The generation itself took more than 3 days to run (just simple SQL INSERT statements). If you put all the rows in one transaction, both the commit and a rollback take a very long time. The other extreme poses the same problem: if you use auto-commit and commit after every single INSERT, performance is also very, very poor.

We decided to commit the INSERTs every certain number of rows (we chose 10,000 rows in PostgreSQL), sacrificing atomicity for performance. We had to keep track of how far the generation had gone so that we would not repeat ourselves; a sketch of the idea follows.
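
Here is a minimal JDBC sketch of that idea; the table names, the checkpoint table, the connection details, and the row counts are assumptions for illustration only, and the real generator was more involved:

[sourcecode lang="java"]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CheckpointedInsert {
    private static final int COMMIT_EVERY = 10000;   // rows per transaction, as in the post
    private static final long TOTAL_ROWS = 1000000;  // illustrative target

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password")) {
            con.setAutoCommit(false);

            // resume from the last committed row recorded in a (hypothetical) checkpoint table
            long start = 0;
            try (PreparedStatement q = con.prepareStatement(
                     "SELECT last_row FROM generation_checkpoint WHERE job = 'dummy-data'");
                 ResultSet rs = q.executeQuery()) {
                if (rs.next()) {
                    start = rs.getLong(1);
                }
            }

            try (PreparedStatement ins = con.prepareStatement(
                     "INSERT INTO dummy_data (id, payload) VALUES (?, ?)");
                 PreparedStatement mark = con.prepareStatement(
                     "UPDATE generation_checkpoint SET last_row = ? WHERE job = 'dummy-data'")) {
                for (long row = start + 1; row <= TOTAL_ROWS; row++) {
                    ins.setLong(1, row);
                    ins.setString(2, "row-" + row);
                    ins.addBatch();

                    // commit the batch together with its checkpoint marker every 10,000 rows,
                    // so a crash never loses more than one batch of work
                    if (row % COMMIT_EVERY == 0) {
                        ins.executeBatch();
                        mark.setLong(1, row);
                        mark.executeUpdate();
                        con.commit();
                    }
                }
                // flush whatever is left in the final, partial batch
                ins.executeBatch();
                mark.setLong(1, TOTAL_ROWS);
                mark.executeUpdate();
                con.commit();
            }
        }
    }
}
[/sourcecode]

On restart after a crash, the SELECT at the top finds the last committed checkpoint and the loop resumes from there instead of from row one.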

The same thing happens when we run a long-running batch process in a clustered deployment. We expect the component to be able to pick up where it left off if a failure happens. We don't want the component to assume it has finished, nor do we want it to restart from the beginning.