Friday, December 16, 2011

My 1st Android App - Sudokroll

I know how to solve Sudoku puzzles, but I've never solved one by hand. I'm just too lazy to do that.

I developed a Sudoku solver a while ago. It has a command-line interface: just input your puzzle as follows and it'll get solved.
_____8_7_
91___45__
_7_5__8__
_3__86_5_
7_______3
_4_12__8_
__2__7_6_
__34___21
_9_8_____
Pretty cool, huh?
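
For anyone curious how such a solver works, here is a minimal sketch of the classic backtracking approach (not necessarily what my solver or Sudokroll uses internally); it reads the same underscore format shown above from standard input.

import java.util.Scanner;

public class SudokuSolver {

    // board[r][c] is 0 for an empty cell, 1..9 otherwise
    private final int[][] board = new int[9][9];

    public static void main(String[] args) {
        SudokuSolver solver = new SudokuSolver();
        Scanner in = new Scanner(System.in);
        for (int r = 0; r < 9; r++) {
            String line = in.nextLine();
            for (int c = 0; c < 9; c++) {
                char ch = line.charAt(c);
                solver.board[r][c] = (ch == '_') ? 0 : ch - '0';
            }
        }
        if (solver.solve(0)) {
            solver.print();
        } else {
            System.out.println("No solution");
        }
    }

    // Fill cells from position pos (0..80) onwards, trying 1..9 in each empty cell.
    private boolean solve(int pos) {
        if (pos == 81) {
            return true;                 // every cell is filled
        }
        int r = pos / 9, c = pos % 9;
        if (board[r][c] != 0) {
            return solve(pos + 1);       // a given cell, skip it
        }
        for (int n = 1; n <= 9; n++) {
            if (isLegal(r, c, n)) {
                board[r][c] = n;
                if (solve(pos + 1)) {
                    return true;
                }
                board[r][c] = 0;         // dead end, backtrack
            }
        }
        return false;
    }

    // A number is legal if it doesn't already appear in the row, column or 3x3 box.
    private boolean isLegal(int r, int c, int n) {
        for (int i = 0; i < 9; i++) {
            if (board[r][i] == n || board[i][c] == n) {
                return false;
            }
        }
        int br = (r / 3) * 3, bc = (c / 3) * 3;
        for (int i = br; i < br + 3; i++) {
            for (int j = bc; j < bc + 3; j++) {
                if (board[i][j] == n) {
                    return false;
                }
            }
        }
        return true;
    }

    private void print() {
        for (int[] row : board) {
            StringBuilder sb = new StringBuilder();
            for (int cell : row) {
                sb.append(cell);
            }
            System.out.println(sb);
        }
    }
}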

As I've always said, Android development is a bonus for Java developers. I ported my Sudoku solver and turned it into an Android application called Sudokroll. As the name suggests, you touch a cell and scroll on the screen to assign a number to that cell.


For a better look and feel, it targets Android 2.3 Gingerbread and above. Have fun.

Thursday, December 01, 2011

When not to use Dependency Injection?

Dependency Injection and Inversion of Control are so popular that people rarely stop to think about when not to use them.

DI / IoC encourages high-level modules to define abstract services for low-level modules to implement. Changes to the implementation in low-level modules won't affect the defined interfaces, so high-level modules can remain stable. Unit tests for high-level modules can be conducted simply by providing mocked low-level modules.

From this understanding, it's always a good idea to use DI / IoC between layers in a multi-tier software architecture. On the contrary, whether or not to use DI / IoC within the same tier should be carefully examined. In most cases, a factory method or even a plain new statement will do. Here are a couple of reasons.

1. The components within the same tier are usually tightly coupled. Changing one component and having the related components change accordingly is sometimes reasonable, compared with the inter-tier case.

2. Within the same tier, a large number of business objects may be created. This is very different from injecting a low-level service provider into a high-level module. The reflection overhead of creating large numbers of objects through an IoC container may hurt the system.
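
To illustrate the distinction, here is a rough sketch (all class and interface names are made up for the example): the repository abstraction crosses a tier boundary and is injected, while a plain business object created inside the tier is simply new-ed.

// The high-level module defines the abstraction it needs; an implementation
// from the lower tier is injected, so it can be mocked in unit tests.
interface OrderRepository {
    void save(Order order);
}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {   // constructor injection across the tier boundary
        this.repository = repository;
    }

    void placeOrder(String product, int quantity) {
        // Within the same tier, business objects are simply created with new.
        // Pushing every Order through the IoC container would only add
        // reflection overhead without any real decoupling benefit.
        Order order = new Order(product, quantity);
        repository.save(order);
    }
}

class Order {
    private final String product;
    private final int quantity;

    Order(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}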

Thursday, November 10, 2011

Upgraded to Subversion 1.7


If I'm asked why I don't like SVN, the evil .svn folders are the answer. With the newly introduced Centralized Metadata Storage, there is only one .svn directory for each project, just like the .git folder in a Git repository. Is it time to re-love Subversion? Well, yes and no.

In order to support this new feature, you have to install the Eclipse Subclipse plugin 1.8.x. But before that, it's better to do a
svn cleanup
for all of your working copies. I didn't, and so far so good, but don't risk it.

If you followed this tutorial and installed libsvn-java, please uninstall it, because at the time of writing, the version of libsvn-java in the Ubuntu repository is 1.6.12. When you try to access an SVN server, you'll get
Incompatible JavaHL library loaded. 1.7.x or later required.
Follow this to install libsvn-java 1.7.0 and upgrade your working copies with
svn upgrade
Now you have a centralized .svn folder. But don't think you can operate on directories and files freely from now on. I use Subclipse 1.8.2 on Eclipse 3.7 SR1 on Ubuntu 11.10, and found that if you delete files and folders directly from the file system, the deletions won't show up in the Synchronize view. However, if you add new files and folders, or modify files directly in the file system, the plugin will pick up all the changes.

Even if you delete a folder from Eclipse, and then copy in a folder with the same name but some changed files, you may get
Could not remove /home/jerry/java/workspace/...
It looks like the metadata fails to keep its integrity. I suggest keeping the whole folder structure, removing only the files in the folder tree from Eclipse, and copying the whole folder tree with its files back into the project later. Although you may still get the following errors, after you refresh the project, everything should be fine.
Errors occurred while updating the change sets for SVNStatusSubscriber
org.apache.subversion.javahl.ClientException: SQLite error
svn: database table is locked
svn: database table is locked: WORK_QUEUE

Errors have occurred while calculating the synchronization state for SVNStatusSubscriber.
org.apache.subversion.javahl.ClientException: SQLite error
svn: database table is locked
svn: database table is locked: WC_LOCK

org.apache.subversion.javahl.ClientException: SQLite error
svn: database table is locked
svn: database table is locked: NODES
To summarize my suggestions:
  • always delete files from the IDE;
  • if files have changed but the folder structure remains the same, don't delete folders from the IDE.



Tuesday, November 01, 2011

Funny statements regarding Database and ORM

Twitter is great for sharing things that are hard to misunderstand. But 140 characters are simply inadequate for explaining your philosophy of software development, even in a series of tweets. Is a tweet that is meaningful to you also meaningful to your thousands of followers?

I saw a series of tweets from Uncle Bob Martin today and found them very confusing to me, and maybe misleading to others.

Databases are details to be hidden. They are not your central abstraction, nor are they the core of your application. Ever.

In any multi-tier software architecture, a lower tier provides services to the tier that sits on it. The upper tier defines the contract for the lower tier to implement, rather than vice versa. Databases, as a repository service, are by no means the central abstraction or the core of the whole application. Otherwise, my garage would be the core of my house because I store everything there.

But databases do provide an abstraction (nothing central here) to the tier just above the database layer. As long as the JDBC URL (from a Java developer's perspective; luckily Java is still popular in enterprise software development) is not used outside the data access layer, we can say databases are hidden.

Personally, I really don't know how one could fail to hide databases, or how one could believe that databases are the central abstraction or the core of an application.

relational tables hold data structures, not objects. Objects are bags of behavior. Data structures are bags of data.

I think he wanted to express either that relational tables hold data structures, not classes; or that relational tables hold structured data, not objects. But anyway. The fact that relational tables don't hold objects / classes is exactly why we need ORM: to map them to objects / classes and operate on the hidden data through interfaces.

Any bridge is used to connect two different things. ORM is the bridge between data and objects, or tables and classes. If the mapped objects didn't provide something convenient to application development, why would we do the mapping at all? From one bag of data to another bag of data? Yes, we could, but only when mapping to something like C.

The O in ORM is in error. It should be DS for Data Structure. No simple tool can map tables to objects.

Here comes the funniest part. Everyone knows ORM was first introduced in Java, so it's easy to understand why ORM has an O in it. Java doesn't have a struct keyword, so it's ridiculous to criticize the naming of ORM. Can't we teach data structures in Java? Aren't the getters and setters in mapped objects behavior?

Unless you apply Domain-Driven Design, there are no operations other than getters and setters in mapped objects. Even if you apply DDD, the operations in mapped objects are not mapped from tables; they come from business rules. For example, what in a User table could be mapped to an addUser() behavior in the User class? Don't tell me it's mapped from a stored procedure called add_user. We're talking about tables, right? Stored procedures live inside the database, but outside any table.
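
To make the point concrete, here is what a typical mapped object looks like (a hypothetical User entity, not taken from any real project): the only "behavior" that comes from the table mapping is getters and setters; something like addUser() belongs in a service or repository and comes from business rules, not from the table.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "USER")
public class User {

    @Id
    private Long id;
    private String name;
    private String email;

    // None of the methods below are "mapped" from the table;
    // they just expose the data the table holds.
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}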

If anyone really likes this kind of naming game, here is a free topic for you, especially if you have thousands of followers like this guy: the disk in RAM disk is in error too.

Have fun.

Thursday, October 20, 2011

Merit Certificate in Science Talent Search 2011

Not as lucky as in the last two years.

It was a great project and got excellent comments from the two judges, but failed to win another Bursary. If you check out all the Prime Factorization projects on Scratch, you'll find it's the best so far.

Being your personal best is all we can do.

Sunday, September 25, 2011

Requirement for swap space (or virtual memory), not again

More and more users have more and more memory installed. To take advantage of that memory, some users move the Firefox or Chrome cache from disk to memory, and some Linux users move their temp directories to tmpfs.

I mentioned MyEclipse needs swap, or virtual memory, during installation. I came across almost the same problem when I installed Oracle Database Express Edition.

This system does not meet the minimum requirements for swap space.  Based on the amount of physical memory available on the system, Oracle Database 10g Express Edition requires 1024 MB of swap space. This system has 0 MB of swap space.  Configure more swap space on the system and retry the installation.

I didn't have a swap partition or swap file at the time, but the installer ignored its own warning and installed itself successfully. I thank Oracle.

I repeat my point here: swap is not necessary on a Linux desktop. I hope the applications that still require swap or virtual memory reconsider this. If you're still not sure whether you can remove swap, reduce your swappiness to 0 and see if you can live without it.

Wednesday, September 07, 2011

Java development using Ubuntu 11.10 and OpenJDK 7


I tried Ubuntu 11.10 Oneiric Ocelot Beta 1. LightDM is lighter and beautiful. Maybe I can keep using Ubuntu on my ThinkPad T42, rather than jumping onto Lubuntu.

Due to the retirement of the "Operating System Distributor License for Java (DLJ)", Sun / Oracle JDKs / JREs can no longer be installed by enabling the Canonical Partners repository. It's good news for OpenJDK: with more usage of OpenJDK, we can expect higher priority and fewer bugs.

Since Java 7 is out, the first thing I did was install OpenJDK 7. From the Ubuntu Software Centre, you can only install the JRE. I miss Synaptic Package Manager.
sudo apt-get install openjdk-7-jdk

Change default JRE of system from OpenJDK 6 to 7.
sudo update-alternatives --config java

I got the following error when installing the Subclipse plugin into Eclipse.
An internal error occurred during: "Install download0".
Library /usr/lib/i386-linux-gnu/libsoftokn3.so does not exist

Creating a symbolic link solved it.
cd /usr/lib/i386-linux-gnu/
sudo ln -s nss/libsoftokn3.so libsoftokn3.so

After Subclipse was installed, I had a small problem starting Eclipse 3.7 Indigo, but it's a common one.

sudo apt-get install libsvn-java
and
-Djava.library.path=/usr/lib/jni
in eclipse.ini solved it.

m2eclipse and the Google plugin for Eclipse work out of the box. Maven 3.0.3 works fine with OpenJDK 7.

Apache Tomcat 7.0.21 works fine with OpenJDK 1.7.0, but if you haven't created the above symbolic link, you'll get the error below when starting Tomcat.
java.security.ProviderException: Library /usr/lib/i386-linux-gnu/libsoftokn3.so does not exist
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:292)
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:103)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

I also tried to install Gnome Shell.
sudo apt-get install gnome-shell

Installation finished successfully, but later when I selected Gnome at login, I got a dialog box saying
failed to load session "gnome"

There are a couple of solutions to the problem but none worked for me, so I installed classic Gnome.
sudo apt-get install gnome-session-fallback

A fantastic experience; I'm looking forward to its release next month.

Wednesday, August 10, 2011

Return code 400 when creating a new feature type

I came across a 400 return code when I tried to use the REST Configuration API of GeoServer to create a new feature type. The reference doesn't give any explanation of it. From GeoServer's log, I got

ERROR [geoserver.rest] - No such feature type:
ERROR [geoserver.rest] -org.geoserver.rest.RestletException
        at org.geoserver.catalog.rest.FeatureTypeFinder.findTarget(FeatureTypeFinder.java:40)
        at org.restlet.Finder.handle(Finder.java:268)
        at org.geoserver.rest.BeanDelegatingRestlet.handle(BeanDelegatingRestlet.java:37)
        at org.restlet.Filter.doHandle(Filter.java:105)
        at org.restlet.Filter.handle(Filter.java:134)
        at org.restlet.Router.handle(Router.java:444)
        at com.noelios.restlet.ext.servlet.ServletConverter.service(ServletConverter.java:129)
        at org.geoserver.rest.RESTDispatcher.handleRequestInternal(RESTDispatcher.java:77)

INFO [org.geoserver] - Loaded feature type '', enabled

ERROR [geoserver.rest] - Trying to create new feature type inside the store, but no attributes were specified
ERROR [geoserver.rest] -org.geoserver.rest.RestletException
        at org.geoserver.catalog.rest.FeatureTypeResource.buildFeatureType(FeatureTypeResource.java:174)
        at org.geoserver.catalog.rest.FeatureTypeResource.handleObjectPost(FeatureTypeResource.java:124)
        at org.geoserver.rest.ReflectiveResource.handlePost(ReflectiveResource.java:122)
        at org.restlet.Finder.handle(Finder.java:296)
        at org.geoserver.rest.BeanDelegatingRestlet.handle(BeanDelegatingRestlet.java:37)
        at org.restlet.Filter.doHandle(Filter.java:105)
        at org.restlet.Filter.handle(Filter.java:134)
        at org.restlet.Router.handle(Router.java:444)
        at com.noelios.restlet.ext.servlet.ServletConverter.service(ServletConverter.java:129)
        at org.geoserver.rest.RESTDispatcher.handleRequestInternal(RESTDispatcher.java:77)

It shows that I was trying to publish a feature type that doesn't exist. Problem solved, but I'd like to go the extra mile and check the source of FeatureTypeResource#buildFeatureType.

170  if(fti.getName() == null) {
171     throw new RestletException("Trying to create new feature type inside the store, " +
172              "but no feature type name was specified", Status.CLIENT_ERROR_BAD_REQUEST);
173  } else if(fti.getAttributes() == null || fti.getAttributes() == null) {
174      throw new RestletException("Trying to create new feature type inside the store, " +
175              "but no attributes were specified", Status.CLIENT_ERROR_BAD_REQUEST);
176  }

WTF is line 173 doing? The same null check appears on both sides of the ||, so half of the condition is dead code.
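
If I had to guess, the second half of the condition was meant to be an emptiness check, something like the following (my guess, not the actual GeoServer code):

} else if (fti.getAttributes() == null || fti.getAttributes().isEmpty()) {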

Wednesday, July 20, 2011

Version Lock-in

Free and open source products have become very popular in recent years. This helps to avoid vendor lock-in, but I have noticed more and more version lock-in in the use of FOSS. As the name suggests, version lock-in is the state in which a product depends on a specific version of a 3rd-party product.

When a project is developed from scratch, all the versions of 3rd-party products must be determined. Nothing wrong here. But if a 3rd-party product is not used at its latest version, we have to take that very seriously. Sure, you can say that version has everything you need, but it also has potential bugs and performance issues you don't want. Imagine using a very old version of the Yahoo! Mail web interface now and you'll get the idea.

If a project is forked from another one, re-examine all the versions of 3rd-party products as early as possible, and keep an eye on them until the project is deployed. I don't remember how many times I have searched for an exception only to find it was a bug already fixed in version x.

On the other hand, by always trying to use the latest version, you will be able to contribute to the community better. When you share an experience or file a bug, it makes much more sense when it's based on the latest version.

Be brave and use the latest versions, or be braver and stay version locked-in.

Friday, June 17, 2011

Performance of Virtual Machine

I gave up dual booting when I upgraded to Ubuntu 11.04 and moved Windows into a virtual machine. The performance of the Windows guest was good, 35 seconds from power-on to desktop (very good as my HDD is 5400 rpm), until I began using it seriously.

I've done a benchmark of compiling one of my GWT projects in Eclipse four times. I still remember that Turbo C could compile 10 lines of code per second, while Turbo Pascal could compile 100 lines per second, back when I was in university. Compilation is quite suitable for a benchmark because it involves both CPU and I/O tasks.

                     1st   2nd   3rd   4th
native in Ubuntu      82    81    80    79
in VirtualBox        143   195   196   265
in VMware Player     112   107   108   108

The Windows I'm using in the virtual machines is Windows 7 Professional 32-bit. The IDE is Eclipse for Java EE 3.6.2, 32-bit, for Linux and Windows respectively. The virtual machine software is Oracle VirtualBox 4.0.8 and VMware Player 3.1.4, for Linux of course.

This is by no means a scientific benchmark, but my takeaway is that if you plan to use a Windows guest heavily on a Linux host, VMware Player is the one to go with at the moment.

Wednesday, June 08, 2011

gvfs-bin is not included in Ubuntu 11.04 by default

I developed a backup program that runs on Ubuntu at home. To get maximum transfer performance when copying files to a remote machine, I use some GVFS commands like gvfs-rm, gvfs-copy and gvfs-mkdir. It's much faster than operating on files and folders through ~/.gvfs, which, in my case, only gets up to 500 kB/s.

However, the first time I used it after upgrading to Natty Narwhal was a nightmare. I took it for granted that those gvfs commands were ready to use, but I was wrong.

 sudo apt-get install gvfs-bin  

Murphy's law applies everywhere.

Saturday, April 30, 2011

Natty Narwhal installed

As always, I installed the latest Ubuntu 11.04 the moment it was out. Unity is not that bad, but I did experience a problem that forced me to power off my laptop. Unity 3D doesn't require a decent graphics card. The auto-hide launcher and global menu help to save screen real estate, and the new scroll bar is cool. But to be on the safe side, I'm sticking with classic Gnome for the moment.

One interesting issue occurred when I was installing MyEclipse 9.0 on the new system. Since I have 4 GB of memory, I didn't create a swap partition. That's a reasonable choice, but I got the following prompt:

Insufficient Memory
Your system does not have sufficient memory to support MyEclipse. MyEclipse requires 256 MBs physical memory and 64 MBs virtual memory. Your system only has 3913160 MBs of physical memory, and 0 MBs of virtual memory.

I don't know why Eclipse doesn't require virtual memory while MyEclipse does. And why 3913160 MBs of physical memory (I didn't say I have 4 TB of memory; MyEclipse said that) is not sufficient.

Anyway, I'm pretty happy with the new Ubuntu release, but if some Ubuntu users switch to Fedora 15, which will be released next month, I won't be surprised.

Thursday, April 07, 2011

Performance Tuning

It's common sense in the industry that performance tuning should be done after feature complete. But very likely there is only a small time frame between feature complete and code freeze, and during this short period we need to do many more important things. There is another piece of common sense after product release: if it ain't broke, don't fix it. Performance issues, if any, are far from being broken.

Sounds weird, but such is life. Thanks to modern programming languages, revolutionary methodologies and cutting-edge hardware, we are in a time when you don't need to know how many registers there are in a CPU to develop software. And more importantly, nobody actually knows what performance can be achieved for a certain system on a specific platform. Here are some real-life performance tuning examples and their amazing results.


From 5 Minutes to 90 Seconds

When I worked on a product called Public Content Management 8 years back, there was a home-made caching system in that product. It worked beautifully after the system started, but it took 5 minutes for the system to start. It became so frustrating that my manager gave me a week to figure out whether we could do something about it. By the end of the 4th day, the product was able to start in 1.5 minutes. After this, I began to pay more attention to what can be done to:
  • finish a time consuming task in (much) less time; and
  • leave more CPU cycles to customers.
And from that time on, I have put "Performance Tuning" as a speciality in my profile.

From 7 Seconds to 70 Milliseconds

Last year I participated in a system called Marin Safety. In my development environment, loading the Waterway Management page took 7 seconds. I knew this was mainly because I was using a remote database instance, which would not be the case in a production environment. An amplified performance issue: ignore it or not? Several hours later, I had shortened the loading time to 70 ms.

From 13 Seconds to 182 Milliseconds

Last week, I needed to parse some strings returned by WMS calls to a GeoServer. I could select the format from text/html and text/plain. According to the protocol, I should have had one more choice, text/xml. The XML format was obviously the choice because it's self-describing and far easier to parse, but GeoServer doesn't even support it. The HTML format was reasonably my next choice; it's at least easier to parse than plain text. But for a sample request, it took GeoServer 13 seconds to return an HTML result while only 182 ms to return a plain text result. I'm sure you know my answer at this point. Do the hard work myself and save the user 10+ seconds per call.

From the above examples I just want to give you an idea about performance tuning. It can be done and should be done at any time, especially when the environment is not ideal. The fact is, an ideal environment (usually the production environment) can only mask performance issues; it won't solve them. It's too late to start performance tuning when you start thinking about upgrading your server or losing your impatient customers.

Thursday, March 24, 2011

How to add another data source in JPA

It's quite easy to create a data source using the JPA support of the Spring framework. It's not so difficult to add another data source to your application either.

In META-INF/persistence.xml, define another persistence unit.

     <persistence-unit name="anotherUnit" transaction-type="RESOURCE_LOCAL">  
         <class>com.youcompany.YourClass</class>  
         <exclude-unlisted-classes>true</exclude-unlisted-classes>  
         <properties>  
             <property name="hibernate.hbm2ddl.auto" value="update" />  
             <!-- validate | update | create | create-drop -->  
         </properties>  
     </persistence-unit>  

Note that you should list all the domain classes you will be using in the defined persistence unit in <class> elements.

Define another database context XML file.
 <?xml version="1.0" encoding="UTF-8"?>  
 <beans xmlns="http://www.springframework.org/schema/beans"  
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"  
     xmlns:tx="http://www.springframework.org/schema/tx" xmlns:p="http://www.springframework.org/schema/p"  
     xmlns:aop="http://www.springframework.org/schema/aop"  
     xsi:schemaLocation="http://www.springframework.org/schema/beans 
     http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
     http://www.springframework.org/schema/context
     http://www.springframework.org/schema/context/spring-context-3.0.xsd
     http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
     http://www.springframework.org/schema/aop
     http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">  
     <!-- holding properties for database connectivity / -->  
     <context:property-placeholder location="classpath:config.properties" />  
     <bean id="anotherDataSource" class="org.apache.commons.dbcp.BasicDataSource"  
         destroy-method="close">  
         <property name="driverClassName" value="${db.driver}" />  
         <property name="url" value="${db.url}" />  
         <property name="username" value="${db.user}" />  
         <property name="password" value="${db.pass}" />  
         <property name="validationQuery" value="${dbcp.validationQuery}" />  
         <property name="testWhileIdle" value="${dbcp.testWhileIdle}" />  
         <property name="timeBetweenEvictionRunsMillis" value="${dbcp.timeBetweenEvictionRunsMillis}" />  
         <property name="numTestsPerEvictionRun" value="${dbcp.numTestsPerEvictionRun}" />  
         <property name="minEvictableIdleTimeMillis" value="${dbcp.minEvictableIdleTimeMillis}" />  
     </bean>  
     <bean id="anotherJpaAdapter"  
         class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"  
         p:database="${db.database}" p:showSql="${db.showSql}" />  
     <bean id="anotherEntityManagerFactory"  
         class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"  
         p:dataSource-ref="anotherDataSource" p:jpaVendorAdapter-ref="anotherJpaAdapter">  
         <property name="loadTimeWeaver">  
             <bean  
                 class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />  
         </property>  
         <property name="persistenceUnitName" value="anotherUnit"></property>  
     </bean>  
     <bean id="anotherTxManager" class="org.springframework.orm.jpa.JpaTransactionManager"  
         p:entityManagerFactory-ref="anotherEntityManagerFactory" />  
 </beans>  

In the JPA implementation of the generic DAO class, specify the 1st persistence unit in the @PersistenceContext annotation on the EntityManager setter.

     protected EntityManager entityManager;  
     @PersistenceContext(unitName="firstUnit")  
     public void setEntityManager(EntityManager entityManager) {  
         this.entityManager = entityManager;  
     }  

Create another generic DAO class for the new persistence unit. All operations on the new domain objects should be done via this new generic DAO.
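
For example, the second generic DAO only differs in the persistence unit it asks for; a minimal sketch (the class name is illustrative):

 import javax.persistence.EntityManager;
 import javax.persistence.PersistenceContext;

 public class AnotherGenericDaoJpa<T> {

     protected EntityManager entityManager;

     // Bind this DAO to the second persistence unit defined above.
     @PersistenceContext(unitName = "anotherUnit")
     public void setEntityManager(EntityManager entityManager) {
         this.entityManager = entityManager;
     }

     public T find(Class<T> clazz, Object id) {
         return entityManager.find(clazz, id);
     }
 }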

If you want to access the data source directly, use
     @Autowired  
     @Qualifier("anotherDataSource")  
     private DataSource dataSource;  

Don't forget to call getAutoCommit() and remember the original status of any connection you get from the data source if you need to call setAutoCommit() yourself, and close the connection in a finally block.
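
Something along these lines (a sketch of the pattern, not production code):

 Connection connection = dataSource.getConnection();
 try {
     boolean originalAutoCommit = connection.getAutoCommit();   // remember the status
     try {
         connection.setAutoCommit(false);
         // ... work with the connection ...
         connection.commit();
     } finally {
         connection.setAutoCommit(originalAutoCommit);           // restore the status
     }
 } finally {
     connection.close();                                         // always close the connection
 }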

That's it.

Monday, February 28, 2011

New firmware for my media streamer

I have an Astone AP-110D as a media streamer in my house. I bought it 1.5 years ago and upgraded the firmware to v1.75, which is still the latest released version. Everything works fine except that a USB disk with an EXT3 file system is mounted read-only by default. I have to log in to the player, then unmount and remount the disk to make it writable.

Last weekend, I found that this player can use 3rd-party firmware. Hoping to get more functions, like YouTube or something, I upgraded to firmware v1.9.9 for the IBT-500A / ZP-500A and IBT-1073.

They are different devices but seem to share the same firmware. Unfortunately, no new functions were found, but the USB disk is now mounted writable by default.

I'm happy with this unexpected benefit of openness.