Maven Smart Release

December 10th, 2012

Working on large software projects that use Maven as a build tool makes you wonder about the Maven release approach. With a large project structure, a Maven release becomes a gamble that needs a lot of manual intervention if something goes wrong. But most annoying, in my opinion, is the need for new pom project versions after each release, especially if you release often, which could be several times a day if your goal is continuous delivery. Unfortunately, each new version means updates on all developer machines. After getting inspired by Axel's blog entry Maven Releases on Steroids, I was determined to get rid of the Maven release hassle altogether. Actually even more than that, since the project I am working on had created its own release plugin, which made the process even more fragile.

So the goal was to have a release candidate from each full top-level build, with its own unique version assigned. Replacing the project version with a variable in the poms, as Axel suggested, had too many disadvantages and required too much tinkering with Maven for my taste. So we chose a slightly different approach. What you need for this is the Promoted Builds Plugin.

First, we fixed one snapshot version in the project for development. In my daily work I do not want to be interrupted because a release is going on, and I also do not want long code-freeze phases for a release. So we set the project version to 2.0.0-SNAPSHOT and could keep it that way across many releases.

The next step was to find a way to give each Jenkins build a unique version. This can easily be done by calling mvn versions:set in a pre-build step. The target version comes from a job parameter and is only increased after we decide that a release candidate becomes a real release. To make each build unique, we simply append the Jenkins build number.

pre_build_step_config_set_version.png

Newer Jenkins releases support this pre-build step configuration out of the box. Older Jenkins releases need a plugin for it.
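
On the command line, the call looks roughly like this; a sketch, where SELECTED_VERSION is the job parameter, BUILD_NUMBER is provided by Jenkins, and -DgenerateBackupPoms=false just keeps the workspace free of backup poms:

# pre-build shell step: stamp the unique candidate version into all poms
mvn versions:set -DnewVersion=${SELECTED_VERSION}-${BUILD_NUMBER} -DgenerateBackupPoms=false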

After the pre-build step, Jenkins builds the software the usual way, but instead of a SNAPSHOT, a release version is created. The version that we are building in the picture above is defined by ${env.SELECTED_VERSION}-${env.BUILD_NUMBER}, where SELECTED_VERSION is an arbitrary version like 1.1.0.0-B. The BUILD_NUMBER, on the other hand, is set by Jenkins. It starts with 1 for the first build and is incremented for each build; it is only used to distinguish one release candidate from another. We also tried working with the Subversion revision number, but in the end the Jenkins build number was more practical. So in our example we would end up with a version number like 1.1.0.0-B-5 for the fifth build of this job.

In order to preserve the built artifacts and the source code they were built from, we deploy the artifacts to a repository and tag the source code in the SCM with the version that we have just created. But since this is only a release candidate, we do not want to deploy it directly to our release repository on Nexus. The artifacts are deployed to a special staged releases repository instead.

deploy_to_staged_repo.png
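
Redirecting the deployment needs no pom changes; a sketch of the build step, assuming a matching server entry named staged-releases in the settings.xml and a hypothetical Nexus URL:

mvn deploy -DaltDeploymentRepository=staged-releases::default::https://nexus.example.com/content/repositories/staged-releases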

The tag in Subversion is based on the version number that we have chosen. Since we have many versions, we have many tags. But because Subversion just links tags to revisions, there is no issue with disk space. Based on those tags we are later able to reproduce every version, and we can also branch from there if necessary.

tag_source_code_with_version.png
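
Under the hood, such a tag is just a cheap server-side copy; a sketch with a hypothetical repository URL and the example version from above:

svn copy https://svn.example.com/project/trunk https://svn.example.com/project/tags/1.1.0.0-B-5 -m "tag release candidate 1.1.0.0-B-5"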

Since our project build runs for one and a half hours with all unit, module and integration tests, Jenkins is configured to poll Subversion only every two hours. So if everything goes well, it creates a release candidate every two hours, which it does most of the time.

Next comes the step where we want to promote a release candidate to an actual release, and that is literally what we do. Jenkins allows you to configure a promotion step that runs a task on an existing build. The only thing we have to do in this step is tell our Nexus to move the project artifacts from the staged releases repository to the release repository. The Pro version of Sonatype Nexus even has an API for that, but because we wanted to keep it cheap, we came up with a more pragmatic solution. Our promote task simply copies the artifacts on the file system from the staged releases repo to the releases repo by triggering a simple script.

promote_build_run_script.png

The script is located on the machine hosting Nexus, which therefore needs to be a Jenkins slave as well. The script just needs the version: it copies all artifacts with this version from the staged releases repo to the releases repo, preserving the folder hierarchy. A little crude, but effective. The release is then available in the Nexus releases repo. As an alternative, it is also possible to trigger a deployment of the archived artifacts to the Maven repository; various actions are available in the Promoted Builds Plugin.
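
A minimal sketch of such a script, assuming the standard Maven repository layout on disk; the storage path and repository names are illustrative, and depending on the setup Nexus may need a reindex of the releases repository afterwards:

#!/bin/sh
# promote.sh <version>: copy all artifacts of the given version from the
# staged releases repo to the releases repo, preserving the folder hierarchy
VERSION="$1"
STORAGE=/opt/sonatype-work/nexus/storage    # hypothetical Nexus storage path
cd "$STORAGE/staged-releases" || exit 1
find . -type d -name "$VERSION" | while read -r dir; do
  mkdir -p "$STORAGE/releases/$dir"
  cp "$dir"/* "$STORAGE/releases/$dir/"
done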

Jenkins shows the status of promoted builds in the job list, in our example with a purple star.

job_list_with_promo_state.png

Rebuilding any release is easy, even though the modified pom files are not checked into the SCM. Creating a job that checks out a given release based on the SCM tag, sets this version in the poms again and builds everything is just a finger exercise. Actually, we never had the need to do this.
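
Such a rebuild job would roughly consist of three steps; a sketch, again with a hypothetical repository URL:

# check out the tagged sources, restore the version in the poms, build
svn checkout https://svn.example.com/project/tags/${SELECTED_VERSION} .
mvn versions:set -DnewVersion=${SELECTED_VERSION} -DgenerateBackupPoms=false
mvn clean install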

Furthermore, there are some Jenkins features that can be used to support this approach. For instance, it is possible to populate a Jenkins job parameter from a list of tags. That allows us to select a released build manually for deployment or rebuilding.

select_tag_version.png

When starting such a build, all tags that match the pattern are shown in a drop-down list, starting with the latest. The selected tag name is then assigned to the SELECTED_VERSION parameter of the job.

The project that I am working on switched to this approach almost a year ago. Since then we have been promoting releases instead of going through the pain of doing a release. We just decide which release candidate to promote, and the promotion itself takes seconds. We only need to update the Maven version in the poms when we create a maintenance branch, because the two branches' snapshot versions have to be distinguishable.

I guess this approach is not the best choice for every project, but it should at least show that there are ways around those Maven characteristics that may not fit your project that well.

Fitnesse Trouble

February 18th, 2011

In the last year I had a good opportunity to see and use Fitnesse from Uncle Bob in a large project. We run all our system integration tests with Fitnesse, with one build server running multiple test suites in parallel against various target installations.

Unfortunately, from time to time we had the problem that we either got a NumberFormatException in the Fitnesse console log, or Fitnesse just stopped doing anything. The team soon found out that the problem was a port clash on the server hosting the Fitnesse instances. A running Slim server stopped working if it received a request from anywhere other than its hosting Fitnesse instance.

After multiple attempts to get around this issue, including monitoring all the ports, I was so annoyed that I finally took a look at the Fitnesse source code and tried to reproduce the issue.

Reproducing the problem was easy. While a suite was running (in this case the acceptance test suite), a telnet connection to the Slim server's port immediately showed the data stream that was meant for the Fitnesse instance. So it was obvious that the Slim server was not thread-safe at some point.
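
The whole reproduction boils down to a single command, with a hypothetical host and Slim port:

telnet testhost 8085    # 8085 being a hypothetical Slim server port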

I found the problem in the SocketService implementation. It always called serve(socket) on the same SocketServer object. This object holds the socket's reader and writer, which are initialized in the serve(socket) method. The fix was easy: I just made sure that a new SocketServer object is created for each new connection to the Slim server. The code now looks like this:

// serve each new connection with a fresh SocketServer instance,
// so the reader and writer are no longer shared between threads
server.getInstance().serve(socket);
// de-register this worker thread once the connection has been served
synchronized (threads) {
    threads.remove(Thread.currentThread());
}

There was some other trouble, since the Slim server also stopped when it received junk data. After tinkering with the exception handling a little, this issue could be solved as well.

The new version looked promising. Now I could send any garbage to the Slim server during a running suite and the suite continued, with not even one failed test.

The adapted Fitnesse version can be found here on GitHub. Check out this checkin.

Maven Incremental Builds

February 13th, 2011

It's finally time for a new entry! In the last couple of weeks I have been struggling a lot with a large project's build time, and I was looking for a good way to build only the parts of the software that have really changed and, most importantly, the parts that depend on those changes.

And the latter requirement is the tougher one. Even with the Maven reactor build, dependent projects are not rebuilt if no source code changes have occurred in them. So after searching the net a little I came across this French page, which grasps the problem and already offers a solution: on java.net an incremental build plugin is available. This plugin cleans a module if a project that the module depends on has been newly built.

So if, for instance, the interfaces in your project structure are changed, the plugin recognizes that the project containing the implementations of those interfaces needs to be rebuilt as well. If the implementations have not been adapted properly, your build will fail, as it should.

Some other problems to overcome are getting the dependencies in your project tree right and setting up your projects so that, without a clean, they do not rebuild when it is not necessary.

New blog

August 26th, 2009

I have finally moved the blog to a real blogging system. This is obviously the first entry in it. All the old entries have been migrated and I hope to write more soon.

External Monitor MacBook Pro with open Lid

May 6th, 2009

Some of the MacBook users out there may know this problem. You want to connect an external monitor to your MacBook and use it as the one and only primary monitor, with the MacBook's internal monitor switched off. None of the solutions that I found on the net were satisfactory: you either keep the lid closed all the time (and cover your power button), or you keep the internal monitor on all the time after making the external monitor primary with a couple of clicks.

The solution that I use now is very simple. I fake the closed lid with a very small magnet that has to be placed in the right spot on the MacBook. This works quite well, and the spot is easy to find: once it is touched with the magnet, the MacBook goes to sleep.

So I placed the magnet on this spot on the turned-off machine and pressed the power button. The MacBook turns on but leaves the internal monitor switched off, and the boot screen is shown on the external monitor. The OS starts with the external monitor as the primary and only monitor, and the power button is still accessible. More importantly, the heat dissipation seems to be better with the lid open.

In the picture you can see the magnet positioned on my MacBook Pro.



MacBook Pro 15″ (early 2008) with 6GB of RAM

April 15th, 2009

When I bought the MacBook Pro, I assumed that the officially supported maximum of 4GB would be enough. But I was mistaken. And before anyone calls me crazy because 4GB should be enough for almost everything: I usually work on enterprise server applications, and if I have learned one thing, it is that you never have enough memory.
I have read that some people have tried to replace one of the 2GB memory modules with a 4GB module, summing up to 6GB in total.

So I bought a 4GB Kingston module, since I have had quite good experiences with Kingston memory modules. The 6GB were recognized correctly and I was happy, until my system froze. I ran this setup for a couple of days, but it was always the same: at least once a day the system froze. So I went back to the two 2GB modules and installed the 4GB module in my Lenovo T61p, which also officially supports only 4GB and now runs with 6GB. The module works fine in there, no trouble whatsoever.

I almost wanted to give up, until I came across a 4GB module from Corsair that had "Mac Memory" printed on it. I bought it, I installed it, and it works. No system freezes anymore.
So it seems that not all 4GB modules are suitable for the MacBook Pro. If you want to upgrade, make sure that you can return the module if it does not work, since they are not cheap.

XBOX 360 Arcade Stick

March 15th, 2009

It may be a little awkward, since this is about something completely different and somewhat unprofessional.

When I bought an XBOX 360 a few months ago, I thought that no game collection would be complete without a good fighting game, so I ended up with Virtua Fighter 5. But those kinds of games are much more fun with a decent arcade stick. Unfortunately, it is almost impossible to get one, but I stumbled over a guide on how to build one yourself. This sounded like quite a good idea, and here is the result.

The inscription is not done yet (I am still looking for a good way to do it), but the stick works perfectly and has already gone through all kinds of endurance testing. 🙂
I used a standard wired XBOX 360 controller and stripped it down, then connected all the new buttons and the new stick with wires directly to the controller board; there are soldering points available to attach the wires. The case is a standard console case with an aluminium face plate.

These are the components I used. And this is a picture from the assembly.
PS: And yes, the stick's color is the one I chose. Actually, my daughter did.

Dynamic Process Design with Compensating Actions

November 22nd, 2008

One of the many problems in enterprise business process design is the lack of transactionality in a distributed environment. Processes need to handle this by considering all possible error situations and the ways out of them, which can easily blow up the process complexity. Especially in cases where the business logic has to deal with lots of different use cases, this is often not feasible in classic static process design.

One way to deal with this is to introduce dynamic, data-driven processes. By maintaining an action list that decides which action to execute in each step, it is very easy to track all actions and to determine the counter actions. The action definition would contain the service call to execute and a reference to the service call that undoes the action.
A static process definition, with steps that are only executed if an action for them exists, would execute and mark off each action on the list. In case of a rollback, the list can simply be processed in reverse order to undo each action.
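
To make the idea a bit more concrete, here is a deliberately tiny toy sketch in shell, not a real implementation (which would live in a process engine): each line of a hypothetical actions.txt holds an action and its compensating action, separated by a pipe.

: > undo.log                      # start with an empty compensation log
while IFS='|' read -r action undo; do
  if sh -c "$action"; then
    echo "$undo" >> undo.log      # success: remember the compensating action
  else
    tac undo.log | sh             # failure: undo everything in reverse order
    exit 1
  fi
done < actions.txt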


Since this is quite a complex subject, which is probably far from clearly explained in these couple of lines, I am happy to answer any remaining questions.

Java UNIX Terminal Reader

April 27th, 2008

Lately, when I was working on a simple tool that allows users to modify server objects, I was in need of a good text-based user interface for Java. Unfortunately, Java only comes with very simple console input/output. The standard System.in functionality depends on the shell from which it is called. On Windows this works quite well, since command line history and editing features are also available while the Java application is waiting for input. On UNIX, on the other hand, those features do not work within the Java program and the input of characters is quite wearisome. So for UNIX derivatives I decided to build something myself, or to use a library that was already there. I found one lib that did the trick quite well, but it came with too many features. So in the end I built my own terminal reader, which can be found here.

The major issue is switching the terminal to character mode, so you can grab every single input character instead of just getting the whole line after the carriage return. In order not to leave the UNIX shell in character mode when the program is interrupted, it is important to catch a CTRL-C and switch back.
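
What the reader effectively has to do, in Java typically by executing stty via Runtime.exec, can be sketched in shell like this (a sketch of the idea, not the library's actual code):

saved=$(stty -g)                      # remember the current terminal settings
trap 'stty "$saved"' EXIT INT         # restore them even on CTRL-C
stty -icanon -echo min 1 time 0       # character mode: no line buffering, no echo
char=$(dd bs=1 count=1 2>/dev/null)   # read exactly one keystroke
stty "$saved"                         # back to normal line mode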

Synchronous vs. Asynchronous Communication in Enterprise Applications

April 27th, 2008

One of the major issues in enterprise application integration is the unpredictability of all the connected systems. You never know whether a system is up and whether it will respond in time when you need it. Technology has solved that issue by introducing message-oriented middleware that works asynchronously and buffers messages in case a system is unresponsive or occupied with high load.

Unfortunately, most EAI projects that I have encountered ignored or undermined that very basic concept by implementing enterprise services synchronously, even across domain boundaries and even if there was a messaging server in place that would allow asynchronous communication. This has major drawbacks. The availability of a system becomes directly dependent on the availability of every single service it is calling; if a service is not available or does not answer in time, the overall process stalls. It gets worse, since synchronous service calls also mean that resources are blocked while actively waiting for responses: the higher the load, the slower the system gets.

And a simple calculation shows how severe this might become. Let us assume that we call a service that takes around 5 seconds under normal load from a central component, and that the overhead for processing within this component takes around 1 second. With lots and lots of parallel calls, we have to limit the number of concurrent requests because of limited resources like memory. But no matter how high we scale this, we always end up with a 1 to 5 ratio between processing and waiting, assuming constant response times. That means each call occupies a slot for 6 seconds of which only 1 second is actual work, so our central component is bored for 5/6 of the time. If, on the other hand, this call were implemented asynchronously, the overhead processing might take a little longer, but we would completely eliminate that wait time and could therefore process roughly six times the amount with the same resources.