Sunday, January 29, 2012

Fun with Rails, JRuby, and JEE


So a little while ago we were looking at a possible complete rewrite of our customer's web site and we started evaluating what we could do. My first thought was to do a Rails app. I've done a few small Rails projects but none for a public site for an actual customer that expects results. Now our customer's data center only supports .Net and JEE applications so this posed an interesting integration issue. I had heard that you could get Rails applications to run in a JEE application server using JRuby but I had never done it. So I took some time to experiment with it and this is what I found...

First try

So first off I just installed JRuby. I'm on Ubuntu 11.10 so I just did what I do for installing anything:

apt-get install jruby
This got me JRuby 1.5.1. I then proceeded to install warbler (a Rails-to-JEE war file packager).
gem install warbler
Then I created an empty Rails app to experiment with:
rails new testapp --database mysql
This created a standard rails app directory. I then created a simple scaffold just to get something in there that would actually interact with the database.
rails g scaffold Person first_name:string last_name:string
I made sure that the databases existed:
mysql -u root mysql -e 'create database testapp...'
I then migrated the databases:
rake db:migrate RAILS_ENV=production
I then proceeded to package up the app in a war file by executing the warble command that the warble gem provides:
warble
And it churned out a testapp.war file. I then deployed the war file to my local Tomcat directory, started it up and hit the app with my browser. All the static content was served up just fine and all the dynamic content that actually touched ruby code did not. In fact, when trying to reach the dynamic portions, the request timed out. Not a 500 error message or anything, just nothing. And nothing showed up in the Tomcat logs either. Which made trying to guess the issue a nasty nightmare.

Resulting war file

So of course I took a look at the generated war file and looked to see what was in there. And surprisingly enough, it looked much like my Rails directory. All the public content (static html, style sheet, javascript files) was in the top level so it could be served up by regular requests to the files. In the WEB-INF folder you get the app, config, gems, and vendor directories with what you would expect in them. The web.xml is real simple: a few context parameters and a request filter pointing all traffic to a RackFilter:

  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"



And of course in the lib directory you get the 3 jar files that do all the black magic: jruby-core-XXX.jar, jruby-rack-XXX.jar, and jruby-stdlib-XXX.jar. These files hand off all servlet requests to the Ruby code that's hidden in the WEB-INF folders. Quite remarkable really. Simple. Noninvasive. Just wish it worked.

Logging and debugging

In general, I like the debugging that you get with Rails. Development logs are very verbose and you get lots of good information. However, there's a disconnect between what JRuby/Rails would log (actual errors in the Ruby code) and what Tomcat would log (war deployment issues and lifecycle errors). After a little googling I found that you can set Rails' logging to go to STDOUT. That way Tomcat would pick it up in the regular Tomcat logging. Great! So I went into my Rails application and in config/application.rb I added a line:

config.logger = Logger.new(STDOUT)
So I re-warbled (I'm liking that as a verb) and redeployed to Tomcat. And when I went to access the dynamic portion of the site, I got a stack trace in the log. Wonderful! Now I could try to fix something. Well, at the top I noticed something rather odd. It said that I was using Ruby 1.8. When I got all the Rails stuff working I was working with an RVM install of Ruby 1.9.2, and I thought my JRuby was new enough that it'd be using 1.9 as well.

Version confusion

So I found out that you can get JRuby via RVM as well. So I installed the latest JRuby from RVM (1.6.5). And I found out that when you use JRuby with RVM your standard Ruby becomes JRuby.

rvm use jruby-1.6.5
ruby -v
jruby 1.6.5 (ruby-1.8.7-p330) (2011-10-25 9dcd388) (Java HotSpot(TM) Server VM 1.6.0_22) [linux-i386-java]
Note that in parentheses it shows which Ruby version JRuby will be emulating. So even the latest JRuby was still going to run as Ruby 1.8. After more googling I found that you can pass JRuby an argument to use 1.9 mode, or set an environment variable to always use 1.9 mode.
jruby --1.9 -v
jruby 1.6.5 (ruby-1.9.2-p136) (2011-10-25 9dcd388) (Java HotSpot(TM) Server VM 1.6.0_22) [linux-i386-java]

export JRUBY_OPTS=--1.9
jruby -v
jruby 1.6.5 (ruby-1.9.2-p136) (2011-10-25 9dcd388) (Java HotSpot(TM) Server VM 1.6.0_22) [linux-i386-java]
Later I was informed by @brianthesmith via @headius that JRuby master recently switched to Ruby 1.9 as the default. So hopefully this little version issue will go away with the next release.

The next issue I had was kinda stupid on my part, but again I'm not too familiar with the workings of Ruby/JRuby. When I install a gem using Ruby, either with a gem install or a bundler install, it isn't installed system wide where JRuby would be able to use it. It's only available to that Ruby install. So JRuby knows nothing of the gems installed in your standard Ruby. Seems a little obvious now, but at the time it was a bit frustrating. Also, JRuby cannot use native gems, meaning that if something uses native code (like a database driver), JRuby will not be able to use it. For example, in your Rails app you might have ActiveRecord use the mysql2 adapter gem. In JRuby you would need the activerecord-jdbcmysql-adapter gem.
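One way to keep a single Gemfile working under both Rubies is Bundler's platforms blocks. A sketch (gem names here match the mysql case above; adjust for your database):

```ruby
# Pick the right MySQL adapter depending on which Ruby is running.
platforms :ruby do
  gem 'mysql2'
end

platforms :jruby do
  gem 'activerecord-jdbcmysql-adapter'
end
```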

Also fun, is that if you execute:

gem install rails
You will get the latest and greatest Rails (3.2 as of this writing). If you want to use an earlier version, good luck! You have to change your Gemfile to specify an earlier version. But several of the gems listed in the automatically generated Gemfile, such as sass-rails and coffee-rails, are tied to that initial Rails version, and you don't need them right away anyway. The standard auto-gen Gemfile looks like this:
source ''

gem 'rails', '3.2.1'

# Bundle edge Rails instead:
# gem 'rails', :git => 'git://'

gem 'activerecord-jdbcmysql-adapter'

gem 'jruby-openssl'

# Gems used only for assets and not required
# in production environments by default.
group :assets do
  gem 'sass-rails',   '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'

  # See for more supported runtimes
  gem 'therubyrhino'

  gem 'uglifier', '>= 1.0.3'
end

First off, I noticed that executing bundle install took forever or timed out. But if you change the source URL, or better yet, set the source to :rubygems, you'll actually be able to do a bundle install.

Another fun thing is that if you're using JRuby whilst executing rails new appname, it will give you JDBC database adapters in the Gemfile, but not in the config/database.yml file. But if you re-warble and create a war file with the non-JDBC adapters, it won't work in your JEE deployment. Fun.
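For reference, the JDBC flavor of config/database.yml comes out looking something like this (a sketch for the mysql case; database and credential names will vary with your app):

```yaml
production:
  adapter: jdbcmysql
  database: testapp_production
  username: root
  host: localhost
```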

What I wish the Gemfile defaulted to was this:

source :rubygems

gem 'rails', '3.2.1'
gem 'activerecord-jdbcmysql-adapter'
gem 'jruby-openssl'
Then add other gems as you need them. And if you want an earlier version of Rails, you can get it just by specifying a different version (e.g. gem 'rails', '3.0.7').

Warble configuration

Finally, there are some adjustments I needed to make to the warble packager for the specifics of my app. To configure warble, execute the following at the top directory level of your application:

warble config
This will generate a config/warble.rb file where you can make changes to the warble configuration.

So one change I needed to make was to provide a list of all the gems that the webapp needed when it is deployed. This will bundle them up in the war file. It'd be really nice if warble could read through your Gemfile and update this setting itself. But for now, it's pretty simple to explicitly specify the needed gems. To do this, go in and uncomment the config.gems line in config/warble.rb:

config.gems += ["activerecord-jdbcmysql-adapter", "jruby-openssl"]

One last configuration in the warble config file that I had to do was to set the JRuby compatibility version:

config.webxml.jruby.compat.version = "1.9"
This will set a context parameter in your web.xml file that will tell JRuby to be in the 1.9 mode.
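In the generated web.xml, that context parameter should come out looking something like this:

```xml
<context-param>
  <param-name>jruby.compat.version</param-name>
  <param-value>1.9</param-value>
</context-param>
```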

Finally it works!

At this point, I was good to go. I re-warbled my app to get a new war file, then deployed it to my Tomcat server. And everything seemed to function as expected.

It seems that these war files are self-contained, so you don't need JRuby or any gems installed on the deployment server. Any standard JEE server should be able to deploy the JRuby/Rails app without any knowledge of Ruby or JRuby. That's pretty cool. So if your datacenter does not support Ruby on their production servers but will support JEE servers, this may be an alternative for you and your team.

Last thoughts

Well, this was my first experience with JRuby and I gotta admit that these little hiccups were a bit frustrating. I'm not quite sure how things could be changed to help complete noobs like me set up an initial deployment, but I hope that this will help someone who's possibly struggling with the same setup issues. And if I got something terribly wrong in the above description, please correct me and I'll edit the post. Thanks!

Saturday, July 11, 2009

Animations in Java 3D

Wow, I haven't done much on this blog in a while... I haven't done a whole lot with the game lately but I got it up on github if anyone is interested:

So I have been messing around a bit with how the pieces move. The way it used to be done was with keyframes. The problem with that was there was no way to tell when you had reached the end of the animation to signal something else to happen. So I finally just switched it over to using Behaviors instead.

From my understanding, all you need to do is extend the Behavior class, set the scheduling bounds (setSchedulingBounds()), then add it to the scene graph. Then you override two methods: initialize() and processStimulus(Enumeration stimuli).

In the initialize method you specify what event, or wake up criterion, will trigger the behavior. For animations you can make it based on a certain number of frames going by, a specified amount of time going by, or an AWT event occurring.

In the processStimulus method you loop through all the stimuli looking for the wake up criterion that you specified. Then you do whatever you want done (move/rotate/whatever something a bit), and if you want to wake up the next time a criterion happens, you call the wakeupOn() method again.

Ok I'm not great at describing it, let's just see some code...

import java.util.Enumeration;
import javax.media.j3d.*;
import javax.vecmath.Vector3f;

public class AnimationBehviour extends Behavior {
    private static final float MOVEMENT_SPEED = 0.1f;
    private static final int ANIMATION_WAITING = 10;

    // ListenerManager/IListener are my own little listener helper classes
    private final ListenerManager finishedListenerManager = new ListenerManager();
    private final TransformGroup transformGroup;
    private final Vector3f currentLocation;
    private final Vector3f moveToLocation;
    private boolean atLocation;

    public AnimationBehviour(Bounds bounds, TransformGroup transformGroup,
            Vector3f currentLocation, Vector3f moveToLocation) {
        this.transformGroup = transformGroup;
        this.currentLocation = currentLocation;
        this.moveToLocation = moveToLocation;
        setSchedulingBounds(bounds);
    }

    public void initialize() {
        wakeupOn(new WakeupOnElapsedTime(ANIMATION_WAITING));
    }

    public void processStimulus(Enumeration stimuli) {
        while (stimuli.hasMoreElements()) {
            WakeupCriterion criterion = (WakeupCriterion) stimuli.nextElement();
            if (criterion instanceof WakeupOnElapsedTime) {
                moveCloser();
            }
        }

        if (!atLocation) {
            wakeupOn(new WakeupOnElapsedTime(ANIMATION_WAITING));
        } else {
            // done moving - tell anyone who registered as a finished listener
            finishedListenerManager.fireEvent();
        }
    }

    private void moveCloser() {
        if (isFinished(currentLocation)) {
            atLocation = true;
        } else {
            updatePosition(currentLocation, moveToLocation);
            Transform3D transformation = new Transform3D();
            transformation.setTranslation(currentLocation);
            transformGroup.setTransform(transformation);
        }
    }

    private void updatePosition(Vector3f currentPositionVector,
            Vector3f endingPointVector) {
        float[] currentPosition = new float[3];
        float[] endingPoint = new float[3];
        currentPositionVector.get(currentPosition);
        endingPointVector.get(endingPoint);

        // nudge each coordinate a little closer to the ending point
        for (int i = 0; i < currentPosition.length; i++) {
            if (currentPosition[i] < endingPoint[i]) {
                currentPosition[i] += MOVEMENT_SPEED;
            } else if (currentPosition[i] > endingPoint[i]) {
                currentPosition[i] -= MOVEMENT_SPEED;
            }
        }
        currentPositionVector.set(currentPosition);
    }

    private boolean isFinished(Vector3f currentPositionVector) {
        boolean finished = false;
        if (currentPositionVector.x <= moveToLocation.x + .1
                && currentPositionVector.x >= moveToLocation.x - .1
                && currentPositionVector.y <= moveToLocation.y + .1
                && currentPositionVector.y >= moveToLocation.y - .1
                && currentPositionVector.z <= moveToLocation.z + .1
                && currentPositionVector.z >= moveToLocation.z - .1) {
            finished = true;
        }
        return finished;
    }

    public void addAnimationFinishedListener(IListener animationFinishedListener) {
        finishedListenerManager.addListener(animationFinishedListener);
    }
}
Also, since you are specifying whether or not to continue on each iteration, you can add listeners for when the animation is finished and notify them. Here's how I call this class.

public void animateAlongPath(List path) {
    this.path = path;
    pathIndex = 0;
    animateNextStep(currentLocation);
}

private void animateNextStep(final Vector3f currentLocation) {
    final Vector3f moveToLocation = getNextStep();
    if (moveToLocation != null) {
        AnimationBehviour animationBehaviour =
                new AnimationBehviour(bounds, transformGroup,
                        currentLocation, moveToLocation);
        animationBehaviour.addAnimationFinishedListener(new IListener() {
            public void fireEvent() {
                // when one step finishes, kick off the next one
                pathIndex++;
                animateNextStep(moveToLocation);
            }
        });
        // (the behavior still has to be added to a live branch group
        // before it will be scheduled)
    }
}

private Vector3f getNextStep() {
    Vector3f nextStep = null;
    if (pathIndex < path.size()) {
        nextStep = (Vector3f) path.get(pathIndex);
    }
    return nextStep;
}
So the piece will walk through the path of locations specified and when the animation is done for a given step it will notify the next step animation.

Saturday, May 23, 2009

The Importance of Stubs

Wow, I haven't posted in a while. I'm still crunching away at the game but a few things have popped up at home that have kept me from spending too much time. Leaky basements and a new puppy tend to do that I guess.

Nothing new to report with the game, still just hammering away at the game's user stories. I did switch over from Continuum to CruiseControl for my continuous integration environment. I've been trying out git-svn with some success and liking it for the most part.

So to make this post a bit more interesting I decided to rant about something that I've seen in a couple of teams I've worked with, both as a developer and as a coach. The issue is that of having dependencies on other teams for artifacts that are critical to my team's product. This is especially problematic in larger organizations pushing for an "enterprise" solution - which typically translates into multiple development teams working separately for months trying to configure an off-the-shelf, over-priced product, then throwing everything together in an integration nightmare and a regression testing period that might outlast the actual development time.

Being responsible for a product when you don't fully control all the moving pieces can be frustrating and at times paralyzing. But I've found a solution that's produced some good results: create a stub of everything that you depend on but don't control.

For example, if your team is building a web client that consumes services for all of your back-end work, stub out each of the services that you rely on. Define an interface that your team and the team building the service can agree upon and build a basic implementation.

If you're relying on a web service that provides search capabilities, get the WSDL and generate a client and service. Put just enough implementation into the service to make it functional. Have it return one of ten result sets based on ten different query strings - something simple but functional as far as inputs and outputs.
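To make the idea concrete, here's a minimal sketch of such a stub (the names are made up; a real one would implement the service interface generated from the WSDL):

```java
import java.util.*;

// Hypothetical stub of a search service: it returns one of a few
// canned result sets keyed by query string - just enough behavior
// to drive the client application in tests.
public class StubSearchService {
    private final Map<String, List<String>> cannedResults =
            new HashMap<String, List<String>>();

    public StubSearchService() {
        cannedResults.put("books", Arrays.asList("Refactoring", "TDD by Example"));
        cannedResults.put("empty", Collections.<String>emptyList());
    }

    public List<String> search(String query) {
        List<String> results = cannedResults.get(query);
        return results != null ? results : Collections.<String>emptyList();
    }
}
```

Swapping in the real service later is then just a matter of binding the real implementation instead of the stub.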

Once you have the stub in place, you can write your automated UATs (user acceptance tests) around your application using the stub and ensure that your application is processing the results correctly. Once your UATs are in place and you have your continuous integration environment, you can swap in the actual services and just kick off the build to verify the integration. This should make it fairly painless!

Now obviously the interfaces can change as the project continues but just make sure that when the changes occur, all dependent teams get an updated version of the interface. Then it's as simple as regenerating the stub service and client code and making a few adjustments here or there. Then run the UATs again to ensure that you have integrated the changes correctly so that the behavior of your application is still what the user expects.

I'm convinced that this practice alone will save large development departments millions of dollars now lost to teams trying to throw everything together at the last minute. And it will probably save developers the stress of the integration nightmare.

End rant.

Saturday, March 28, 2009

Make things testable

So I was going along with my stories and I realized that there's a bunch of stuff that I am not testing. In general with an MVP pattern, you make the V as slim as possible because it's really hard to test things in the view. Can you write a test that says that your "ok" button actually shows up in the lower right corner of the popup dialog? So you make it really slim so that you just kinda assume that Swing or SWT knows how to render the stuff correctly to the screen and that you are using the API correctly. So with that in mind, I went about writing my game... which is mostly stuff in the view. At this point in the game, there's very little non-view stuff going on. So I'm not testing a whole bunch and that really bothers me.

So I started re-evaluating whether or not my view could be tested. I'm writing this using Java3D, which is all java, so it should be fairly testable. In the end, to make something show up, you need to add it to your branch graph. So I started looking at ways to inject stubs for my branch groups into my objects and see what gets populated. Unfortunately, Java3D does not code to interfaces. So I made a bunch of adapters around the Java3D classes that implemented interfaces containing the methods I was using. Next I created a bunch of factories to make sure that I always got the same sort of adapter wrapping object back. Then I exposed an interface on the factory and injected that into a few generators that would add my game grid and game pieces. So once I started pulling this out, I realized that very little of my code is actually Java3D code and it's all really testable.

So I guess the lesson that I've learned with this and a few other projects I've worked on is to try to get around the API/Framework/Whatever that you're working with by using interfaces (and adapters where needed) so you can test your own code.

Here's an example from what I did with the BranchGroup...

public interface IBranchGroup {
    void compile();

    void addChild(Node node);

    void addChild(IBranchGroup child);

    BranchGroup getInternal();
}

public class BranchGroupAdapter implements IBranchGroup {
    private final BranchGroup branchGroup;

    public BranchGroupAdapter(BranchGroup branchGroup) {
        this.branchGroup = branchGroup;
    }

    public void compile() {
        branchGroup.compile();
    }

    public void addChild(Node child) {
        branchGroup.addChild(child);
    }

    public void addChild(IBranchGroup child) {
        branchGroup.addChild(child.getInternal());
    }

    public BranchGroup getInternal() {
        return branchGroup;
    }
}

public interface IBranchGroupFactory {
    IBranchGroup createBranchGroup();
}

public class BranchGroupFactory implements IBranchGroupFactory {
    public BranchGroupAdapter createBranchGroup() {
        return new BranchGroupAdapter(new BranchGroup());
    }
}

public class GameLauncher implements IGameLauncher {
    public void launchGame(IUniverse universe, IBranchGroupFactory branchGroupFactory,
            IGameEngineFactory gameEngineFactory) {
        IBranchGroup branchGroup = branchGroupFactory.createBranchGroup();
        IGameEngine gameEngine = gameEngineFactory.createGameEngine(branchGroup);

        ISelecterFactory selecterFactory = new SelecterFactory(universe);
        // ... wire the engine and selecter together and add the branch
        // group to the universe ...
    }
}


Saturday, March 14, 2009

Managing Your Tests

So I've been writing some code for a while and knocking out a few stories and the code base is growing, and so are the tests. The thing that allows you to refactor and to explore better ways of doing something is having good tests that pass. And by good tests, I mean that there are both unit tests and integration tests on all classes. Before you write any code, you have to write a test. If you find a piece of functionality that you can't test, try pulling it apart or isolating the un-testable portion as much as possible. I like to think of the tests as a safety net for your code. The tests describe the behavior of my application and so as long as all the tests pass, the code is free to be massaged into whatever I like.

However, an interesting issue arises as the tests grow. The tests begin to exhibit some code smells. There's probably some code duplication or possibly some tests that test the same behavior, especially among extensive integration tests. Eventually, I start to want to refactor my tests. Which really makes sense, because if you don't maintain your tests they will become unusable and you'll eventually try to get around them. So refactoring the tests really is a necessity.

Now the danger here is, how do you know that you didn't break a test? I write tests for my code so that I can refactor my code, but there are no tests for my tests to ensure that I didn't break them. I can run them against the codebase and ensure that they still pass, but it's easy to get false positives that way.

I've been running into this at work recently and I'm trying to find a good principle or something that will ensure the stability of the tests through refactoring. Unit tests are usually fairly easy to refactor. The tests themselves are straightforward and you see real easily what they're testing. Integration tests are not so clear, or at least they can be more challenging. My advice so far has been to make sure that there are good unit tests before you start refactoring the integration tests, to make only small changes at a time, and as much as possible to verify all the changes all the time.

If anyone has other comments or suggestions about what to do for refactoring tests, please let me know!

Saturday, February 28, 2009

What to do when you don't know what to do

The last story that I was working on required me to do some animation in Java3D, which I didn't know how to do. So I did a little Spike to learn a bit more about what was involved, and I finally finished up the story. With this on my mind, I thought I would spend some time talking about Spikes.

I think Spikes are one of the most misused parts of Agile. The general understanding of a Spike is that it is a research story. If you have questions about how to do your User Story, you spend time researching, or spiking, the questions that you have. I've seen different teams handle Spikes in different ways. The worst I have witnessed are spike stories that drag on from iteration to iteration. After a time, when something is finally delivered, it's not in a usable state, but too much time has been spent on it, so it winds up being put into the code base with no tests, no pair programming, etc. The best use of a Spike is to use it only when you don't know enough to estimate the story; then you spike until you know just enough to do the story. Any code that you write to answer those questions is considered "spike code" and you throw it out.

One really good practice to get into to avoid spike abuse is to always have your Spikes be time-boxed. You set a limit on the amount of time you will spend researching. Once that limit has been reached, the team can review whether they need more time or should possibly take a different route. If you can't learn enough to estimate within a day or so, you really should reevaluate whether that technology is worth the time. If an off-the-shelf product takes a week's worth of time to "spike" so that you know how to use it, maybe a simpler approach that doesn't involve that product is the better one.

I've worked with some people that use the term "Spike" to justify taking a long time to write crummy code that is meant to be used as a prototype. Truth is, by the time they can finally write that code, they know enough to estimate the actual story and begin working on it. Time spent on a spike after you have answered your question is no longer spiking; it's time spent working on the story.

I've also heard the phrase "architectural spike," which boils down to taking entirely too much time to write up a document to give to the team (that they'll probably never read) describing the solution with many charts and diagrams. And to me, that just goes against the concept of letting your tests drive your code and letting the design emerge from your refactoring.

So to recap, Spikes are supposed to be time-boxed (maybe a day) research efforts that answer enough questions so that you can estimate a story. Anything else should be your standard test-driven development based on satisfying the acceptance criteria on your user stories.

Saturday, February 21, 2009

Alright, so I blew through the first two stories that I was attempting:

1. User opens the application and sees the game board. The game board is a chess board (8 x 8 - alternating black and white squares) and the background is gray. The camera is looking at the center from above and toward one side.

2. The user has one piece (a blue ball) on the board that is located on one side of the board on one of the center squares.

And up until now there was really no design to it. I started coding a class that had the capability of drawing on the 3D canvas and just kept going. Neither of these stories contains any user interaction yet, so I was having a hard time coming up with testable code.

I eventually saw that there was a bit of logic needed for creating the checkered game board. So I thought I'd extract out something that would need to know how to do that. So I started going into an MVP pattern. I wanted the view to get something that it could use to create the proper rows and columns with the right alternating colors without too much logic. It wound up looking like this:

public void constructGrid(GameGridData data) {
    for (int x = 0; x < data.getTileData().length; x++) {
        for (int z = 0; z < data.getTileData()[0].length; z++) {
            Tile tile = new Tile(data.getTileData()[x][z]);
            board.addChild(tile);  // board is the view's branch group
        }
    }
}
The Tile class was an abstraction I had done to encapsulate the creation of the geometry and details of creating the individual squares. The TileData is a bean that I had to construct to keep the back-end models from knowing anything about the Java3D APIs. The GameGridData has the algorithm needed to get all the positions and colors in the right data structure (the TileData bean) that the view needed. I have a feeling that GameGridData may morph into an abstract class where subclasses will be specific for Chess boards or terrain looking grids or vast spans of black space. Once that was all pulled apart, I was able to construct a Model that generated these TileData beans and a Presenter that could communicate between the two.
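The alternating-color part of that algorithm is simple enough to sketch (with made-up names - the real GameGridData also fills positions and colors into TileData beans):

```java
// Sketch of the checkered-board logic: squares whose coordinates
// have the same parity share a color.
public class CheckerboardSketch {
    public static boolean[][] buildBlackSquares(int rows, int columns) {
        boolean[][] isBlack = new boolean[rows][columns];
        for (int x = 0; x < rows; x++) {
            for (int z = 0; z < columns; z++) {
                isBlack[x][z] = (x + z) % 2 == 0;
            }
        }
        return isBlack;
    }
}
```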

Now what I have is a bunch of smaller classes that don't contain "view code" all of which are very testable! So that's where the first real signs of a design started to form. I had a need to be able to test what I was doing and no real good way of isolating the code that needed testing. So by separating out what was just calls to the framework's API (or my abstractions around the framework) and the logic needed for the correct calls, I was able to write a few tests and get things a bit more agile.

The third story actually got me into some user interaction:

3. The user can select a square on the board and the ball will move to that square. Movement is shown and not just a sudden change in location.

Selecting an object in Java3D is a bit more complicated than in Swing. Picking an object is basically translating a point that your mouse picked on the screen to a ray or cone that extends from the point down into the canvas and then seeing what objects intersect with that ray or cone. So my abstractions around the actual tiles in the grid by my Tile class paid off when I found out that the API will return the Node or Shape3D object that was in the intersection path. So I was able to retrieve the same TileData bean that I used to create the selected Tile object and then notify the presenters that are listening to the view.

And this is what it wound up looking like:

final PickCanvas pickCanvas = new PickCanvas(canvas3D, board);

canvas3D.addMouseListener(new MouseAdapter() {
    public void mouseClicked(MouseEvent mouseEvent) {
        pickCanvas.setShapeLocation(mouseEvent);
        PickInfo pickClosest = pickCanvas.pickClosest();
        if (pickClosest != null) {
            Tile tile = (Tile) pickClosest.getNode();
            selectedTile = tile.getTileData();
            // ... notify the presenters that are listening to the view ...
        }
    }
});
So then I needed a way to move the user's game piece once the position selection took place. I made some similar refactorings to the code that created the user's piece with an MVP pattern. Then I let the model for the game grid and the model for the user piece communicate with each other. The piece model then notified its presenter, which in turn told the view to move the piece to the correct location.

public UserPieceModel(final IGameGridModel gameGridModel) {
    gameGridModel.addPositionSelectedListener(new IListener() {
        public void fireEvent() {
            currentPosition = gameGridModel.getSelectedPosition();
            // ... notify this model's own listeners so the presenter
            // can tell the view to move the piece ...
        }
    });
}
So this story is just about wrapped up. I currently just have the user's piece suddenly jumping to the new location, but the story has some more specific requirements: "Movement is shown and not just a sudden change in location." I made it that way intentionally because I know nothing of Java3D's animation APIs. So now I'm just doing a quick spike to figure out how to do that, and then I'll be able to finish this story up and move on to the next.

So overall I think things are progressing nicely. I wasn't liking where this was going at first, with a whole bunch of un-testable UI code, but now it seems like I've got the start of a design that allows me to test what I'm creating. And really that's the point of this blog. I called it Emergent Development because that's what good software development should be. You start going and you realize you need something so that you can make it more testable, more loosely coupled, more flexible, and so you interject a pattern or two so you can test your stuff and just keep going. So your design comes from need, not from an over-thought-out UML diagram created long before any real code started. Design comes as you need it, no sooner.