Editing EMF Ecore models graphically with Ecore Tools

The Ecore Tools project (still in the Incubation phase) provides a graphical editor for EMF Ecore models. It is included in the Eclipse Modeling Tools package of the Eclipse Ganymede release.

A notable feature is that you can create an Ecore diagram from an existing Ecore model. All model elements will be automatically added to the diagram and arranged. This makes it very easy to start using the diagram editor in projects that already have existing Ecore models.

The following screenshot shows an Ecore diagram for the EMF Library Example, created with a few mouse clicks.


Leaving work with a red or a green bar

When using the Test Driven Development approach, is it better to leave work with a red or a green bar? I think there are valid reasons for both.

If you leave with one failing test, you know exactly where to start when you arrive the next morning. It is really obvious what your next task will be - making the test pass.

Leaving with a green bar has the advantage of a clean state, i.e. it feels like being somehow done for the day. Also, in theory, a TDD cycle ends with a final successful test run after cleaning up (refactoring).

What is your opinion?


Static analysis

The idea of static code analysis is to find defects in software by analyzing the source code rather than executing the program. Tools can detect defect patterns, overly complex code, coding standard violations and much more.

In practice, some problems arise when trying to get static analysis in place. The most common ones in my experience are:
  • Configuring the analysis tool. Some tools come with a predefined configuration of checks. In most cases they don't fit your individual needs.
  • Legacy code. In almost all cases you already have an existing code base that does not conform to the rules you have defined.
  • Generated code. Some projects use MDD tools that generate source code. This code usually does not comply with the static analysis rules either.
More often than not, these issues prevent teams from using static analysis at all. However, I think there are some rather simple approaches to these obstacles.

For the first point, the only solution is to take the time to define your own rule set: consider every available check and decide whether you want your source code to comply with it or not.

The second and third points can also be solved. Tools like eclipse-cs (an Eclipse plug-in utilizing Checkstyle) allow for per-project configurations. For new source code, use your complete configuration. For legacy code, you can either forgo static analysis entirely or decide to clean up the code over time, enabling one check after the other in the configuration. For generated code, the best approach is to isolate it from hand-written code and disable static analysis for it completely.
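With Checkstyle, the legacy and generated-code cases can also be handled via a suppressions file. The following is only a sketch; the directory names and check names are made-up examples for illustration:

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.1//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
  <!-- Generated code: disable all checks (assuming it lives under src-gen) -->
  <suppress files="src-gen[\\/].*" checks=".*"/>
  <!-- Legacy code: suppress only the checks that have not been cleaned up yet -->
  <suppress files="legacy[\\/].*" checks="JavadocMethod|MagicNumber"/>
</suppressions>
```

Removing one `suppress` entry at a time is a simple way to tighten the rules gradually.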


Finding unreferenced JUnit Tests in Eclipse

When working with JUnit 3.8.x, one usually has a TestSuite (often named AllTests) that contains all individual test cases of a Java project. Especially when dealing with a large number of projects, it is hard to ensure that every single test case is included in an AllTests suite. Sometimes one encounters a test case and wonders why it is not executed on the continuous integration server. In most cases, however, one simply won't notice.
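Such an AllTests suite typically looks like the following sketch; the contained test case classes are hypothetical stand-ins for a project's real tests:

```java
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// A typical JUnit 3.8-style AllTests suite.
public class AllTests {

    // Hypothetical test cases standing in for a project's real ones.
    public static class ParserTest extends TestCase {
        public void testParse() { assertTrue(true); }
    }

    public static class ValidatorTest extends TestCase {
        public void testValidate() { assertTrue(true); }
    }

    public static Test suite() {
        TestSuite suite = new TestSuite("All tests of the project");
        suite.addTestSuite(ParserTest.class);
        suite.addTestSuite(ValidatorTest.class);
        // A test case that is never added here is silently skipped
        // on the continuous integration server.
        return suite;
    }
}
```

A test case that exists in the workspace but is referenced by no such suite is exactly what the plug-in searches for.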

I wrote a little Eclipse plug-in that addresses this issue. It searches all projects for test cases that don't have any references in the workspace.

The search for unreferenced tests can be started from the Search menu:

Here is what the result dialog looks like:

The plug-in can be downloaded here.

Notes: The plug-in is not applicable to JUnit 4, and it does not handle more complicated scenarios, e.g. a hierarchy of TestSuites.


Extreme Feedback Device

Recently, I read about Extreme Feedback Devices (e.g. lava lamps).

The idea is to set up a physical device that displays the current status of an automated build in an eye-catching way, so that it is really hard not to notice that the build is down. I loved the idea and decided we had to have an Extreme Feedback Device (XFD) of our own.

I wanted an especially cheap and simple solution. I used a mySmartControl M8 board with an ATMega8 AVR RISC microcontroller from Atmel, modified the circuit of a cheap LED effect lamp and connected it to the microcontroller, so that each color can be switched on and off individually.

The microcontroller board has a USB interface and shows up as a virtual COM port. A small C program running on the microcontroller listens for commands sent via the COM port from the PC and reacts by switching the LEDs on and off. On the PC, a Java program polls the web page of the build server and sends commands to the microcontroller using the serial communication library rxtx.
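The PC-side logic can be sketched roughly as follows. The command characters and build status strings are assumptions of mine, and the actual web polling and serial I/O via rxtx are omitted:

```java
// Hypothetical sketch of the PC-side logic: map the build status polled from
// the build server's web page to a one-character command that is then written
// to the virtual COM port. The protocol (characters 'g', 'y', 'r') and the
// status strings are made up for illustration.
public class BuildLampProtocol {

    /** Returns the command byte to send to the microcontroller. */
    public static char commandFor(String buildStatus) {
        if ("SUCCESS".equals(buildStatus)) {
            return 'g';          // switch on the green LED
        } else if ("BUILDING".equals(buildStatus)) {
            return 'y';          // yellow: a build is in progress
        } else {
            return 'r';          // red: build failed or status unknown
        }
    }
}
```

Treating an unknown status as red errs on the side of drawing attention, which is the whole point of the device.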

The hardware cost about 35€. It took me 5 hours to make everything work; the hardest part was the microcontroller programming. Luckily, there was a ready-to-use example program that did exactly what I needed :-)

Here is what the final result looks like:

After using the device for a couple of days, it turned out to be most useful when the device flashes a couple of times on a state change, so that one notices a build failure immediately, even out of the corner of one's eye.
As the device also has a blue LED, and can therefore display basically arbitrary colors by mixing the primary colors (RGB), one could visualize additional aspects of the build status.


TDD cycle length

In theory, a Test Driven Development cycle consists of the following steps:
  • Write a new failing test
  • Run the test and see it fail
  • Code (in order to pass the test)
  • Run the test again and see it pass
  • Refactor
In the literature, the duration of one TDD cycle is stated to be about a couple of minutes.

I was wondering what my average personal cycle time is. So I wrote a little Eclipse plug-in that listens to test runs and computes the average time between a test run with failures and the next test run without any failures.
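The underlying computation can be sketched like this. The class and method names are made up for illustration; the real plug-in hooks into Eclipse's JUnit test run notifications instead of taking a plain list:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the measurement: given a chronological list of test
// runs, compute the average time from a run that turned the bar red to the
// next run that turned it green again.
public class TddMeter {

    public static class TestRun {
        final long timeMillis;
        final boolean passed;

        public TestRun(long timeMillis, boolean passed) {
            this.timeMillis = timeMillis;
            this.passed = passed;
        }
    }

    /** Average red-to-green duration in milliseconds, or -1 if there is none. */
    public static long averageCycleMillis(List<TestRun> runs) {
        List<Long> cycles = new ArrayList<Long>();
        long redSince = -1;
        for (TestRun run : runs) {
            if (!run.passed && redSince < 0) {
                redSince = run.timeMillis;                // bar just turned red
            } else if (run.passed && redSince >= 0) {
                cycles.add(run.timeMillis - redSince);    // back to green
                redSince = -1;
            }
        }
        if (cycles.isEmpty()) {
            return -1;
        }
        long sum = 0;
        for (long cycle : cycles) {
            sum += cycle;
        }
        return sum / cycles.size();
    }
}
```

Note that consecutive red runs are counted as one long cycle, measured from the first failing run.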

Here is what the very simple plug-in looks like:

In case you want to give it a try you can download it here.
The plug-in contributes a view called TddMeter. In order to show it, simply hit <CTRL>-<3> and type TddMeter.

Note: The plug-in requires Eclipse 3.3

Update: The plug-in now persists its state.


Eclipse Plug-in Dependency Visualization

When developing Eclipse-based software consisting of a large set of plug-ins it is a good idea to keep track of the dependencies between the individual plug-ins.
There is an Eclipse Incubation project that makes it possible to visualize the directly and indirectly referenced plug-ins of a selected plug-in:
PDE Incubator Dependency Visualization

When I gave the plug-in a try I missed two things:
  • I also wanted to show the incoming dependencies
  • I wanted to be able to exclude the Eclipse plug-ins
Basically, I wanted to show the complete dependency graph of all my plug-ins. The simplest solution I could think of was to
  • Add a checkbox to also show the incoming dependencies of the plug-ins
  • Provide a text field to define exclude filter patterns
So I checked out the source code and tweaked it a little. Now it is possible to show the whole dependency graph of all my plug-ins by selecting a plug-in that is directly or indirectly referenced by all other plug-ins.
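Stripped of the PDE and Zest specifics, the underlying graph computation amounts to a simple traversal with an exclude filter. The following is a hypothetical sketch; the real view works on PDE's plug-in model rather than a plain map:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: collect all plug-ins directly or indirectly referenced
// by a root plug-in, skipping ids that match an exclude prefix such as
// "org.eclipse." to filter out the platform plug-ins.
public class DependencyGraph {

    public static Set<String> transitiveDependencies(
            Map<String, Set<String>> requires, String root, String excludePrefix) {
        Set<String> visited = new LinkedHashSet<String>();
        Deque<String> todo = new ArrayDeque<String>();
        todo.push(root);
        while (!todo.isEmpty()) {
            String id = todo.pop();
            if (id.startsWith(excludePrefix) || !visited.add(id)) {
                continue; // excluded by the filter, or already visited
            }
            Set<String> deps = requires.get(id);
            if (deps != null) {
                for (String dep : deps) {
                    todo.push(dep);
                }
            }
        }
        visited.remove(root); // report only the dependencies, not the root
        return visited;
    }
}
```

Incoming dependencies can be handled the same way by traversing an inverted map.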

Here is what the result looks like:

You can download a binary version of the modified plug-in here.

To get started, hit CTRL-3, type Graph and select the Graph Plug-in Dependencies view. To show the dependencies of a plug-in, right-click in the view and select Focus On...

[Update 26.05.2009] By now, the PDE Incubator Dependency Visualization provides the option to also show the incoming dependencies via a new toolbar action "Show Callers". (see also Bugzilla 206306)

Please see my most recent post on this topic.