A notable feature is that you can create an Ecore diagram from an existing Ecore model. All model elements will be automatically added to the diagram and arranged. This makes it very easy to start using the diagram editor in projects that already have existing Ecore models.
The following screenshot shows an Ecore diagram for the EMF Library Example, created with a few mouse clicks.
When using the Test Driven Development approach, is it better to leave work with a red or a green bar? I think there are valid reasons for both.
If you leave with one failing test, you know exactly where to start when you arrive the next morning. It is really obvious what your next task will be - making the test pass.
Leaving with a green bar has the advantage of a clean state, i.e. it feels like being somehow done for the day. Also, in theory a TDD cycle ends with a final successful test run after cleaning up (refactoring).
What is your opinion?
In practice, some problems arise when trying to get static analysis in place. The most common ones, in my experience, are:
- Configuring the analysis tool. Some tools come with a predefined configuration of checks. In most cases they don't fit your individual needs.
- Legacy code. In almost all cases you already have an existing code base that does not conform to the rules you have defined.
- Generated code. Some projects use MDD tools that generate source code. This code usually does not comply with the static analysis rules either.
For the first point, the only solution is to take the time to define your own rule set: go through every available check and decide whether you want your source code to comply with it or not.
The second and third points can also be solved. Tools like eclipse-cs (an Eclipse plug-in utilizing Checkstyle) allow for per-project configurations. For new source code you use your complete configuration. For legacy code you can either skip static analysis entirely or clean up the code over time, enabling one check after the other in the configuration. For generated code the best approach is to isolate it from hand-written code and disable static analysis for it completely.
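To make the legacy-code approach concrete, here is a minimal Checkstyle configuration sketch that starts with only a couple of uncontroversial checks enabled; the specific modules chosen are illustrative, and the right starting set is of course project-specific:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<!-- Minimal rule set for a legacy-code project:
     enable further checks one by one as the code gets cleaned up -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="UnusedImports"/>
    <module name="EmptyBlock"/>
  </module>
</module>
```

With eclipse-cs, a configuration like this can be assigned to the legacy project only, while new projects get the full rule set.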
I wrote a little Eclipse plug-in that addresses this issue. It searches all projects for test cases that don't have any references in the workspace.
The search for unreferenced tests can be started from the Search menu:
Here is what the result dialog looks like:
The plug-in can be downloaded here.
Notes:
- The plug-in is not applicable to JUnit 4
- The plug-in does not handle more complicated scenarios, e.g. a hierarchy of TestSuites
The idea is to set up a physical device that displays the current status of an automated build in an eye-catching way, so that it is really hard not to notice that the build is down. I loved the idea and decided we had to have an Extreme Feedback Device (XFD) of our own.
I wanted an especially cheap and simple solution. I used a mySmartControl M8 board with an ATMega8 AVR RISC microcontroller from Atmel, modified the circuit of a cheap LED effect lamp and connected it to the microcontroller, so that each color can be switched on and off individually.
The microcontroller board has a USB interface and shows up as a virtual COM port. A small C program running on the microcontroller listens for commands sent via the COM port from the PC and reacts by switching the LEDs on and off. On the PC, a Java program polls the web page of the build server and sends commands to the microcontroller using the serial communication library rxtx.
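The post does not show the command protocol between the PC and the microcontroller, so here is a hypothetical sketch of the PC-side mapping: assume the microcontroller accepts single-character commands, one per LED color. The status strings and command characters are assumptions, not the original code:

```java
// Hypothetical sketch: map the build server's status string to a
// single-character command for the microcontroller. Both the status
// strings and the command protocol are assumptions for illustration.
public class LampCommand {

    /** Returns the command character to send over the virtual COM port. */
    public static char commandFor(String buildStatus) {
        if ("SUCCESS".equals(buildStatus)) {
            return 'G'; // switch on the green LED
        }
        if ("BUILDING".equals(buildStatus)) {
            return 'Y'; // yellow: a build is in progress
        }
        return 'R';     // red: failed or unknown state
    }

    public static void main(String[] args) {
        // The character would then be written to the serial port, e.g. via rxtx
        System.out.println(commandFor("SUCCESS"));
        System.out.println(commandFor("FAILURE"));
    }
}
```

Keeping this mapping in a small pure method makes it easy to test without any serial hardware attached.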
The hardware cost about 35€. It took me 5 hours to make everything work; the hardest part was the microcontroller programming. Luckily there was a ready-to-use example program that did exactly what I needed :-)
Here is what the final result looks like:
After using the device for a couple of days, it turned out to be very useful to have it flash a couple of times when the state changes, so that one notices a build failure immediately, even out of the corner of one's eye.
As the device also has a blue LED and can therefore display basically arbitrary colors by mixing the primary colors (RGB), one could visualize additional aspects of the build status.
- Write a new failing test
- Run the test and see it fail
- Code (in order to pass the test)
- Run the test again and see it pass
I was wondering what my personal cycle time is on average. So I wrote a little plug-in for Eclipse that listens to test runs and computes the average time between a test run with failures and a subsequent test run without any failures.
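The core computation can be sketched as follows. This is a minimal, hypothetical reconstruction of the idea, not the actual plug-in code: feed it each test run's timestamp and result, and it averages the durations from the start of a red phase to the next green run:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the plug-in's core idea: average the time from
// the first failing test run to the next fully green run. Names and the
// data layout are assumptions made for this example.
public class TddMeter {

    private long redSince = -1;               // start of current red phase, -1 = green
    private final List<Long> cycles = new ArrayList<Long>();

    /** Record a finished test run at the given time (milliseconds). */
    public void testRunFinished(long timeMillis, boolean hasFailures) {
        if (hasFailures) {
            if (redSince < 0) {
                redSince = timeMillis;        // entering a red phase
            }
        } else if (redSince >= 0) {
            cycles.add(timeMillis - redSince); // red -> green transition
            redSince = -1;
        }
    }

    /** Average red-to-green duration in milliseconds, or 0 if none yet. */
    public long averageCycleMillis() {
        if (cycles.isEmpty()) return 0;
        long sum = 0;
        for (long c : cycles) sum += c;
        return sum / cycles.size();
    }
}
```

Note that consecutive failing runs extend the same red phase rather than starting a new one, which matches the TDD cycle described above.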
Here is what the very simple plug-in looks like:
In case you want to give it a try you can download it here.
The plug-in contributes a view called TddMeter. In order to show it simply hit <CTRL>-<3> and type TddMeter.
Note: The plug-in requires Eclipse 3.3
Update: The plug-in now persists its state.
There is an Eclipse Incubation project that makes it possible to visualize the plug-ins directly and indirectly referenced by a selected plug-in:
PDE Incubator Dependency Visualization
When I gave the plug-in a try I missed two things:
- I also wanted to show the incoming dependencies
- I wanted to be able to exclude the Eclipse plug-ins
So I modified the plug-in as follows:
- Add a checkbox to also show the incoming dependencies of the plug-ins
- Provide a text field to define exclude filter patterns
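The exclude filter could work along these lines: simple wildcard patterns matched against plug-in IDs. This is a hypothetical sketch of the idea, not the modified plug-in's actual code, and the pattern syntax shown ("*" as wildcard) is an assumption:

```java
// Illustrative sketch: match plug-in IDs against wildcard patterns such
// as "org.eclipse.*", as an exclude-filter text field might accept them.
public class ExcludeFilter {

    /** Returns true if the plug-in ID matches any of the exclude patterns. */
    public static boolean isExcluded(String pluginId, String... patterns) {
        for (String pattern : patterns) {
            // Translate the wildcard pattern into a regular expression:
            // quote the literal parts with \Q...\E and turn '*' into ".*"
            String regex = "\\Q" + pattern.replace("*", "\\E.*\\Q") + "\\E";
            if (pluginId.matches(regex)) {
                return true;
            }
        }
        return false;
    }
}
```

With a filter like "org.eclipse.*", all Eclipse platform plug-ins disappear from the graph while your own plug-ins remain visible.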
Here is what the result looks like:
You can download a binary version of the modified plug-in here.
To get started, hit CTRL-3, type Graph and select the Graph Plug-in Dependencies View. To show the dependencies of a plug-in, right-click on the view and select Focus On...
[Update 26.05.2009] By now, the PDE Incubator Dependency Visualization provides the option to also show the incoming dependencies via a new toolbar action "Show Callers". (see also Bugzilla 206306)
Please observe my most recent post on this topic.