tag:blogger.com,1999:blog-11131336551966903212024-02-20T01:52:06.705+01:00testdrivenguyA blog about Software Development, Quality Assurance, Test Driven Development, Unit Testing, Eclipse and JavaUnknownnoreply@blogger.comBlogger14125tag:blogger.com,1999:blog-1113133655196690321.post-68285395328681077152010-06-03T18:27:00.006+02:002010-06-05T23:58:58.886+02:00Rewarding maintainable codeIn many software projects, developers are mainly rewarded for implementing new features. However, besides functionality, quality attributes like maintainability are also a key factor for staying competitive in the software industry. If maintainability decreases over time, the cost of introducing new features will increase, and in the worst case the project will be unable to respond to new market demands in a timely manner.<br />
<div>In my opinion, developers should be explicitly and transparently rewarded for assuring the maintainability of the software, e.g. through refactorings or excellent technical design. </div><div>Unfortunately, the positive effect of new features is visible much earlier than the negative consequences of bad design or weak code quality. However, if we understand that investing in maintainability pays off in the long run, we can employ known methods for assessing the maintainability of the software and reward developers for producing and sustaining maintainable code.</div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1113133655196690321.post-71704250962472648552010-06-03T18:03:00.003+02:002010-06-03T18:21:11.008+02:00Code ReviewsCode reviews are an effective method for finding defects in software. Unfortunately, they are not very common in practice. Often, they are omitted due to time pressure or cost reasons.<div>However, it has been known for a long time that the later a defect is found in the software lifecycle, the more costly it is to fix. Reviews can be used to find defects early, even in the requirements elicitation or design phase. I think reviews should be used in conjunction with tests, since they are complementary: each is more effective than the other at finding certain types of defects. 
</div><div>In my opinion, many software projects could benefit from employing a lightweight review technique like Peer Reviews, using state-of-the-art review tools like <a href="http://www.atlassian.com/software/crucible">Atlassian Crucible</a>.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-43838183904349853362009-05-27T20:11:00.015+02:002009-05-28T20:59:54.383+02:00Common Unit Testing ProblemsIn my opinion, the following issues are among the most common problems (in order of severity) that one can encounter in unit testing.<br /><br /><span style="font-style: italic; font-weight: bold;">-> Nondeterministic tests<br /></span>Tests that fail and succeed at random can be a real problem. Actually, a nondeterministic test can be considered worthless, since you cannot rely on its result. One possible cause is that the test contains assertions on behavior that is influenced by a (pseudo-)random value. A more complicated cause can be a race condition between two threads. The only options you have are to track down the race condition or, if possible, to eliminate the multi-threadedness of the test.<br /><br /><span style="font-style: italic; font-weight: bold;">-> Tests running into deadlocks (usually in conjunction with the first one)<br /></span>To me, this is the worst problem I can think of: your build machine hangs and the value of continuous integration decreases radically. A good idea is to first set a timeout for the test runner, if possible. 
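If the tests are driven by Ant, for example, such a timeout can be configured directly on the junit task. A sketch (the classpath reference and directory properties are placeholders; note that the timeout attribute only takes effect with forked test execution):

```xml
<!-- Sketch: Ant junit task with a 60-second per-run timeout.
     Classpath and directory properties are placeholders. -->
<junit fork="true" timeout="60000" haltonfailure="false">
  <classpath refid="test.classpath"/>
  <formatter type="plain"/>
  <batchtest todir="${reports.dir}">
    <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
  </batchtest>
</junit>
```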
Then again, one has to either find the competing threads or eliminate the multi-threadedness.<br /><br /><span style="font-style: italic; font-weight: bold;">-> Interdependent tests</span><br />Sometimes a test fails after a change to another test. Especially irritating is that each of the tests may pass when executed individually, with the failure occurring only when the complete test suite is executed. This indicates that these tests (possibly unintentionally) share resources between test runs. Ideally, a test should create all its resources from scratch and destroy them afterwards. However, there are cases where this is not pragmatic, since the creation and/or destruction of the resource is too expensive. In this case, one has to make sure that each test leaves the resource in the state that every other test using the resource expects.<br /><br /><span style="font-style: italic;"><span style="font-weight: bold;">-> Long-running tests<br /></span></span>Long-running tests, too, decrease the value of continuous integration. Mostly this indicates that a test is testing too much at once, i.e. it is too coarse: an integration test rather than a unit test. Of course, such tests might still be valuable; however, they should not be executed in the continuous integration build. A nightly build is ideally suited for these kinds of tests.<br /><br /><span style="font-style: italic;"><span style="font-weight: bold;">-> Redundant tests<br /></span></span>If a small change in the system causes a huge number of tests to fail, this can be a sign of redundant tests. This means that too many tests are exercising the same aspects of the system. It makes the test suite costly to maintain, while there might be yet untested parts of the system where the effort could be spent more reasonably. 
Obviously, the solution is to carefully identify and remove the redundancy in the test suite.<br /><br /><span style="font-style: italic;"><span style="font-weight: bold;">-> Overly complicated tests<br /></span></span>Tests that are hard to understand have the same problem as complicated code: developers will have a hard time maintaining them. If you reason about a test for two minutes and still have no idea what it is actually testing, it is very likely just too complicated.<br /><br /><span style="font-weight: bold;"><span style="font-style: italic;">-> Tests with only weak assertions<br /></span></span>This is a slightly more subtle issue. Sometimes tests do a lot of work and end with an assertion that verifies only very little of the functionality that was exercised in the test. In other words, it would be possible to put lots of errors into the production code and the test would still pass. This is not a severe problem; however, as stated before, each test has to be maintained and therefore has to provide decent value.<br /><br /><span style="font-weight: bold;"><span style="font-style: italic;">-> Trivial tests</span></span><br />This issue is very similar to the previous one. Again, trivial tests provide only a poor value/effort ratio and should therefore be avoided.<br /><br />What about you? 
Have you experienced other issues?<br />A great book covering so-called <span style="font-style: italic;">test smells</span> is <a href="http://xunitpatterns.com/">xUnit Test Patterns</a> by Gerard Meszaros.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-42000389079689798662009-05-26T21:08:00.011+02:002010-01-03T16:06:04.996+01:00Eclipse Plug-in Dependency GraphIn an <a href="http://testdrivenguy.blogspot.com/2008/01/eclipse-plug-in-dependency.html">earlier post</a> I wrote about the <a href="http://www.eclipse.org/pde/incubator/dependency-visualization/">PDE Incubator project <span style="font-style: italic;">Dependency Visualization</span></a>, which can display the dependencies of Eclipse Plug-ins as a graph utilizing the Zest library. Since there seems to be considerable interest in this topic and there has been some progress in the project, I decided to have a second look at it.<br /><br />By now it is also possible to toggle between two modes on the toolbar: you can either show the <span style="font-style: italic;">callers</span> or the <span style="font-style: italic;">callees</span> of a selected plug-in. As mentioned in the earlier post, I wanted to be able to exclude Eclipse plug-ins in cases where the focus is solely on the dependencies between the self-developed plug-ins. Therefore, I added a very simplistic filter capability. 
I provide the modified and updated version of this plug-in <a href="http://www.testdrivenguy.de/blog/depvis20090526.zip">here</a> (Eclipse 3.4.x) for your convenience.<div><br /></div><div>[<b>Update 3.1.2010</b>] <a href="http://www.testdrivenguy.de/blog/depvis20100103.zip">Download</a> for Eclipse 3.5.x<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO4jhB-3HpLD-WfzCFetH7cYB5tVdjIG6wljmxzoZlYKb9ow-55WL45xOmcJyNLdbUWxv0hSlytRaiUJ5NtXr3tkeftTerOIxDSgl1xo01QwmmOqNYqSk13A6wasq1DXyuJAnAJuwALL0/s1600-h/deps.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 400px; height: 257px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO4jhB-3HpLD-WfzCFetH7cYB5tVdjIG6wljmxzoZlYKb9ow-55WL45xOmcJyNLdbUWxv0hSlytRaiUJ5NtXr3tkeftTerOIxDSgl1xo01QwmmOqNYqSk13A6wasq1DXyuJAnAJuwALL0/s400/deps.png" alt="" id="BLOGGER_PHOTO_ID_5340217887846982674" border="0" /></a></div>Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-1113133655196690321.post-47219393474556367422009-04-18T16:24:00.005+02:002009-04-19T14:26:01.491+02:00When to write a unit test?In my opinion, the following are situations in which to write another unit test.<br /><ul><li>Before writing new production code (TDD)</li><li>Before fixing a bug (exposing the bug with the test)</li><li>In order to get to know an API where the documentation doesn't help much</li><li>When encountering untested yet critical code (e.g. using a coverage tool)</li><li>In case you are bored and have nothing else to do ;-)</li></ul>Of course, when strictly following TDD, the fourth situation cannot occur. 
However, in real life we have to cope with untested legacy code.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1113133655196690321.post-33949657848253708282009-03-20T22:36:00.001+01:002009-04-18T11:42:41.953+02:00Build timesWhen using Continuous Integration (CI), one common problem is that <span>the build simply takes too long</span>.<br /><br />In this case, the first thing to do is to ensure that you have a <span>decent build machine</span>. This should not be an old developer box someone left in a dark corner of your office. Instead, use a true high-end system that is dedicated to CI and nothing else. This can speed things up considerably.<br /><br />Another very common problem is that the tests take too long. Usually this originates from tests that do too much. Mostly these are system tests rather than small unit tests. The best thing here is to <span>divide your tests into two groups</span>. The tests that run fast can be executed in each build. The slow tests can be executed in a second build that runs only once at night.<br /><br />Also, the CI build only has to do what's necessary to gain confidence in the code you just committed to the VCS, that is: compile and test. Packaging and the like might be time-consuming parts of your build, which do not necessarily have to be done in the CI build. Again, the nightly build would do all that stuff and provide you with a complete build every morning.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-27046103873003971162009-01-16T20:42:00.001+01:002009-04-16T20:50:31.917+02:00Iteration RetrospectivesThe idea of iteration retrospectives is to continuously improve the way your team develops software. Put simply, you sit together with your team at the end of each iteration and discuss what worked for the team during the iteration and what needs improvement. Additionally, you derive a list of things you want to do differently in the next iteration. <br />To me, the benefit is obvious. 
You leverage the experience and creativity of the whole team for improving your process.<br />There is a complete book from the Pragmatic Programmers available on this topic: <a href="http://www.pragprog.com/titles/dlret/agile-retrospectives">Agile Retrospectives: Making good teams great</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-50225805405433265772008-07-03T22:08:00.009+02:002008-07-03T22:38:28.229+02:00Editing EMF Ecore models graphically with Ecore ToolsThe <a href="http://www.eclipse.org/modeling/emft/?project=ecoretools">Ecore Tools Project</a> (still in Incubation phase) provides a graphical editor for EMF Ecore models. The Eclipse Ganymede package called <span style="font-style: italic;"><a href="http://www.eclipse.org/downloads/packages/eclipse-modeling-tools-includes-incubating-components/ganymeder">Eclipse Modeling Tools</a> </span>contains the Ecore Tools.<br /><br />A notable feature is that you can create an Ecore diagram from an existing Ecore model. All model elements will be automatically added to the diagram and arranged. 
This makes it very easy to start using the diagram editor in projects that already have existing Ecore models.<br /><br />The following screenshot shows an Ecore diagram for the EMF Library Example, created with a few mouse clicks.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD6gwo4bF04fXLZdUmXNQo27_QuyUhV-0QhsOv1SYXrWSo970w_rwLLaIwhnvA-SoLIgE8zsm6xMfkF_vfInRLbVImugpGvbW2AbxIF40HklN6J14vl5QTaBVcaFB6f4x4PegtlKorOeA/s1600-h/ecoretools.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD6gwo4bF04fXLZdUmXNQo27_QuyUhV-0QhsOv1SYXrWSo970w_rwLLaIwhnvA-SoLIgE8zsm6xMfkF_vfInRLbVImugpGvbW2AbxIF40HklN6J14vl5QTaBVcaFB6f4x4PegtlKorOeA/s400/ecoretools.png" alt="" id="BLOGGER_PHOTO_ID_5218884943415237186" border="0" /></a>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1113133655196690321.post-82261754578637852932008-06-04T21:46:00.023+02:002008-06-04T21:59:37.779+02:00Leaving work with a red or a green bar<div style="text-align: left;"><a style="font-weight: bold;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3fMndZzmIGbSW7aQHC_-sa13bLl4J_mard_5iK-RUELd-gkTOcLrcMxbWpL-EJnvqE6nCch1AnymczBOYRVO2HBT3RaiNNOCN3yxt6bqFP-3_O1wmYStmGpHIDYIwBPIopGJfD5cfa8/s1600-h/redgreen.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3fMndZzmIGbSW7aQHC_-sa13bLl4J_mard_5iK-RUELd-gkTOcLrcMxbWpL-EJnvqE6nCch1AnymczBOYRVO2HBT3RaiNNOCN3yxt6bqFP-3_O1wmYStmGpHIDYIwBPIopGJfD5cfa8/s400/redgreen.png" alt="" id="BLOGGER_PHOTO_ID_5208111017738398690" border="0" /></a></div><br /><br /><br /><p>When using the <span style="font-weight: bold;">Test Driven Development</span> approach, is it better to leave work with a <span style="color: rgb(255,
0, 0); font-weight: bold; font-style: italic;">red</span> or a <span style="color: rgb(0, 153, 0); font-style: italic; font-weight: bold;">green</span> bar? I think there are valid reasons for both.<br /><br />If you leave with one failing test, you know exactly where to start when you arrive the next morning. It is really obvious what your next task will be: making the test pass.<br /><br />Leaving with a green bar has the advantage of having a clean state, i.e. it feels like being somehow done for the day. Also, in theory, a TDD cycle ends with a final successful test run after cleaning up (refactoring).<br /><br />What is your opinion?Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-1113133655196690321.post-41404787780864903782008-05-17T20:10:00.008+02:002008-06-04T23:16:20.824+02:00Static analysisThe idea of static code analysis is to find defects in software by analyzing the source code rather than executing the program. Tools can detect defect patterns, overly complex code, coding standard violations and much more.<br /><br />In practice, some problems arise when trying to get static analysis in place. The most common ones, in my experience, are:<br /><ul><li>Configuring the analysis tool. Some tools come with a predefined configuration of checks. In most cases it won't fit your individual needs.<br /></li></ul><ul><li>Legacy code. In almost all cases you already have an existing code base that does not conform to the rules you have defined.</li></ul><ul><li>Generated code. Some projects use MDD tools that generate source code. This code usually does not comply with the static analysis rules either.</li></ul>More often than not, these issues prevent teams from using static analysis at all. 
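To make the first point concrete: a hand-picked Checkstyle rule set, with an escape hatch for legacy and generated code, could look roughly like this (a hypothetical sketch; the module selection and file names are illustrative, not a recommendation):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
  <!-- Exempt legacy and generated code via a suppressions file,
       e.g. <suppress files="[\\/]generated[\\/]" checks=".*"/> -->
  <module name="SuppressionFilter">
    <property name="file" value="suppressions.xml"/>
  </module>
  <module name="TreeWalker">
    <!-- Only the checks the team has explicitly agreed on -->
    <module name="ConstantName"/>
    <module name="EmptyBlock"/>
    <module name="MethodLength">
      <property name="max" value="60"/>
    </module>
  </module>
</module>
```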
However, I think there are some rather simple approaches to these obstacles.<br /><br />For the first point, the only solution is to take the time and define your own rule set, considering every available check and deciding whether you want your source code to comply with it or not.<br /><br />The second and third points can also be solved. Tools like <a href="http://eclipse-cs.sourceforge.net/">eclipse-cs</a> (an Eclipse plug-in utilizing <a href="http://checkstyle.sourceforge.net/">Checkstyle</a>) allow for per-project configurations. For new source code, you use your complete configuration. For legacy code, you can either use no static analysis at all or decide to clean up the code over time, enabling one check after another in the configuration. For generated code, the best thing is to isolate it from hand-written code and to completely disable the static analysis.<br><br>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-86083492997581979112008-02-28T19:13:00.012+01:002008-02-29T21:57:39.522+01:00Finding unreferenced JUnit Tests in EclipseWhen working with JUnit 3.8.x, one usually has a <span style="font-family: courier new;">TestSuite</span> (often named <span style="font-family: courier new;">AllTests</span>), which contains all individual test cases of a Java project. Especially when dealing with a large number of projects, it is hard to ensure that every single test case is included in an <span style="font-family: courier new;">AllTests</span> suite. Sometimes one encounters a test case and wonders why it is not executed on the continuous integration server. In most cases, however, one simply won't notice.<br /><br />I wrote a little Eclipse plug-in that addresses this issue. 
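At its core, the detection boils down to collecting all TestCase subclasses in the workspace, collecting the test classes referenced from some AllTests suite, and taking the set difference. A much simplified sketch with hypothetical class names (the real work, of course, lies in collecting these sets from the workspace):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class UnreferencedTestFinder {

    // Returns the test classes that are not referenced from any suite.
    public static Set<String> findUnreferenced(Set<String> allTestClasses,
                                               Set<String> referencedFromSuites) {
        Set<String> unreferenced = new TreeSet<String>(allTestClasses);
        unreferenced.removeAll(referencedFromSuites);
        return unreferenced;
    }

    public static void main(String[] args) {
        // Hypothetical workspace content
        Set<String> all = new HashSet<String>(
                Arrays.asList("FooTest", "BarTest", "BazTest"));
        Set<String> referenced = new HashSet<String>(
                Arrays.asList("FooTest", "BarTest"));
        System.out.println(findUnreferenced(all, referenced)); // prints [BazTest]
    }
}
```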
It searches all projects for test cases that don't have any references in the workspace.<br /><br />The search for unreferenced tests can be started from the Search menu:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsIuUWXEFw3cS6j5I4Lq49VSd8XE43VR-iOJBMInTeyygEpYf36ibU_M6CGBggP_VxE8u4uqlRjmZTMvVelVs9LRPwU5AH1G_NIRjQkQ6LVJSLceRUIvC4_C13WFZNeMnSNFgm1qVyWI0/s1600-h/utfmenu.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 300px; height: 355px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsIuUWXEFw3cS6j5I4Lq49VSd8XE43VR-iOJBMInTeyygEpYf36ibU_M6CGBggP_VxE8u4uqlRjmZTMvVelVs9LRPwU5AH1G_NIRjQkQ6LVJSLceRUIvC4_C13WFZNeMnSNFgm1qVyWI0/s400/utfmenu.png" alt="" id="BLOGGER_PHOTO_ID_5172099128367171074" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />Here is what the result dialog looks like:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIFhKpqJeae7l8VnRExUQ79JmIIop7DBy_tknFv_cuYm50IzynWoy8xWav95bhlNh8dKuvMAtF53N7vBirENtQt7yM0hQMYc5bV-A6F_TueC58LIBv4EvgTHKzZBJysfns93o6C0TL4iM/s1600-h/utfdialog.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIFhKpqJeae7l8VnRExUQ79JmIIop7DBy_tknFv_cuYm50IzynWoy8xWav95bhlNh8dKuvMAtF53N7vBirENtQt7yM0hQMYc5bV-A6F_TueC58LIBv4EvgTHKzZBJysfns93o6C0TL4iM/s400/utfdialog.png" alt="" id="BLOGGER_PHOTO_ID_5172099334525601298" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />The plug-in can be downloaded <a href="http://www.testdrivenguy.de/blog/de.testdrivenguy.utf_1.0.0.jar">here.</a><br /><br /><span style="font-style: italic;">Notes: This plug-in is not applicable to JUnit 4; The plug-in 
does not handle more complicated scenarios, e.g. a hierarchy of TestSuites.<br /><br /><br /></span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1113133655196690321.post-56505389190223665892008-02-15T22:00:00.018+01:002008-10-29T21:19:05.804+01:00Extreme Feedback DeviceRecently, I read about Extreme Feedback Devices (e.g. <a href="http://pragmaticautomation.com/cgi-bin/pragauto.cgi/Monitor/Devices/BubbleBubbleBuildsInTrouble.rdoc">lava lamps</a>).<br /><br />The idea is to set up a physical device that displays the current status of an automated build in an eye-catching way, so that it is really hard not to notice that the build is down. I loved the idea and decided we had to have an Extreme Feedback Device (XFD) of our own.<br /><br />I wanted an especially cheap and simple solution. I used a <a href="http://www.myavr.de/shop/article.php?artDataID=67">mySmartControl M8</a> board with an ATMega8 AVR RISC microcontroller from <a href="http://www.atmel.com/">Atmel</a>, modified the circuit of a <a href="http://www.hartig-helling.de/produkte/h_h_pid.php?thb=Produkte&pkhg=03LEB&pkug=120DOB&phg=Licht-+und+Effektbeleuchtung&pug=Design-Objekte&psg=&pid=1894&lang=de">cheap LED effect lamp</a> and connected it to the microcontroller, so that each color can be switched on and off individually.<br /><br />The microcontroller board has a USB interface and shows up as a virtual COM port. A small C program running on the microcontroller listens for commands sent via the COM port from the PC and reacts by switching the LEDs on and off. On the PC, a Java program polls the web page of the build server and sends commands to the microcontroller utilizing the serial communication library <a href="http://www.rxtx.org/">rxtx</a>.<br /><br />The price for the hardware was about 35€. It took me 5 hours to make everything work; the hardest part was the microcontroller programming. 
Luckily, there was a ready-to-use example program which did exactly what I needed :-)<br /><br />Here is what the final result looks like:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6FKWDpogoWo1ZofTTBHSYzxN-aBohTYt5elDXu0fCD3aqaYfLqbVYthCEFZZOwWjOo0af8rT_Pez7nmejMwoeNFrf7zuOIbh59ffOkB6m2kZg_vBiZ590knH0tB2xl27xII2VoGO7xkw/s1600-h/red.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 163px; height: 217px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6FKWDpogoWo1ZofTTBHSYzxN-aBohTYt5elDXu0fCD3aqaYfLqbVYthCEFZZOwWjOo0af8rT_Pez7nmejMwoeNFrf7zuOIbh59ffOkB6m2kZg_vBiZ590knH0tB2xl27xII2VoGO7xkw/s400/red.jpg" alt="" id="BLOGGER_PHOTO_ID_5167313675805967842" border="0" /></a><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT1ZDSg-g0CC7oL6E07dXxzXWl8a7RXC-YnKyzSpgapu0HFStvbi9RMjDwx5TsxknUmc7z8ZTzuiG0Ztab5CODncSOnZOEG_3VHFIxeuq3qEIz5hD-v_c1lGFNiqxiEsANW6q4i7le4kM/s1600-h/green.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 164px; height: 219px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhT1ZDSg-g0CC7oL6E07dXxzXWl8a7RXC-YnKyzSpgapu0HFStvbi9RMjDwx5TsxknUmc7z8ZTzuiG0Ztab5CODncSOnZOEG_3VHFIxeuq3qEIz5hD-v_c1lGFNiqxiEsANW6q4i7le4kM/s400/green.jpg" alt="" id="BLOGGER_PHOTO_ID_5167313864784528882" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWA-MHoqgL36HXFKjVzZpMKfa_kBLU4nS9Zucck1Mp7tISriI3Uo25-Cp3uPHbvrSf4vMbiHBegDpypgzchOPHcr5pArBQYBzbcXRvIcbYs450HeMpA9be5AcKtTDHNvi8_Pd_KP1oSPI/s1600-h/green.jpg"><br /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />After using the device for a couple of days, it turned out to be very useful to have the device flash a couple
of times when the state changes, so that one recognizes a build failure immediately, even from the corner of one's eye.<br />As the device also has a blue LED and can therefore display basically arbitrary colors by mixing the primary colors (RGB), one can think of visualizing additional aspects of the build status.Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-1113133655196690321.post-59898110628558186592008-02-09T21:06:00.001+01:002008-02-28T20:14:13.743+01:00TDD cycle lengthIn theory, a Test Driven Development cycle consists of the following steps:<br /><ul><li>Write a new failing test</li><li>Run the test and see it fail<br /></li><li>Code (in order to pass the test)</li><li>Run the test again and see it pass</li><li>Refactor<br /></li></ul>In the literature, the time for one TDD cycle is stated to be a couple of minutes.<br /><br />I was wondering what my personal cycle time is on average. So I wrote a little plug-in for Eclipse that listens to test runs and computes the average time between a test run with failures and a test run without any failures.<br /><br />Here is what the very simple plug-in looks like:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZM4-Jre3pNiMTYbAW0pYfi7MUM2Gtql15dCPySgzMmMHRUw6uKNiVMnRX-dVL34mESMjEpCWU2pCVA4uvGhIGFdNbnddMX-oux4Tga6X98ce1BM2dJBAGq6w_Oh6mO4ftEBN2Gcd-a1Q/s1600-h/TddMeter.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZM4-Jre3pNiMTYbAW0pYfi7MUM2Gtql15dCPySgzMmMHRUw6uKNiVMnRX-dVL34mESMjEpCWU2pCVA4uvGhIGFdNbnddMX-oux4Tga6X98ce1BM2dJBAGq6w_Oh6mO4ftEBN2Gcd-a1Q/s400/TddMeter.png" alt="" id="BLOGGER_PHOTO_ID_5165082427410793874" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />In case you want to give it a try, you can download it <a
href="http://www.testdrivenguy.de/blog/TddMeter_1.0.1.jar">here</a>.<br />The plug-in contributes a view called TddMeter. To show it, simply hit <CTRL>-<3> and type TddMeter.<br /><br /><span style="font-style: italic;">Note: The plug-in requires Eclipse 3.3</span><br /><br /><span style="font-weight: bold;">Update: The plug-in now persists its state.<br /><br /></span>Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-1113133655196690321.post-34464789155384535242008-01-13T11:36:00.008+01:002009-05-26T21:43:37.665+02:00Eclipse Plug-in Dependency VisualizationWhen developing Eclipse-based software consisting of a large set of plug-ins, it is a good idea to keep track of the dependencies between the individual plug-ins.<br />There is an Eclipse Incubation project that makes it possible to visualize the directly and indirectly referenced plug-ins of a selected plug-in:<br /><a href="http://www.eclipse.org/pde/incubator/dependency-visualization/">PDE Incubator Dependency Visualization</a><br /><br />When I gave the plug-in a try, I missed two things:<br /><ul><li>I also wanted to show the <span style="font-weight: bold;">incoming</span> dependencies</li><li>I wanted to be able to exclude the Eclipse plug-ins<br /></li></ul>Basically, I wanted to show the complete dependency graph of <span style="font-weight: bold;">all my</span> plug-ins. The simplest solution I could think of was to<br /><ul><li>Add a checkbox to also show the incoming dependencies of the plug-ins</li><li>Provide a text field to define exclude filter patterns</li></ul>So I checked out the source code and tweaked it a little. 
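The filter patterns can be handled with very little code: a wildcard pattern like org.eclipse.* is translated into a regular expression and matched against each plug-in ID. A minimal sketch (the class and method names are made up for illustration; the actual pattern syntax in the plug-in may differ):

```java
public class PluginIdFilter {

    // Translates a simple wildcard pattern (e.g. "org.eclipse.*") into a regex.
    static String toRegex(String wildcardPattern) {
        return wildcardPattern.replace(".", "\\.").replace("*", ".*");
    }

    // Returns true if the plug-in ID matches any of the exclude patterns.
    static boolean isExcluded(String pluginId, String[] excludePatterns) {
        for (String pattern : excludePatterns) {
            if (pluginId.matches(toRegex(pattern))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String[] excludes = { "org.eclipse.*", "org.junit*" };
        System.out.println(isExcluded("org.eclipse.ui", excludes));       // true
        System.out.println(isExcluded("com.example.myplugin", excludes)); // false
    }
}
```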
Now it is possible to show the whole dependency graph for all my plug-ins by selecting a plug-in which is directly or indirectly referenced by all other plug-ins.<br /><br />Here is what the result looks like:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj13oQh03PYLwT9rgc5rVarpe_IRvnWe4_4Rg3_mqfOnaTL9K1zTrOSITAbCuzH7tD9nQ9vDTVciFh-Gqo_xAipinJDXIoql1iX51MKG_DiSP4LPtlRUJWIcWPpffv0D3fYtl9Xvk9qOO0/s1600-h/deps.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj13oQh03PYLwT9rgc5rVarpe_IRvnWe4_4Rg3_mqfOnaTL9K1zTrOSITAbCuzH7tD9nQ9vDTVciFh-Gqo_xAipinJDXIoql1iX51MKG_DiSP4LPtlRUJWIcWPpffv0D3fYtl9Xvk9qOO0/s400/deps.png" alt="" id="BLOGGER_PHOTO_ID_5155070516983561090" border="0" /></a><span style="font-style: italic;">You can download a binary version of the modified plug-in <a href="http://www.testdrivenguy.de/blog/depvis.zip">here</a>.<br /><br /></span><span>To get started, hit <span style="font-style: italic;">CTRL-3</span>, type <span style="font-style: italic;">Graph</span> and select the <span style="font-style: italic;">Graph Plug-in Dependencies</span> View. To show the dependencies of a plug-in, right-click on the view and select <span style="font-style: italic;">Focus On...<br /><br /></span><span><span style="font-weight: bold;">[Update 26.05.2009]</span> </span><span>By now, the PDE Incubator Dependency Visualization provides the option to also show the incoming dependencies via a new toolbar action "Show Callers". (see also <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=206306">Bugzilla 206306</a>)<br /><br />Please see my most recent <a href="http://testdrivenguy.blogspot.com/2009/05/eclipse-plug-in-dependency-graph.html">post</a> on this topic.<br /></span></span>Unknownnoreply@blogger.com1