AutoCAD clones have been called unstable and unreliable since they first appeared on the market. And this is not just Autodesk spreading FUD (Fear, Uncertainty and Doubt). I remember at the IntelliCAD World Meeting in Athens last year, Dave Lorenzo, the CTO of the ITC, painted a horrible picture of the old IntelliCAD 6 code. This was not in private, but in front of all the attendees and press. He mentioned that the old code had 1000-line functions, which any C/C++ developer will tell you are a pain to maintain, fix and debug.
For this very reason, Luc De Batselier, the CTO of Bricsys, was quite keen to show me the automated testing system that they had developed in-house as part of the rewrite of their CAD platform. Luc called the old ITC code a nightmare and the main reason for the instability and poor performance that he does not want the rewritten Bricscad to be associated with.
The system is quite complicated, but I will try to dumb it down a little for the sake of clarity. The main highlight of their system is the way they write the tests. First let me explain how tests are normally written. A programmer writes code and builds it into an application. This application is used internally by the testing team or externally by beta testers, who report bugs and crashes to the development team. The programmer concerned fixes his code and then writes a small test command (usually in a plug-in) which verifies that the bug has been fixed. This test is added to the list of existing tests, which are run automatically or manually after every build. This is vital because it is quite possible that another programmer will later modify something else in the code and bring back the bug that the first programmer fixed. This is called regression testing.
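To make this concrete, here is a minimal sketch of that conventional workflow. The function and the bug are invented for illustration; the point is the small test written *after* the fix, which is re-run with every build so the bug cannot silently return.

```python
def average(values):
    """Mean of a list of numbers.

    Hypothetical bug history: the original version divided by len(values)
    unconditionally and crashed on an empty list. The guard below is the fix.
    """
    if not values:  # the fix for the reported crash
        return 0.0
    return sum(values) / len(values)


def test_average_handles_empty_input():
    # Regression test added after the fix. If another programmer later
    # modifies average() and reintroduces the crash, this test catches it.
    assert average([]) == 0.0
    assert average([2.0, 4.0]) == 3.0


test_average_handles_empty_input()
```

In a real CAD codebase the test would live in a plug-in and exercise a command, but the principle is the same: every fixed bug leaves behind a test.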
Bricsys does this a bit differently. They first write the tests and then write the code. Let me explain this with a simple example. Suppose I am writing code that computes the geometric properties of a circle, say its area and circumference. After I write the code I go ahead and build the application. As part of testing my code, I run a command that I know uses the circle-related code I just wrote. Let's assume that this particular command uses only the area calculating code and not the code that calculates the circumference. The command works fine and I release the build for the testers to do their work. The testers run the same command, but with more data. After testing is successful, I mark the build as a production build and it gets into the hands of the general public.
All this time the circumference code has not really been tested. I have used a very rudimentary example (area and circumference) to explain the concept, but you must understand that real code is usually a vast set of logical instructions which behaves differently for different data sets. The Bricsys approach is to test all conceivable ways the code can be used, in order to trap bugs and crashes before the software even lands in the hands of the testers. So, continuing with the circle example, Bricsys would first write two tests – one to calculate the area and another to calculate the circumference of a set of circles – checking the outputs against known values. Only then would they write the code that does the actual area and circumference calculations.
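The circle example above can be sketched as follows. This is my own illustration of the tests-first idea, not Bricsys code: the tests are written first, against known values, and the implementation comes afterwards, so both code paths are covered even if only one is ever exercised by an interactive command.

```python
import math


# Step 1: the tests, written before any implementation exists.
def test_circle_geometry():
    # Known values: a unit circle has area pi and circumference 2*pi.
    assert math.isclose(circle_area(1.0), math.pi)
    assert math.isclose(circle_circumference(1.0), 2 * math.pi)
    # Radius 2: area = 4*pi, circumference = 4*pi.
    assert math.isclose(circle_area(2.0), 4 * math.pi)
    assert math.isclose(circle_circumference(2.0), 4 * math.pi)


# Step 2: the implementation, written to make the tests pass. The
# circumference path is tested from day one, even if no command uses it yet.
def circle_area(radius):
    return math.pi * radius ** 2


def circle_circumference(radius):
    return 2 * math.pi * radius


test_circle_geometry()
```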
So, hoping that I have not lost you, I will proceed to explain how the testing is carried out and to what extent it is automated. Bricsys has offices in Belgium, Russia and Romania. All their source code resides on a server. Before a programmer at any of their development centers works on the code, he first checks out a local copy from the server and works on it. When he is done, he pushes a button which automatically runs all the tests on his local build of the application. If all goes well, he checks his code back in to the server. The moment he does that, a series of events is triggered. The server rebuilds the code, builds an installer and initiates installations on the various testing servers. All this happens automatically, without any human intervention. Once the testing servers have the latest build installed, they automatically fire the tests that they are programmed to carry out. When all the testing servers are done, the results are posted on the intranet for the people concerned to see and take the necessary action. In any case, this entire operation also runs every night.
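The check-in pipeline described above can be sketched roughly like this. Every name here is invented for illustration (the stubs stand in for real build tools and servers); the real Bricsys system is obviously far more elaborate, but the sequence of events is the same.

```python
# Hypothetical sketch of the check-in pipeline: rebuild, package, reinstall
# on each testing server, run that server's tests, publish the results.

TEST_SERVERS = ["test-server-1", "test-server-2"]


def rebuild_source():
    """Stand-in for the server rebuilding the checked-in code."""
    return "build-1234"


def build_installer(build):
    """Stand-in for packaging the fresh build into an installer."""
    return build + "-installer"


def install(installer, server):
    """Stand-in for pushing the installer onto a testing server."""
    pass


def run_tests(server):
    """Stand-in for a testing server firing its assigned tests."""
    return {"passed": 100, "failed": 0}


def publish_to_intranet(results):
    """Stand-in for posting results where the team can review them."""
    print(results)


def run_pipeline():
    build = rebuild_source()
    installer = build_installer(build)
    results = {}
    for server in TEST_SERVERS:
        install(installer, server)
        results[server] = run_tests(server)
    publish_to_intranet(results)
    return results


results = run_pipeline()
```

The same `run_pipeline()` entry point would be triggered both by a check-in and by the nightly run.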
Luc showed me some of the testing code and the test results of the past few days. He pointed out a jump in test failures on a particular day. “This is because on that day we replaced the ITC geometry library with our own. We are now working towards eliminating the test failures,” he said. “When the number of failures reaches zero, we will release the build as a beta to our beta testers. Although we try to be as exhaustive as we can in our tests, it is by no means a foolproof approach. Our beta testers report bugs and crashes, which we analyze and fix, and then we create tests to see that they do not surface again.”
I asked Luc how long the entire building and testing process took once a programmer checked in his code. He replied, “At first we had one server for the code and another server that did the testing. As the number of tests grew, we added more servers and split the tests between them. To give you a sense of the scale, the DRX plug-in that contains the tests for Bricscad is 22 MB, whereas the Bricscad executable itself is just 6 MB. From the time a developer checks in his code, it takes no more than half an hour for the server to rebuild the software, rebuild the installer, reinstall the software on the testing servers, and have the testing servers run the tests and publish the results. If we see that the time exceeds half an hour, we add another testing server.”
Every software company talks about the great lengths it goes to in order to make its software the best. This was the first time a company actually showed me the stuff. Luc showed me a great deal more, but that was for my eyes only.