Firebird source code reviews
Recently, Coverity presented the first results of their scanning technology, which hunts for defects in source code. One of the projects covered was Firebird. After Roman Rokytskyy reviewed the results, he concluded that the defects fall into two areas:
- ICU. This is IBM’s library for handling internationalization needs. ICU stands for International Components for Unicode. You can start at the project’s page at SF or go directly to IBM’s site. According to the project’s page at SF, the programming languages used to build ICU are C, C++ and Java. However, what I read seemed to me mostly plain old C. This is where Coverity found most of the bugs. Should we go about fixing a third-party library? Given our resources, unlikely, unless these are very simple issues. The best we can do is to ask Coverity to report their findings directly to the ICU project or to IBM.
- Gpre. This is Firebird’s GDML and Embedded SQL preprocessor. It takes .e or .epp files and outputs .c and .cpp (a sketch of such an input appears below the list), although it can preprocess files in other programming languages (Ada, Cobol, etc.). If you crash it, it’s like crashing your code editor: the damage stays on your own machine. If it generates code that, once compiled, crashes Firebird, that would mean we have to fix a vulnerability in the FB server itself: anybody could generate the same sequence manually or with a custom program. Consider that Jim has said many times that gpre was his first C program.
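For readers who have never seen what gpre consumes, here is a minimal sketch of a hypothetical .epp file. Everything in it is invented for illustration (file name, database path, table and variable names), error handling via SQLCODE is omitted, and the real syntax gpre accepts is richer than shown; the point is only that gpre rewrites the EXEC SQL sections into plain C calls to the database API and emits a .c file that is then compiled like any other source.

```c
/* count_rows.epp -- hypothetical gpre input; gpre would emit count_rows.c */
#include <stdio.h>

EXEC SQL SET DATABASE DB = "employee.fdb";   /* declare a database handle */

int main(void)
{
    long emp_count = 0;                      /* host variable, used below as :emp_count */

    EXEC SQL CONNECT DB;
    EXEC SQL SELECT COUNT(*) INTO :emp_count FROM employee;
    printf("employees: %ld\n", emp_count);
    EXEC SQL COMMIT;
    EXEC SQL DISCONNECT DB;
    return 0;
}
```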
Chris Knight (who contributed in previous years to the FreeBSD port) was concerned that there was no reaction to the letter posted to our list by Coverity’s CTO. He said to me: «Given that you’re the official(?) “custodian” of the source, I just thought I’d chip in to say that the offer by Coverity is a good one to look at.» Formally, the keeper of the project is the team leader, Dmitry Yemanov, but since I’m the official code reviewer (now helped by Adriano dos Santos Fernandes), I could probably be considered the “custodian” of the sources. Then, given that most of the reported failures are outside our direct control, what could we do? Fixing buffer overruns caused by extremely long paths and the like in gpre is not a priority, for example. We have to deliver FB2. What I found astonishing is that at least Alex and I (not counting other people who probably noticed the same) are aware of several “glitches” (to put it in soft terms) that abound in the jrd and remote directories, but that seem too esoteric for the Coverity tool to detect. Some places would be best served by a full code rewrite instead of tweaks here and there.
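For readers wondering what “buffer overruns due to extremely long paths” means in practice, here is a hedged illustration in plain C (not actual gpre code; the buffer size, function and variable names are invented). The first function shows the pattern a static analyzer flags; the second shows one conventional way to bound the copy.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: a fixed-size buffer receiving a path of unknown length. */
#define FILE_NAME_SIZE 256

static void risky_copy(const char* user_path)
{
    char file_name[FILE_NAME_SIZE];
    strcpy(file_name, user_path);   /* writes past the buffer if the path is >= 256 chars */
    printf("processing %s\n", file_name);
}

/* A bounded variant: snprintf truncates instead of overrunning the buffer. */
static void bounded_copy(const char* user_path)
{
    char file_name[FILE_NAME_SIZE];
    snprintf(file_name, sizeof(file_name), "%s", user_path);
    printf("processing %s\n", file_name);
}

int main(void)
{
    risky_copy("short.epp");        /* safe here only because this literal is short */
    bounded_copy("/some/very/long/path/to/a/module.epp");
    return 0;
}
```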
Now, to be blunt, if you review the concept of Total Quality, you may come to the conclusion that one of its premises is “the earlier you catch a defect, the easier it is to fix”. The ultimate goal is to minimize production cost and maximize net income. Obviously, the FB Project itself isn’t a commercial venture, although it has a Foundation to foster its development, but the customers (those that use FB in different environments) could suffer a big loss if the product were ostensibly flawed. Therefore, we have three methods in place:
- Public discussions before making a big change.
- The code reviewer inspects the changes and deals privately with the committer whenever possible. There’s also peer-to-peer review as time allows. Apart from that, the code reviewer is constantly reading the plethora of source files, looking for anything suspicious that has gone uncaught for years.
- Testing suites. Although qmdb was somewhat official, Alex is still running Blas’ fbtcs to ensure that known bugs that were fixed don’t return due to new changes. Finally, beta testers and enthusiasts do their own research.
I believe in source code inspection not because I do it, but because I’ve seen the number of bugs decrease before they go into an alpha, beta or final product. Also, some bugs may manifest in ways that make it very hard to find the cause of the problem with functional tests (running the server against a set of SQL scripts), unless you were there when the change was committed and noticed something strange immediately.
The Coverity tool falls in the middle, between our automated tests (which look for known issues) and human source code inspection (which oversees everything). Since it’s generic, it has the potential to discover problems we haven’t noticed yet. I was concerned that some code might trigger false positives, like some places (destination buffers) that don’t seem to check bounds, but only because their source of data is already of guaranteed limited length. Someone going blindly after strcpy would panic at first glance. However, the Coverity scans seem to be very careful and didn’t report false positives. The lesson users can draw is that people in the project are doing their duties responsibly and that the different phases of bug filtering are working. We can’t claim to be rock solid, but we are flying in the right direction.
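To show what I mean by a harmless-looking strcpy, here is a hedged sketch in plain C (not taken from the Firebird sources; the names and the length limit are invented). A blind grep for strcpy flags the callee, but a tool that follows data flow can see that every caller enforces the bound before the copy happens.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LIMIT 32   /* invented limit for the sake of the example */

/* The copy itself performs no check ... */
static void store_name(char dest[NAME_LIMIT], const char* src)
{
    strcpy(dest, src);  /* looks unsafe in isolation */
}

/* ... because the caller has already guaranteed the source fits. */
static int register_name(const char* src)
{
    char dest[NAME_LIMIT];

    if (strlen(src) >= NAME_LIMIT)
        return -1;      /* the bound is enforced once, here */

    store_name(dest, src);
    printf("registered: %s\n", dest);
    return 0;
}

int main(void)
{
    register_name("customers");
    return 0;
}
```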
Thanks for the comments, Claudio!
We are at OSNews and NewsForge 😉