We found over 10,000 bugs in various open source projects
To promote static analysis methodology in general, and the PVS-Studio static analyzer in particular, we regularly check various open source projects. The bugs we find demonstrate that nobody is immune to typos, inattention, or other mistakes. Absolutely nobody, and we find confirmation of this in such projects as Microsoft Code Contracts, Qt, the Linux kernel, CryEngine, VirtualBox, LibreOffice, Firefox, Boost, Tor, and so on. So far we have inspected 262 projects. It's official: we have found and logged 10,000 bugs!
As a rule, we write an article when we find a fairly large number of issues in a project. You may refer to the list of our articles using the link. If we find only a few issues, we report them to the project's contributors and move on to other work.
Of course, 10,000 issues across 262 projects is not that many: it averages about 38 issues per project. I should note, however, that this figure on its own means little. Code base size and quality vary from project to project. In some projects we find just one issue, while others contain hundreds.
Another important point: to promote static analysis and PVS-Studio, we do not need to find as many bugs as possible. We need to find enough interesting issues to write an article. That is why we always suggest that project contributors examine their code more carefully themselves. One-off checks are good for demonstrating the analyzer's capabilities, but in a real development process they are of very little use. The whole point of static analysis is to run it on a regular basis. That way, most errors are detected as the code is being written, not after 50 hours of debugging or after users' complaints.
Now it is time to share a link to the logged errors:
Errors detected in Open Source projects
This collection of issues can serve as unique data for developing coding standards, writing articles about programming rules, and other research on improving software reliability, such as "The Last Line Effect". We wish you interesting findings.