While talking with people on forums, I noticed a few persistent misconceptions about the static analysis methodology. I decided to write a series of brief articles to show you the real state of things.
The fifth myth: "You can easily evaluate the capabilities of a static analyzer on a small test program".
This is how the statement looks in forum discussions (it is a composite of typical comments):
I've written a special test program, just 100 lines of code, but the analyzer doesn't report anything even though all warning levels are enabled. This [tool of yours] / [static analysis] in general is just rubbish.
It is not the static analysis methodology that is rubbish, but this approach to evaluating a particular tool. This way of studying a tool is flawed in two respects:
1. Programmers think they don't make simple mistakes. This phenomenon was discussed in Myth 2. So they try to feed the analyzer a tricky sample and secretly feel happy when it fails to find the error. The game is entertaining but pointless.
You should understand that most errors are simple as hell, and static analyzers detect them very well. The paradox is that it is much harder to invent a simple mistake than a complicated one. Here is an example. Would you ever think to write a sample like this?
int threadcounts[] = { 1, kNumThreads };
for (size_t i = 0;
     i < sizeof(threadcounts) / sizeof(threadcounts); i++) {
I doubt it. I cannot imagine anyone making such a silly mistake and writing "sizeof(threadcounts) / sizeof(threadcounts)" deliberately, so an example like this will never be created on purpose. By the way, this fragment is taken not from a student's lab work, but from the Chromium project. And the PVS-Studio analyzer diagnoses it very easily, of course.
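For reference, here is a minimal sketch of what the loop presumably meant, using the usual element-count idiom. The value of kNumThreads is an assumption here; the real constant is defined somewhere in Chromium.

#include <stdio.h>

enum { kNumThreads = 4 };  // hypothetical value; the real constant comes from Chromium

int main(void)
{
  int threadcounts[] = { 1, kNumThreads };

  // Correct idiom: divide the array size by the size of ONE element.
  // The buggy fragment divided sizeof(threadcounts) by itself, so the
  // loop body ran exactly once, whatever the array length.
  for (size_t i = 0; i < sizeof(threadcounts) / sizeof(threadcounts[0]); i++)
    printf("threadcounts[%zu] = %d\n", i, threadcounts[i]);

  return 0;
}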
2. Hand-written samples are few and arbitrary, so the results depend heavily on chance. You may invent 5 errors that one analyzer finds and another doesn't; or you may write a program with five errors for which two analyzers give opposite results. The sample size of such an investigation is simply too small. To compare and study tools with at least somewhat reliable results, you would need a program text containing at least 500 different errors; an investigation based on 5-10 errors proves nothing.
Moreover, programmers expect to see diagnostic messages for errors of one particular type and forget about everything else. For example, almost every programmer writes the same sample with a memory release defect:
void Foo()
{
  int *a = (int *)malloc(X);
  int *b = (int *)malloc(Y);
  //...
  free(a);  // 'b' is never released: a memory leak
}
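For completeness, a leak-free version of the same sketch might look as follows; X and Y are kept as placeholders, since the original leaves the allocation sizes unspecified.

#include <stdlib.h>

void Foo(size_t X, size_t Y)
{
  int *a = (int *)malloc(X);
  int *b = (int *)malloc(Y);
  //...
  free(a);
  free(b);  // the release that is missing from the sample above
}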
Some analyzers detect this leak, others don't. For instance, PVS-Studio does not currently diagnose memory leaks.
Blast from the past: a lot has changed since 2017. You're welcome to check out the article "Yes, PVS-Studio Can Detect Memory Leaks".
But here are the kinds of defects it can find:
static int rr_cmp(uchar *a, uchar *b)
{
  if (a[0] != b[0])
    return (int) a[0] - (int) b[0];
  if (a[1] != b[1])
    return (int) a[1] - (int) b[1];
  if (a[2] != b[2])
    return (int) a[2] - (int) b[2];
  if (a[3] != b[3])
    return (int) a[3] - (int) b[3];
  if (a[4] != b[4])
    return (int) a[4] - (int) b[4];
  if (a[5] != b[5])
    return (int) a[1] - (int) b[5];
  if (a[6] != b[6])
    return (int) a[6] - (int) b[6];
  return (int) a[7] - (int) b[7];
}
It should be "return (int) a[5] - (int) b[5];" instead of "return (int) a[1] - (int) b[5];".
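Incidentally, the copy-paste pattern itself invites this kind of slip. A loop-based sketch of the same comparison removes the duplication entirely; it assumes, as the hard-coded indices suggest, that the keys are always 8 bytes.

typedef unsigned char uchar;

// Loop-based rewrite: the repetition that hid the copy-paste
// error is gone. Assumes 8-byte keys, as the hard-coded
// indices in the original suggest.
static int rr_cmp(uchar *a, uchar *b)
{
  for (int i = 0; i < 7; i++)
    if (a[i] != b[i])
      return (int) a[i] - (int) b[i];
  return (int) a[7] - (int) b[7];
}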
Why does nobody write samples like this one? Note that PVS-Studio found this error in the MySQL project.
The conclusion: an adequate investigation or comparison of tools can be carried out only on real projects. You take project A, check it with PC-Lint / Visual C++ / PVS-Studio / C++Test, study all the messages attentively, and draw up a table of results (how many and which errors each analyzer has found). Only that is a real investigation and comparison. For example: "Comparing the general static analysis in Visual Studio 2010 and PVS-Studio by examples of errors detected in five open source projects".