False positives of the static code analyzer

Jul 07 2021

One of the disadvantages of the static code analysis methodology is the presence of false positive warnings. The tool signals possible bugs where there are none.

Developers of static code analysis tools put a lot of effort into reducing the number of false positives. Some do it better than others. It's important to accept that the problem of false positives is unsolvable at the theoretical level. You can strive for the ideal, but you will never be able to create an analyzer that makes no mistakes at all.

The halting problem is the cause: it is a theorem proving that no general algorithm can determine from the source code of a program whether that program will loop forever or terminate in finite time. Rice's theorem extends this result to a whole class of algorithmically unsolvable problems: for any non-trivial property of the function a program computes, it is impossible to decide in general whether an arbitrary program computes a function with that property.
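To make this less abstract, here is a contrived sketch. The names are hypothetical; the point is that to prove the division below is a bug, an analyzer would have to decide whether an arbitrary computation can ever return zero, which is precisely what Rice's theorem rules out in the general case.

#include <cstdio>

// Stands in for an arbitrary, possibly very complex computation.
int mystery(){
  int n = 0;
  std::scanf("%d", &n);
  // ... imagine any amount of arbitrary logic here ...
  return n;
}

int risky(){
  int d = mystery();
  // A bug only if mystery() can ever return 0; deciding that for an
  // arbitrary mystery() is a non-trivial semantic property, and thus
  // undecidable by Rice's theorem.
  return 100 / d;
}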

However, even without going into theory, it is easy to demonstrate a situation where it's unclear whether the code contains a bug. As an example, let's take the V501 diagnostic implemented in the PVS-Studio analyzer.

The idea of the diagnostic is very simple. It is suspicious when the left and right operands of operators such as ==, <, >, or && coincide. Example:

if (A == A)
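
In real code the pattern usually hides inside a larger expression. A hypothetical illustration (the names are made up) of the copy-paste slip this diagnostic targets:

struct Point { float x, y; };

bool samePoint(const Point& a, const Point& b){
  return a.x == a.x   // copy-paste typo: should be a.x == b.x
      && a.y == b.y;
}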

It's almost always a typo. This is confirmed by the large number of bugs this diagnostic has found in real open-source projects. It would seem that such a simple and successful diagnostic cannot give false positives. Unfortunately, this is not the case. Here is real, correct code from a mathematical library:

__host__ __device__ inline int isnan(float x){
  // NaN is the only floating-point value that compares unequal to itself.
  return x != x;
}

By comparing a variable of the float type with itself, you can find out whether its value is Not-a-Number (NaN).

NaN compares unequal to any value, including itself. Because of this, one of the most common, though not obvious, ways to check a result for NaN is to compare the obtained value with itself.

Many analyzers will issue a warning for this code, even though the function works correctly. Of course, it is better to use the std::isnan function for such purposes. Nevertheless, this code is considered correct, and its analogues are found in a large number of applications. Therefore, issuing a warning for the comparison of two identical variables in this particular code is a false positive.
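A minimal self-contained sketch of both checks, assuming standard IEEE 754 semantics (note that aggressive options such as -ffast-math break the self-comparison trick):

#include <cassert>
#include <cmath>
#include <limits>

int main(){
  float x = std::numeric_limits<float>::quiet_NaN();
  assert(x != x);         // the self-comparison trick: true only for NaN
  assert(std::isnan(x));  // the clearer, standard way since C++11

  float y = 1.0f;
  assert(y == y && !std::isnan(y));  // ordinary values compare equal to themselves
}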

The PVS-Studio analyzer goes further and tries to guess whether the function is meant to detect non-numbers. The V501 diagnostic stays silent if identical float variables are compared and somewhere nearby there is a combination of letters such as 'NaN', 'nan', or 'Not a Number'. That is why the analyzer keeps quiet about the code shown above.

Unfortunately, while such empirical exceptions are extremely useful, they are unreliable. If the analyzer encounters a comparison of a float variable A == A somewhere in the program text and has no extra clues, it will have to issue a warning. However, as we now know, such code can be correct if the programmer wants to detect the presence of NaN. True, it's not a particularly good piece of code, because it confuses not only the analyzer but other programmers as well. Still, it can be correct and do exactly what it is supposed to do.

There are always a lot of such ambiguities, and code analyzers balance between the danger of not reporting a bug and the danger of issuing a large number of false positives.

A large number of false positives is bad because programmers begin to ignore the analyzer's report. And when a programmer faces a warning that is not entirely clear, they are predisposed to dismiss it as false right away rather than dig deeper. That's a pity, because it is often exactly the inconspicuous bugs, the ones that look fine at first glance, that code analyzers find. Here are the examples: 1, 2.

To compensate for the problem of false positives, the tools offer a variety of auxiliary mechanisms that let you configure diagnostics, suppress false warnings, and postpone insignificant technical debt for later. See also the article "How to introduce a static code analyzer in a legacy project and not to discourage the team".
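For example, PVS-Studio lets you suppress a specific warning with a marker comment on the offending line. A minimal sketch for the isnan function shown above:

// The trailing //-V501 marker suppresses the V501 diagnostic
// for this line only.
__host__ __device__ inline int isnan(float x){
  return x != x; //-V501
}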

If you have encountered PVS-Studio false positives that, in your opinion, could be handled as an exception in the diagnostics, we suggest sending us the relevant information together with a synthetic code example. We will do our best to refine the analyzer.
