False Positives of the Static Code Analyzer

Jul 07 2021

One of the drawbacks of static code analysis is false positives: the tool reports a possible bug where there is none.

Developers of static analysis tools put a lot of effort into reducing the number of false positives, some more successfully than others. It's important to accept that the problem of false positives is unsolvable at the theoretical level. You can strive for the ideal, but you will never be able to create an analyzer that makes no mistakes at all.

The root cause is the halting problem: it is provably impossible to write a general algorithm that determines from a program's source code whether the program will loop forever or finish in finite time. Rice's theorem generalizes this result to an even broader class of algorithmically undecidable problems: for any non-trivial semantic property of programs, no algorithm can decide whether an arbitrary program has that property.

However, even without going into theory, it's easy to demonstrate a situation where it's unclear whether the code contains a bug. For example, let's take the V501 diagnostic implemented in the PVS-Studio analyzer.

The idea of the diagnostic is very simple: it is suspicious when the left and right operands of operators such as ==, <, >, or && are identical. Example:

if (A == A)

It's almost always a typo, as confirmed by the large number of bugs this diagnostic has found in real open-source projects. It would seem that such a simple and successful diagnostic could not produce false positives. Unfortunately, this is not the case. Here is real, correct code from a mathematical library:

__host__ __device__ inline int isnan(float x){
  return x != x;
}

By comparing a float variable to itself, you can find out whether its value is Not-a-Number (NaN).

NaN is not equal to any other value (even to itself). Due to this, one of the most common, but not obvious ways to check the result for NaN is to compare the obtained value with itself.

Many analyzers will issue a warning for this code, although the function works correctly. Of course, it's better to use the std::isnan function for such purposes. Nevertheless, this code is considered correct, and its analogues appear in a large number of applications. Therefore, a warning about comparing two identical variables in this particular code is a false positive.

The PVS-Studio analyzer goes further and tries to guess whether the function is meant to detect non-numbers. The V501 diagnostic stays silent if identical float variables are compared and a combination of letters such as 'NaN', 'nan', or 'Not a Number' appears nearby. So the analyzer says nothing about the code shown above.

Unfortunately, while such empirical exceptions are extremely useful, they are unreliable. If the analyzer encounters a comparison such as A == A of a float variable somewhere in the program and has no extra clues, it has to issue a warning. However, as we now know, such code can be correct if the programmer wants to detect NaN. True, it's not a great piece of code, because it confuses not only the analyzer but also other programmers. Still, it can be correct and do exactly what it's supposed to do.

There are always a lot of such ambiguities, and code analyzers balance between the danger of not reporting a bug and the danger of issuing a large number of false positives.

A large number of false positives is bad because programmers begin to ignore the analyzer's report. And when a programmer faces a warning that is not entirely clear, they are predisposed to dismiss it as false immediately, without digging deeper. That's a pity, because code analyzers often find exactly those inconspicuous bugs that look fine at first glance. Here are the examples: 1, 2.

To mitigate the problem of false positives, the tools offer a variety of auxiliary mechanisms that let you configure diagnostics, suppress false warnings, and postpone insignificant technical debt for later. See also the article "How to introduce a static code analyzer in a legacy project and not to discourage the team".

If you encounter a PVS-Studio false positive that, in your opinion, could be handled as an exception in a diagnostic, please send us the relevant information along with a synthetic code example. We will do our best to refine the analyzer.

