Why We Don't Write Articles Comparing PVS-Studio with Other Static Analyzers

Jul 02 2019
Author: Andrey Karpov

People ask us every now and then whether we have compared PVS-Studio with other static analyzers and written any articles about the results of such research. The answer is no: there is no such comparison, and we won't be able to make one and write an article about it. It's not that we are lazy or afraid that our product will perform worse than other tools. There are three reasons why we don't do that, and there's nothing we can do about them. These three reasons are discussed in this article. Let's clear things up.


Why we don't write articles showing how our analyzer compares to other analysis tools

1. People don't trust such articles

We tried doing such comparisons and writing articles about them many years ago (example), and we made sure the work was done scrupulously. We would take a set of projects, check them with various analyzers, and see which one found the largest number of genuine errors. Studying all those reports was a very tedious task that took more than a hundred person-hours. After that, we would write an article presenting the results of our research. Were the readers grateful for the job we'd done? Not at all. We were heavily criticized every time, and the arguments basically came down to the following idea: we had deliberately picked projects in such a way that our tool would perform best.

So we never achieved our main goal: trust in comparisons. It was an especially bitter experience considering that sometimes the comparison conditions were altered in favor of other tools rather than PVS-Studio. I remember, for instance, excluding two projects from the test set: one because of Cppcheck and the other because of Visual Studio's analyzer, as those tools would hang when analyzing them.

Oddly, there seems to be no way to deal with such criticism. After all, how could one verify that our choice of projects on GitHub was indeed random? I don't know how to answer this. Some suggested that such an article should be written independently by an outside person. It's a nice idea, but how would we pull it off? Where would we find an enthusiast who would voluntarily sit down and meticulously check project after project to see how the analyzers compare? Sure, we could hire such a person, but then again, could an article be trusted if we paid for it? We seem to end up right where we started :).

2. Vendors are unwilling to cooperate

When comparing proprietary products, you've got to get copies of these products in the first place, which is not as easy as it might appear. For instance, some vendors will simply go "radio silent" and ignore all my requests for a demo version once they learn who we are :). And we can't lie to them to get what we want, can we? I'm not even sure they would sell their products to us if we wanted to buy them.

But even if we bought or otherwise got our hands on their tools, there's no guarantee that the license terms would allow us to write comparison articles. And this brings us to the third point, which has to do with legal issues.

3. The legal side of the matter

The legislation of some countries prohibits direct comparison of two products, in order to prevent deceptive advertising. You are, in fact, allowed to describe the capabilities of each product and present the results in a table for readers to compare and draw a conclusion on their own. But you are not allowed to explicitly state that one product is good and the other is bad. And then how do I say that one tool found 100 bugs and another only 10 without saying that the first is better than the second? :)

Sure, we could handle that and write a properly worded article. But we are still a small company (with a staff of only 31 people as of this writing), and we don't have a legal department to handle these matters and make the material immune to complaints. Given that, we'd better keep away from direct comparisons of PVS-Studio with our competitors' products. It would be a better investment to add even more sophisticated code analysis techniques to PVS-Studio instead.

What do we do then?

That said, the question still remains: how can you find out whether PVS-Studio deserves to become part of your DevOps pipeline?

Well, that's easy. Just run PVS-Studio on your code and study the report.

Just make sure you don't enable all the diagnostics at once, or you'll drown in warnings irrelevant to your project. Start with the first-level General Analysis (GA) warnings. This is explained in more detail here.
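For instance, here is roughly what a first run can look like on Linux with the command-line tools for a make-based C++ project. This is an illustrative sketch: the license file path and the build command are assumptions, and the exact steps vary by platform and project type.

# Wrap a regular build so the analyzer learns how the code is compiled
pvs-studio-analyzer trace -- make

# Analyze the collected compilation data (the license path is an assumption)
pvs-studio-analyzer analyze -l ~/PVS-Studio.lic -o PVS-Studio.log

# Keep only first-level General Analysis (GA:1) warnings in the readable report
plog-converter -a GA:1 -t errorfile -o report.err PVS-Studio.log

The GA:1 filter is exactly the advice given above: you see the most reliable warnings first instead of drowning in the full report.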

I hope that after running PVS-Studio you'll get enough interesting warnings to make the answer to the question of whether you should use it obvious.

Note that many of the warnings will refer to bugs in rarely used (if used at all) code. This observation must be interpreted correctly. The thing is, you have already fixed the most critical bugs in your existing code. What remains are bugs that don't affect the program's behavior that much, and it's those that the analyzer will report in large numbers the first few times. On the other hand, all those critical bugs that you have already fixed would have cost you much less if they had been found by static analysis at the earlier coding stage. Sure, not every bug can be caught by static analysis, but if half of them were detected right away, fixing them and developing and maintaining your software product would take much less time and money. My colleague discusses this in more detail in the article "We Have 100 Developers, the Analyzer Found Few Bugs, Is Analyzer Useless?".

If your team is already using some code analyzer, things become even more interesting: you will be able to see what bugs PVS-Studio manages to find after the checks by other static analysis tools. By the way, in our articles we sometimes deal with open-source projects that are already using some static analyzer, and we've covered a few such cases before.

You could reasonably argue:

The fact that your tool has found some bugs in a project that has already been checked with the XXX tool doesn't necessarily mean your tool is better. Maybe if the project had been checked with PVS-Studio from the very beginning and then once with XXX, that tool, too, would have found bugs that PVS-Studio can't.

Exactly! That's how things are. Every analyzer is better at something. For instance, PVS-Studio is especially strong at typo search. Anyway, the best way to evaluate a tool is to actually use it on your project.
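As an illustration of the kind of typo meant here, below is a small made-up C++ fragment with a classic copy-paste slip; the struct and function are hypothetical, written just for this example.

struct Rect {
  int left, top, right, bottom;
};

// Intended to check that two rectangles have the same width and height.
bool SameSize(const Rect& a, const Rect& b) {
  return (a.right - a.left) == (b.right - b.left) &&
         (a.bottom - a.top) == (a.bottom - a.top);  // copy-paste bug: compares 'a' with itself
}

The height check compares 'a' with itself and is always true. An analyzer flags the identical subexpressions on both sides of '==' (PVS-Studio's V501 diagnostic targets exactly this pattern), while a human reviewer easily reads past it.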

Here's my personal opinion: all contemporary paid analyzers are quite powerful and more or less equal in their diagnostic capabilities. Each of them, however, is better than the others at something and can add to your development process in its own way. In fact, the choice is often determined by CI integration capabilities, the level of detail of the documentation, the price, and so on, rather than by the available diagnostics. And I believe PVS-Studio offers an excellent combination of all these criteria.

Download and try PVS-Studio: it's the best way to evaluate it and see if it fits into your development process. If you have any questions, don't hesitate to ask our support. We can get you started and customize the analyzer for you and, if necessary, demonstrate how to work with it. We also provide our customers with high-quality and prompt support (example).

To find out how effectively PVS-Studio can be used, see the article "PVS-Studio ROI". Thanks for reading!
