Myths about static analysis. The second myth - expert developers do not make silly mistakes

Nov 02 2011
Author:

While talking with people on forums, I have noticed a few persistent misconceptions about the static analysis methodology. I decided to write a series of brief articles to show you the real state of things.

The second myth is: "Expert developers do not make silly mistakes that are mostly caught by static code analyzers".

Here is how this statement usually looks in forum discussions (a composite of several real posts):

I, a skilled developer, haven't had any problems with memory corruption, object lifetimes, and the like for N years. Static analysis is a tool for "McDonald's" coders, and here (on a professional forum) there are only geeks. Nowadays I mostly face problems with hard-to-test algorithms and with integration with other developers who rely on implicit contracts about object states.

It sounds as if typos and mistakes caused by inattention were solely the domain of amateurs, while professional developers stopped making them long ago and now deal mostly with complex errors such as synchronization issues or intricate data processing algorithms.

It is not so. All programmers make silly mistakes. I know you didn't hear me, so I will repeat this heretical thought once again: all programmers make silly mistakes, no matter how skilled they are. To err is human, and the errors people make are most often very simple ones.

Programmers greet my statement about errors with great hostility. In their opinion, they are the ones who haven't made such mistakes for many years. I think this is the effect of an interesting feature of our psyche: it filters out memories of the unpleasant moments of programming practice.

Let's digress from the subject a bit and recall why various horoscopes are so enduring. The first reason is their very vague wording, which one can easily apply to oneself. But we are interested in the second point: people do not remember the cases when a prophecy didn't come true, yet they do remember, and tell others about, the cases when some situation in their life coincided with one described in a horoscope. So, when we talk and think about horoscopes, we recall N pieces of evidence that they work and forget the N*10 cases when they did not.

Something similar happens when a programmer searches for errors in code. You remember complex and interesting errors very well and can discuss them with your colleagues or write a blog post about them. But when you notice that you have written the variable 'BA' instead of 'AB', you simply fix the bug, and the fact slips from your memory. Freud noticed this peculiarity of our memory: one tends to remember positive statements about oneself and forget negative ones. A programmer who fights a complex error in an algorithmic task and finally fixes it considers himself a hero: that is worth remembering and even telling others about. But when a programmer finds a silly bug, there is neither a reason nor a wish to remember it.
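
Here is a made-up sketch of the kind of one-character slip meant here (the structure and names are my own illustration, not taken from any real project). The code compiles cleanly, often even "works", and a human eye easily reads past it, while a diagnostic looking for suspicious repeated operands would flag it:

struct Rect
{
    int AB;   // width
    int BA;   // height
};

int Area(const Rect &r)
{
    int width  = r.AB;
    int height = r.AB;   // typo: should be r.BA
    return width * height;
}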

What proof do I have? Although most misprints and bugs eventually get fixed, some of them remain unnoticed in programs, and you can find plenty of examples of them in this article. You will see that the mistakes cited there were made not by novices but by skilled programmers.
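
As another made-up illustration of a misprint that can survive in a program for years (it is not one of the examples from the article mentioned above), consider an equality operator with a copy-paste slip in its last line:

struct Color
{
    int r, g, b;
};

// The last comparison checks x.b against itself, so two colors that
// differ only in the blue component are wrongly reported as equal.
bool operator==(const Color &x, const Color &y)
{
    return x.r == y.r
        && x.g == y.g
        && x.b == x.b;   // misprint: should be x.b == y.b
}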

The conclusion is this: programmers spend much more time fixing misprints than they think. Static analysis tools can save developers a significant amount of effort by detecting some of these errors before the testing stage.
