If the coding bug is banal, it doesn't mean it's not crucial

Apr 19 2017

Spreading the word about the PVS-Studio static analyzer, we usually write articles for programmers. However, programmers often see some things rather one-sidedly. That is why there are project managers, who help manage the development process and guide it in the right direction. I decided to write a series of articles aimed at project managers, to help them better understand the static code analysis methodology. Today we are going to examine a false postulate: "coding errors are insignificant".


Recently I wrote the article "A post about static analysis for project managers, not recommended for the programmers". Quite predictably, people started commenting that there is no use in a tool that finds only simple errors. Here is one such comment:

The reason is simple: the main bugs are in algorithms. In the work of analysts and mathematicians, there are not that many bugs in the coding itself.

Nothing new, I should say. Again we see the myth that "expert developers do not make silly mistakes". And even if they do, it's supposedly nothing serious: such bugs are easy to find and, as a rule, are not crucial.

I don't see the point in discussing the idea that professionals don't make banal errors; this topic has already been covered in our articles several times. If everything is that simple, why have these professionals made so many errors in well-known projects? By now we have found more than 11,000 errors, even though we have never aimed to find as many errors as possible: they were just a byproduct of writing articles.

It would be much more interesting to discuss another idea: many programmers think that really serious errors can only be made when writing algorithms. So I want to warn managers that this is not so - any bug can be critical. I do not deny that errors in algorithms are extremely important, but we should not underestimate the importance of typos and common blunders.

Some programmers claim that if an analyzer cannot find bugs in complex algorithms, it is not needed. Yes, the analyzer is not capable of finding complicated algorithmic errors - that would require artificial intelligence, which has not been created yet. Nevertheless, looking for simple errors is just as important and necessary as looking for algorithmic ones.

So that I don't sound unfounded, I suggest having a look at three examples.

For a start, I ask you to recall the critical vulnerability in iOS that appeared because of a doubled goto:

if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
  goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
  goto fail;
  goto fail;    /* the duplicated, unconditional goto */
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
  goto fail;

Details can be found in the article "Apple's SSL/TLS bug". It doesn't matter whether this error appeared because of a typo or an unsuccessful merge; it is obviously a "mechanical" error that has nothing to do with mathematics or algorithms. Still, this error can be detected by the PVS-Studio analyzer.

Now, here is a vulnerability in MySQL:

char foo(...) {
  return memcmp(...);
}

The error appears because of an implicit type cast (int -> char) that discards the value's higher bits. Again, this error has no relation to complex algorithms and was easily detected by PVS-Studio. Despite its simplicity, it means that on some platforms, in one out of 256 cases, the procedure comparing a hash with the expected value returns 'true' regardless of the hash.

The third example. Once I took part in developing a package for numerical simulation of gas-dynamic processes. There was a lot of mathematics, algorithms, and so on. Of course, there were math issues. But I remember that there were far more problems related to migrating the code to a 64-bit system. By the way, that was when the idea appeared to create the Viva64 analyzer, which later evolved into PVS-Studio (story: "PVS-Studio project - 10 years of failures and successes").

One of the errors was caused by improper positioning within a file using the _fseeki64 function. When the modeling package became 64-bit, it could handle large amounts of data and, as a result, write large volumes of data to disk. But then it could not read them back correctly. I can't say the code was written all that badly; it had something like this:

unsigned long W, H, D, DensityPos;
unsigned long offset = W * H * D * DensityPos;
res = _fseeki64(f, offset * sizeof(float), SEEK_SET);

We have an overflow when the variables are multiplied. Of course, when the programmer wrote this code, he couldn't anticipate that the long type would remain 32-bit in Win64 (the LLP64 data model). We spent a lot of time looking for this bug. When you see such pseudocode, everything seems clear and simple; in practice it was very hard to understand why strange bugs appeared once the size of the processed data exceeded a certain threshold. A week of debugging could easily have been avoided if the code had been checked by PVS-Studio, which finds the described bug with no trouble. The algorithms and mathematics didn't cause any problems when porting to the 64-bit system.

As you can see, simple mistakes can lead to serious consequences. It's better to find as many of them as possible with a static analyzer, without spending hours and days on debugging. And even more so, it is better to find the error yourself. The worst-case scenario: your application turns out to have a vulnerability when it is already installed on tens of thousands of computers.

It is also useful to find as many simple errors as possible using several tools, so that you can spend more time looking for defects in algorithms and creating new functionality.

By the way, I suggest that managers reading this article use our project-check services. We can sign a small contract, within the scope of which we will examine the project and fix all the errors we manage to find. First, it can be useful in any case; second, if you are pleased with the result, it will open the way to further cooperation. If necessary, we are ready to sign an NDA. I suggest discussing the details by email.
