Talking to people at conferences and reading comments on our articles, we keep running into the following objection: static analysis reduces the time needed to detect errors, but it eats up programmers' time, which negates the benefit and even slows down development. Let's examine this objection and show that it's groundless.
Taken out of context, the statement "static analysis will take away part of the working time" is correct. Regularly reviewing analyzer warnings issued for new or modified code certainly takes time. However, the thought should be continued: the time spent on this is much less than the time needed to find the same errors by other methods. Learning about errors from users is even worse.
Unit tests are a good analogy here. They also take developers' time, but that's no reason to abandon them. The benefit of safer, higher-quality code outweighs the cost of writing the tests.
Another analogy: compiler warnings. This topic is actually very close, since warnings from static analysis tools can to some extent be viewed as an extension of compiler warnings. Naturally, when a programmer sees a compiler warning, they spend some time dealing with it: they either change the code or explicitly suppress the warning, for example, with a #pragma directive. Yet this time commitment has never been a reason to disable compiler warnings altogether. And if someone does disable them, colleagues will unequivocally read it as a sign of professional incompetence.
So where does the fear of wasting time on static analyzer warnings come from?
The answer is simple. Programmers who are not yet familiar with the methodology confuse trial runs with regular use. On the first run, any analyzer produces a huge list of warnings that can be intimidating to look at. The reason is that the analyzer hasn't been configured yet. A configured analyzer, used regularly, issues only a small number of false positives. In other words, most warnings point to real defects or code smells. The key is to perform that configuration. This is the trick that turns a static analyzer from a time-wasting evil into a friend and an assistant.
Any static analyzer will issue many false positives at first. There are many reasons for this, and the topic deserves a separate article. Naturally, both our team and the developers of other analyzers fight false positives. Still, there will be many warnings if you simply run the analyzer on a project for the first time. By the way, the same thing happens with compiler warnings. Suppose you have a large project that you've always built with, say, the Visual C++ compiler. Suppose the project miraculously turns out to be portable and compiles with GCC. Even so, you will get a pile of warnings from GCC. Anyone who has switched compilers on a large project knows what I'm talking about.
However, no one forces you to keep digging through piles of warnings after changing the compiler or after the first analyzer run. The obvious next step is to configure the compiler or analyzer. Those who say "analyzing warnings is time-consuming" judge the complexity of adopting the tool only by the warnings that must be dealt with at the start, without considering calm, regular use afterwards.
Configuring an analyzer, like configuring a compiler, is not as difficult and labor-intensive as programmers like to make it sound. If you're a manager, don't listen to them; they're just being lazy. A programmer can proudly tell how he spent 3 days hunting down a bug reported by a tester or client, and that's considered normal. Yet, from his point of view, spending one day configuring a tool that would catch such an error before it even reaches the version control system is unacceptable.
Yes, there will still be false positives after the setup, but their number is exaggerated. It's quite possible to configure an analyzer so that the false positive rate is 10-15%. That is, for every 9 defects found, only 1 warning needs to be suppressed as a false positive. So where is the "waste of time" here? At the same time, 15% is a very realistic figure; you can read more about it in this article.
One more objection remains. A programmer may say:
"Okay, suppose regular static analysis runs really are effective. But what do I do with the noise I get at first? On our big project, we won't be able to configure the tool in the promised one day. Merely recompiling to check the next batch of settings takes several hours. We're not ready to spend a couple of weeks on this."
This is not a real problem, but an attempt to find a reason not to introduce something new. Of course, everything is difficult in a big project. But first, we provide support and help integrate PVS-Studio into the development process. And second, you don't have to start sorting out all the warnings right away.
If your application works, the bugs that remain in it are not that critical and probably live in rarely executed code. Serious, obvious errors have already been found and fixed by slower, more expensive methods (I'll say more about this in the note below). There is no point in making massive edits to the code to correct a lot of insignificant errors: with such large-scale refactoring, it's easy to break something, and the harm will outweigh the good.
It's better to treat the existing warnings as technical debt: you can return to it later and work through the old warnings gradually. Thanks to the mass warning suppression mechanism, you can start using PVS-Studio quickly even in a large project. Here's a brief description of what happens:
By the way, the storage system for these uninteresting warnings is quite smart. A hash is saved for the line containing the potential error, as well as for the previous and next lines. As a result, if you add a line at the beginning of a file, nothing goes astray, and the analyzer stays silent about the code that has been marked as technical debt.
I hope I've managed to dispel one of the preconceptions about static analysis. Download and try our PVS-Studio static code analyzer: it will detect many errors at early stages and make your code more reliable overall.
Note
While any project is being developed, new errors constantly appear and get fixed. Errors that go unfound "settle" in the code for a long time, and many of them can then be detected once static code analysis is applied. This sometimes creates the false impression that static analyzers only find uninteresting errors in rarely used pieces of code. And that's true, but only if you use the analyzer incorrectly and run it only occasionally, for example, shortly before a release. More on this topic is written here. Yes, we ourselves do one-time checks of open-source projects when writing articles, but we have a different purpose there: demonstrating the analyzer's ability to detect defects. Generally speaking, that task has little to do with improving the quality of a project's code or reducing the cost of fixing errors.