PVS-Studio Team Expanding Their Horizons Through Outsourcing

Jun 23 2014

As you know, our main activity is developing the PVS-Studio code analyzer. Although we have been doing this for a long time now and - as we believe - quite successfully, an unusual idea struck us recently. You see, we do not use our own tools in exactly the same way our customers do. We analyze the code of PVS-Studio with PVS-Studio, of course, but, honestly, the PVS-Studio project is far from large. Also, the manner of working with PVS-Studio's code differs from that of working with Chromium's or LLVM's code, for example.

[Figure: 0264_Outsourcing/image1.png]

We felt like putting ourselves in our customers' shoes to see how our tool is used in long-term projects. You see, the project checks we regularly do and report on in our numerous articles are done in exactly the way we would never want our analyzer to be used. Running the tool on a project once, fixing a bunch of bugs, and repeating it all a year later is totally wrong. The routine of coding implies that the analyzer ought to be used regularly - daily.

OK, what's the purpose of all that talk? Our theoretical wish to try ourselves in third-party projects coincided with practical opportunities we started being offered not so long ago. Last year we decided to allocate a separate team in our company to take up - ugh! - outsourcing; that is, to take part in third-party projects as a developer team. Moreover, we were interested in long-term and rather large projects, i.e. those requiring at least 2-3 developers and at least 6 months of development. We had two goals to accomplish:

  • try an alternative kind of business (custom development as opposed to own product development);
  • see with our own eyes how PVS-Studio is used in long-term projects.

We have successfully solved both tasks. But this article is not about the outsourcing business; it is about our experience. We don't mean the organizational experience - there are plenty of articles about that. We mean the experience of working with the code of third-party projects. This is what we want to tell you about.

Interesting observations

Third-party projects have much to teach us, and I think we'll write a whole series of articles on this topic in the future. But for now, let's speak about some interesting observations. We have noticed 3 striking peculiarities of the code of large and old projects; I'm sure we will publish a few more articles about other observations too.

We have published quite a lot of articles about how to port code from a 32-bit platform to a 64-bit one. The software migration task involves piles of errors that start to reveal themselves in 64-bit versions of applications. These we call 64-bit errors.

But besides 64-bit migration, program code has undergone a few other, more subtle, changes imposed by the evolution of compilers and libraries, and by the maturation of the projects themselves. The consequences of these changes can be easily traced in code with a long history, and it is these consequences that we are going to discuss in this article. We are sure you will find it interesting and useful; perhaps it will even prompt some of you to review your old code for issues like those described below.

The error patterns described in this article were caught by the PVS-Studio analyzer (see the functional capabilities of the analyzer). Many of the errors are hidden, the code working almost correctly only due to sheer luck. But each of these errors is a small time bomb that may explode at any moment.

Note. To avoid NDA restrictions, we have changed the names and edited the code samples. In fact, every code sample below is far from the original version. But everything is still "based on actual events".

A change of the new operator's behavior

A long, long time ago, the 'new' operator used to return NULL in case of a memory allocation error. Then compilers started supporting the modern C++ standard and throwing std::bad_alloc exceptions instead. You can still get the 'new' operator to return NULL, but that is not what we are talking about now.

The programmer community seems to have taken these changes in stride: they noted the fact and started writing code with the new behavior in mind. Of course, you can still meet programmers unaware that the 'new' operator throws an exception nowadays. But we are talking about normal, adequate programmers; we are not interested in those strange folks who don't want to learn anything new and keep writing their code just the way it was done 20 years ago.

However, even those who know that the 'new' operator has changed its behavior have not reviewed and fixed their old code. Some simply didn't think about it; others did but were short of time at the moment and forgot all about it later. The result is a huge number of programs still inhabited by incorrect handlers of memory allocation errors.

Some code fragments are harmless:

int *x = new int[N];
if (!x) throw MyMemoryException(); // 1
int *y = new int[M];
if (!y) return false; // 2

In the first case, the std::bad_alloc exception will be generated instead of MyMemoryException. Both exceptions seem to be handled in the same way, so there are no problems here.

In the second case, the check won't work: the function will fail to return 'false'. Instead, an exception will be thrown that will be handled somehow later. This is an error, as the program's behavior has changed. In practice, however, real trouble is almost impossible - the program will merely react a bit differently to memory shortage.

What's much more important to warn the programmer about, are dramatic changes of the program's behavior. The number of such situations in large, old projects is really enormous.

Here are a couple of examples when the program should do certain actions in case of memory shortage:

image->SetBuffer(new unsigned char[size]);
if (!image->GetBuffer())
{
  CloseImage(image);
  return NULL;
}

FormatAttribute *pAttrib = new FormatAttribute(sName, value, false);
if (pAttrib)
{
  m_attributes.Insert(pAttrib);
}
else
{
  TDocument* pDoc = m_state.GetDocument();
  if (pDoc)
    pDoc->SetErrorState(OUT_OF_MEMORY_ERROR, true);
}

This code is much more dangerous. For instance, the user risks losing the contents of their document if the program runs out of memory, even though it could well have let them save the document to a file.

Recommendation. Examine all the 'new' operators in your program. In each case, check whether the program is supposed to do something when the pointer is nullptr. Fix all such fragments.
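There are two common ways to fix such fragments; a sketch follows, with the function names invented for illustration (the original Image/SetBuffer/CloseImage calls are left out):

```cpp
#include <new>
#include <cstddef>

// Option 1: switch to the nothrow form, so the old
// nullptr check becomes meaningful again.
unsigned char* make_buffer(std::size_t size)
{
    return new (std::nothrow) unsigned char[size];
}

// Option 2: keep plain 'new' and translate the exception
// into the error value the old calling code expects.
unsigned char* make_buffer_catching(std::size_t size)
{
    try {
        return new unsigned char[size];
    } catch (const std::bad_alloc&) {
        return nullptr;   // run the cleanup path here (close the image, etc.)
    }
}
```

Either way, the error-handling branch that the original author wrote actually gets a chance to run.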

The PVS-Studio analyzer uses the V668 diagnostic to detect such meaningless checks.

Replacing char * with std::string

Following the migration from C to C++ and the growth of the standard library's popularity, programmers started to widely use string classes such as std::string. There are sensible reasons behind it: handling a full-fledged string instead of a "char *" pointer is much easier and safer.

Programmers not only started to use string classes in new code but adapted some old fragments for this purpose too. And this is where you may face troubles. Brief carelessness may result in your code becoming dangerous or totally incorrect.

But before telling horror stories, let's have some fun. Sometimes you may come across inefficient loops like this:

for (i = 0; i < strlen(str.c_str()); ++i)

The 'str' variable obviously used to be an ordinary pointer once. Calling the strlen() function at each loop iteration is a bad idea: it's extremely inefficient with long strings. However, after turning the pointer into std::string, the code became even sillier.
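A fixed version hoists the length out of the loop; the loop body here (counting spaces) is a made-up example, since the original loop body is not shown:

```cpp
#include <cstddef>
#include <string>

// With std::string, the length is known in O(1);
// compute it once instead of calling strlen() per iteration.
std::size_t count_spaces(const std::string& str)
{
    std::size_t n = 0;
    const std::size_t len = str.length();
    for (std::size_t i = 0; i < len; ++i)
        if (str[i] == ' ')
            ++n;
    return n;
}
```

A range-based for loop over `str` would do the same job even more idiomatically.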

Such a loop makes it perfectly clear that the type replacement was done without much thinking. Mindless replacement of this kind may lead to inefficient code like the fragment above, or even to errors. And now let's speak about the latter:

wstring databaseName;
....
wprintf(L"Error: cannot open database %s", databaseName); // 1

CString s;
....
delete [] s; // 2

In the first case, the 'wchar_t *' pointer has turned into 'wstring'. Trouble will occur if the database cannot be opened and the program needs to print a message. The code compiles well, but you will get some abracadabra printed on the screen, or even a program crash. The reason is a missing call to the c_str() function. The correct code should look like this:

wprintf(L"Error: cannot open database %s", databaseName.c_str());

The second case is even more epic. Amazing as it may be, the code still compiles successfully. The programmer uses the very popular string class CString, which can implicitly be cast to a pointer. And this is exactly what happens here, resulting in the buffer being freed twice: once by 'delete []' and once more by CString's destructor.

Recommendation. If you are replacing pointers with a string class, please be careful. Don't use mass replacement without personally investigating each case. The code compiling successfully doesn't mean at all that it will work successfully. Wherever you don't urgently need to change them, you'd better leave such code with pointers in peace. Let the code work well with pointers rather than work badly with classes.

The PVS-Studio analyzer can help to detect some of the errors related to replacement of pointers with classes. These errors can be diagnosed by the diagnostic rules V510, V515, and so on. However, don't rely solely on the analyzers because you may stumble across highly artistic code with an error that not only an analyzer but even a skillful programmer will hardly find.

Replacing char with wchar_t

The project is developing, and the authors feel like implementing a multi-language interface. So one day they do a mass replacement of char with wchar_t, fix the compiler-generated warnings - and there you go, the Unicode version of the application is "ready".

In practice, the application turns into a sieve after such a replacement. Errors born of it may inhabit the code for decades and remain difficult to reproduce.

How could that happen? Very easily. Many code fragments are not ready for the change of the character size. The code compiles without errors or warnings but works only at "50%" efficiency. I'll explain what I mean.

Note. Keep in mind that we are not trying to scare you with poor code written by students. We are telling you about the harsh reality of a programmer's life. Large, old projects inevitably contain such errors - hundreds of them. I do mean hundreds, no kidding.

Examples:

wchar_t tmpBuffer[500];
memset(tmpBuffer, 0, 500); // 1

TCHAR *charArray = new TCHAR[MAX_MSG_LENGTH];
memset(charArray, 0, MAX_MSG_LENGTH*sizeof(char)); // 2

LPCTSTR sSource = ...;
LPTSTR sDestination = (LPTSTR) malloc (_tcslen(sSource) + 12); // 3

wchar_t *name = ...;
fprintf(fp, "%i : %s", i, name); // 4

In the first case, the programmer didn't even stop to think that the character size might change over time. That's why he cleared only half of the buffer - remember my words about 50% efficiency?
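The half-cleared buffer is easy to demonstrate; this sketch fills the buffer first so the surviving tail is visible (on a platform where wchar_t is wider than one byte):

```cpp
#include <cstring>

// memset() counts bytes, but the buffer holds 500 wchar_t elements.
// Returns true when the tail keeps its old contents after the buggy call.
bool tail_survives_byte_count_memset()
{
    wchar_t tmpBuffer[500];
    for (wchar_t& ch : tmpBuffer)
        ch = L'x';

    std::memset(tmpBuffer, 0, 500);               // bug: 500 bytes, not 500 elements
    bool tail_dirty = (tmpBuffer[499] == L'x');   // true where sizeof(wchar_t) > 1

    std::memset(tmpBuffer, 0, sizeof(tmpBuffer)); // correct: full size in bytes
    return tail_dirty && tmpBuffer[499] == 0;
}
```

Passing `sizeof(tmpBuffer)` instead of an element count is the safe idiom for arrays with automatic storage.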

In the second case, the programmer did suspect that the character size would change, but the '* sizeof(char)' hint failed him during the mass type replacement, for he had written it incorrectly. The correct code should have looked like this:

memset(charArray, 0, MAX_MSG_LENGTH * sizeof(charArray[0]));

In the third example, the types are OK. The _tcslen() function is used to calculate the string length. Everything looks alright at first glance, but once the characters changed their type to 'wchar_t', this fragment too became only 50% efficient.

Half as much memory is allocated as the program needs. In fact, it will keep running well until the string length exceeds half of the maximum possible length. That's quite an unpleasant error, and it lingered in the code for quite a while.
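A corrected sketch of the third example follows, with wchar_t standing in for TCHAR in a Unicode build (the "+ 12" slack is carried over from the original snippet):

```cpp
#include <cstdlib>
#include <cwchar>

// Multiply the element count by the element size; the buggy version
// allocated bytes as if a character were still one byte wide.
wchar_t* allocate_copy_buffer(const wchar_t* sSource)
{
    std::size_t count = std::wcslen(sSource) + 12;
    return static_cast<wchar_t*>(std::malloc(count * sizeof(wchar_t)));
}
```

Writing `sizeof(*sDestination)` instead of a named type would keep the allocation correct even through another future type change.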

The fourth example: the functions printf(), sprintf(), etc. were fixed alright, but the programmer forgot to check fprintf(). As a result, garbage is written to the file. The correct code should look something like this:

fwprintf(fp, L"%i : %s", i, name);

or like this, in case it is an ordinary ASCII file:

fprintf(fp, "%i : %S", i, name);

Note. It has just occurred to me that the idea about 50% efficiency holds true for Windows. In Linux, the wchar_t type occupies 4 bytes instead of 2, so in the Linux world the program will be only 25% efficient :).

Recommendation. If you have already done the mass replacement of char with wchar_t, I don't know how to help you. You have probably populated your code with many interesting errors. Attentively examining every fragment where wchar_t is used is unrealistic. Static code analyzers can help you only to some extent; they will catch some of the errors. Errors like the ones shown above can be detected by the PVS-Studio analyzer. The consequences of replacing char with wchar_t are very diverse and are diagnosed by different rules; enumerating them all here makes little sense. Just for example: V512, V576, V635, and so on.

If you haven't done the replacement yet but are planning to, please take it most seriously. Replacing the types automatically and fixing the compilation errors would take, say, 2 days. Replacing them manually, through code review, will take ten times longer - say, a couple of weeks. But that is much better than endlessly catching errors that would keep appearing over the next 2 years:

  • incorrect string format;
  • overflows in allocated memory buffers;
  • handling half-cleared buffers;
  • handling half-copied buffers;
  • and so on.

Conclusion

We are highly satisfied with our outsourcing experience, as it allowed us to take quite a different view of the programming world. We will continue this cooperation with third-party projects as developers, so if any of you are interested, feel free to email us. And if you are afraid of being unmasked in our articles (about errors found in your code), still feel free to contact us - may NDA help us all!
