Webinar: Evaluation - 05.12
There is no fragment of program code where you cannot make a mistake. You may actually make mistakes in very simple fragments. While programmers have developed the habit of testing algorithms, data exchange mechanisms, and interfaces, things are much worse when it comes to security testing. It is often done on a leftover basis. A programmer thinks: "I'll just write a couple of lines now, and everything will be fine. I don't even need to test it. The code is too simple to make a mistake there!" That's wrong. Since you're working on security and writing code for that purpose, test it just as carefully!
When and where is security important? In many applications. But let's not discuss it in the abstract. Take, for instance, the source code of the Tor application. This is a system intended to enable online anonymity. Tor client software directs internet traffic through a worldwide volunteer network of servers to conceal a user's location or usage from anyone conducting network surveillance or traffic analysis. To learn more about what it is and where it is used, see the Wikipedia article.
Everyone will agree that programmers should pay maximum attention to data security in such an application. And even more than that! Let's put it this way: you should develop this application in a state of paranoia and persecution mania.
Indeed, much is done in the TOR program to conceal and protect information. But when I study the code, I start to feel sad. Many protection mechanisms simply stay idle because of trivial slip-ups and typos.
One of the protection mechanisms is intended to clear buffers that are no longer used. These buffers may contain passwords, IP addresses, and other user data. If you don't destroy this data, it may be sent out to the Internet as garbage. This is not a fairy tale; it is a real-life situation. To find out how exactly it may happen, see the article "Overwriting memory - why?".
The TOR developers know about this danger and try to clear buffer contents using the memset() function. This is an epic fail. The compiler has the right to remove calls to the memset() function if the buffer being cleared is not used anywhere afterwards.
Consider a code fragment taken from TOR:
int
crypto_pk_private_sign_digest(....)
{
  char digest[DIGEST_LEN];
  ....
  memset(digest, 0, sizeof(digest));
  return r;
}
Now let's find out how it works. The 'digest' buffer is created on the stack. It is used somewhere later; it doesn't matter exactly how, the point is that we want to clear it afterwards. The programmer has written a memset() call for this purpose. However, the 'digest' buffer is not used in any way later in the function. The compiler will notice this while performing optimizations and remove the call. Doing so doesn't change the program's logic, but it makes the program dangerous from the viewpoint of data privacy.
Those interested in the details can look here: you'll see an assembler listing showing how the memset() call disappears. Visual C++ 2010 with the "/O2" switch was used as the compiler.
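If you want to reproduce the effect yourself, a minimal sketch along these lines is usually enough (this is not TOR code; the function and buffer names are made up for illustration). Compile it with optimizations enabled and inspect the generated assembly: the final memset() call is typically gone.

#include <stdio.h>
#include <string.h>

/* Hypothetical example, not taken from TOR: a local buffer receives
   sensitive data and is "cleared" just before the function returns. */
void handle_secret(void)
{
  char password[64];

  /* Imagine the password is received and used here. */
  snprintf(password, sizeof(password), "%s", "correct horse battery staple");
  printf("password length: %zu\n", strlen(password));

  /* The buffer is never read again after this point, so an optimizing
     compiler may treat this memset() as a dead store and remove it. */
  memset(password, 0, sizeof(password));
}

int main(void)
{
  handle_secret();
  return 0;
}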
To make sure the memory really is cleared, you should use functions such as RtlSecureZeroMemory(). These functions are created specifically for such cases and cannot be removed by the compiler.
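Here is a sketch of a helper built around that idea. RtlSecureZeroMemory() comes with <windows.h> on Windows; the volatile-pointer loop shown for other platforms is only one common workaround, not the fix the TOR project itself applied.

#include <stddef.h>

#ifdef _WIN32
#include <windows.h>   /* RtlSecureZeroMemory() */
#endif

/* Hypothetical helper, not from the TOR sources: wipe a buffer in a way
   the optimizer is not allowed to throw away. */
static void wipe(void *buf, size_t len)
{
#ifdef _WIN32
  /* Documented as never being optimized out. */
  RtlSecureZeroMemory(buf, len);
#else
  /* Writing through a volatile pointer forces the stores to happen. */
  volatile unsigned char *p = (volatile unsigned char *)buf;
  while (len--)
    *p++ = 0;
#endif
}

In the fragment above, the call would then be wipe(digest, sizeof(digest)) instead of the removable memset().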
You may say that I'm making a mountain out of a molehill and that no important data will actually leak. Maybe. But can you be sure? Since the developers have implemented the array-clearing mechanism, they must have been concerned about something. And they did it not just in one or two places; there are many such fragments in the code. It's a pity their efforts were spent in vain in most cases. Not to sound unfounded, I will give you a list of the fragments containing errors.
This is the list of files and lines where the PVS-Studio analyzer has generated the warning "V597 The compiler could delete the 'memset' function call, which is used to flush '...' buffer. The RtlSecureZeroMemory() function should be used to erase the private data":
I've cited such a long list deliberately. I want you to feel the full depth of the problem of missing checks on code that is responsible for security. How on earth can one make a mistake using memset()? Quite easily, as it turns out.
This is not only a problem of TOR. It is a common problem of many applications and libraries. We don't need to go far for an example. Which libraries does TOR use? For instance, it uses OpenSSL, an open-source cryptographic package intended for SSL/TLS handling. Let's see how the OpenSSL developers clear memory.
The OpenSSL developers know that memset() cannot be used to clear memory buffers. That's why they have created their own function. Here it is:
unsigned char cleanse_ctr = 0;

void OPENSSL_cleanse(void *ptr, size_t len)
{
  unsigned char *p = ptr;
  size_t loop = len, ctr = cleanse_ctr;

  /* Overwrite the buffer with values derived from a global counter. */
  while (loop--)
  {
    *(p++) = (unsigned char)ctr;
    ctr += (17 + ((size_t)p & 0xF));
  }

  /* Read the buffer back so the compiler cannot treat the writes
     above as dead stores and remove them. */
  p = memchr(ptr, (unsigned char)ctr, len);
  if (p)
    ctr += (63 + (size_t)p);
  cleanse_ctr = (unsigned char)ctr;
}
Perfectly paranoid code. Everything is fine with it: it really will clear the memory. What's more, it fills the buffer not just with zeroes, but with pseudo-random values.
But there are errors in the code that calls this function, and they render it useless: the private data stays where it was. Have a look at this fragment:
void usage(void)
{
  static unsigned char *buf = NULL, *obuf = NULL;
  ....
  OPENSSL_cleanse(buf, sizeof(buf));
  OPENSSL_cleanse(obuf, sizeof(obuf));
  ....
}
So much effort spent on writing the OPENSSL_cleanse() function, and all in vain. Look closely. Don't you see anything wrong?
The expressions sizeof(buf) and sizeof(obuf) calculate the size of a pointer instead of the size of the buffer. As a result, only the first 4 bytes will be cleared in a 32-bit program, while the rest of the private data will remain untouched.
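A sketch of what a correct call could look like, assuming the calling code tracks the real allocation size (the function, sizes, and variable names below are made up for illustration; this is not the actual OpenSSL fix):

#include <openssl/crypto.h>   /* OPENSSL_cleanse() */
#include <stdlib.h>

/* Hypothetical illustration: the buffer length is tracked explicitly and
   passed to OPENSSL_cleanse(), because sizeof(buf) would only give the
   size of the pointer. */
void wipe_example(void)
{
  size_t buf_len = 256;                    /* made-up allocation size */
  unsigned char *buf = malloc(buf_len);
  if (buf == NULL)
    return;

  /* ... the buffer is filled with private data and used here ... */

  OPENSSL_cleanse(buf, buf_len);           /* clears all buf_len bytes */
  free(buf);
}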
There are other errors of this type to be found in OpenSSL (see V597):
Conclusions: