A potential vulnerability (security weakness) is a defect in software code that, under certain circumstances, can affect the program's behavior. The defect becomes a vulnerability when an attacker finds a way to exploit it.
A software error is an error in code that can either manifest itself as a program malfunction or remain unnoticed. Vulnerabilities most often arise from common software errors, not from high-level security failures.
The National Institute of Standards and Technology (NIST) reports that 64% of software vulnerabilities stem from programming errors and not a lack of security features.
A potential vulnerability is a code defect that could theoretically be turned into an exploit. Other terms: a security defect, a security weakness. There is a classification system for errors that lead to vulnerabilities: the Common Weakness Enumeration (CWE). So, if you have an error that fits the CWE list, you are dealing with a potential vulnerability.
A vulnerability is a defect in a system that can be used for malicious purposes. The Common Vulnerabilities and Exposures (CVE) system provides a database of publicly known security vulnerabilities.
A zero-day vulnerability is a flaw that developers have introduced but not yet discovered or fixed. Until the vulnerability is fixed, hackers can use it to take over a system: gain access to networks, control a desktop remotely, get at your data, and so on. It's called zero-day because developers have zero days to fix the vulnerability: it is disclosed and can be exploited before a patch is issued.
SAST (Static Application Security Testing) is a set of technologies designed to analyze software source code from a security perspective. The analysis looks for code fragments that contain potential vulnerabilities.
Authors of articles and documentation on static code analyzers often confuse potential vulnerability and vulnerability. Perhaps writers just don't know the difference, or maybe they intentionally make the description more intimidating.
For example, you may come across this kind of text:
The static analyzer found a vulnerability. A buffer overflow may occur here...
But this is not a vulnerability yet. The error most likely does cause the program to run incorrectly, but an attacker may have no way to exploit such a bug. For example, the error may merely discolor some element on the screen.
A vulnerability arises only when an exploit is created that allows the error to be used for some purpose. First, we need to understand whether we have a harmless bug or a vulnerability. Until then, we are dealing with a potential vulnerability.
We don't want to exaggerate and brag that the PVS-Studio analyzer has found hundreds of vulnerabilities in some application. What the tool finds are, precisely, potential vulnerabilities. The probability that any given one can actually be exploited is very low.
Note. If you use PVS-Studio as a plugin for SonarQube, some warnings get into the "Vulnerabilities" section. In fact, these are not vulnerabilities but potential vulnerabilities. SonarQube developers use the term vulnerability for significant defects.
Now we know why there's no need to call each error a vulnerability. However, this does not mean that the presence of such errors in an application is acceptable. There is no point in deliberating over whether a particular potential vulnerability is worth fixing. It is better to fix any problem in code right away. This reduces the risks and the likelihood of an exploit.
The CWE database describes unreachable code as CWE-561, which makes it a potential vulnerability. A potential vulnerability may well be a harmless bug from a security standpoint. But let's check its chances of becoming a critical one.
Let's look at a code fragment from the Vangers: One For The Road game.
void uvsVanger::break_harvest(void) {
  ....
  pg = Pworld -> escT[0] -> Pbunch
         -> cycleTable[Pworld -> escT[0] -> Pbunch -> currentStage].Pgame;
  if (!pg) {
    return;
    ErrH.Abort("uvsVanger::break_harvest : don't know where to go ");
  }
  ....
}
PVS-Studio warning: V779 CWE-561 Unreachable code detected. It is possible that an error is present.
If an error occurs, the break_harvest function must write a message to the log and finish its work. However, the logging call accidentally ended up after the return statement, so the debug message never gets into the log. This is clearly an error that should be fixed, but you can't call it a vulnerability.
Now let's take a look at the error that caused the vulnerability in iOS.
The CVE-2014-1266 vulnerability description: The SSLVerifySignedServerKeyExchange function in libsecurity_ssl/lib/sslKeyExchange.c in the Secure Transport feature in the Data Security component in Apple iOS 6.x before 6.1.6 and 7.x before 7.0.6, Apple TV 6.x before 6.0.2, and Apple OS X 10.9.x before 10.9.2 does not check the signature in a TLS Server Key Exchange message, which allows man-in-the-middle attackers to spoof SSL servers by using an arbitrary private key for the signing step or omitting the signing step.
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx,
                                 bool isRsa,
                                 SSLBuffer signedParams,
                                 uint8_t *signature,
                                 UInt16 signatureLen)
{
  OSStatus err;
  ....
  if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
  if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
  if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;
  ....
fail:
  SSLFreeBuffer(&signedHashes);
  SSLFreeBuffer(&hashCtx);
  return err;
}
Please note that PVS-Studio issues the same warning: V779 CWE-561 Unreachable code detected. It is possible that an error is present.
The duplicated goto made part of the code unreachable, just like in the previous example. Even when the err variable is zero, the jump to the fail label occurs, so the final hash check that verifies the signature never runs. The function returns 0, which indicates that the signature is fine. As a result, the program accepts the server key even when there are problems with the signature, and that key then encrypts data during transmission.
So, here the same kind of potential vulnerability turns out to be a real, severe vulnerability.
SAST solutions (Static Application Security Testing) are used to search for potential vulnerabilities. They perform static code analysis to detect security defects, such as those described by CWE and the OWASP Top 10.
The PVS-Studio analyzer is an example of a SAST solution for detecting potential vulnerabilities.