
C++ programmer's guide to undefined behavior: part 9 of 11

Nov 11 2024

Your attention is invited to the ninth part of an e-book on undefined behavior. This is not a textbook, as it's intended for those who are already familiar with C++ programming. It's a kind of C++ programmer's guide to undefined behavior and to its most secret and exotic corners. The book was written by Dmitry Sviridkin and edited by Andrey Karpov.


Program execution: (N)RVO vs RAII

C++ is a fascinating language. It's full of idioms and concepts, each with its own wonderful, sometimes unpronounceable acronym! Well, the best thing about them is that they sometimes have conflicts that make developers suffer. Sometimes they form a symbiotic relationship and inflict even more suffering.

C++ has constructors, destructors, and the RAII concept that comes with them: capture and initialize a resource in the constructor, clean up and release it in the destructor, and everything will be okay.

Well, let's give it a shot!

Let's write a simple class that performs buffered writes:

#include <string>
#include <vector>

struct Writer {
public:
  static const size_t BufferLimit = 10;

  // We capture the device where the write operation will be performed.
  Writer(std::string& dev) : device_(dev) {
    buffer_.reserve(BufferLimit);
  }

  // In the destructor, we release it, writing everything
  // we've buffered.
  ~Writer() {
    Flush();
  }

  void Dump(int x) {
    if (buffer_.size() == BufferLimit){
      Flush();
    }
    buffer_.push_back(x);
  }
private:
  void Flush() {
    for (auto x : buffer_) {
      device_.append(std::to_string(x));
    }
    buffer_.clear();
  }

  std::string& device_;
  std::vector<int> buffer_;
};

Let's try to use it nicely:

const auto text = []{
  std::string out;
  Writer writer(out);
  writer.Dump(1);
  writer.Dump(2);
  writer.Dump(3);
  return out;
}();
std::cout << text;

It works! The code displays 123, just as we expected. How beautiful the language has become!

Yeah, but it works purely because we got lucky. The program actually contains a really nasty bug. Since C++17, copy elision is guaranteed in some contexts, but NRVO (named return value optimization), which is what saves us here, remains optional. So, if we take MSVC (which often fails to fully comply with the latest standards anyway) and it doesn't apply NRVO here, the result will suddenly be different. That is, the program won't display anything.

If we modify it a little bit:

int x = 0; std::cin >> x;

const auto text = [x]{
  if (x < 1000) {
    std::string out;
    Writer writer(out);
    writer.Dump(1);
    writer.Dump(2);
    writer.Dump(3);
    return out;
  } else {
    return std::string("hello\n");
  }
}();
std::cout << text;

Then it still works under Clang, but it doesn't under GCC.

The best thing about all this ugliness is that it's not undefined behavior at all!

Do you remember when we discussed that the move operation doesn't really "move"? We found out that C++ has no destructive move. However, sometimes it effectively gets one: when return value optimization kicks in and the unnecessary copy/move constructor call is elided at the same time.

The programs above are all incorrect. They assume that the Writer destructor runs before the value is returned from the function, which is impossible. Destructors of local objects always run after the return value has been constructed; otherwise, the value would die right there, and the caller would always receive a dead object.

So, how does it sometimes work and hide such an unfortunate error? Here's the answer:

const auto text = []{
  std::string out;    
  Writer writer(out);  // (2) Addresses of the 'out' and 'text' 
                       // are the same.
                       // So, they're basically the same object 
  writer.Dump(1);
  writer.Dump(2);
  writer.Dump(3);
  return out;    // (1) This is the only return point
                 // from the function. NRVO enables us to substitute
                 // the variable address
                 // to which we will write the result – 'text' -
                 // as the address of the 'out' temporary variable.
}(); // (3) The Writer destructor writes directly to text.

Without all the crafty optimizations, this is what happens:

const auto text = []{
  std::string out;     // (0) The string is empty
  Writer writer(out);  // (1) Addresses of the 'out' and 'text' 
                       // are different.
                       // They're different objects.
  writer.Dump(1);
  writer.Dump(2);
  writer.Dump(3);      // (2) Nothing has been written to 'out' yet;
                       // the data is still sitting in the buffer.
  return out;  // (3) We return a copy of 'out', which is an empty string.
}(); // (4) The Writer destructor flushes into 'out',
     // which dies unnoticed, and 'text' stays empty.

Again, there's no undefined behavior here. The thing is, any constructor or destructor with side effects is kind of "broken" by optimizations that the standard allows and describes (and sometimes even guarantees).

Well, in Rust, for example, you can't write such nonsense. That's just how it is.

You can fix the issue either by exposing Flush and calling it explicitly, or by adding another nested scope:

const auto text = []{
  std::string out;
  {
    Writer writer(out);
    writer.Dump(1);
    writer.Dump(2);
    writer.Dump(3);
  } // The Writer destructor is called here.
  return out;
}();
std::cout << text;

Don't forget to leave a comment, so that your teammates don't accidentally delete those "extra" braces. Also, check that your code autoformatter doesn't delete them.
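
If you'd rather not rely on braces at all, the explicit-Flush variant might look like this (a sketch that assumes Flush is made a public member):

const auto text = []{
  std::string out;
  Writer writer(out);
  writer.Dump(1);
  writer.Dump(2);
  writer.Dump(3);
  writer.Flush();  // Flush explicitly while 'out' is still the local object.
  return out;      // Whether or not NRVO kicks in, the result is correct:
                   // the destructor flushes an already empty buffer.
}();
std::cout << text;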


Program execution: null pointer dereferencing

The coolest error with the worst consequences: null is called the "billion-dollar mistake". So much code in a wide range of programming languages is affected by it. However, while in Java dereferencing a null reference gets you an exception with quite predictable consequences (never mind, a crash is a crash), in the almighty C++, as well as in C, undefined behavior comes after you. And it really is undefined!


First of all, of course, I'd like to point out that after all the discussions about the vague wording of the standard, there was (before July 2024) some agreement that it isn't the *p construct itself, where p is a null pointer, that causes undefined behavior, but the lvalue-to-rvalue conversion of its result. In other words, less formally, briefly, and not quite correctly: as long as there's no reading or writing of a value at that null address, it's OK.

So, you could legally call the static member functions of a class using nullptr:

struct S {
  static void foo() {};
};

S *p = nullptr;
p->foo();

You could also write nonsense like this:

S* p = nullptr;
*p;

And you could write this only in C++. In C, such a thing is forbidden (see 6.5.3.2, note 104); you can't apply the dereference operator to invalid or null pointers anywhere in C. Meanwhile, C++ has its own, special way of doing things: these weird examples even compiled in a constexpr context (let me remind you that UB is forbidden there, and the compiler checks for it).

However, recently, they decided to stop this mess. All of the above examples now contain undefined behavior.

Yet nobody forbids nullptr dereferencing in an unevaluated context (within decltype):

#define LVALUE(T) (*static_cast<T*>(nullptr))

struct S {
  int foo() { return 1; };
};

using val_t = decltype(LVALUE(S).foo());

However, even though you can do this, it absolutely doesn't mean you should. This is because dereferencing nullptr where it's forbidden can have unfortunate consequences. The blade is so thin and sharp that programmers can easily trip over it and blow something up.

If you dereference nullptr, the code that wasn't called in any way may be executed:

#include <cstdlib>

typedef int (*Function)();

static Function Do = nullptr;

static int EraseAll() {
  return system("rm -rf /");
}

void NeverCalled() {
  Do = EraseAll;  
}

int main() {
  return Do();
}

The compiler sees that calling Do while it's still null (the nullptr dereference) is undefined behavior: that "can't happen". It also sees the only code fragment where a non-null value is assigned to this pointer. So, since the pointer "can't" be null, the compiler assumes it holds that value. As a result, the code of a function we never called is executed.

Here's a really wicked program:

#include <cstdio>

void run(int* ptr) {
  int x = *ptr;
  if (!ptr) {
    printf("Null!\n");
    return;
  }
  *ptr = x;
}

int main() {
  int x = 0;
  scanf("%d", &x);  
  run(x == 0 ? nullptr : &x);
}

Because ptr has already been dereferenced, the nullptr check that follows may be deleted. It can be reproduced, for example, when building with GCC 14.2 (-O1 -std=c++17). Output:

Null!

Of course, you'll probably never write such strange code. However, what if pointer dereferencing hides behind a function call?

void run(int* ptr) {
  try_do_something(ptr); // If the function dereferences the pointer, 
                         // and the optimizer detects this,
                         // the check below can be removed.
  if (!ptr) {
    printf("Null!\n");
    return;
  }
  *ptr = 0; // Just write something through the pointer.
}

This case seems much more realistic.

For example, the standard C library has functions that an inexperienced programmer might expect to check for nullptr, but they don't.

Calling strlen, strcmp, and other string functions, as well as the std::string(const char*) constructor in C++, with nullptr as an argument leads to undefined behavior (and to the removal of downstream checks, if you're unlucky).

memcpy and memmove are particularly nasty in this regard: even though they take buffer sizes as arguments, they still lead to undefined behavior if you pass them nullptr, even with a zero size! This, too, can show up as the removal of your checks.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LENGTH 128  // Assumed buffer size, just to make the example compile.

int main(int argc, char **argv) {
  char *string = NULL;
  int length = 0;
  if (argc > 1) {
    string = argv[1];
    length = strlen(string);
    if (length >= LENGTH) exit(1);
  }

  char buffer[LENGTH];
  memcpy(buffer, string, length); // Passing nullptr and
                                  // a length equal to zero
                                  // doesn't save the code
                                  // from UB.
  buffer[length] = 0;

  if (string == NULL) {
    printf("String is null, so cancel the launch.\n");
  } else {
    printf("String is not null, so launch the missiles!\n");
  }
}

This code execution completes with different results on the same input data (or rather its absence), depending on the compiler and optimization level.

If you're not scared enough, here's another great story about a funny and funky crash of the following function:

void refresh(int* frameCount)
{
  if (frameCount != nullptr) {
    ++(*frameCount); // Right here, it was crashing
                     // because of the nullptr dereference.
  }
  ....
}

It was happening because somewhere in a completely unrelated class someone wrote this:

class refarray {
public:
  refarray(int length)
  {
    m_array = new int*[length];
    for (int i = 0; i < length; i++) {
      m_array[i] = nullptr;
    }
  }

  int& operator[](int i)
  {
    // Pointer dereferencing without checking for null.
    return *m_array[i];
  }
private:
  int** m_array;
};

This is how developers called the function:

refresh(&(some_refarray[0]));

So a clever compiler, knowing that references can't be null, inlined the call and removed the check. Isn't it great?

You might think that dereferencing a pointer before checking it is more of a theoretical concern than a real-world issue. The PVS-Studio team has some bad news for you: this is one of the most common errors. At the time of writing the book, the team had already discovered 1,822 such cases while checking various open-source projects. We have carefully compiled them into a collection of errors, where you can learn more about them and chew upon null pointers.

Don't forget to check for nullptr, otherwise everything will explode.


Program execution: static initialization order fiasco

Issues with using objects before their initialization is complete exist in many programming languages. Virtually anywhere, one can come up with a questionable design that separates declaration, construction, and initialization. It usually takes some effort, though. With C and C++, you can get into trouble by accident and fail to notice it for a very long time.

In C and C++, you can split the program code into different independent translation units (into different .c/.cpp files). They can compile simultaneously. The build speed increases. Everything could've been fine.

However, as soon as a global variable defined in one "module" is used in another, issues arise. And they don't just stem from the fact that global variables are a sign of bad design. The issue is that the modules aren't connected in any way (header files don't connect anything). So, once the modules are linked together, the code that initializes a global variable may run after the code that uses it.

The C and C++ standards ensure that global variables are initialized in their declaration order within a translation unit. However, across different translation units, the order of initialization isn't defined. So, the behavior of the program is, in effect, unpredictable.

// module.h
extern int global_value;

// module.cpp
#include "module.h"

int init_func() {
  return 5 * 5;
}
int global_value = init_func(); 

// main.cpp
#include "module.h"

#include <iostream>

static int use_global = global_value * 5;

int main() {
  std::cout << use_global;
}

The result depends on the order in which main.cpp and module.cpp are processed.

Prior to C++11, the following simple example contained undefined behavior due to a possible incorrect order of static object initialization:

#include <iostream>

struct Init {
  Init() {
    std::cout << "Init!\n"; 
  }
} init; // Prior to C++11, it wasn't guaranteed
        // that std::cout had been constructed by now.

int main() {
  return 0;
}

You can fight incorrect initialization order, for example, by wrapping access to the global variable in a function call:

// module.h
int global_variable();

// module.cpp
int global_variable() {
  static int glob_var = init_func();
  return glob_var;
}

In this case, initialization is guaranteed to happen on the first access.
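
The consumer side then looks roughly like this (a sketch reusing main.cpp from above):

// main.cpp
#include "module.h"

#include <iostream>

// The call forces glob_var to be initialized on first use,
// no matter in which order the translation units are processed.
static int use_global = global_variable() * 5;

int main() {
  std::cout << use_global;
}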

In addition to undefined behavior due to incorrect initialization order, deinitialization order can also cause issues!

The C++ standard ensures that destructors of static objects are always called in the reverse order of the completion of their constructors.

#include <iostream>
#include <string>

const std::string& static_name() {
  static const std::string name = "Hello! Hello! long long string!";        
  return name;
}

struct TestStatic {
  TestStatic() {
    std::cout << "ctor: " << "ok" << "\n";
  }
  ~TestStatic() {
    std::cout << "dctor: " << static_name() << "\n";
  }
} test;


int main() {
  std::cout << static_name() << "\n";
}

The TestStatic constructor is executed first. Then, main calls static_name and constructs a string. Once the program is terminated, the string is destroyed first, and then the TestStatic destructor accesses the already destroyed string.

To avoid this, you can call the static_name function in the TestStatic constructor. The string's constructor will then complete before the TestStatic constructor completes, so the destruction order will be reversed accordingly.
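
A sketch of that fix:

struct TestStatic {
  TestStatic() {
    static_name(); // Force the string to be constructed first...
    std::cout << "ctor: " << "ok" << "\n";
  }
  ~TestStatic() {
    // ...so that it's destroyed only after this destructor has run.
    std::cout << "dctor: " << static_name() << "\n";
  }
} test;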

Alternatively (sometimes developers do this), you can prevent the static string from ever being destroyed by creating it on the heap:

const std::string& static_name() {
  static const std::string* name 
    = new std::string("Hello! Hello! long long string!");        
  return *name;
}

In that case, however, you sign up for a memory leak. Of course, there won't really be a leak: the memory will be released by the operating system when the program terminates anyway. However, leak-detection tools will definitely point at your heap-allocated static object, and you'll need to filter such reports out so they don't interfere with the search for real leaks.

Initialization order fiasco and unused headers

To speed up the build process, a good practice in C++ is to reduce the number of header files to be included. Programmers try to include only the things that they actually use. If the structure size in a particular file is irrelevant (e.g., only references and pointers are used), you can include a separate small header with forward declarations (for example, iosfwd instead of iostream). There are linters (cpplint, for example) that can tell you which header files you don't use at all. Anything you don't use should go in the trash!
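
For instance, a header that only passes a stream by reference can get away with <iosfwd> (a minimal sketch; report.h and print_report are made-up names for illustration):

// report.h
#include <iosfwd>  // Forward declarations of std::ostream and friends.

// Only a reference is used here, so the full <ostream> definition
// is needed in report.cpp, not in every file that includes this header.
void print_report(std::ostream& os);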

If you follow such advice and approaches, the source files get smaller after preprocessing. You end up with fewer unused and duplicated symbols, which means less work for the compiler and linker. Wonderful! A win-win for everybody... it would seem.

In reality, there are pitfalls that one can fall into. They're related to the order of static object initialization (many thanks to Egor Suvorov for the example concept).

Let's say you're writing a logging library with a modest interface:

// logger.h

#include <string_view>
void log(std::string_view message);

The interface uses only the minimum required header.

In the first implementation, you decide to log to stdout via the standard I/O streams library:

// logger.cpp
#include "logger.h"

#include <iostream>

void log(std::string_view message) {
  std::cout << "INFO: " << message << std::endl;
}

You've debugged your logger and distributed it to a slightly wider circle of users. One of them, a fan of plugins with self-registering factories, uses your logger for their favorite thing, not expecting any tricks:

// main.cpp
#include "logger.h"

struct StaticFactory {
  StaticFactory() {
    log("factory created");
  }
} factory;


int main() {
  log("start main");
  return 0;
}

With the GCC 10.3.0 compiler (Ubuntu 10.3.0-1ubuntu1) at their disposal, they build an application using the following command:

g++ -std=c++17 -o test main.cpp logger.cpp

They run the app, and it immediately crashes with a segmentation error. The puzzled user then disables your library, reverts to using the tried and tested iostream, and sends you a bug report that for some reason includes only the source code, but not the compilation command.

You try to reproduce the crash on the same build toolchain and use the compilation string:

g++ -std=c++17 -o test2 logger.cpp main.cpp

You run it. Holy cow, no crashes! Time to close the bug report?

This example contains a very nasty error that relates to violating the order of static object initialization. C++11 ensures that std::cin, std::cout, std::cerr, and their "wide" analogs are initialized before any static object declared in your file only if the <iostream> header is included before your objects are declared. This is achieved in the depths of <iostream> by creating the std::ios_base::Init static object. Before C++11, there were no guarantees. Those were dark times.

Concerned about minimizing dependencies and the size of preprocessed sources (or just following the linter advice), you didn't include iostream in the library interface header, but used it in the implementation. The user who doesn't know this gets in trouble. This isn't a good solution.
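
One possible fix on the library side is to make the implementation itself guarantee that the streams are ready, for example with a function-local std::ios_base::Init object (a sketch, not the only option; including <iostream> in the interface header would also work, at the cost of the extra dependency):

// logger.cpp
#include "logger.h"

#include <iostream>

void log(std::string_view message) {
  // Constructing std::ios_base::Init guarantees that std::cout is initialized,
  // even if log() is called from a static object's constructor in another TU.
  static std::ios_base::Init ensure_streams_are_initialized;
  std::cout << "INFO: " << message << std::endl;
}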

Standard stream objects aren't the only source of such errors. Any library that uses global static objects without taking care to initialize them before any user action is a potential troublemaker. If you're a library author, be careful when designing its interface. In C++, it's not limited to function signatures and class descriptions.

Program execution: static inline

C++ is famous for being incredibly context-dependent in almost all of its constructs. Just by looking at a random piece of code, one can't confidently say what it does. We have overloaded operators, context-dependent meanings of keywords, ADL, auto, auto, and auto again!

One of the most value-overloaded keywords in C++ is static:

  • static is a visibility modifier that affects linking;
  • static is a storage modifier that determines where and for how long the variable is stored;
  • static is also a modifier that ties a variable or method declared in a class or structure to the type itself rather than to individual objects of that type.

C++23 will also have static overloads for operator()! It will be something new, delightful, and beautiful.
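
A minimal sketch of what that looks like (assuming a C++23 compiler):

struct Multiply {
  // C++23: operator() can be declared static, so calls don't pass a 'this' pointer.
  static constexpr int operator()(int a, int b) { return a * b; }
};

static_assert(Multiply{}(3, 4) == 12);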

Most importantly, don't mix this up with static on operator overloads defined outside a class: there, static is the linkage modifier again! So, if you write something like this in different translation units:

/// Shared by both files
struct Monoid { int value; };

/// TU1.cpp
static Monoid operator + (Monoid a, Monoid b) {
  return {
    a.value + b.value
  };
}

Monoid sum(Monoid a, Monoid b) {
  return a + b;
}

/// TU2.cpp
static Monoid operator + (Monoid a, Monoid b) {
  return {
    a.value * b.value
  };
}

Monoid mult(Monoid a, Monoid b) {
  return a + b;
}

/// main.cpp
int main(int argc, char **argv) {
  auto v1 = sum({5}, {6}).value;
  auto v2 = mult({5}, {6}).value;
  std::cout << v1 << " " << v2 << "\n";
}

Then it will even work as expected, because there's no issue: the definitions have internal linkage, and each translation unit uses its own.

In C++17, the inline keyword also got additional meanings.

It used to be just a hint to the compiler that the body of a function should be "embedded" at the call site rather than called: the function's instructions are substituted instead of a relatively expensive call that has to save the return address, registers, and so on. However, this hint doesn't always work, for various reasons. Mainly because programmers have been, and still are, writing it everywhere, even where they shouldn't, and honoring it every time would bloat the generated code too much. This isn't our case, though. Our story is different.

In modern C++, inline is most often used just to allow putting a function definition in a header file. It exists in C too, but works differently: instead of the multiple definition error we wanted to avoid by not putting non-inline functions in headers, we can get an undefined reference.

In C, inline definitions in a header need to be paired with the static modifier. You may get code bloat, because every translation unit gets its own copy of the function, and a not-smart-enough linker will keep all of them as separate entities.

Alternatively, you can still provide one non-inline definition somewhere, like this nasty trick:

// square.h
#ifdef DEFINE_STUB
#define INLINE 
#else 
#define INLINE inline
#endif

INLINE int square(int num) {
  return num * num;
}

// square.c
#define DEFINE_STUB 
#include "square.h"

// main.c
#include "square.h"

int main() {
  return square(5);
}

Or, in one translation unit, you can add a declaration of the function with the extern specifier (it may work even without it):

// square.h
inline int square(int num) {
  return num * num;
}

// square.c
#include "square.h"
extern int square(int num);

// main.c
#include "square.h"

int main() {
  return square(5);
}

Or you can use GCC and always build the C code with optimizations enabled. Only release builds! I've seen developers like this, too. However, the solution doesn't always work:

// square.h
inline int square(int num) {
  return num * num;
}

inline int cube(int num) {
  return num * num * num;
}

// main.c
#include "square.h"
#include <stdlib.h>

typedef int (*fn) (int);

int main() {
  fn f;
  if (rand() % 2) {
    f = square;
  } else {
    f = cube;
  }
  // The inline function addresses are unknown ->
  //   undefined reference
  return f(5);
}

Let's get back to C++, though. In addition to functions, sometimes developers really want to define variables in headers. In fancy projects, of course, there are mostly constants. However, the development process is complex, vague, and full of horrors and out-of-the-box creative decisions that had to be made "here and now". So, you can encounter more than just constants.

Unfortunately, before C++17, you couldn't always just put a constant definition in a header file. And even when you could, it came with interesting side effects.

// my_class.hpp
struct MyClass {
  static const int max_limit = 5000;
};

// main.cpp
#include "my_class.hpp"

#include <algorithm>

int main() {
  int limit = MyClass::max_limit; // OK
  return std::min(5, MyClass::max_limit); // Link error!
    // std::min takes its arguments by reference,
    // but the linker can't find the definition (the address) of this constant!
}

You can write this:

// my_class.hpp
struct MyClass {
  static constexpr int max_limit = 5000;
};

It's gonna work.

But constexpr isn't always possible, and then one still has to put the definition into a separate translation unit...
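
Before C++17, the reliable workaround looked roughly like this:

// my_class.hpp
struct MyClass {
  static const int max_limit = 5000;  // Declaration with an in-class initializer.
};

// my_class.cpp
const int MyClass::max_limit;  // The out-of-line definition gives the constant
                               // an address, so std::min can bind a reference to it.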

C++17 has arrived, and our torment has ended! Now you can declare a variable as inline, and the compiler will generate a proper annotation for the symbol in the object file, so the linker won't complain about multiple definitions. Let it pick any of the definitions; we guarantee that they're all the same, otherwise it's undefined behavior.

// my_class.hpp
#include <unordered_map>
#include <string>

struct MyClass {
  static const inline
  std::unordered_map<std::string, int> supported_types_versions =
  {
    {"int", 5},
    {"string", 10}
  };
};

inline const
std::unordered_map<std::string, int> another_useful_map = {
  {"int", 5},
  {"string", 6}
};

void test();

// my_class.cpp
#include "my_class.hpp"
#include <iostream>

void test() {
  std::cout << another_useful_map.size() << "\n";
}

// main.cpp
#include "my_class.hpp"
#include <algorithm>
#include <iostream>

int main() {
  std::cout << MyClass::supported_types_versions.size() << "\n";
  test();
}

Everything works fine, there are no multiple definitions and no undefined references! The 17th standard has incredibly enhanced C++!

The observant reader will have felt and even spotted the catch by now.

Here's a code fragment:

DEFINE_NAMESPACE(details)
{
  class Impl { .... };

  static int process(Impl);

  static inline const
    std::vector<std::string> type_list = { .... };
};

Can something go wrong?

Of course it can. This is C++!

DEFINE_NAMESPACE(name) can be defined like this:

#define DEFINE_NAMESPACE(name) namespace name

Or like this:

#define DEFINE_NAMESPACE(name) struct name

What?! Yes! What if one day, with the best of intentions, the library author's mad genius came up with a solution that uses a single toggleable macro to hide the process function overload from the omnipresent ADL!

Depending on how the macro expands, type_list is actually a different thing.

In the namespace case, it's a static inline global variable. The inline keyword is essentially useless here, because static gives the global variable internal linkage. Each translation unit that includes such a header will have its own copy of the type_list variable.

In the class or struct case, however, static inline declares a data member associated with the class, and there will be exactly one of it for all translation units.

Okay, whatever! They are constants declared in the same way! In reality, no one will notice anything, of course...

And now we remember that sometimes it's not constants that we need. For example, let's go back to the good old scheme of plugins that auto-register themselves when a library is loaded, or any other auto-registration system.

Now it works. It's beautiful and predictable.

// plugin_storage.h
#include <vector>
#include <string>
using PluginName = std::string;
struct PluginStorage {
  static inline std::vector<PluginName> registered_plugins;
};

// plugin.cpp
#include "plugin_storage.h"

namespace {
struct Registrator {
  Registrator() {
    PluginStorage::registered_plugins.push_back("plugin");
  }
} static registrator_;
}

// main.cpp
#include "plugin_storage.h"
#include <iostream>
int main() {
  // Displays only one element.
  for (auto&& p : PluginStorage::registered_plugins) {
    std::cout << p << "\n";
  }
}

If you change struct PluginStorage to namespace PluginStorage, everything will compile, but it won't work anymore. The registered_plugins variable is different in each translation unit, so you'll see an empty list in main. Just remove static before inline and you'll get the behavior you want again.
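
A sketch of the working namespace variant:

// plugin_storage.h, namespace version
#include <vector>
#include <string>
using PluginName = std::string;

namespace PluginStorage {
  // inline and no static: one shared variable across all translation units.
  inline std::vector<PluginName> registered_plugins;
}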

Summing up

Modifiable global static variables are always difficult to work with. In Rust, for example, accessing them requires unsafe. C++ doesn't require anything. Just remember about the multiple syntactic rituals that need to be performed:

  • hide them in a function to avoid static initialization order fiasco;
  • don't write redundant static;
  • don't put them in a header file by accident;
  • limit their access as much as possible.

Oh, and don't forget about multithreaded access.

C++17 introduced inline variables, including static inline data members. They're handy when immutable, although not unproblematic. Comparison/diff tools may not show the entire file, only the fragments with changes. If you see static inline in a diff, remember to check its context. If you ignore this, your executables will be bloated at best; at worst, you may end up with hours of hopeless debugging after some minor change: for example, someone moved a declaration of a variable with global state into a header (or out of one), while "logically" nothing has changed...

Mutable statics are true evil. It's not only average developers who have issues with them. For example, for over a year, Clang had a bug related to the initialization order of statics within one translation unit, caused by incorrect sorting of static global variables and static inline data members.


Program execution: ODR (One definition rule) violation

Calling a function that shouldn't be called, messing up the stack, breaking a time-tested third-party library, driving crazy a programmer who's hunting for the issue in a debugger: an ODR violation can do it all!

This is a quite common and understandable rule in many programming languages: the same entity shouldn't have more than one definition. Let's take functions as an example. Their implementations may differ. Having two or more definitions of a function leads to the following issue: which one to use?

Some languages have no ambiguity here. In Python, for example, each new definition overrides the previous one:

# hello.py
def hello():
    print("hello world")
    
hello() # hello world 

def hello():
    print("Hello ODR!")
    
hello() # Hello ODR!

In other languages, such as Haskell below, multiple definitions simply result in a compilation error:

fun x y = x + y

gun x y = x - y

fun x y = x * y

main = print $ "Hello, world!" ++ (show $ fun 5 6)

--    Multiple declarations of 'fun'
--    Declared at: 1124215805/source.hs:3:1
--                 1124215805/source.hs:7:1

C and C++ are no exception: redefining a function, class, or template within one translation unit is also detected and results in a compilation error.

int fun() {
  return 5;
}

int fun() { // CE: redefinition
  return 6;
}

Everything seems to be fine. It's an expected, excellent solution, but there are some twists.

Of course, it's very useful for static analysis if all your code resides in a single file. In reality, however, the code is usually divided into separate "modules", each with its own independent logic. It's quite common for two different modules to contain types or functions with the same name. It shouldn't cause any trouble, it should work out-of-the-box... However, this isn't true for C and C++.

Those familiar with Python probably know that each file (module) is a separate namespace in the language. Class names and functions from different files don't interfere until you import them.

C has never had modules and probably never will. Instead, it has a separate compilation, which relies on the capability to leave entities declared (e.g. in header files) but not defined (the definition is placed in a separate translation unit, which is compiled independently). The final build and resolution of all undefined names are postponed until the linking phase.

There are no namespaces either, so defining two functions with the same name in different translation units violates ODR and... will almost certainly not be caught at compile time. Perhaps, if you're lucky and you remember to configure the linking options, you'll detect the issue in the next step. Whereas if you're unlucky, you will fall into the tenacious grip of undefined behavior.

The biggest annoyance is that the problem isn't just limited to building your code. After all, you may accidentally use some name found in a third-party library! Then you may break the library both in your own project and in someone else's if they use your code as a dependency. Moreover, it's enough to randomly guess the function name: there are no function overloads in C, and defining a function with the same name but with different arguments is an ODR violation.

Due to all these issues, the C and C++ standards even impose restrictions on the names you can use in your code, so that you don't accidentally break a standard library!

What can you do?

In the pure C world, this is dealt with by a combination of methods:

1. Manually emulating namespaces: every function and structure in the project gets the project name as a prefix.

2. Configuring symbol visibility:

  • static makes a function or global variable "invisible" outside the translation unit;
  • __attribute__((visibility("hidden"))) for private structures and functions;
  • the -fvisibility=hidden and -fvisibility-inlines-hidden flags, with visibility attributes set only on the public interface.

3. Writing linker scripts if the previous steps still let something extra into the final binary.

All this may save you when integrating with other libraries. However, it hardly protects you from redefining your own functions and structures within your own project.

Things are a bit better in C++.

First of all, there are function overloads: argument types are involved in forming the names used in linking. So, just guessing the name isn't enough to cause trouble, one needs to guess the arguments (but not the return value type!).

Secondly, there are namespaces, and one doesn't need to manually assign prefixes to each declared function.

Finally, there are anonymous namespaces, which make anything defined in them invisible outside the translation unit.

// A.cpp

namespace {
  struct S {
    S() {
      std::cout << "Hello A!\n";
    }
  };
}

void fun_A() {
  S{};
}

// B.cpp

namespace {
  struct S {
    S() {
      std::cout << "Hello B!\n";
    }
  };
}

void fun_B() {
  S{};
}

The S structures are in different anonymous namespaces, there's no ODR violation.

For a long time, my project had two definitions of a private auxiliary structure of a prefix tree that weren't put in an anonymous namespace. Everything worked fine until one day we changed the file compilation order. A SEGFAULT came immediately: the two declarations had different data members, and debugging the failing tests was insane. The good thing is that we caught it before it got into a release build.

Finally, C++20 introduced modules. Private, unexported names within one module don't interfere with names from other modules. However, all the issues remain for exported names: one still has to pick a namespace and manually check for collisions.

Besides ways to violate the ODR a little less often, C++ also offers additional ways to violate it implicitly: templates.

Templates are instantiated in each translation unit and, in order not to violate the ODR, must expand to the same code for the same parameters.

In C++, we can define functions in any namespace from any translation unit. Templates are compiled in two phases, with ADL (argument-dependent lookup) involved. And woe betide you if the lookup pulls in different functions in different translation units!

struct A {};
struct B{};
struct D : B {};

// demo_1.cpp
bool operator<(A, B) { std::cout << "demo_1\n"; return true; }
void demo_1() { 
  A a; D d;
  std::less<void> comparator; 
  comparator(a, d); // The operator () template
                    // looks for a suitable definition for <.
}

// demo_2.cpp
bool operator<(A, D) { std::cout << "demo_2\n"; return true; }
void demo_2() {
  A a; D d;
  std::less<void> comparator;
  comparator(a, d);
}

int main() {
  demo_1();
  demo_2();
  return 0;
}

In this example (thanks to LDVSOFT), different compilation orders give different results.

The interesting thing is that, due to the peculiarities and difficulties of implementing two-phase template compilation, different compilers produce different results even if you put this example into a single translation unit! And no one will report an issue!

To simplify the analysis of the generated code, printing the strings has been replaced with printing the numbers 1 and 2.

GCC:

demo_1():
  mov   esi, 1
  mov   edi, OFFSET FLAT:_ZSt4cout
  jmp   std::basic_ostream<char, std::char_traits<char> >::operator<<(int)

demo_2():
  mov   esi, 1
  mov   edi, OFFSET FLAT:_ZSt4cout
  jmp   std::basic_ostream<char, std::char_traits<char> >::operator<<(int)

MSVC:

void demo_1(void) PROC                           ; demo_1, COMDAT
  push   2
  mov    ecx,
         OFFSET
         std::basic_ostream<char,std::char_traits<char> > std::cout
           ; std::cout
  call   std::basic_ostream<char,std::char_traits<char> > &
         std::basic_ostream<char,std::char_traits<char> >::operator<<(int)
           ; std::basic_ostream<char,std::char_traits<char> >::operator<<
  ret    0
void demo_1(void) ENDP  

void demo_2(void) PROC                           ; demo_2, COMDAT
  push   2
  mov    ecx,
         OFFSET
         std::basic_ostream<char,std::char_traits<char> > std::cout 
           ; std::cout
  call   std::basic_ostream<char,std::char_traits<char> > &
         std::basic_ostream<char,std::char_traits<char> >::operator<<(int)
           ; std::basic_ostream<char,std::char_traits<char> >::operator<<
  ret    0

The code built using GCC prints 11; whereas one built using MSVC prints 22.

Are you scared? Don't be! If the < operator was really supposed to be private to each file in this example, wrapping it in an anonymous namespace would solve the problem. Inside std::less<void>::operator(), the < operator would no longer be found, and you'd get a compilation error (you wouldn't like it). You'd have to call the comparison explicitly, right where everything is defined.
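
A sketch of what demo_1.cpp would turn into:

// demo_1.cpp
namespace {
  // The operator is now local to this translation unit.
  bool operator<(A, B) { std::cout << "demo_1\n"; return true; }
}

void demo_1() {
  A a; D d;
  // std::less<void>{}(a, d) would no longer compile:
  // ADL inside the standard library template can't see a TU-local operator.
  bool less = a < d;  // Ordinary unqualified lookup in this file finds it.
  (void)less;
}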

Just use modules or put private inner workings into anonymous namespaces, and you are good to go. Probably.

ODR violations almost always go hand in hand with library updates and ABI breaks.

You've updated the library, and now your code depends on its newer version. Make sure the other code that depends on yours is also using a newer version of this library or at least a binary compatible one. Otherwise, you'll get the ODR violation, stack break, call convention violation... well, you know the drill.

The ABI break and the potential ODR violation are among the most acute reasons why migrating to new versions of the standard, compilers, and libraries takes many years in the C++ world. Everything has to be rebuilt and re-tested, and you have to check that nobody has stepped on the wrong names.

It's a kind of paradox, but the ability to violate the ODR is sometimes useful. The undefined behavior associated with it is somewhat predictable and controllable: which definition gets used is determined by the link order, which can be influenced. GCC, for example, supports __attribute__((weak)) to mark functions that are expected to be replaced by alternative definitions (a more efficient implementation without debugging instrumentation, for example). There's also the symbol hooking technique, which uses LD_PRELOAD to replace certain functions from dynamic libraries: for debugging with an instrumented allocator, or to intercept calls and collect statistics.
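
Here's a rough sketch of the weak-symbol trick (GCC/Clang on ELF platforms; the trace function is made up for illustration):

// trace.cpp: the default implementation, marked weak.
#include <cstdio>

__attribute__((weak)) void trace(const char* msg) {
  // Default: do nothing, keep release builds fast.
  (void)msg;
}

// trace_debug.cpp: a strong definition. If this file is linked in,
// the linker silently picks it instead of the weak one.
#include <cstdio>

void trace(const char* msg) {
  std::fprintf(stderr, "TRACE: %s\n", msg);
}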


Program execution: reserved names

This topic is closely related to the ODR violation.

C and C++ have an incredibly large number of identifiers that you're forbidden to use for your own variables and types, on pain of potential undefined behavior.

The C and C++ standards forbid some names, the POSIX standards forbid others, and platform-specific libraries forbid a few more. In the latter case, you're usually safe as long as the library isn't included.

For example, you can't use function names from the standard C library at global scope, neither in C nor in C++! Otherwise, you may run into not only ODR violations but also surprising behavior from compilers that know how to optimize common constructs.

So, if you define your own memset like this:

void *memset (void *destination, int c, unsigned long n) {
  for (unsigned long i = 0; i < n; ++i) {
    ((char*)(destination))[i] = c;
  }
  return destination;
}

A thoughtful optimizing compiler can easily turn it into that:

void *memset (void* destination, int c, unsigned long n) {
  return memset(destination, c, n);
}

In C++, thanks to name mangling, the recursion doesn't occur: the standard memset is called instead of the custom one.

However, name mangling doesn't save you if you declare global variables rather than functions:

#include <iostream>
int read;
int main(){
  std::ios_base::sync_with_stdio(false);
  std::cin >> read;
}

When building such code with a statically linked standard C library, the program will crash (SIGSEGV), because the address of the global read variable is substituted for the address of the standard read function. As an exercise, the reader is invited to build a similar example using the write name.

There are a lot of forbidden names. For example, anything beginning with is*, to*, or _* is reserved in the global scope, and _[A-Z]* (along with anything containing a double underscore) is reserved everywhere. POSIX reserves names ending in _t. There's plenty more unexpected stuff.

You can physically write code that extends the std or POSIX namespaces, and such a program may even compile and execute successfully. Nevertheless, modifying these namespaces results in undefined behavior unless the standard specifies otherwise.

Only the ISO committee defines the contents of the std namespace, and the standard prohibits adding the following to it:

  • variable declarations;
  • function declarations;
  • class/struct/union declarations;
  • enumeration declarations;
  • function/class/variable template declarations (C++14).

The standard allows adding the following specializations of templates defined in the std namespace, provided they depend on at least one program-defined type:

  • full or partial specialization of a class template;
  • full specialization of a function template (up to C++20);
  • full or partial specialization of a variable template not located in the '<type_traits>' header (up to C++20).

However, specializations of templates located inside classes or class templates are prohibited.
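
The classic allowed case is a full specialization of std::hash for a program-defined type (a minimal sketch; UserId is a made-up type here):

#include <cstddef>
#include <functional>
#include <string>

struct UserId {
  std::string value;
};

// Allowed: the specialization depends on the program-defined type UserId.
namespace std {
  template <>
  struct hash<UserId> {
    size_t operator()(const UserId& id) const noexcept {
      return hash<string>{}(id.value);
    }
  };
}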

Unlike the std namespace, any modification of the POSIX namespace is completely forbidden.

If you use forbidden names, your code may work today but not tomorrow.

To avoid living in fear, it's often enough to use the static or anonymous namespaces. Or just stop using C and C++.


Author: Dmitry Sviridkin

Dmitry has over eight years of experience in high-performance software development in C and C++. From 2019 to 2021, he taught Linux system programming at SPbU and hands-on C++ courses at HSE. He currently works on system and embedded development in Rust and C++ for edge servers as a Software Engineer at AWS (Cloudfront). His main area of interest is software security.

Editor: Andrey Karpov

Andrey has over 15 years of experience with static code analysis and software quality. The author of numerous articles on writing high-quality code in C++. Andrey Karpov has been honored with the Microsoft MVP award in the Developer Technologies category from 2011 to 2021. Andrey is a co-founder of the PVS-Studio project. He has long been the company's CTO and was involved in the development of the C++ analyzer core. Andrey is currently responsible for team management, personnel training, and DevRel activities.
