Design and evolution of constexpr in C++


Jan 13 2022

constexpr is one of the magic keywords of modern C++. You can use it to create code that is executed before the compilation process even ends. This is the absolute upper limit for software performance.

We published and translated this article with the copyright holder's permission. The author is Evgeny Shulgin (izaronplatz@gmail.com). The article was originally published on Habr. We'd also like to invite you to read other theoretical articles tagged #Knowledge.

constexpr gets new features every year. By now, you can involve almost the entire standard library in compile-time evaluations. Take a look at this code: it calculates the number under 1000 that has the largest number of divisors.
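Such a compile-time search can be sketched as follows (my own illustration, not the author's original snippet; the function names are invented, and C++14 or later is assumed for loops in constexpr functions):

```cpp
#include <cassert>

// Count the divisors of n by pairing each divisor d with n / d.
constexpr int count_divisors(int n)
{
    int count = 0;
    for (int d = 1; d * d <= n; ++d)
        if (n % d == 0)
            count += (d * d == n) ? 1 : 2;
    return count;
}

// Find the number under 1000 with the largest number of divisors.
constexpr int most_divisors_under_1000()
{
    int best = 1, best_count = 1;
    for (int n = 2; n < 1000; ++n)
    {
        const int c = count_divisors(n);
        if (c > best_count)
        {
            best = n;
            best_count = c;
        }
    }
    return best;
}

// The whole search runs inside the constant evaluator:
// 840 = 2^3 * 3 * 5 * 7 has (3+1)*2*2*2 = 32 divisors.
static_assert(most_divisors_under_1000() == 840,
              "the search must finish at compile time");
```

The static_assert forces the entire search to run inside the constant evaluator; no trace of the loops remains in the generated binary.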

constexpr has a long history that starts with the earliest versions of C++. Examining standard proposals and compilers' source code helps us understand how this part of the language was created layer by layer: why it looks the way it does, how constexpr expressions are evaluated, which features we can expect in the future, and what could have become part of constexpr but was not approved for the standard.

This article is for those who do not know about constexpr yet - and for those who've been using it for a long time.


C++98 and C++03: Ranks among const variables

In C++, it is sometimes necessary to use integer constants whose values must be available at compile time. The standard allows you to write such constants as simple expressions, as in the code below:

enum EPlants
{
  APRICOT = 1 << 0,
  LIME = 1 << 1,
  PAPAYA = 1 << 2,
  TOMATO = 1 << 3,
  PEPPER = 1 << 4,
  FRUIT = APRICOT | LIME | PAPAYA,
  VEGETABLE = TOMATO | PEPPER,
};

template<int V> int foo();
int foo6 = foo<1+2+3>();
int foo110 = foo<(1 < 2) ? 10*11 : VEGETABLE>();

int v;
switch (v)
{
case 1 + 4 + 7:
case 1 << (5 | sizeof(int)):
case (12 & 15) + PEPPER:
  break;
}

These expressions are described in the [expr.const] section and are called constant expressions. They can contain only the following:

  • Literals of integral types (for example, integer literals);
  • enum values;
  • An enum or integral non-type template parameter (for example, the V value from template <int V>);
  • The sizeof expression;
  • const variables initialized with a constant expression (this is the interesting point).

All the points except the last one are obvious – they are known and can be accessed at compile time. The case with variables is more intriguing.

For variables with static storage duration, in most cases, memory is filled with zeros and is changed at runtime. However, it is too late for the variables from the list above – their values need to be evaluated before compilation is finished.

There are two types of static initialization in the C++98/03 standards:

  • zero-initialization, when memory is filled with zeros and the value changes at runtime;
  • initialization with a constant expression, when an evaluated value is written to the memory at once (if needed).

Note. All other initializations are called dynamic initialization; we do not review them here.

Note. A variable that was zero-initialized can later be initialized again the "normal" way. That is already dynamic initialization (even if it happens before main is called).

Let's review this example with both types of variable initialization:

int foo()
{
  return 13;
}

const int test1 = 1 + 2 + 3 + 4;  // initialization with a const. expr.
const int test2 = 15 * test1 + 8; // initialization with a const. expr.
const int test3 = foo() + 5;      // zero-initialization
const int test4 = (1 < 2) ? 10 * test3 : 12345; // zero-initialization
const int test5 = (1 > 2) ? 10 * test3 : 12345; // initialization with
                                                // a const. expr.

You can use variables test1, test2, test5 as a template parameter, as an expression to the right of case in switch, etc. You cannot do this with variables test3 and test4.

As you can see from the requirements for constant expressions and from the example, the property is transitive: if some part of an expression is not a constant expression, the entire expression is not a constant expression. Note that only the parts of the expression that are actually evaluated matter – which is why test4 and test5 fall into different groups.
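To make the distinction concrete, here is a minimal sketch of my own (the name twice is invented) showing where such a variable is accepted as a non-type template argument:

```cpp
#include <cassert>

const int test1 = 1 + 2 + 3 + 4; // initialized with a constant expression

template <int V>
int twice() { return 2 * V; }

// OK: test1 is usable as a non-type template argument.
int ok = twice<test1>();

// Would NOT compile: a zero-initialized variable such as test3 above
// gets its value only at runtime, so it is not a constant expression.
// int bad = twice<test3>();
```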

If a constant-expression variable's address is never taken, the compiled program is allowed to skip reserving memory for that variable – so let's force the program to reserve the memory by taking the addresses. Let's print the variables' values and addresses:

int main()
{
  std::cout << test1 << std::endl;
  std::cout << test2 << std::endl;
  std::cout << test3 << std::endl;
  std::cout << test4 << std::endl;
  std::cout << test5 << std::endl;

  std::cout << &test1 << std::endl;
  std::cout << &test2 << std::endl;
  std::cout << &test3 << std::endl;
  std::cout << &test4 << std::endl;
  std::cout << &test5 << std::endl;
}

izaron@izaron:~/cpp$ clang++ --std=c++98 a.cpp 
izaron@izaron:~/cpp$ ./a.out 
10
158
18
180
12345
0x402004
0x402008
0x404198
0x40419c
0x40200c

Now let's compile an object file and look at the table of symbols:

izaron@izaron:~/cpp$ clang++ --std=c++98 a.cpp -c
izaron@izaron:~/cpp$ objdump -t -C a.o

a.o:     file format elf64-x86-64

SYMBOL TABLE:
0000000000000000 l    df *ABS*  0000000000000000 a.cpp
0000000000000080 l     F .text.startup  0000000000000015 _GLOBAL__sub_I_a.cpp
0000000000000000 l     O .rodata        0000000000000004 test1
0000000000000004 l     O .rodata        0000000000000004 test2
0000000000000004 l     O .bss   0000000000000004 test3
0000000000000008 l     O .bss   0000000000000004 test4
0000000000000008 l     O .rodata        0000000000000004 test5

The compiler – its specific version for a specific architecture – placed a specific program's zero-initialized variables into the .bss section, and the remaining variables into the .rodata section.

Before launch, the loader maps the program so that the .rodata section ends up in a read-only segment, which is write-protected at the OS level.

Let's try to use const_cast to modify the data stored at the variables' addresses. The standard is not entirely clear about when writing through the result of a const_cast causes undefined behavior. At the very least, it is not UB when we remove const from an object (or a pointer to an object) that was not originally constant. In other words, it's important to distinguish physical constness from logical constness.

The UB sanitizer catches UB (the program crashes) if we try to edit the .rodata variable. There is no UB if we write to .bss or automatic variables.

const int &ref = testX;
const_cast<int&>(ref) = 13; // OK for test3, test4;
                            // SEGV for test1, test2, test5
std::cout << ref << std::endl;

Thus, some constant variables are "more constant" than others. As far as we know, at that time, there was no simple way to check or monitor that a variable had been initialized with a const. expr.

0-∞: Constant evaluator in compiler

To understand how constant expressions are evaluated during compilation, first you need to understand how the compiler is structured.

Compilers are conceptually similar to one another. I'll describe how Clang/LLVM evaluates constant expressions. The basic information about this compiler below is copied from my previous article:

[SPOILER BLOCK BEGINS]

Clang and LLVM

Many articles talk about Clang and LLVM. To learn more about their history and general structure, you can read this article at Habr.

The number of compilation stages depends on who explains the compiler's design. The compiler's anatomy is multilevel. At the most abstract level, the compiler looks like a fusion of three programs:

  • Front-end: converts the source code from C/C++/Ada/Rust/Haskell/... into LLVM IR – a special intermediate representation. Clang is the front-end for the C language family.
  • Middle-end: LLVM IR is optimized depending on the settings.
  • Back-end: LLVM IR is converted into machine code for the required platform - x86/Arm/PowerPC/...

For simple languages, one can easily write a compiler of about 1000 lines of source code and still get all the power of LLVM – only the front-end needs to be implemented.

At a less abstract level is Clang's front-end that performs the following actions (not including the preprocessor and other "micro" steps):

  • Lexical analysis: converting characters into tokens. For example, []() { return 13 + 37; } is converted to (l_square) (r_square) (l_paren) (r_paren) (l_brace) (return) (numeric_constant:13) (plus) (numeric_constant:37) (semi) (r_brace).
  • Syntactic analysis: creating an AST (Abstract Syntax Tree), that is, translating the tokens from the previous step into the following form: (lambda-expr (body (return-expr (plus-expr (number 13) (number 37))))).
  • Code generation: creating LLVM IR for specific AST.

[SPOILER BLOCK ENDS]

So, evaluating constant expressions (and entities that are closely related to them, like template instantiation) takes place strictly in the C++ compiler's (Clang's in our case) front-end. LLVM does not do such things.

Let's tentatively call the compiler subsystem that evaluates constant expressions (from the simplest ones in C++98 to the most complicated ones in C++23) the constant evaluator.

If, according to the standard, a constant expression is expected at some location in the code, and the expression there meets the requirements for constant expressions, then Clang must be able to evaluate it right then and there, in 100% of cases.

Constant expression restrictions have been constantly softened over the years, while Clang's constant evaluator kept getting more advanced – reaching the ability to manage the memory model.

Nine-year-old documentation describes how constants were evaluated in C++98/03. Since constant expressions were very simple back then, they were evaluated with conventional constant folding via analysis of the abstract syntax tree (AST). Since all arithmetic expressions in a syntax tree are already broken apart into sub-trees, evaluating a constant is a simple traversal of the sub-tree.
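To illustrate the idea with a toy model (my own sketch, not Clang's actual code): constant folding is just a post-order traversal that collapses each arithmetic sub-tree into a single value:

```cpp
#include <cassert>

// A toy expression tree node: either a literal or a binary operation.
struct Expr
{
    char op;         // '\0' for a literal, '+' or '*' for an operation
    int value;       // used only when op == '\0'
    const Expr* lhs; // sub-trees, null for literals
    const Expr* rhs;
};

// "Constant folding": a post-order traversal that collapses
// each sub-tree into one value.
int fold(const Expr& e)
{
    if (e.op == '\0')
        return e.value;
    int l = fold(*e.lhs);
    int r = fold(*e.rhs);
    return e.op == '+' ? l + r : l * r;
}

// The tree for (13 + 37) * 2, as a parser might build it.
const Expr n13 {'\0', 13, nullptr, nullptr};
const Expr n37 {'\0', 37, nullptr, nullptr};
const Expr sum {'+', 0, &n13, &n37};
const Expr two {'\0', 2, nullptr, nullptr};
const Expr prod{'*', 0, &sum, &two};
```

Here fold(prod) collapses the tree to 100, just as the evaluator replaces a constant sub-tree with its result.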

The constant evaluator's source code is located in lib/AST/ExprConstant.cpp and had grown to almost 16 thousand lines by the time this article was written. Over the years, it learned to interpret a lot of things, for example, loops (EvaluateLoopBody) – all of this based on the syntax tree.

The big difference between constant expressions and code executed at runtime is that constant expressions are required to be free of undefined behavior. If the constant evaluator stumbles upon UB, compilation fails:

c.cpp:15:19: error: constexpr variable 'foo' must be initialized by a
                    constant expression
    constexpr int foo = 13 + 2147483647;
                  ^     ~~~~~~~~~~~~~~~

The constant evaluator is used not only for constant expressions, but also to look for potential bugs in the rest of the code – a side benefit of the technology. Here's how it can detect an overflow in non-constant code (you get a warning):

c.cpp:15:18: warning: overflow in expression; result is -2147483636
                      with type 'int' [-Winteger-overflow]
    int foo = 13 + 2147483647;
                 ^

2003: No need for macros

Changes to the standard occur through proposals.

[SPOILER BLOCK BEGINS]

Where are proposals located and what do they consist of?

All proposals to the standard are located at open-std.org. Most of them have detailed descriptions and are easy to read. Usually, proposals contain the following:

  • A short review of the area with links to standard sections;
  • Current problems;
  • The proposed solution to the problems;
  • Suggested changes to the standard's text;
  • Links to previous precursor proposals and previous revisions of the proposal;
  • In advanced proposals – links to their implementation in a compiler's fork. For the proposals that I saw, the authors implemented the proposal in Clang's fork.

One can use the links to precursor proposals to track how each piece of C++ evolved.

Not all proposals from the archive were eventually accepted (although some of them were used as a base for accepted proposals), so it's important to understand that they describe some alternative version of C++ of the time, and not a piece of modern C++.

Anyone can participate in the C++ evolution – Russian-speaking experts can use the stdcpp.ru website.

[SPOILER BLOCK ENDS]

[N1521] Generalized Constant Expressions was proposed in 2003. It points out the problem that if part of an expression contains a function call, the entire expression is not considered a constant expression. This forces developers – whenever they need a more or less complex constant expression – to overuse macros:

#define SQUARE(X) ((X) * (X))
inline int square(int x) { return x * x; }
// ^^^ the macro and method definition
square(9)
std::numeric_limits<int>::max()
// ^^^ cannot be a part of a constant expression
SQUARE(9)
INT_MAX
// ^^^ theoretically can be a part of a constant expression

This is why the proposal suggests introducing the concept of constant-valued methods, which would be allowed in constant expressions. A method is considered constant-valued if it is inline, non-recursive, does not return void, and its body consists of a single return expr; statement. Substituting arguments that are themselves constant expressions yields a constant expression.

Note. Looking ahead, the term constant-valued didn't catch on.

int square(int x) { return x * x; }         // constant-valued
long long_max(int x) { return 2147483647; } // constant-valued
int abs(int x) { return x < 0 ? -x : x; }   // constant-valued
int next(int x) { return ++x; }             // NOT constant-valued

Thus, all variables from the previous section (test1-5) would become "fundamentally" constant, with no changes in code.

The proposal believes that it's possible to go even further. For example, this code should also compile:

struct cayley
{
  const int value;
  cayley(int a, int b)
    : value(square(a) + square(b)) {}
  operator int() const { return value; }
};

std::bitset<cayley(98, -23)> s; // eq. to bitset<10133>

The reason is that the value member is "fundamentally constant": it is initialized in the constructor with a constant expression containing two calls to a constant-valued method. Consequently, following the proposal's general logic, the code above can be transformed into something like this (by moving the variables and methods out of the structure):

// imitating constructor calls: cayley::cayley(98, -23) and operator int()
const int cayley_98_m23_value = square(98) + square(-23);

int cayley_98_m23_operator_int()
{
  return cayley_98_m23_value;
}

// creating a bitset
std::bitset<cayley_98_m23_operator_int()> s; // eq. to bitset<10133>

Proposals do not usually focus deeply on how compilers can implement them. This proposal says that there should not be any difficulties: one just needs to slightly extend the constant folding that already exists in most compilers.

Note. However, proposals cannot exist in isolation from compilers – proposals impossible to be implemented in a reasonable time are unlikely to be approved.

As with variables, a developer cannot check whether a method is constant-valued.

2006-2007: When it all becomes clear

Luckily, three years later, in the next revisions of this proposal ([N2235]), it became clear that the feature would have brought too much uncertainty, and this was not good. One more item was then added to the list of problems – the inability to control the kind of initialization:

struct S
{
  static const int size;
};

const int limit = 2 * S::size; // dynamic initialization
const int S::size = 256; // constant expression initialization
const int z = std::numeric_limits<int>::max(); // dynamic initialization

The programmer intended limit to be initialized by a constant expression, but this does not happen, because S::size is defined "too late", after limit. If it were possible to request the required initialization type, the compiler would have produced an error.

The same goes for methods. Constant-valued methods were renamed to constant-expression methods. The requirements for them remained the same, but now, in order to use such a method in a constant expression, it has to be declared with the constexpr keyword. Compilation fails if the method body is not a valid return expr; statement.

Compilation would also fail with the error constexpr function never produces a constant expression if a constexpr method cannot be used in any constant expression. This helps the developer make sure that a method can at least potentially be used in a constant expression.

The proposal suggests tagging some methods from the standard library (for example, in std::numeric_limits) as constexpr, where they meet the requirements for constexpr methods.

Variables or class members can also be declared as constexpr - then the compilation will fail if a variable is not initialized through a constant expression.

At that time, it was decided to keep the new keyword compatible with variables that are implicitly initialized with a constant expression but declared without the constexpr keyword. This means the code below worked (looking ahead: with --std=c++11 this code does not compile, and it is possible that it never worked at all):

const double mass = 9.8;
constexpr double energy = mass * square(56.6); // OK, although mass 
                                               // was not defined 
                                               // with constexpr
extern const int side;
constexpr int area = square(side); // error: square(side) is not
                                   // a constant expression

Constant-expression constructors for user-defined types were also legalized. Such a constructor must have an empty body and initialize its members with constant expressions when a developer creates a constexpr object of that class.

The implicitly-defined constructor is marked constexpr whenever possible. Destructors of constexpr objects must be trivial, since non-trivial ones usually change something in the context of a running program, and no such context exists in constexpr evaluations.

Example of a class with constexpr members, from the proposal:

struct complex
{
  constexpr complex(double r, double i) : re(r), im(i) { }

  constexpr double real() { return re; }
  constexpr double imag() { return im; }

private:
  double re;
  double im;
};

constexpr complex I(0, 1); // OK -- literal complex

The proposal called objects like the I object user-defined literals. A "literal" is something like a basic entity in C++. "Simple" literals (numbers, characters, etc) are passed as they are into assembler commands. String literals are stored in a section similar to .rodata. Similarly, user-defined literals also have their own place somewhere there.

Now, aside from numbers and enumerations, constexpr variables could be represented by literal types introduced in this proposal (so far without reference types). A literal type is a type that can be passed to a constexpr function, and/or modified and/or returned from it. These types are fairly simple. Compilers can easily support them in the constant evaluator.
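A minimal sketch of my own (in the C++11 subset; Point and midpoint are invented names) of a literal class traveling through a constexpr function:

```cpp
#include <cassert>

// A literal type: constexpr constructor, trivial destructor,
// all members of literal types.
struct Point
{
    constexpr Point(int x, int y) : x_(x), y_(y) {}
    constexpr int x() const { return x_; }
    constexpr int y() const { return y_; }
private:
    int x_, y_;
};

// A literal type can be passed into and returned from a constexpr function.
constexpr Point midpoint(Point a, Point b)
{
    return Point((a.x() + b.x()) / 2, (a.y() + b.y()) / 2);
}

constexpr Point m = midpoint(Point(0, 0), Point(10, 20));
static_assert(m.x() == 5 && m.y() == 10, "evaluated at compile time");
```

Note the C++11 restrictions still apply here: every function body is a single return statement.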

The constexpr keyword became a specifier that the compiler verifies – similarly to override in classes. After the proposal was discussed, it was decided not to create a new storage class (although that would have made sense) or a new type qualifier. Using constexpr on function parameters was not allowed, so as not to overcomplicate the overload resolution rules.

2007: First constexpr for data structures

That year, the [N2349] Constant Expressions in the Standard Library proposal was submitted. It tagged as constexpr some functions and constants, as well as some container functions, for example:

template<size_t N>
class bitset
{
  // ...
  constexpr bitset();
  constexpr bitset(unsigned long);
  // ...
  constexpr size_t size();
  // ...
  constexpr bool operator[](size_t) const;
};

Constructors initialize class members through a constant expression, other methods contain return expr; in their body. This return expression meets the current requirements.

Over half of the proposals about constexpr talk about tagging some functions from the standard library as constexpr. There are always more proposals like this after each new step of the constexpr evolution. And almost always they are not very interesting.

2008: Recursive constexpr methods

constexpr methods were not initially intended to be made recursive, mainly because there were no convincing arguments in favor of recursion. Then the restriction was lifted, which was noted in [N2826] Issues with Constexpr.

constexpr unsigned int factorial( unsigned int n )
{
  return n==0 ? 1 : n * factorial( n-1 );
}

Compilers have a limit on nested calls. Clang, for example, processes a maximum of 512 nested calls by default (the limit can be raised with the -fconstexpr-depth flag). If this number is exceeded, the compiler refuses to evaluate the expression.

Similar limits exist for template instantiation (for example, if we used templates instead of constexpr to do compile-time evaluations).

2010: "const T&" as arguments in constexpr methods

At this point, many functions could not be tagged as constexpr because of const reference parameters. Parameters of constexpr methods had to be passed by value – i.e. copied.

template< class T >
constexpr const T& max( const T& a, const T& b ); // does not compile

constexpr pair(); // can use constexpr
pair(const T1& x, const T2& y); // cannot use constexpr

Proposal [N3039] Constexpr functions with const reference parameters (a summary) allows constant references in function arguments and as a return value.

This is a dangerous change: before that, the constant evaluator dealt with simple expressions and constexpr variables (a literal-class object is essentially a set of constexpr variables). The introduction of references breaks through the "fourth wall": references belong to a memory model, and the evaluator does not have one.

Overall, working with references or pointers in constant expressions turns a C++ compiler into a C++ interpreter, so various limitations are set.

If the constant evaluator can process a function with an argument of type T, it can also process the same function with a const T& argument – it simply "imagines" that a temporary object is created for that argument.

The compiler rejects code that requires more or less complicated work with such references, or that tries to break out of the sandbox:

template<typename T> constexpr T self(const T& a) { return *(&a); }
template<typename T> constexpr const T* self_ptr(const T& a) { return &a; }

template<typename T> constexpr const T& self_ref(const T& a)
{
  return *(&a);
}

template<typename T> constexpr const T& near_ref(const T& a)
{
  return *(&a + 1);
}

constexpr auto test1 = self(123);     // OK
constexpr auto test2 = self_ptr(123); // FAIL, pointer to temporary is not
                                      // a constant expression
constexpr auto test3 = self_ref(123); // OK
constexpr auto test4 = near_ref(123); // FAIL, read of dereferenced
                                      // one-past-the-end pointer is not
                                      // allowed in a constant expression

2011: static_assert in constexpr methods

Proposal [N3268] static_assert and list-initialization in constexpr functions introduces the ability to write "static" declarations that do not affect how the function operates: typedef, using, static_assert. This slightly loosens the restrictions on constexpr functions.
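A small sketch of what this enables (my own example; bytes_to_bits is an invented name): a C++11 constexpr function body may now contain typedef, using, and static_assert declarations before its single return statement.

```cpp
#include <cassert>
#include <type_traits>

template <typename T>
constexpr T bytes_to_bits(T n)
{
    // Allowed inside a C++11 constexpr function since N3268:
    typedef T value_type;
    static_assert(std::is_integral<value_type>::value,
                  "bytes_to_bits requires an integral type");
    return n * 8;
}

static_assert(bytes_to_bits(4) == 32, "evaluated at compile time");
```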

2012: (Almost) any code in constexpr functions

In 2012, there was a big leap forward with the proposal [N3444] Relaxing syntactic constraints on constexpr functions. There are many simple functions that would preferably be executed at compile time, for example, raising a number to the power n:

// Compute a to the power of n
int pow(int a, int n)
{
  if (n < 0)
    throw std::range_error("negative exponent for integer power");
  if (n == 0)
    return 1;
  int sqrt = pow(a, n/2);
  int result = sqrt * sqrt;
  if (n % 2)
    return result * a;
  return result;
}

However, in order to write its constexpr variant, developers had to go out of their way and write in a functional style (no local variables and no if-statements):

constexpr int pow_helper(int a, int n, int sqrt)
{
  return sqrt * sqrt * ((n % 2) ? a : 1);
}

// Compute a to the power of n
constexpr int pow(int a, int n)
{
  return (n < 0)
    ? throw std::range_error("negative exponent for integer power")
    : (n == 0) ? 1 : pow_helper(a, n, pow(a, n/2));
}

This is why the proposal wants to allow adding any code to constexpr functions - with some restrictions:

  • It's impossible to use loops (for/while/do/range-based for), because variable changes are not allowed in constant expressions;
  • switch and goto are forbidden so that the constant evaluator does not simulate complex control flows;
  • As with the old restrictions, functions should theoretically have a set of arguments that enable you to use these functions in constant expressions. Otherwise, the compiler assumes a function was marked as constexpr accidentally, and the compilation will fail with constexpr function never produces a constant expression.

Local variables - if they have the literal type - can be declared within these functions. If these variables are initialized with a constructor, it must be a constexpr constructor. This way, when processing a constexpr function with specific arguments, the constant evaluator can create a "background" constexpr variable for each local variable, and then use these "background" variables to evaluate other variables that depend on the variables that have just been created.

Note. There can't be too many such variables, because of the strict limit on the depth of nested calls.
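Under these relaxed rules, the pow example above can finally be written naturally (a sketch of my own, renamed ipow to avoid clashing with the standard pow; it compiles as C++14, which adopted a superset of these rules):

```cpp
#include <cassert>
#include <stdexcept>

// Local variables of literal type and if-statements are now allowed
// inside a constexpr function.
constexpr int ipow(int a, int n)
{
    if (n < 0)
        throw std::range_error("negative exponent for integer power");
    if (n == 0)
        return 1;
    int root = ipow(a, n / 2); // a "background" variable
    int result = root * root;  // for the constant evaluator
    if (n % 2)
        return result * a;
    return result;
}

static_assert(ipow(3, 5) == 243, "3^5 evaluated at compile time");
```

The throw is fine as long as the constant evaluator never reaches it; evaluating ipow(2, -1) in a constant expression would fail to compile.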

You can declare static variables inside such methods. These variables may have a non-literal type (so that, for example, the method can return references to them; the references themselves are of a literal type). However, these variables must not require dynamic initialization (i.e. the initialization must be static: zero initialization or initialization with a constant expression). The proposal gives an example where this feature could be useful – getting a reference to the necessary object at compile time:

constexpr mutex &get_mutex(bool which)
{
  static mutex m1, m2; // non-const, non-literal, ok
  if (which)
    return m1;
  else
    return m2;
}

Declaring types (class, enum, etc.) and returning void was also allowed.

2013: (Almost) any code allowed in constexpr functions ver 2.0 Mutable Edition

However, the Committee decided that supporting loops (at least for) in constexpr methods is a must-have. In 2013 an amended version of the [N3597] Relaxing constraints on constexpr functions proposal came out.

It described four ways to implement the "constexpr for" feature.

One of the choices was very far from "general C++". It involved creating a completely new iteration construct that would fit the functional style of constexpr code of the time. But that would have created a new sub-language: functional-style constexpr C++.

The choice closest to "general C++" took the opposite approach: instead of a limited special-purpose construct, the idea was to try to support a broad subset of C++ in constexpr (ideally, all of it). This option was selected, and it significantly affected constexpr's subsequent history.

This is why object mutability became necessary within constexpr evaluations. According to the proposal, an object created within a constexpr expression can now be changed during the evaluation process – until the evaluation process or the object's lifetime ends.

These evaluations still take place inside their "sandbox", nothing from the outside affects them. So, in theory, evaluating a constexpr expression with the same arguments will produce the same result (not counting the float- and double- calculation errors).

For a better understanding I copied a code snippet from the proposal:

constexpr int f(int a)
{
  int n = a;
  ++n;                  // '++n' is not a constant expression
  return n * a;
}

int k = f(4);           // OK, this is a constant expression.
                        // 'n' in 'f' can be modified because its lifetime
                        // began during the evaluation of the expression.

constexpr int k2 = ++k; // error, not a constant expression, cannot modify
                        // 'k' because its lifetime did not begin within
                        // this expression.

struct X
{
  constexpr X() : n(5)
  {
    n *= 2;             // not a constant expression
  }
  int n;
};

constexpr int g()
{
  X x;                  // initialization of 'x' is a constant expression
  return x.n;
}

constexpr int k3 = g(); // OK, this is a constant expression.
                        // 'x.n' can be modified because the lifetime of
                        // 'x' began during the evaluation of 'g()'.

Let me note here that nowadays the following code compiles:

constexpr void add(X& x)
{
  x.n++;
}

constexpr int g()
{
  X x;
  add(x);
  return x.n;
}

Right now, a significant part of C++ can work within constexpr functions. Side effects are also allowed - if they are local within a constexpr evaluation. The constant evaluator became more complex, but still could handle the task.
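The result: mundane imperative code with loops and local mutation now runs in the constant evaluator. A minimal C++14 sketch of my own:

```cpp
#include <cassert>

// Loops and local mutation are allowed: all side effects stay inside
// the "sandbox" of this particular constant evaluation.
constexpr int sum_of_squares(int n)
{
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i * i;
    return total;
}

static_assert(sum_of_squares(10) == 385, "evaluated at compile time");
```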

2013: Legendary const methods and popular constexpr methods

Up to that point, constexpr class member functions had been implicitly marked as const member functions.

Proposal [N3598] constexpr member functions and implicit const notices that it's not necessary to implicitly make the constexpr class member functions const ones.

This became more relevant with the introduction of mutability in constexpr evaluations. However, even before that, the implicit const restricted the use of the same function in constexpr and non-constexpr code:

struct B
{
  constexpr B() : a() {}
  constexpr const A &getA() const /*implicit*/ { return a; }
  A &getA() { return a; } // code duplication
  A a;
};

Interestingly, the proposal gave a choice of three options. The second option was chosen in the end:

  • Status quo. Cons: code duplication.
  • constexpr will not implicitly mean const. Cons: it breaks ABI — const is a part of the mangled method name.
  • Adding a new qualifier and writing constexpr A &getA() mutable { return a; }. Cons: a new buzzword at the end of the declaration.

2015-2016: Syntactic sugar for templates

In template metaprogramming, functions are usually overloaded if the body requires different logic depending on a type's properties. Example of scary code:

template <class T, class... Args> 
enable_if_t<is_constructible_v<T, Args...>, unique_ptr<T>> 
make_unique(Args&&... args) 
{
    return unique_ptr<T>(new T(forward<Args>(args)...));
}  

template <class T, class... Args>  
enable_if_t<!is_constructible_v<T, Args...>, unique_ptr<T>>
make_unique(Args&&... args) 
{
    return unique_ptr<T>(new T{forward<Args>(args)...});
}

Proposal [N4461] Static if resurrected introduces the static_if expression (borrowed from the D language) to make code less scary:

template <class T, class... Args> 
unique_ptr<T>
make_unique(Args&&... args) 
{
  static_if (is_constructible_v<T, Args...>)
  {
    return unique_ptr<T>(new T(forward<Args>(args)...));
  }
  else
  {
    return unique_ptr<T>(new T{forward<Args>(args)...});
  }
}

This language feature is only loosely related to constexpr evaluation and serves a different scenario. Still, in further revisions static_if was renamed:

constexpr_if (is_constructible_v<T, Args...>)
{
  return unique_ptr<T>(new T(forward<Args>(args)...));
}
constexpr_else
{
  return unique_ptr<T>(new T{forward<Args>(args)...});
}

Then some more renaming:

constexpr if (is_constructible_v<T, Args...>)
{
  return unique_ptr<T>(new T(forward<Args>(args)...));
}
constexpr_else
{
  return unique_ptr<T>(new T{forward<Args>(args)...});
}

And the final version:

if constexpr (is_constructible_v<T, Args...>)
{
  return unique_ptr<T>(new T(forward<Args>(args)...));
}
else
{
  return unique_ptr<T>(new T{forward<Args>(args)...});
}
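
The key property of the final if constexpr form (a hypothetical example of mine, not from the proposal): the discarded branch is not instantiated for the given template arguments, so code that would not even compile for some T can live in the other branch.

```cpp
#include <string>
#include <type_traits>

template <class T>
auto twice(T value)
{
  if constexpr (std::is_arithmetic_v<T>)
    return value * 2;       // instantiated only for arithmetic T
  else
    return value + value;   // e.g. std::string concatenation
}

// twice(21) yields 42; twice(std::string("ab")) yields "abab"
```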

2015: Constexpr lambdas

A very good proposal, [N4487] Constexpr Lambda, scrupulously works through the use of closure types in constexpr evaluations (and came with a supporting fork of Clang).

To understand how constexpr lambdas are possible, you need to understand how lambdas work on the inside. There is an article about the history of lambdas describing how proto-lambdas already existed in C++03. Today's lambda expressions hide a similar class deep inside the compiler.

[SPOILER BLOCK BEGINS]

Proto-lambda for [](int x) { std::cout << x << std::endl; }

#include <iostream>
#include <algorithm>
#include <vector>

struct PrintFunctor
{
  void operator()(int x) const
  {
    std::cout << x << std::endl;
  }
};

int main()
{
  std::vector<int> v;
  v.push_back(1);
  v.push_back(2);
  std::for_each(v.begin(), v.end(), PrintFunctor());
}

[SPOILER BLOCK ENDS]

If all captured variables are of literal types, the proposal considers the closure type a literal type as well, and its operator() is marked constexpr. A working example of constexpr lambdas:

constexpr auto add = [] (int n, int m)
{
  auto L = [=] { return n; };
  auto R = [=] { return m; };
  return [=] { return L() + R(); };
};

static_assert(add(3, 4)() == 7, "");
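
Connecting this back to the proto-lambda shown earlier (a hand-written analogue of mine): a constexpr lambda is essentially a literal closure type whose operator() is constexpr.

```cpp
// What the compiler conceptually generates for a capturing constexpr lambda
struct AddN
{
  int n;
  constexpr int operator()(int m) const { return n + m; }
};

constexpr AddN add5{5};
static_assert(add5(2) == 7, "");

// The same thing with a lambda (implicitly constexpr since C++17)
constexpr auto add5_lambda = [](int m) { return 5 + m; };
static_assert(add5_lambda(2) == 7, "");
```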

2017-2019: Double standards

Proposal [P0595] The constexpr Operator considers the possibility of "knowing", inside a function, where the function is currently being executed: in the constant evaluator or at runtime. The author proposed calling constexpr() for this; it returns true or false.

constexpr double hard_math_function(double b, int x)
{
  if (constexpr() && x >= 0)
  {
    // slow formula, more accurate (compile-time)
  }
  else
  {
    // quick formula, less accurate (run-time)
  }
}

The operator was later replaced with the "magic" function std::is_constant_evaluated() ([P0595R2]) and was adopted into the C++20 standard in this form.

When a proposal is developed over a long time, its authors sometimes "rebase" it (as with projects in git/svn), bringing it in line with the current state of the language.

The same happened here: the authors of [P1938] if consteval (I'll talk about consteval later) decided that it's better to introduce a dedicated notation:

if consteval { }
if (std::is_constant_evaluated()) { }
// ^^^ similar entries

This decision was made in C++23 — link to the vote.

2017-2019: We need to go deeper

Inside constexpr functions, during constexpr evaluations, we cannot yet use a debugger or print logs. Proposal [P0596] std::constexpr_trace and std::constexpr_assert considers introducing special functions for these purposes.

The proposal was favorably accepted (link to the vote) but has not yet been finalized.

2017: The evil twin of the standard library

At this point, std::vector (which is desirable to have at compile time) cannot work in constexpr evaluations, mainly because the new/delete operators are unavailable there.

The idea of allowing the new and delete operators into the constant evaluator looked too ambitious. Thus, a rather strange proposal [P0597] std::constexpr_vector considers introducing the magic std::constexpr_vector<T>.

It is the opposite of std::vector<T>: it can be created and modified only during constexpr evaluations.

constexpr constexpr_vector<int> x;           // Okay.
constexpr constexpr_vector<int> y{ 1, 2, 3 };// Okay.
const constexpr_vector<int> xe;              // Invalid: not constexpr

The proposal does not describe how the constant evaluator should work with memory. @antoshkka and @ZaMaZaN4iK (the authors of many proposals), in [P0639R0] Changing attack vector of the constexpr_vector, pointed out many drawbacks of this approach. They proposed changing direction towards an abstract magic constexpr allocator that would not duplicate the entire standard library.

2017-2019: Constexpr gains memory

The constexpr ALL the things! presentation demonstrates an example of a constexpr library for working with JSON objects. The same material, in paper form, is in [P0810] constexpr in practice:

constexpr auto jsv
    = R"({
          "feature-x-enabled": true,
          "value-of-y": 1729,
          "z-options": {"a": null,
                        "b": "220 and 284",
                        "c": [6, 28, 496]}
         })"_json;

if constexpr (jsv["feature-x-enabled"])
{
  // code for feature x
}
else
{
  // code when feature x turned off
}

The authors suffered greatly from the inability to use STL containers, so they wrote analogues of std::vector and std::map. Internally, these analogues are built on std::array, which works in constexpr.
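
A minimal sketch of that workaround (the type and names are mine, not from the paper, C++17 or later): a fixed-capacity "vector" on top of std::array works in constexpr because no dynamic allocation is involved.

```cpp
#include <array>
#include <cstddef>

// Fixed-capacity stand-in for std::vector, usable in constant evaluations
template <class T, std::size_t Capacity>
struct StaticVector
{
  std::array<T, Capacity> items{};
  std::size_t count = 0;

  constexpr void push_back(const T& v) { items[count++] = v; }
  constexpr std::size_t size() const { return count; }
  constexpr const T& operator[](std::size_t i) const { return items[i]; }
};

constexpr StaticVector<int, 8> make_digits()
{
  StaticVector<int, 8> v;
  v.push_back(4);
  v.push_back(2);
  return v;
}

static_assert(make_digits().size() == 2, "");
static_assert(make_digits()[0] == 4, "");
```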

Proposal [P0784] Standard containers and constexpr studies the possibility of inputting STL containers in constexpr evaluations.

Note. It's important to know what an allocator is: STL containers work with memory through it. The specific allocator is specified through a template argument. If you want to get into the topic, read this article.

What's stopping us from allowing STL containers to be in constexpr evaluations? There are three problems:

  • Destructors cannot be declared constexpr. For constexpr objects, the destructor must be trivial.
  • Dynamic memory allocation/deallocation is not available.
  • placement-new is not available for calling the constructor in the allocated memory.

First problem. It was quickly fixed: the proposal authors discussed it with the developers of the MSVC++ frontend, GCC, Clang, and EDG. The developers confirmed that the restriction can be relaxed. Literal types can now be required to have a constexpr destructor rather than a strictly trivial one.

Second problem. Working with memory is not very easy. The constant evaluator is obliged to catch undefined behavior in any form. If the constant evaluator finds undefined behavior, it should stop compilation.

This means that we should track not only objects, but also their "metadata" that keep everything in check and don't let us crash the program. A couple of examples of such metadata:

  • Information about which member of a union is active ([P1330]). An example of undefined behavior: writing to a member of an inactive union field.
  • A rigid connection between a pointer or a reference and the corresponding previously created object. Examples of undefined behavior here form an infinite set.

Because of this, functions like the following are unusable:

void* operator new(std::size_t);

The reason is that there is no way to justify the conversion from void* to T*. In short, a new reference or pointer can either start pointing to an existing object or be created "simultaneously" with it.

That's why there are two options for working with memory that are acceptable in constexpr evaluations:

  • Simple new and delete expressions: int* i = new int(42);
  • Using the standard allocator std::allocator (it was slightly adjusted for this purpose).

Third problem. Standard containers separate memory allocation from the construction of objects in that memory. We have already dealt with allocation: it can be permitted under the metadata conditions described above.

For construction, containers rely on std::allocator_traits and its construct method. Before the proposal, it had the following form:

template< class T, class... Args >
static void construct( Alloc& a, T* p, Args&&... args )
{
  ::new (static_cast<void*>(p)) T(std::forward<Args>(args)...);
  // ^^^ placement-new forbidden in constexpr evaluations
}

It cannot be used because of the cast to void* and the placement-new (forbidden in constexpr in its general form). The proposal transforms it into:

template< class T, class... Args >
static constexpr void construct( Alloc& a, T* p, Args&&... args )
{
  std::construct_at(p, std::forward<Args>(args)...);
}

std::construct_at is a function that, at runtime, works like the old code (with a cast to void*). In constexpr evaluations:

.∧_∧

( ・ω・。)つ━☆・*。

⊂  ノ    ・゜+.

しーJ   °。+ *´¨)

         .· ´¸.·*´¨) ¸.·*¨)

          (¸.·´ (¸.·'* ☆ Whoosh – and it just works! ☆

The constant evaluator processes it in a special way: apparently, by calling the constructor of the object associated with the pointer T* p.

This is enough to make containers usable in constexpr evaluations.

At first, there was a restriction on allocated memory: it had to be deallocated within the same constexpr evaluation, without leaving the "sandbox".

This new type of memory allocation is called transient constexpr allocation. Transient here means "temporary" or "short-lived".

The proposal also had a part about non-transient allocation. It proposed not requiring that all allocated memory be released. The memory left unreleased would "fall out" of the sandbox and be converted to static storage, i.e. placed in the .rodata section. However, the committee considered this possibility "too brittle" for many reasons and has not accepted it yet.

The rest of the proposal was accepted.

2018: Catch me if you can

Proposal [P1002] Try-catch blocks in constexpr functions brings try-catch blocks into constexpr evaluations.

This proposal is a bit confusing — throw was banned in constexpr evaluations at that moment. This means the catch code fragment never runs.

Judging by the document, this was introduced to mark all the std::vector functions as constexpr. In libc++ (STL implementation) a try-catch block is used in the vector::insert method.
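
A short sketch (my own, requires C++20): the try block runs normally during a constant evaluation, while the handler is simply dead code there, because throw cannot execute at compile time.

```cpp
constexpr int checked_increment(int x)
{
  try
  {
    return x + 1;  // executes at compile time
  }
  catch (...)
  {
    return -1;     // can never be reached in a constant evaluation
  }
}

static_assert(checked_increment(41) == 42, "");
```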

2018: I said constexpr!

From personal experience, I know that the duality of constexpr functions (they can be executed at compile time and at runtime) means evaluations fall back to runtime when you least expect it (code example). If you want to guarantee the right stage, you have to get creative (code example).

Proposal [P1073] constexpr! functions introduces the new keyword constexpr! for functions that must run only at compile time. Such functions are called immediate functions.

constexpr! int sqr(int n)
{
  return n*n;
}

constexpr int r = sqr(100);  // Okay.
int x = 100;
int r2 = sqr(x);             // Error: Call does not produce
                             // a constant.

If variables unknown at compile time could reach a constexpr! function (which is normal for ordinary constexpr functions), the program won't compile:

constexpr! int sqrsqr(int n)
{
  return sqr(sqr(n)); // Not a constant expression at this point,
}                     // but that's okay.

constexpr int dblsqr(int n)
{
  return 2 * sqr(n); // Error: Enclosing function is not
}                    // constexpr!.

You cannot take a pointer or reference to a constexpr! function. The compiler backend does not necessarily know (and does not need to know) about the existence of such functions, does not put them in symbol tables, etc.

In further revisions of this proposal, constexpr! was replaced by consteval.

The difference between constexpr and consteval is clearly visible: in the second case, there are no fallbacks to runtime (example with constexpr; example with consteval).

2018: Too radical constexpr

At that moment, many proposals were about adding the constexpr specifier to various parts of the standard library. We don't discuss them in this article, since they all follow the same template.

Proposal [P1235] Implicit constexpr suggests implicitly marking all functions that have a definition as constexpr, while allowing compile-time execution to be controlled explicitly:

  • <no specifier> — the function is marked constexpr if possible;
  • constexpr — works as it does now;
  • constexpr(false) — the function cannot be called at compile time;
  • constexpr(true) — the function can be called only at compile time, i.e. it behaves like constexpr!/consteval.

This proposal wasn't accepted — link to the vote.

2020: Long-lasting constexpr memory

As already discussed, after accepting proposal [P0784] Standard containers and constexpr, it became possible to allocate memory in constexpr evaluations. However, the memory must be freed before the end of a constexpr evaluation. These are so-called transient constexpr allocations.

Thus, you cannot create top-level constexpr objects of almost all STL containers and many other classes.

By "top-level object" I mean the result of the whole constexpr evaluation, for example:

constexpr TFoo CalcFoo();
constexpr TFoo FooObj = CalcFoo();

Here, the CalcFoo() call starts a constexpr evaluation; FooObj is its result and a top-level constexpr object.

Proposal [P1974] Non-transient constexpr allocation using propconst finds a way to solve the problem. To my mind, this is the most interesting proposal of all described in this article; it deserves an article of its own. The proposal was given a green light and is in development (a link to the ticket). I'll retell it here in an accessible form.

What's stopping us from having non-transient allocations? The problem is not stuffing chunks of memory into static storage (.bss/.rodata/their analogues), but checking that the whole scheme is clearly consistent.

Let's assume we have a constexpr object whose construction (more precisely, "evaluation") produced non-transient allocations. A theoretical destruction of this object (i.e. calling its destructor) should then release all the non-transient memory. If calling the destructor would not release that memory, this is bad: there is no consistency, and a compilation error must be issued.

In other words, here's what a constant evaluator should do:

  • After seeing a request for a constexpr evaluation, execute it;
  • As a result of the evaluation, get an object that hides a bundle of constexpr variables of literal types, plus a certain amount of non-deallocated memory (non-transient allocations);
  • Imitate a destructor call on this object (without actually calling it). Check that this call would release all non-transient memory;
  • If all checks pass, consistency is proven: the non-transient allocations can be moved to static storage.

This seems logical, so let's assume it was all implemented. But then we'd have a problem with code like the following, which uses non-transient memory. The standard would not prohibit modifying that memory, and the destructor check would become pointless:

constexpr unique_ptr<unique_ptr<int>> uui
    = make_unique<unique_ptr<int>>(make_unique<int>());

int main()
{
  unique_ptr<int>& ui = *uui;
  ui.reset();
}

Note. In reality, the OS would rebuff such code for trying to write to a read-only memory segment, but that is physical constness. The code should have logical constness.

Marking constexpr for objects entails marking them as const. All their members also become const.

However, if an object has a member of pointer type, the const applies shallowly: you won't be able to make the pointer point to another object, but you can still change the object it points to.

Pointer types have two orthogonal constancy parameters:

  • Is it possible to start pointing to another object?
  • Is it possible to change the object pointed to?

In the end, we get four variants with different properties. OK means the line compiles, FAIL means it doesn't:

int dummy = 13;

int *test1 { nullptr };
test1 = &dummy; // OK
*test1 = dummy; // OK

int const *test2 { nullptr };
test2 = &dummy; // OK
*test2 = dummy; // FAIL

int * const test3 { nullptr };
test3 = &dummy; // FAIL
*test3 = dummy; // OK

int const * const test4 { nullptr };
test4 = &dummy; // FAIL
*test4 = dummy; // FAIL

"Normal" const leads to the third option, but constexpr needs the fourth one! I.e. it needs so-called deep-const.

The proposal, based on a couple of older proposals, suggests introducing a new cv-qualifier: propconst (propagating const).

This qualifier will be used with pointer/reference types:

T propconst *
T propconst &

The compiler will either convert this word into const or remove it, depending on whether the enclosing object is constant: in the first case propconst becomes const, in the second it disappears.

int propconst * ---> int *
int propconst * const ---> int const * const

The proposal contains a table of propconst conversions for different cases.

Thus, constexpr objects could acquire full logical constancy (deep const):

constexpr unique_ptr<unique_ptr<int propconst> propconst> uui =
  make_unique<unique_ptr<int propconst> propconst>(
    make_unique<int propconst>()
  );

int main()
{
  // the two lines below won't compile
  unique_ptr<int propconst>& ui1 = *uui;
  ui1.reset();

  // the line below compiles
  const unique_ptr<int propconst>& ui2 = *uui;
  // the line below won't compile
  ui2.reset();
}

// P.S. This entry has not yet been adopted by the Committee.
// I hope they'll do better

2021: Constexpr classes

With the advent of fully constexpr classes such as std::vector, std::string, and std::unique_ptr (where every member function is marked constexpr), there is a desire to say "mark all functions of this class as constexpr" in one go.

That's what proposal [P2350] constexpr class does:

class SomeType
{
public:
  constexpr bool empty() const { /* */ }
  constexpr auto size() const { /* */ }
  constexpr void clear() { /* */ }
  // ...
};
// ^^^ BEFORE

class SomeType constexpr
{
public:
  bool empty() const { /* */ }
  auto size() const { /* */ }
  void clear() { /* */ }
  // ...
};
// ^^^ AFTER

I have an interesting story about this proposal. Not knowing of its existence, I filed an idea on stdcpp.ru proposing the same thing: a link to the ticket [RU] (which is no longer needed).

Many nearly identical proposals to the standard can appear almost simultaneously. This speaks in favor of the concept of multiple discovery: ideas float in the air, and it doesn't matter who proposes them. If the community is big enough, natural evolution occurs.

2019-∞: Constant interpreter in the compiler

constexpr evaluations can be very slow because the constant evaluator, which works on the syntax tree, evolved iteratively (starting from constant folding). Today it does a lot of unnecessary work that could be done more efficiently.

Since 2019, Clang has been developing the ConstantInterpreter, which may eventually replace the syntax-tree constant evaluator. It is quite interesting and deserves an article of its own.

The idea behind the ConstantInterpreter is to generate bytecode from the syntax tree and execute it on an interpreter. The interpreter supports a stack, call frames, and a memory model (with the metadata mentioned above).

The ConstantInterpreter documentation is good. There are also many interesting details in the talk by the interpreter's creator at the LLVM developers' conference.

What else to watch?

If you want to deepen your understanding, you can watch these wonderful talks by experts. In each of them, the authors go beyond the basic constexpr story: constructing a constexpr library, using constexpr with the future reflexpr, or exploring the essence of the constant evaluator and the constant interpreter.

And here's also a link to a talk about [P1040] std::embed, a killer feature (in my opinion) that would work great in tandem with constexpr. But judging by the ticket, they plan to implement it in C++ something.

