
C++ programmer's guide to undefined behavior: part 11 of 11

Dec 17 2024

We present the 11th part of our e-book on undefined behavior. This is not a textbook: it's intended for those who are already familiar with C++ programming. It's a kind of C++ programmer's guide to undefined behavior and to its most secret and exotic corners. The book was written by Dmitry Sviridkin and edited by Andrey Karpov.


Pointer provenance: invalid pointers

What are pointers? When explaining them to C++ beginners, experienced developers often say that a pointer is a number storing the memory address of some object.

This is somewhat true at a very low level—in assembly or machine code. But in C and C++, a pointer isn't just an address: it's far more than a number used in some special way. Moreover, C++ (unlike C) has pointers that aren't memory addresses at all—specifically, pointers to class data members and member functions. However, we won't discuss them here.

A pointer is a reference data type that provides access to other objects. Unlike C++ references, pointers are real objects, not just bizarre aliases for existing values. Pointers relate to numbers and memory addresses only through implementation details.

The C++ standard details the provenance of pointers. In brief, they can arise from:

  • using an address-of operation, like &x or std::addressof(x);
  • calling operator new or placement new;
  • the implicit conversion of an array or function name to a pointer;
  • executing a valid operation on another pointer;
  • copying an existing pointer—in particular, nullptr.

All other pointer sources are implementation-defined or undefined.
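For illustration, here's one fragment touching most of these sources (a sketch, not an exhaustive list):

int x = 5;
int arr[3] = {0, 1, 2};

int* a = &x;           // address-of operation
int* b = new int(42);  // operator new
int* c = arr;          // implicit array-to-pointer conversion
int* d = c + 1;        // valid operation on another pointer
int* e = b;            // copying an existing pointer
delete b;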

The primary operation on a pointer is dereferencing: accessing the object the pointer refers to. The key issue is that not every pointer can be dereferenced. Other operations don't apply to every pointer, either. However, there's one operation that's almost always valid—checking for equality (or inequality).

In an ideal, bright world, the set of allowed operations on an object would depend on its type. Unfortunately, the set of operations applicable to a pointer depends not only on its value but also on its provenance—and on the provenance of other pointers, too.

int x = 5;
auto x_ptr  = &x; // valid pointer, we CAN dereference it

auto x_end_ptr  = (&x) + 1; // valid pointer,
                            // we CAN'T dereference it

auto x_invalid_ptr = (&x) + 2; // invalid pointer,
                               // it shouldn't exist.

Pointer comparison via operator> or operator< is specified only for pointers to elements within the same array. For arbitrary pointers, the result of such a comparison is unspecified.

Pointer arithmetic is only defined within the bounds of the same array: from a pointer to the first element up to the one-past-the-last-element pointer. Anything else leads to undefined behavior. A notable special case is (&x) + 1: any object may be treated as an array of a single element.

It's hard to find a short code example that actually crashes due to UB from pointer arithmetic. However, we can give one with iterators, which boil down to pointers.

std::string str = "hell";
str.erase(str.begin() + 4 + 1 - 3);

This code crashes in the debug build on MSVC. Here, str.begin() + 4 is the one-past-the-last-element pointer; adding 1 to it moves it beyond the string. This is UB, even if a further subtraction brings the pointer back within the string bounds.

There's no need to overcomplicate pointer arithmetic. It's always better to add to or subtract from a pointer the final numeric offset. In the example, we should evaluate the offset (4 + 1 - 3) separately by placing parentheses—and, better yet, store it in a separate variable to make the code safer and clearer.
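In our example, that would look something like this:

std::string str = "hell";
const std::ptrdiff_t offset = 4 + 1 - 3; // evaluates to 2: always in bounds
str.erase(str.begin() + offset);         // a single, valid step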

In addition to out-of-bounds pointers, invalid pointers may also be produced by certain functions. Nick Lewycky presented the most notable example of such UB in the Undefined Behavior Consequences Contest. A slightly modified C++ version (with only one instance of UB instead of two) looks like this:

int* p = (int*)malloc(sizeof(int));
int* q = (int*)realloc(p, sizeof(int));
if (p == q) {
  new(p) int (1);
  new(q) int (2);
  std::cout << *p << *q << "\n"; // prints 12
}

This code, built with Clang 18.1.0 (-O3 -std=c++20), outputs 12, which defies logic—unless we know there's UB in this fragment! The same example underscores that a pointer isn't just a number holding an address.

The pointer passed to the C realloc function becomes invalid after a successful reallocation. The only thing we can do with it is overwrite it—and only then use it again.

This example is synthetic, indeed, but it's easy to stumble upon this issue in practice. For instance, we might attempt to write our custom vector using realloc and try to "optimize" it:

template <class T>
struct Vector {
  static_assert(std::is_trivially_copyable_v<T>);

  size_t size() const {
    return end_ - data_;
  }

private:
  T* data_;
  T* end_;
  size_t capacity_;

  void reallocate(size_t new_cap){
    auto ndata = static_cast<T*>(realloc(data_, new_cap * sizeof(T)));
    if (!ndata) {
      throw std::bad_alloc();
    }
    capacity_ = new_cap;
    if (ndata != data_) {
      const auto old_size = size(); // !access to invalidated data_!
      data_ = ndata;
      end_ = data_ + old_size;
    } // else — "ok", noop
  }
};

Here's code with undefined behavior. It probably won't reveal itself immediately—which means it may do so at any moment. The call to reallocate might get inlined in an unfortunate place, and everything would turn upside down.
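The fix for this particular bug is to read the old size before realloc invalidates the pointers. A sketch:

void reallocate(size_t new_cap) {
  const auto old_size = size(); // data_ and end_ are still valid here
  auto ndata = static_cast<T*>(realloc(data_, new_cap * sizeof(T)));
  if (!ndata) {
    throw std::bad_alloc();
  }
  data_ = ndata;
  end_ = data_ + old_size;
  capacity_ = new_cap;
}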

However, if we do try to implement our own vector—the standard one may well not suffice for some, especially because it initializes memory by default—we should face a sad fact: we can't implement it without undefined behavior (whether formal or real). The main reason is pointer arithmetic within raw memory: raw memory contains no C++ array objects, and arithmetic is only defined for arrays.

Pointer provenance: placement new for arrays

So you've got hold of a new super-efficient memory management library, haven't you? Would you like to use it in C++ without grappling with tricky lifetime-related UB?

Congrats, you're in luck! Just allocate memory using the library, create objects in the allocated buffer via placement new, and don't worry!

void* buffer = my_external_malloc(sizeof(T), alignof(T));
auto pobj = new (buffer) T();

Beautiful, simple, great!

What if you want to allocate memory and place an array within it?

Nothing could be easier!

void* buffer = my_external_malloc(n * sizeof(T), alignof(T));
auto pobjarr = new (buffer) T[n];

Yay! Let's take a coffee break. Task closed. Nice! C++ has come a long way since the C++11 standard!

But it couldn't be that simple, could it?

Of course not! Prior to C++20, placement new for arrays had the right to corrupt our memory.

The new (buffer) T[n]; construct, according to the examples (§ 8.5.2.4 (15.4)) in the C++17 standard, translates to:

operator new[](sizeof(T) * n + x, buffer);
// or operator new[](sizeof(T) * n + x,
//                   std::align_val_t(alignof(T)), buffer);

Here, x is an unspecified non-negative value, typically used to reserve a memory block for some metadata. For instance, it might store the element count at the beginning of the memory block, add start or end markers, or handle other allocator-specific tasks.

In other words, placement new for an array can easily violate user-provided buffer bounds. Really handy!

In C++20, this admirable wording was changed.

Now, if new (arg1, arg2...) T[n]; corresponds to the call to the standard void* operator new[](std::size_t count, void* ptr);, everything will be fine—no magical +x shifts.

But if some well-wisher has defined their own placement new operator... that's another story.

I haven't encountered any compiler or standard library implementation in which the standard placement new shifted the pointer into a user-provided buffer. The real threat of hard-to-detect UB comes mostly from user-defined versions of placement new.

To safeguard against this and ensure we call the standard placement new, we should use ::new and cast the buffer pointer to void*, or rely on std::uninitialized_default_construct_n and similar algorithms.
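For example (a sketch, reusing the hypothetical my_external_malloc from above):

void* buffer = my_external_malloc(n * sizeof(T), alignof(T));

// Option 1: explicitly request the standard placement form,
// which is guaranteed to have no hidden +x offset since C++20.
T* arr = ::new (static_cast<void*>(buffer)) T[n];

// Option 2: skip array placement new entirely and construct
// the elements one by one over the raw buffer:
// T* arr = static_cast<T*>(buffer);
// std::uninitialized_default_construct_n(arr, n);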

Note that C++ doesn't have placement delete syntax. We can only call operator delete[](void* ptr, void* place) explicitly; the standard version of it does nothing.

Here, of course, it's important to understand the difference between the operator delete function and the delete p and delete [] p syntactic constructs. The former handles only memory; the latter also call destructors.

C++ doesn't provide the separate syntax constructs to call destructors of array elements created via placement new. We should do it manually or use the std::destroy algorithm.

Never use delete [] on a pointer obtained from placement new []. It's going to end badly.
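The correct teardown for the example above looks something like this (my_external_free is a hypothetical deallocation counterpart of my_external_malloc):

std::destroy(arr, arr + n); // or std::destroy_n(arr, n): destructors only
my_external_free(buffer);   // then return the raw buffer to the library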

Parallelism: multithreading and data race

Developing multithreaded applications is always challenging. The problem of synchronizing access to shared data is a perennial headache. It'd be ideal if we had a well-tested, reliable library of containers, high-level primitives, and parallel algorithms that managed all invariants. It'd be ideal if static compiler checks prevented us from misusing all these things. How nice it would be...

Before C++11 and its standardized memory model, we could use threads only at our own risk. Starting with C++11, the standard library provides some fairly low-level primitives. Since C++17, there are also parallel versions of algorithms, but we can't fine-tune even the number of threads or their priorities.

So why not take some solid off-the-shelf library like Boost or Abseil and not worry? I bet the smart library authors have spent countless hours creating convenient and safe tools.

Unfortunately, it doesn't work this way. We still have to manually control the correct usage of these tools in C++, scrutinizing every code line. Even with neat mutexes and atomic variables, access synchronization remains a problem.

A data race occurs when one thread modifies an object while another thread simultaneously reads it, or when two threads try to modify the same object at the same time. It's clearly incorrect: a read might observe a weird intermediate state, while simultaneous writes might produce a mangled value. And all this regardless of the programming language.

But in C++, it's not just an error. It's undefined behavior and an "opportunity" for optimization.

int func(const std::vector<int>& v) {
  int sum = 0;
  for (size_t i = 0; i < v.size(); ++i) {
    sum += v[i];
  }
  // The data race is forbidden, UB "protects" us
  // from modifying v in a parallel thread.
    
  // We can optimize the size evaluation.
  // const size_t v_size = v.size();
  // for (size_t i = 0; i < v_size; ++i) { ... }
  return sum;   
}

Now it's almost a multithreaded hello world:

int main() {
  bool terminated = false;
  using namespace std::literals::chrono_literals;

  int add_ms = 0;
  std::cin >> add_ms;

  std::jthread t1 { [&] {
    std::size_t cnt = 0;
    while (!terminated) {
      ++cnt;
    }
    std::cout << "count: " << cnt << "\n";
  } };

  std::jthread t2 { [&] {
    std::this_thread::sleep_for(500ms + 
                                std::chrono::milliseconds(add_ms));
    terminated = true;
  } };
}

We haven't synchronized access to bool. It's no big deal, right? Everything works in the debug build.

However, if we enable optimizations, the loop in the first thread will either not execute a single iteration (Clang) or will never stop (GCC)!

Both compilers detect the unsynchronized accesses. Since data races are forbidden, there's no need to synchronize anything. This means the terminated variable must have the same value every time the loop condition reads it. In GCC's output, it's always false. Clang spots the terminated = true assignment in the other thread and moves it outside the loop, before it begins.

Indeed, the error here is intentional and is easily fixed by replacing bool with std::atomic<bool>. In a real codebase, it's easy to introduce a data race accidentally—and much harder to find and fix.
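For reference, the fixed flag looks like this (the default sequentially consistent accesses are enough here):

std::atomic<bool> terminated{false};
....
std::jthread t1 { [&] {
  std::size_t cnt = 0;
  while (!terminated) { // atomic load on every iteration;
    ++cnt;              // the compiler can't hoist or cache it
  }
  std::cout << "count: " << cnt << "\n";
} };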

Once I wrote something like this:

enum Task {
  done,
  hello
};
std::queue<Task> task_queue;
std::mutex mutex;

std::jthread t1 { [&] {
  std::size_t cnt_miss = 0;
  while (true) {
    if (!task_queue.empty()) {
      auto task = [&] {
        std::scoped_lock lock{mutex};
        auto t = task_queue.front();
        task_queue.pop();
        return t;
      }();
      if (task == done) {
        break;
      } else {
        std::cout << "hello\n";
      }
    } else {
      ++cnt_miss;
    }
  }
  std::cout << "count miss: " << cnt_miss << "\n";
} };

std::jthread t2 { [&] {
  std::this_thread::sleep_for(500ms);
  {
    std::scoped_lock lock{mutex};
    task_queue.push(done);
  }
} };

As long as the code was built and tested with one compiler, it worked fine. Once ported to another platform with another compiler, everything crashed.

Congratulations if you spotted the cause of the error right away! If not, look closer at the innocuous empty function that "absolutely doesn't change anything"—"come on, how could data consistency be broken there?"
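The unsynchronized task_queue.empty() call reads the queue's insides while the other thread may be in the middle of push—that's the data race. The check and the extraction must happen under the same lock. A possible fix (sketch):

auto try_pop = [&]() -> std::optional<Task> {
  std::scoped_lock lock{mutex};  // both the check and the pop
  if (task_queue.empty()) {      // happen under the same lock
    return std::nullopt;
  }
  auto t = task_queue.front();
  task_queue.pop();
  return t;
};

while (true) {
  if (auto task = try_pop()) {
    if (*task == done) break;
    std::cout << "hello\n";
  } else {
    ++cnt_miss;
  }
}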

Static analyzers and sanitizers can help detect issues with object access across different threads—for example, TSan for GCC/Clang (-fsanitize=thread). Keep in mind that due to implementation specifics, ASan and TSan can't run together. As a result, a single run can't detect both data races and common memory access errors such as lifetime violations.

In Rust, we can't cause a data race and undefined behavior within a safe language subset. However, sloppy use of unsafe can cause troubles and undefined behavior. That's why it's unsafe.


Parallelism: is std::shared_ptr thread-safe?

This is likely one of the most common interview questions for a C++ developer. Here is the reason why. This elegant smart pointer is so simple to use—especially compared to std::unique_ptr—that it's easy to miss the potential pitfalls. With shared in its name, it's designed to be shared between threads. Can something go wrong?

Everything.

Beginners quite quickly run into the first line of the crutch-and-rake defenses of the shared_ptr bastion: even though access through shared_ptr<T> looks deceptively safe, access to the T object it points to still has to be synchronized. Obvious, noticeable, understandable, yeah. From here on, it's smooth sailing, right?

No.

Farther on, wolf pits with poisoned spears lurk. The shared_ptr object itself isn't thread-safe. Access to the pointer requires synchronization, too!

How's that? We never synchronized it, and the program worked fine.

Congratulations! You've got one of two cases:

1. All accesses to the pointer from different threads are read-only. Then there's really no problem.

2. The program operates by its own free will.

using namespace std::literals::chrono_literals;
std::shared_ptr<std::string> str = nullptr;

std::jthread t1 { [&]{
  std::size_t cnt_miss = 0;
  while (!str) {
    ++cnt_miss;
  }
  std::cout << "count miss: " << cnt_miss << "\n";
  std::cout << *str << "\n";
} };

std::jthread t2 { [&] {
  std::this_thread::sleep_for(500ms);
  str = std::make_shared<std::string>("Hello World");
} };

The code above stops working when the optimization level is changed—it's much like the other race condition examples.

But you should have noticed something: there's something thread-safe in shared_ptr after all...

Yes: the reference counter. Everything else within std::shared_ptr isn't thread-safe in any way. The atomic reference counter lets us freely copy the same pointer into different threads, incrementing the counter. It also spares us from manually synchronizing the destructor calls in different threads that decrement the counter.

If you need to change a pointer from multiple threads, use std::atomic<std::shared_ptr<T>> (C++20), or std::atomic_load, std::atomic_store, and other functions that have special overloads for shared_ptr.

The same works for std::weak_ptr.
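With C++20's atomic smart pointer, the racy example above can be rewritten like this (a sketch):

std::atomic<std::shared_ptr<std::string>> str; // holds nullptr initially

std::jthread t1 { [&] {
  std::size_t cnt_miss = 0;
  while (!str.load()) { // atomic read of the pointer
    ++cnt_miss;
  }
  std::cout << "count miss: " << cnt_miss << "\n";
  std::cout << *str.load() << "\n";
} };

std::jthread t2 { [&] {
  std::this_thread::sleep_for(500ms);
  str.store(std::make_shared<std::string>("Hello World")); // atomic write
} };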


Parallelism: threads joining

Have you noticed that I've been using C++20's std::jthread instead of the std::thread seen in previous chapters? Wondering why?

The std::thread destructor is simply terrible.

Anywhere the std::thread destructor might be called, it's better to use:

// std::thread t1;
if (t1.joinable()) { // If we're not sure about
                     // the t1 object,
                     // be sure to run this check.
  t1.join(); // or t1.detach()
}

The developer has to state explicitly whether to wait for the thread to finish (join) or let it go (detach). Otherwise, the std::thread destructor will crash our program by calling std::terminate. Very convenient and very RAII-like, isn't it?

Indeed, it's not essential to use the above code fragment everywhere. It's redundant if we know that:

  • someone else has already cast this spell;
  • the content of the std::thread object has been moved to another object, t2 = std::move(t1).

And avoid accessing the same std::thread object from multiple threads; otherwise you'll get a race condition. If you have to, synchronize it.

Also make sure this check never executes concurrently with the t1 destructor: the destructor also invokes joinable—and here we go again with the race condition.

Want to wrap std::thread to call join in its destructor? Not so fast: join and detach can throw exceptions, with all the problems that entails.

Cool, yeah? That's why the examples use std::jthread and will continue to do so. Its destructor calls join and removes at least part of the headache.

If join doesn't meet your needs and you don't want to wait, use detach... Well, that's your right. Just remember that all detached threads are terminated when main ends.


Parallelism: mutex deadlock

Deadlock is a tragic thing. The system is tied in a knot and will never untie itself. How many mutexes does it take to trigger a deadlock?

With a little thought, we might conclude that one is enough—just lock it twice in a row in the same thread, without releasing it.

This might even be true on some platforms. But in C++, locking a non-recursive mutex twice in the same thread is undefined behavior, so a nice, demonstrative deadlock takes two mutexes. Our single-mutex trick will backfire and land us in the world of undefined behavior.

struct Test {
  std::mutex mutex;
  std::vector<int> v = {1, 2, 3, 4, 5};

  auto fun(int n) {
    mutex.lock();  // Lock it.
    return
      std::shared_ptr<int>(v.data() + n,
                           [this](auto...){ mutex.unlock(); });
                           // Unlock when the pointer dies.
  }
};
    
    
int main(){
    
  Test tt;
  auto a = tt.fun(1); // First lock.
  std::cout << *a << std::endl;
  // The pointer is alive.
  auto b = tt.fun(2); // Second lock. UB.
  std::cout << *b << std::endl;
   
  return 0;
}

This example gives different results—even using the same compiler, platform, and optimization level. It all depends on whether pthread is enabled or not.

Who in their right mind would ever do something like that? It's not like anyone ever locks the same mutex twice.

I don't even know... For some reason, there are recursive mutexes that can be locked multiple times.

And some developers may also like to reduce a problem to a solved one and reuse the written code:

template <class T>
struct ThreadSafeQueue {

  bool empty() const {
    std::scoped_lock lock { mutex_ };
    ....
  }

  void push(T x) {
    std::scoped_lock lock { mutex_ };
    ....
  }

  std::optional<T> pop() {
    std::scoped_lock lock { mutex_ };
    if (empty()) { // ! DEADLOCK !
      return std::nullopt;
    }
    ....
  }

  ....
  mutable std::mutex mutex_;
};

To fix this code, we need to either rack our brains or use a recursive mutex. The first option is better, I suppose.
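Racking our brains here mostly means never calling the public, locking empty() from a function that already holds the lock. For example, pop can query the underlying container directly (a sketch; I'm assuming the elided member is a std::queue<T> named queue_):

std::optional<T> pop() {
  std::scoped_lock lock { mutex_ };
  if (queue_.empty()) { // the raw container, not the locking empty()
    return std::nullopt;
  }
  T x = std::move(queue_.front());
  queue_.pop();
  return x;
}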

An object can have many member functions—and many developers who may forget whether the lock is already held. They might add locking to one function and forget about the others. So, no one is immune to a mutex deadlock within a single thread.

Parallelism: signal (un)safety

Developers of any reliable application need to address the program behavior in scenarios like early termination requests, unexpected terminal closures, or handling rare error states. In many of these cases, they rely on a rather basic form of inter-process communication: signal handling.

Developers register handlers for the signals they need and don't give it another thought. But a very serious error may lurk here: code running in a signal handler must not allocate memory, perform I/O, or acquire locks...

Signals interrupt normal program execution and can be handled in an arbitrary thread. A thread could start allocating memory, acquire a lock inside the allocator, and at that very moment be interrupted by a signal. If the signal handler then requests memory allocation, it deadlocks in the same thread. Oh no, undefined behavior!

The results can be spectacular. For example, in 2006, a critical vulnerability was found in OpenSSH: attackers could gain root access to systems running the SSH server. The bug was caused by code that called malloc and free while handling signals. The vulnerability was fixed, but in 2020, 14 years later, it was accidentally reintroduced. Developers found and fixed it again in 2024. Who knows how many times this regreSSHion was exploited, and by whom, over those four years!

Here's a simplified example of unsafe signal handling:

std::mutex global_lock;

int main() {
  std::signal(SIGINT, [](int){
    std::scoped_lock lock {global_lock};
    printf("SIGINT!\n");
  });

  {
    std::scoped_lock lock {global_lock};
    printf("start long job\n");
    sleep(10);
    printf("end long job\n");
  }
  sleep(10);    
}

On Linux, if we compile this program (with -pthread), run it, and press Ctrl+C, it'll hang forever due to the mutex deadlock within the same thread. If we forget -pthread, the program won't hang and will operate as "intended".

On Windows, this program also operates as "intended" because of signal handling specifics: a new thread is always implicitly created there to handle SIGINT/SIGTERM.

In any case, this code is incorrect due to using a signal-unsafe function within the signal handler.

Signal handling is a highly platform-dependent issue that depends on the specific task and the application architecture. It's also quite complex: while handling one signal, a program can be interrupted to handle another.

The most common usage scenario of signal handling is to correctly terminate the application by clearing resources and closing connections—in short, Graceful Shutdown. In such a case, signal handling is usually reduced to setting and checking a global flag.

The C and C++ standards describe sig_atomic_t, a special integer type that ensures signal-safe variable access. In practice, it can be just an alias for int or long. A volatile sig_atomic_t can be used as the global flag set by the signal handler, but only in a single-threaded environment. Here, volatile only prevents unwanted optimizations, since the compiler can't assume anything about possible signal handling and interruptions of the normal program flow.
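A minimal single-threaded sketch of this flag pattern:

#include <csignal>

volatile std::sig_atomic_t stop_requested = 0;

extern "C" void on_sigint(int) {
  stop_requested = 1; // the only thing the handler does
}

int main() {
  std::signal(SIGINT, on_sigint);
  while (!stop_requested) {
    // ...do one unit of work...
  }
  // Graceful Shutdown: release resources, close connections.
}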

It's important to remember that volatile doesn't guarantee thread safety. In a multithreaded environment, we need true atomic types supported by our platform—for example, std::atomic<int>. Of course, only if std::atomic<T>::is_lock_free is true.

How to fight unsafety?

  • Keep signal handlers as simple as possible.
  • Disable automatic signal reception and handle them as part of normal program execution (for example, sigprocmask and sigwait).
  • Check the documentation to see if it's safe to use a function in the context of the signal handler.
  • Use atomic variables, lock-free structures, or, if the application is single-threaded, volatile sig_atomic_t for signal handling flags.


Parallelism: condition variable, or how to do everything right and trigger the deadlock

Thread synchronization is difficult, even with synchronization primitives. It's a bit of a pun. It's good if there are ready-made high-level abstractions like queues or channels. But sometimes we have to manually build them using low-level constructs: mutexes, atomic variables, and wrappers.

A condition_variable is a synchronization primitive that lets one or more threads passively wait for messages from other threads without wasting CPU time on checks in a loop. The thread's execution is simply suspended, placed in an OS-managed queue, and woken up when a specific event (notification) arrives from another thread. It's efficient and convenient.

The condition_variable primitive doesn't transfer any data; it only wakes up or suspends threads. Moreover, due to the specifics of lock implementations, wakeups can occur spuriously, not just on a direct command via the condition_variable.

Therefore, typical usage requires an additional condition check, which usually looks like this:

std::condition_variable cv;
std::mutex event_mutex;
bool event_happened = false;

// Executed in one thread.
void task1() {
  std::unique_lock lock { event_mutex };
  // The predicate is safely checked via
  // the acquired lock
  cv.wait(lock, [&] { return event_happened; });
  // The predicate-free wait version only waits for notification,
  // but there could be a spurious wakeup
  // (usually if someone releases the same mutex)
  ....
  // Here's our event.
  // Execute necessary operations.
}

// Executed in another thread.
void task2() {
  ....
  {
    std::lock_guard lock {event_mutex};
    event_happened = true;
  }
  // The call to notify doesn't have to be made
  // under the acquired lock. However, early
  // MSVC versions, as well as a very old version of the
  // boost library, had bugs that required locking the mutex
  // during the call to notify().
  // Call notify under the lock
  // if another thread, once woken, may call,
  // for example,
  // the destructor of the cv object.
  cv.notify_one(); // notify_all()
}

An attentive reader might notice that in the task2 function, the mutex is used only to protect the boolean flag. Unprecedented wastefulness! Two whole system calls in the worst case—let's make the flag atomic instead!

std::atomic_bool event_happened = false;
std::condition_variable cv;
std::mutex event_mutex;

void task1() {
  std::unique_lock lock { event_mutex };
  cv.wait(lock, [&] { return event_happened.load(); });
  ....
}

void task2() {
  ....
  event_happened = true;
  cv.notify_one(); // notify_all()
  ....
}

We compile, run, and it works. Cool, let's push it to release!

But one day, a user comes and says: they ran task1 and task2 simultaneously (as usual), and unexpectedly task1 didn't finish, even though task2 completed! We go to the user and take a look—the program hangs. We restart it—it doesn't hang. We restart it again—still no hang. We restart it 50 times—still no hang. We decide it must have been some kind of one-time hardware failure.

We drop the issue. A month later, the user comes back with the same problem. Again, it doesn't reproduce. It must be a hardware failure, some cosmic ray flipping a bit in a thread-local cache. No worries, no worries at all...

In reality, there's a bug in the program that leads to a deadlock under a rare coincidence in instruction ordering. Let's take a closer look at how the wait function with a predicate operates:

// thread a
std::unique_lock lock {event_mutex};           // a1
// cv.wait(lock, predicate) expands to this:
while (!event_happened) {                      // a2
  cv.wait(lock);                               // a3
}

// -------------------------------
// thread b
event_happened = true;     // b1
cv.notify_one();           // b2

Let's consider the following sequence of execution steps across two threads:

  • a1: thread 1 acquires the lock;
  • a2: thread 1 checks the predicate—the event hasn't occurred yet, so the loop continues;
  • b1: thread 2 sets the event flag;
  • b2: thread 2 sends the notification—but thread 1 hasn't started waiting yet! The notification is lost!
  • a3: thread 1 starts waiting and will never wake up!

Done playing with optimizations? Put the mutex lock back.

Locking the mutex in the notification thread ensures that the waiting thread either hasn't started waiting and checking the event yet, or is already waiting. If the mutex isn't locked, we risk ending up in the intermediate state.
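Concretely: even though the flag is atomic, the update must happen under the mutex, as in the first version (a sketch of the corrected task2):

void task2() {
  {
    std::lock_guard lock {event_mutex}; // closes the window between
    event_happened = true;              // the check and the wait
  }
  cv.notify_one();
}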

Be careful with primitives!


Author: Dmitry Sviridkin

Dmitry has over eight years of experience in high-performance software development in C and C++. From 2019 to 2021, he taught Linux system programming at SPbU and hands-on C++ courses at HSE. He currently works on system and embedded development in Rust and C++ for edge servers as a Software Engineer at AWS (CloudFront). His main area of interest is software security.

Editor: Andrey Karpov

Andrey has over 15 years of experience with static code analysis and software quality. He is the author of numerous articles on writing high-quality code in C++. From 2011 to 2021, he was honored with the Microsoft MVP award in the Developer Technologies category. Andrey is a co-founder of the PVS-Studio project. For a long time, he was the company's CTO and was involved in developing the C++ analyzer core. He is currently responsible for team management, personnel training, and DevRel activities.


