Are 64-bit errors real?

Nov 08 2009
Author: Andrey Karpov

I often hear, phrased in various ways, the statement: "The examples given show not code that is incorrect from the viewpoint of porting to x64 systems, but code that is incorrect in itself." I would like to discuss and theorize a bit on this point in the blog. Please take this note with a bit of humor.

First, let's begin by saying that any code written in C++ is incorrect in itself. The only correct code would consist of an empty main function, and even of that I'm not sure. It is impossible to write an ideally correct program in C/C++. Consider: the program should work on a 12-, 16-, 32-, 64-, ...-bit system. If possible, it shouldn't allocate memory dynamically, because on some systems there isn't enough of it. It also shouldn't use functions like scanf, because you may need to put the program into a controller that has no input device. The program mustn't use type conversions: any type conversion is a potential error on some platform. And perhaps it is better to write the program with trigraphs - you never know... :)
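To make the "type conversion" point a bit more tangible, here is a minimal, made-up sketch (not from the original note): a cast that is perfectly harmless on a 32-bit system and silently loses data on a 64-bit one.

```cpp
#include <cstddef>
#include <vector>

// On a typical 32-bit platform, unsigned and size_t have the same width,
// so this cast changes nothing. On a 64-bit platform, size_t is 64 bits
// wide, and the cast silently truncates the count once the vector grows
// past UINT_MAX elements.
unsigned count_items(const std::vector<char> &data)
{
  return static_cast<unsigned>(data.size()); // fine on x86, a trap on x64
}
```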

Well, what I mean is that there are no ideally correct programs in C/C++. You can strive to create one, but you never will. In reality, when writing a program, an acceptable level of correctness and a set of assumptions about the execution environment are chosen, and the program is written within the framework of that model.

So, any code is incorrect in itself from the viewpoint of an ideal programmer with golden hands living in a vacuum. But we can assume that a particular piece of code is correct under particular conditions. When the conditions (the environment) change, the code may become incorrect. In what way it becomes incorrect depends on the external changes. The errors that appear when the execution environment changes can be grouped together and successfully diagnosed, while the approach "everything in the program is incorrect" is unproductive.

Let's consider an example. We have a program to port to a controller that won't have a console. The program contains some number of cout, cin, printf and scanf calls. We have to find and "deactivate" these functions. Suppose input is performed through ports connected to some knob on the device's case. There is no point in saying that the code is bad, and the programmer who wrote it is bad, merely because he hadn't foreseen that there might be no console and that all these sections cannot be disabled with one keystroke. That won't help us. Nor is there any point in attempting an ideal refactoring to create an ideal program. We only need to find and fix the relevant fragments. One could invent a static analyzer with a set of "input-output issues in controllers" diagnostics. And it would be helpful! But, honestly, all of this is due to imperfect code, of course :-)

The example above is exaggerated, but I just want to show that when writing code one cannot foresee everything. One doesn't know that in five years this code will be placed into a controller, ported to a 64-bit system, or adapted for a submarine. Some things are rather difficult to foresee.

Programmers have to maintain the code they already have. It may contain a lot of magic numbers and THOUSANDS of expressions where signed and unsigned types are used together, and many warnings may be disabled because one has to use LARGE old third-party libraries. No one will undertake a total refactoring of such projects just to make them more beautiful, more portable, and so on. And if someone insists on it, that person should be fired. :) In reality, you have to solve real tasks: add new functionality and maintain existing systems. If necessary, you port the code to 64 bits. But when you port code to a 64-bit system, it is exactly that task that gets solved, not the task of making the code maximally portable. And here we face the practical task of detecting particular magic numbers (but not all of them) and particular unsafe expressions with mismatched integer types (but not all of them).
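As a hedged illustration (these fragments are invented, not taken from any real project), here is roughly what such localized defects look like: a hard-coded pointer size and a 32-bit loop counter that can no longer cover the range of a 64-bit size_t.

```cpp
#include <cstddef>
#include <cstring>

// Magic number: the code assumes a pointer occupies 4 bytes. That was
// true on Win32, but on a 64-bit system only half of each slot is cleared.
void clear_pointers(void **table, size_t count)
{
  std::memset(table, 0, count * 4);   // should be count * sizeof(void *)
}

// Mismatched integer widths: 'i' is 32 bits, 'size' is 64 bits on x64.
// Once 'size' exceeds UINT_MAX, the counter wraps around and the loop
// never terminates; on a 32-bit system the same code worked fine.
void zero_fill(char *buffer, size_t size)
{
  for (unsigned i = 0; i < size; ++i) // 'i' should be size_t
    buffer[i] = 0;
}
```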

My position may seem wrong to many people, as if I were urging them to write bad code and then fix it in places with various crutches (which I happen to sell). But I am simply a practitioner. And I call many things by their names. :)

For the most part, program code is BAD. And it works more or less well only because it is lucky. Unfortunately, programmers are persistent in not admitting this. Any "shaking" of the code (a change of compiler, execution environment, etc.) reveals a layer of errors of a particular type. I understand that there are no "64-bit" errors as such. There are just errors in code, and they are always there. But some of them surface only on a 64-bit system. I tell developers about these errors and hope it will help them. And it is these errors that I call "64-bit errors".
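To make the term concrete, here is a hypothetical example (mine, not from the text above) of the kind of defect I mean: the code is dubious on any platform, but it only starts misbehaving once the execution environment becomes 64-bit.

```cpp
#include <cstdint>
#include <cstdio>

// Storing a pointer in a 32-bit int "works" on a 32-bit system, where
// int and a pointer happen to have the same size. On a 64-bit system
// the upper half of the address is lost, so the restored pointer no
// longer equals the original.
void remember_and_restore(double *p)
{
  int handle = static_cast<int>(reinterpret_cast<std::intptr_t>(p)); // truncation on x64
  double *restored =
      reinterpret_cast<double *>(static_cast<std::intptr_t>(handle));
  std::printf("%p vs %p\n",
              static_cast<void *>(p), static_cast<void *>(restored));
}
```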


