
The role of "Fibonacci numbers" in the history of parallel programming

Dec 28 2009

Fibonacci numbers are the elements of the number sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., where each number equals the sum of the two preceding ones. Fibonacci numbers can be seen in many natural objects, such as the proportions of a body, and the Fibonacci spiral appears in the shell of a mollusk.
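The definition above can be sketched in a few lines of code, generating the sequence iteratively:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    sequence = []
    a, b = 1, 1
    for _ in range(n):
        sequence.append(a)  # each new element is the sum of the previous two
        a, b = b, a + b
    return sequence

print(fibonacci(9))  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```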

These Fibonacci numbers have given me no rest lately! Whatever material on parallel programming I encounter, I find these numbers everywhere. It seems as if all of parallel programming were about nothing but calculating Fibonacci numbers.

The calculation of Fibonacci numbers appears in many printed and electronic articles. Even the Wikipedia article on parallel computing contains an example of calculating them.

What is the favorite example among Cilk developers? Calculating Fibonacci numbers, of course. They appear in the Cilk booklet "Parallelism for the Masses" and are discussed in the "Cilk Reference Manual". To put it simply, they are everywhere.

Fibonacci numbers are also used to demonstrate the T-system, a tool for automatic dynamic parallelization developed for the SKIF supercomputer program of the Union State of Russia and Belarus: "The Open TS parallel programming system".

This list could go on and on. It looks like a real case of Fibomania :)

Actually, it is understandable why Fibonacci numbers are mentioned so often. They are a simple and clear example demonstrating the principles of parallelization, but they are referred to far too frequently. There are, of course, other examples used to demonstrate the parallelization of algorithms, but they are usually solutions to some mathematical task. That is bad, and I will explain why.

Strange as it may seem, Fibonacci numbers and other similar mathematical examples have acted as a brake on the popularization of parallel programming. All these articles with examples of parallel sorting and mathematical calculations create the impression that parallel programming is something remote, a domain of mathematicians solving their specific tasks.

Instead of demonstrating how easy and efficient it can be to parallelize a program, examples with Fibonacci numbers make application programmers feel that parallelism has no relation to their programs. A programmer thinks not in mathematical algorithms but in terms of working with a GUI, handling files, or "here I need to clear the array". Perhaps there is a need to speed up a software package. But none of this seems related to parallelization, because one cannot find in one's own project the algorithms described in articles and books as candidates for parallelization.

Multi-core systems give developers many ways to improve the efficiency of their programs. But the programming literature often approaches the subject from the extreme of parallelizing and reworking computational algorithms. There are many other levels at which parallelization can be applied, and developers should be told about them and given the corresponding examples. I can give one example from my own practice right now.

One of the steps in developing our tool PVS-Studio was making use of multi-core systems. Static code analysis of large projects may take hours, so processing speed is an important characteristic of such tools.

We began discussing how to parallelize our system and immediately went in the wrong direction without realizing it. The reason was a way of thinking shaped by information resources on parallelization, which focus on technologies and methods for parallelizing various algorithms. Our first questions were: which technology to choose, OpenMP or something else; how to parallelize the traversal of a syntax tree; and other silly things of that kind. Yet the solution lay on the surface and turned out to be elegant and simple to implement.

Fortunately, parallelizing the static analysis algorithms themselves was a difficult task, and while reflecting on it we rose to a higher level of abstraction. We do not need to process one source file quickly; a single file is processed quickly enough as it is. The problem is processing many files. Well, let us do that in parallel! Let us simply launch several analyzers at once (create several processes) and gather the information they produce. We do not need OpenMP, and we do not need to search for bottlenecks or measure the efficiency of parallelization.
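The idea can be sketched in a few lines. This is not the actual PVS-Studio code; `analyze_file` is a hypothetical placeholder standing in for launching one analyzer instance on one file, and the point is only the shape of the solution: independent files, a pool of worker processes, results gathered at the end.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_file(path):
    # Placeholder: in the real tool this would run a separate
    # analyzer process on one source file and collect its report.
    return (path, f"report for {path}")

def analyze_project(paths):
    # Coarse-grained parallelism: each file is independent, so we
    # simply process several files at once and merge the results.
    # No OpenMP, no shared state, no bottleneck hunting.
    with ProcessPoolExecutor() as pool:
        return dict(pool.map(analyze_file, paths))

if __name__ == "__main__":
    reports = analyze_project(["a.cpp", "b.cpp", "c.cpp"])
    print(sorted(reports))
```

Compare this with the fork-join Fibonacci sketch: here there is nothing to synchronize and nothing to tune, because the parallelism lives one abstraction level above the algorithm.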

The solution described was implemented and works well to this day. Does it look obvious? Absolutely. I will not pretend it took us a long time to find. But in other tasks the answer may not be so obvious. You can easily get carried away searching for bottlenecks in a program, parallelizing them, and hunting for errors in them, and thus forget to look at the program from a higher level. Fibonacci-style examples 'help' you do exactly that. Programming parallel systems is a much more diverse subject, but this diversity is often, and unfairly, ignored in favor of one particular technology or parallelization method.

What I am hinting at is that before you begin reworking algorithms, you should look for methods of "simple parallelization". Sometimes it is as simple as in the example above. The same approach of processing files separately could be used in an image conversion package. In other systems there may be no such ready-made objects to process in parallel, but you may be able to separate them out into distinct entities. Most importantly, do not forget to look at your program from above.
