Studying algorithms is not always easy, especially for those who are new to programming. We have put together a collection of services aimed at helping you understand how algorithms work.
In this article I'm going to discuss a problem few people think about. Computer simulation of various processes is becoming more and more widespread. This technology is wonderful because it saves time and materials that would otherwise be spent on countless chemical, biological, physical, and other experiments. A computer simulation of airflow over a wing section can significantly reduce the number of prototypes that need testing in a real wind tunnel. Numerical experiments are trusted more and more these days. However, dazzled by the triumph of computer simulation, few people notice the problem hiding behind it: the growth of software complexity. People treat computers and computer programs merely as a means to obtain the results they need. I'm worried that very few know, or care, that growth in software size leads to non-linear growth in the number of software bugs. It's dangerous to treat a computer as just a big calculator. So that's what I think, and I need to share this idea with other people.
Modern programmers live in a very special period of time, when software is penetrating literally every sphere of human life and is installed on countless devices that are part of our everyday lives. Nobody is surprised by software in fridges, watches, and coffee machines. At the same time, people's dependence on smart technology keeps growing. The inevitable consequence: software reliability becomes priority number one. It's hard to scare anyone with a malfunctioning coffee maker, although it can do plenty of harm (liters of boiling coffee flowing over your white marble countertop...). But the thought of growing requirements for software quality is genuinely important, so let's talk about errors in code that have led to significant losses of time and money.
In scientific computation we use floating-point numbers a lot. This article is a guide to picking the right floating-point representation for you. In most programming languages there are two built-in precisions to pick from: 32-bit (single precision) and 64-bit (double precision). In the C family of languages these are known as float and double, and those are the names I will use in this article. There are other precisions (quad, etc.); I won't cover these here, but a lot of the discussion makes sense for quad too. So to be clear: I will only talk about 32-bit and 64-bit IEEE 754 here.
The author of the blog "banterly.net" was recently looking through his university-days archive and came across the following problem, which he had created for himself while trying to understand how C++ inheritance works. It was not obvious to him back then, and he remembers that even for TAs and some developers it was not very clear what the deal was, with some getting the answer right but not the why. He still finds it intriguing today, so he decided to share it, hoping it may be intriguing for others too.
Perhaps readers remember my article titled "Last line effect". It describes a pattern I once noticed: in most cases, programmers make an error in the last line of similar text blocks. Now I want to tell you about a new interesting observation. It turns out that programmers tend to make mistakes in functions that compare two objects. This statement may look implausible, but I'll show you a great number of example errors that may shock you. So, here is a new study; it will be quite amusing and scary.