This is Words and Buttons Online — a collection of interactive #tutorials, #demos, and #quizzes about #mathematics, #algorithms and #programming.

Why is it ok to divide by 0.0?

This is the speed of light in meters per second.

299 792 458

It is a relatively small number. It has nine digits and one can easily write it down on a small sheet of paper.

But what if we only have about half of that sheet?

If we're ok with some small error, let's say 0.1%, then we can say that the speed of light is approximately this:

300 000 000

It's still nine digits but we can also write it as:

3×10⁸

Or, using engineering notation,

3e8

It's the same number, the same power of ten, it just takes fewer symbols to write it down.

Now 3e8 represents the speed of light. With some small error, of course.

But 3e8 with the 0.1% error also represents a lot of other numbers. In fact, it represents every number in the range of [3e8 - 3e5, 3e8 + 3e5]. That's 600 001 integer numbers. And an infinite number of real ones.

Digital computers mostly keep numbers in a similar way. They store a small number of binary digits for the number itself, and an even smaller number of binary digits for the exponent, the analog of the e-something, only it's a multiplying power of two. This way, they can operate on both very large and very small numbers. But since the number of digits is finite, there are numbers so large they cannot be written at all. We just don't have enough digits to write them down.

3728084...many more digits...901359572e1

And the same goes for the small numbers. There are numbers so small that we can't write them down in engineering notation.

1e-82883617...even more digits...46221667

With digital computers, we have to represent all those tiny numbers with:

0.0

Not the integer zero but the floating-point zero. Those are two different animals. Actually, three different animals, since there is also -0.0, which represents all the tiny negative numbers.

Anyway, just like 3e8 represents a range of numbers, 0.0 represents not a single number but every number that is so small you don't have enough digits to write down how small it is.

So the number represented by zero might be an actual zero, but it might very well not be. So when we ask a computer to divide a number by zero, we're actually telling it to divide by some arbitrarily small real number. And it does. And the result is some arbitrarily large number.

In floating-point numbers, there is a special number to represent all the arbitrarily large numbers. It's called “infinity”, but just as 0.0 doesn't represent only zero but all the ultra-small numbers, floating-point infinity doesn't represent infinity but rather all the numbers that are so large you don't have enough digits to write down how large they are.

But why do we have to keep our numbers small? Why is it important to fit things into some relatively small number of digits? Can't we just have all the digits we want?

And the answer is — the speed of light.

In computing, especially in high-performance computing, where you measure latencies in nanoseconds, light suddenly doesn't seem to travel all that fast.

How far do you think light travels in one nanosecond given the optimal conditions?