Thursday, May 31, 2018

The 528th digit of Pi


Gottfried Wilhelm von Leibniz
Two posts ago I mentioned that the best simple fractional approximation of the value of π is 355/113 = 3.14159292..., which was discovered in the West in the 16th century. Later, better approximations were obtained, no longer in the form of a fraction, but as the sum of a series. Several infinite series of terms are known whose sum is π, so it is enough to add a sufficiently large number of terms to obtain as many digits of π as we want, as long as we have time to do the sums. The first to propose an infinite expression of this kind (actually an infinite product) was the French mathematician François Vieta. As his formula was quite complicated, we give here the much better known series proposed in 1673 by the German mathematician and philosopher Gottfried Wilhelm von Leibniz:

π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...
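To get a feeling for how slowly this sum approaches π, here is a minimal Python sketch (the function name leibniz_pi is just a label used here):

```python
import math

def leibniz_pi(n_terms):
    """Approximate pi by summing the first n_terms of the Leibniz series."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 1_000, 100_000):
    approx = leibniz_pi(n)
    print(f"{n:>7} terms: {approx:.10f}  (error {abs(math.pi - approx):.1e})")
```

Even 100,000 terms give only about five correct decimal digits.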
The more terms of this series we add, the closer we come to the value of π. The following table shows the advances made over time in the calculation of successive approximations of this number, using different series, formulas or procedures.

Year   Author              Number of digits of π
1593   Vieta               17
1615   Ludolph van Ceulen  35
1717   Abraham Sharp       72
1844   Zacharias Dase      200
1873   William Shanks      707 (only 527 correct)

Let us consider the calculation by William Shanks. After several years of computation, he obtained 707 digits of π, breaking the previous record. For three quarters of a century, nobody could improve on it. In 1949, an electronic computer was used for the first time to calculate the digits of π. It was then found that Shanks had made a mistake at digit 528: he said it was 5, but the computer, which could not be wrong, showed it was actually 4. From that point on, all the remaining digits computed by Shanks, up to digit 707, were wrong. Fortunately Shanks never knew, as he had died in 1882, nine years after completing his calculation.
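Today anyone can recheck that digit on a home computer. Here is a minimal Python sketch, using Machin's arctangent formula (16*arctan(1/5) - 4*arctan(1/239)) with plain integer arithmetic; it is a deliberately simple approach, not necessarily how the historical calculations were organized:

```python
def pi_digits(n_decimals):
    """Return pi as the string '3' followed by n_decimals decimal digits."""
    one = 10 ** (n_decimals + 10)              # fixed-point scale, 10 guard digits

    def arctan_inv(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., in fixed point
        total = term = one // x
        k, x2 = 3, x * x
        while term:
            term //= x2
            total += term // k if k % 4 == 1 else -(term // k)
            k += 2
        return total

    pi_fixed = 4 * (4 * arctan_inv(5) - arctan_inv(239))   # Machin's formula
    return str(pi_fixed)[:n_decimals + 1]

decimals = pi_digits(600)[1:]                  # keep only the decimals, drop the '3'
print("528th decimal of pi:", decimals[527])   # position 528; the 1949 computation gave 4
```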
Now we consider the following question:
Between 1873 and 1949, what was the value of the 528th digit of π? Was it 4 or was it 5?
It looks like a stupid question, but consider:
- If realists are right and mathematics exists outside the human mind, the value of the 528th digit of π was always 4. Shanks simply made a mistake when he computed it.
- If anti-realists are right, and the value of π makes no sense outside the human mind, then the 528th digit of π had no value before 1873, was 5 between 1873 and 1949, and changed to 4 in 1949. Shanks did not make a mistake; he simply gave π a slightly different value from the one we assign it today.

The reader must decide which of the two possibilities looks more reasonable.
The next question is: Why do we need to know so many digits of the value of π? Do we need them to compute the diameter of a circle, knowing its circumference? Let us look at a practical example.
Suppose that the Earth were a perfect sphere, with a circumference equal to that of the meridian going through Paris: 40,000 km (this was the first definition of the meter). If we divide 40,000,000 by the value of π, we get the diameter of the Earth, which is equal to 12,732,395 meters. If we use the simple approximation by Archimedes (22/7), we get 12,727,272 meters, i.e. an error of a little over 5 kilometers.
If we use the best simple fractional approximation (355/113), the error would be of the order of one meter, even though this fraction only provides six exact decimal digits of π. If we use the value of π with 10 exact decimals (3.1415926536), the error would be about 40 microns. And if we go to 20 exact digits (3.14159265358979323846), we would get the diameter of the Earth with an error of the order of a few femtometers, about the size of elementary particles, much smaller than atoms. Does anyone think we need to know the diameter of the Earth with such precision? So, why waste time calculating more digits of π?
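These figures are easy to check with a few lines of Python; the 35-decimal reference value of π hard-coded below is simply treated as exact for the comparison:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
PI_REF = Decimal("3.14159265358979323846264338327950288")   # reference value of pi
CIRCUMFERENCE = Decimal(40_000_000)                          # meters (Paris meridian)
true_diameter = CIRCUMFERENCE / PI_REF

for label, approx in [
    ("22/7",        Decimal(22) / Decimal(7)),
    ("355/113",     Decimal(355) / Decimal(113)),
    ("10 decimals", Decimal("3.1415926536")),
    ("20 decimals", Decimal("3.14159265358979323846")),
]:
    error = abs(CIRCUMFERENCE / approx - true_diameter)
    print(f"{label:12s} error = {error:.3E} m")
```

The printed errors are roughly 5 km, 1 m, 40 microns and 1e-14 m, matching the figures above.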
One of the reasons why we have continued computing digits of π is to apply statistical randomness tests to those digits. In fact, the digits of π seem to be random: any sequence of digits that we happen to test appears among the digits of π about as often as chance would predict, with an expected frequency that depends only on the length of the sequence (each additional digit divides it by ten). Let us look at a few examples:
- If the digits of π meet the conditions of randomness, every one-digit number should appear approximately 10% of the time in any arbitrarily large sequence of digits of π. Thus, among the first billion digits of π, each one-digit number should appear about 100 million times. Well: 0 appears 99,993,942 times; 1, 99,997,334; 2, 100,002,410; 3, 99,986,911; 4, 100,011,958; 5, 99,998,885; 6, 100,010,387; 7, 99,996,061; 8, 100,001,839; and 9, 100,000,273. As you can see, all these numbers are very close to the expected 100 million.
- The same happens with the 100 possible two-digit sequences (from 00 to 99), each of which appears approximately 10 million times. The randomness conditions are also met by the 1,000 three-digit sequences (from 000 to 999), each of which appears approximately one million times. And so on (see the counting sketch below).
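To see the same statistics on a smaller scale, here is a Python sketch that computes 100,000 decimals with a fixed-point Machin helper (the same kind of routine as in the earlier sketch) and tallies single digits and two-digit pairs:

```python
from collections import Counter

def pi_digits(n_decimals):
    """Return pi as the string '3' followed by n_decimals decimal digits."""
    one = 10 ** (n_decimals + 10)              # fixed-point scale with guard digits
    def arctan_inv(x):
        total = term = one // x
        k, x2 = 3, x * x
        while term:
            term //= x2
            total += term // k if k % 4 == 1 else -(term // k)
            k += 2
        return total
    return str(4 * (4 * arctan_inv(5) - arctan_inv(239)))[:n_decimals + 1]

decimals = pi_digits(100_000)[1:]              # may take a number of seconds
singles = Counter(decimals)
pairs = Counter(decimals[i:i + 2] for i in range(len(decimals) - 1))
for d in sorted(singles):
    print(d, singles[d])                       # each count should be near 10,000
print("two-digit counts between", min(pairs.values()), "and", max(pairs.values()))
```

Each single digit should appear close to 10,000 times and each two-digit pair close to 1,000 times, with only the statistical fluctuations one would expect from random digits.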
Andrey Kolmogorov
If the digits of π satisfy the conditions of randomness, it would seem reasonable to think that the value of π should be random. Nothing could be further from the truth. There is another measure, Kolmogorov complexity, which analyzes the randomness of a number by checking to what extent it can be compressed. Well, the value of π can be compressed enormously, for any algorithm used to calculate billions of digits of π is much shorter than the digits it produces, and represents them exactly (or would represent them exactly if we let it run for an infinite time). So in π we have the apparent contradiction of a number whose digits look random, but in fact are not, for their value is perfectly determined.
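One concrete way to see this contrast: a general-purpose compressor cannot squeeze a statistically random string of decimal digits much below log2(10) ≈ 3.3 bits per digit, while the dozen lines of the pi_digits helper above describe, in principle, infinitely many digits exactly. A small sketch, reusing that helper:

```python
import zlib

digit_bytes = pi_digits(100_000)[1:].encode()   # pi_digits as defined in the sketch above
packed = zlib.compress(digit_bytes, 9)
print(len(digit_bytes), "bytes of digits ->", len(packed), "bytes after zlib",
      f"({8 * len(packed) / len(digit_bytes):.2f} bits per digit)")
```

The program is a few hundred bytes long; the digits it can produce are unbounded, and that gap between output and program is precisely the kind of compressibility Kolmogorov complexity captures.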
Computer scientists are also interested in calculating many digits of π to compare the efficiency of different types of computers. In fact, it is possible to compare them by measuring the number of exact digits of π they can calculate in a given time, using the same algorithm. Different algorithms can also be compared by running them on the same computer. Finally, the calculation of π has been used as a test problem, to verify that new computers work properly, without making mistakes.
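As a toy version of such a benchmark, one can time the pi_digits helper above for a fixed number of digits and compare the elapsed time across machines (5,000 decimals is an arbitrary choice):

```python
import time

start = time.perf_counter()
pi_digits(5_000)                        # same fixed workload on every machine compared
elapsed = time.perf_counter() - start
print(f"5,000 decimals of pi in {elapsed:.3f} s")
```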

The same post in Spanish
Thematic Thread on Mathematics: Previous Next
Manuel Alfonseca
