Thursday, July 11, 2019

Zero probability


In a previous post I mentioned that an event can happen once or several times, although the probability of its happening is zero. The probability of an event is defined as the ratio of the number of favorable cases to that of possible cases. Therefore, if the number of possible cases is infinite, while that of favorable cases is finite, the probability turns out to be zero.
At first glance it seems incredible that an event with zero probability can actually happen. I think the matter will be clearer with a simple example. Two friends, A and B, are talking, and what they say is this:
A: If I ask you to choose a number between 1 and 100, what is the probability that you choose a specific number, such as 25?
B: 1/100, obviously.
A: If I ask you to choose a number between 1 and 1000, what is the probability that you choose 25?
B: 1/1000.
A: If I ask you to choose a number between 1 and 10,000, what is the probability that you choose 25?
B: 1/10,000.
A: If I ask you to choose a positive integer, what is the probability that you choose 25?
B: Zero, for the set of positive integers has infinitely many elements, and one divided by infinity is equal to zero.
A: Choose any number among all the positive integers and tell me which number you have chosen.
B: I choose 2^2500 − 1.
A: You have just brought about an event with zero probability.
Thinking about it a little, you'll see that the probability of choosing, among all the positive integers, any given finite set, however large, is also zero. For instance:
A: If I ask you to choose ten different numbers between one and one hundred, what is the probability that you choose precisely the numbers between 11 and 20? (their order does not matter)
B: 1 / 17,310,309,456,440
A: And if I ask you to choose ten different numbers among all the positive integers, what is the probability that you choose precisely the numbers between 11 and 20?
B: Zero.
I leave it to the curious reader to work out why the probability of choosing the numbers 11 to 20 among those from one to one hundred is precisely what B has stated.
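As a hint, here is a minimal Python sketch (my addition, not part of the original post): the figure B quotes is the number of ways of picking 10 different numbers out of 100 when their order does not matter, which the standard library can compute directly.

```python
# Count the 10-element subsets of {1, ..., 100}; one of them is {11, ..., 20}
from math import comb
from fractions import Fraction

ways = comb(100, 10)       # ways of choosing 10 different numbers, order irrelevant
print(ways)                # 17310309456440
print(Fraction(1, ways))   # probability of hitting one particular subset
```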
To finish this post, I'll propose a few more exercises for the reader. Whoever solves them is welcome to write a comment explaining how they arrived at the solution.
1. What is the last digit of 6^2500?
2. What is the penultimate digit of 6^2500?
3. What is the penultimate digit of 6^1,000,000?
4. What is the probability that the last digit of 6^n is odd?
5. What is the probability that the penultimate digit of 6^n is odd?
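If you want to experiment before answering, here is a small Python sketch (my own addition) that prints the last two digits of 6^n for small n; spotting the pattern, and deciding where the large exponents fall in it, is left to you.

```python
# Last two digits of 6**n; pow with a modulus stays cheap even for huge exponents
for n in range(1, 13):
    print(n, pow(6, n, 100))
```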
By Vincent Pantaloni, CC BY-SA 4.0, Wikimedia Commons

The same post in Spanish
Manuel Alfonseca
Happy summer holidays. See you by mid-August

Thursday, July 4, 2019

Mathematical theology

Ernst Zermelo
Ernst Zermelo (1871-1953) was a famous mathematician of the early twentieth century. Among his achievements, the following can be mentioned:
  • In 1899 he discovered Russell’s paradox, two years before Russell. Although he did not publish it, he did discuss it with his colleagues at the University of Göttingen, such as David Hilbert. Russell’s paradox proved that Cantor’s set theory is inconsistent, since it makes it possible to build the set of all sets that don’t belong to themselves. There are sets that don’t belong to themselves, such as the set of even numbers, which is not an even number. Others do belong to themselves, such as the set of infinite sets, which is an infinite set. Now we can ask ourselves: does the set of all the sets that don’t belong to themselves belong to itself? This question leads to a paradox: if it belongs to itself, then by its own definition it must not belong; and if it does not belong, then it must belong (a one-line formal statement is given after this list).
  • In 1904 he proved the well-ordering theorem as the first step towards proving the continuum hypothesis, the first of Hilbert’s 23 unsolved problems. The well-ordering theorem states that every set can be well-ordered, i.e. it can be given an order in which every non-empty subset has a least element. To prove it, he proposed the axiom of choice, which we will discuss later.
  • In 1905 he began to work on an axiomatic set theory. His system, improved in 1922 by Adolf Fraenkel, is a set of 8 axioms known today as the Zermelo-Fraenkel (ZF) system. Adding the axiom of choice to this system, we obtain the ZFC system, the one most widely used today in set theory.
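As a complement to the first point, Russell’s paradox can be stated in one line. This is the standard formal statement (added here for clarity; it is not Zermelo’s original notation):

```latex
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R
```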

Thursday, June 27, 2019

Travelling to the past?

St. Augustine, by Louis Comfort Tiffany
Lightner Museum
In his Confessions (Book XI, chapter 14), St. Augustine wrote these words, still valid today:
What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.
In the current situation of our scientific and philosophical knowledge, we still don’t know what time is.
  • For classical philosophy and for Newton’s science, time is a property of the universe. Therefore, time would be absolute.
  • For Kant, time is an a priori form of human sensibility (i.e. a kind of mental container to which our sensory experiences adapt).
  • For Einstein, time is relative to the state of rest or motion of each physical object. There is, therefore, no absolute time.
  • For the standard cosmological theory, it is possible to define an absolute cosmic time for every physical object, measured as the time elapsed from the Big Bang to the present.
  • For the A theory of time (using J. McTaggart’s terminology), the flow of time is part of reality. The past no longer exists. The future does not yet exist. There is only the present. If the A theory is correct, travel to the past is impossible, because you cannot travel to what does not exist.
  • For the B theory of time, the flow of time is an illusion. Past, present and future exist simultaneously, but for each of us the past is no longer directly accessible, and the future is not yet accessible. Einstein adopted the B philosophy of time. In a condolence letter to someone who had lost a loved one, he wrote the following:
The distinction between past, present and future is only a stubbornly persistent illusion.

Thursday, June 20, 2019

The symbol of death


Azrael, the angel of death
Evelyn De Morgan (1855-1919)
For an educated classical Greek, the number 8 represented death. Why? Let’s see what this funereal association was based on.
  1. Multiply each of the first 8 natural numbers by 8.
  2. Add the digits of each result.
  3. If the total has more than one digit, add its digits again.

Multiply     Add digits     2nd addition
1×8=8        8              8
2×8=16       1+6=7          7
3×8=24       2+4=6          6
4×8=32       3+2=5          5
5×8=40       4+0=4          4
6×8=48       4+8=12         1+2=3
7×8=56       5+6=11         1+1=2
8×8=64       6+4=10         1+0=1

Observe that we obtain the sequence 8, 7, 6, 5, 4, 3, 2, 1. For the Greeks, this sequence starts at 8 and descends until it dies at 1. That is why the number 8 represented death.
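For readers who would like to reproduce the table, here is a minimal Python sketch (my own addition, not part of the original post) that computes the repeated digit sum, usually called the digital root, of the first eight multiples of 8.

```python
# Digital root: add the digits of a number repeatedly until a single digit remains
def digital_root(n):
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

print([digital_root(8 * k) for k in range(1, 9)])  # [8, 7, 6, 5, 4, 3, 2, 1]
```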

Thursday, June 13, 2019

Algorithmic censorship and diversity in scientific research

Manuel Cebrian

Manuel Cebrian started working in the USA at MIT and, after a long journey that took him to the West Coast of the United States and then to Australia, he returned to MIT and is now in Berlin. He became famous after winning two important competitions organized by the government of the United States, related to the use of social networks to solve more or less complex problems:
  • DARPA Network Challenge (2009), which offered a $40,000 reward to the first team that managed to discover the location of ten red balloons placed at different locations in the United States by Pentagon personnel. Using a social network of their own creation, built during the month before the competition date, Cebrian’s team found all the balloons in less than nine hours and, despite receiving many reports of false sightings (fake news), won the competition against thousands of participating teams.
  • Tag Challenge (2012), sponsored by the US Department of State (DoS), which offered a $5,000 reward to the team that managed to locate five actors, identified by their photographs, who played the role of five criminal suspects and would remain visible for 12 hours in five European and American cities: New York, Washington, London, Bratislava and Stockholm. Although they only managed to locate three of the five suspects, Cebrian’s team won the competition again, despite the unethical behavior of other participating teams, one of which copied their website to deceive potential informants into sending their information to a different group’s site.
During his stay in Australia, Cebrian worked on forecasting the negative effects of natural catastrophes by analyzing data provided by social networks (essentially Facebook and Twitter). The bad news is that, increasingly in recent years, the algorithms that estimate the general interest of news and messages in social networks have been influencing whether and how that content propagates over the networks. These algorithms decide whether messages and news items will be more or less visible, and whether users will spend more or less time on the platform.
In light of this, Cebrian and his colleagues shifted their research on social networks to the way in which Artificial Intelligence (AI) affects human communication and cooperation. The theoretical analysis of computational models, inspired by the changes made in the large platforms, made it possible to conjecture that the functioning of social networks can be affected by algorithms capable of amplifying or weakening information. As a result of their use, messages and news originated by certain users reach a smaller number of followers, which could be considered a form of algorithmic censorship.
Concerned about the growing trend towards control of the Internet by large companies, Cebrian and his team have made a thorough analysis of the publications in the field of Artificial Intelligence, and of the cross-references between this and other fields of science and the humanities, between 1950 and 2017. They found that, although the number of publications on AI has increased steadily, its mutual impact with other sciences seems to be decreasing. Just four fields (Computer Science, Mathematics, Geography and Engineering) maintain a level of cross-referencing higher than a randomized distribution of the data would predict, and just one (Computing) is increasing, although at a level below that reached during the seventies and eighties.
Another result of the analysis described in the article just mentioned is that research on AI is increasingly dominated by a few research institutions, and the most cited publications appear in a small number of journals and conferences. This decrease in diversity, which has reached 30% since 1980, affects authors, articles and citations, and suggests that there may be cross-citing research hubs with well-defined preferences. On the other hand, the preponderance of large companies in these hubs (Google, Microsoft, Facebook) could change the goal of research: instead of finding solutions to technological problems important for human beings, the objective would become collecting customer data and selling them to advertisers, thus controlling the purchase impulses of society.
Let us look at one of the most important and worrisome conclusions of the article:
The gap between social science and AI research means that researchers and policymakers may be ignorant of the social, ethical and societal implications of new AI systems.
Martín López Corredoira
This decrease in diversity, combined with censorship of what does not fit the orthodox model, also affects other fields of science. Consider, for instance, the field of cosmological physics (cosmology). The publication with the highest impact is now arXiv, to the point that it is currently very difficult to publish there, unless you belong to an important institution, and totally impossible if the article differs in any way from the ΛCDM model, the current standard for cosmology, as denounced in this book:
Nobody should have a monopoly of the truth in this universe. The censorship and suppression of challenging ideas against the tide of mainstream research, the blacklisting of scientists, for instance, is neither the best way to do and filter science, nor to promote progress in the human knowledge. The removal of good and novel ideas from the scientific stage is very detrimental to the pursuit of the truth. There are instances in which a mere unqualified belief can occasionally be converted into a generally accepted scientific theory through the screening action of refereed literature and meetings planned by the scientific organizing committees and through the distribution of funds controlled by "club opinions". It leads to unitary paradigms and unitary thinking not necessarily associated to the unique truth. 

The same post in Spanish
Manuel Alfonseca

Thursday, June 6, 2019

Will we live 500 years?

James H. Schmitz

A few years ago, especially in 2015 and 2016, news items began to appear in the mass media announcing that our life expectancy was about to rise at an accelerating rate, so that we would soon achieve immortality. At that time I wrote three posts in this blog (this, this and this) where I declared myself skeptical about these forecasts. In another post, also published in 2016, I distinguished between two very different concepts:
  • Life expectancy: the average duration of human life. Although it depends on the age of the person, the value usually given corresponds to the moment of birth. Life expectancy has been growing steadily in recent centuries, mainly due to advances in medicine, although recent data from the UN seem to indicate that this increase is slowing down.
  • Longevity: the maximum duration of human life. Its value seems to be around 120 years, and no significant increase has been noted in recent decades. In fact, only two people were thought to have exceeded that longevity, the Japanese Shigechiyo Izumi and the French Jeanne Calment, but both cases are currently in doubt. The former lost his title of longest-lived man in the world when it was discovered that his birth record could actually correspond to an older brother of the same name who died quite young. In the case of the French woman, there is a controversial Russian study asserting that her daughter could have taken over her mother’s identity when the latter died, supposedly in 1934.

Thursday, May 30, 2019

NASA goes back to space

Buzz Aldrin on the Moon
NASA Images at the Internet Archive
In the early 1960s, the Soviet Union took the lead in the space race. At the end of that decade, the United States took over with the Apollo Project, which in 1968 began launching manned flights (Apollo 7), in 1969 put two men on the Moon for the first time (Apollo 11), and until December 1972 made five more lunar landings, the last of which was Apollo 17. Since then, mankind has not returned to the Moon, although there have been several unmanned automatic lunar landings.
From the 1980s on, NASA changed tactics and began using space shuttles for its manned flights. These craft differed from the previous ones in that the shuttle was reusable: when returning to Earth, it landed much like an airplane, rather than splashing down in the sea like the capsules of the Apollo project. In all, five shuttles were built, named Columbia, Challenger, Discovery, Atlantis and Endeavour.