Thursday, June 13, 2019

Algorithmic censorship and diversity in scientific research

Manuel Cebrian

Manuel Cebrian began his research career in the United States at MIT; after a long journey that took him to the West Coast and then to Australia, he returned to MIT and is now in Berlin. He became well known after winning two important competitions, organized by the United States government, on the use of social networks to solve complex problems:
  • DARPA Network Challenge (2009), which offered a $40,000 reward to the first team to discover the locations of ten red weather balloons placed at different points across the United States by Pentagon personnel. Using a social network of their own creation, built during the month before the competition, Cebrian's team located all the balloons in under nine hours. Although they received many reports of false sightings (fake news), they won the competition against thousands of participating teams.
  • Tag Challenge (2012), sponsored by the US Department of State (DoS), which offered a $5,000 reward to the team that located five actors, identified by their photographs, who played the role of criminal suspects and remained visible for 12 hours in five American and European cities: New York, Washington, London, Bratislava and Stockholm. Although they located only three of the five suspects, Cebrian's team won again, despite the unethical behavior of some competing teams, one of which copied their website to deceive potential informants into sending their tips to a different group.
During his stay in Australia, Cebrian worked on forecasting the negative effects of natural catastrophes by analyzing data from social networks (essentially Facebook and Twitter). The bad news is that, increasingly in recent years, the algorithms that estimate the general interest of news and messages on social networks influence whether and how those items propagate over the networks. These algorithms decide how widely each message or news item will be seen, and how long users will stay on the platform.
In light of this, Cebrian and his colleagues redirected their research on social networks toward the way in which Artificial Intelligence (AI) affects human communication and cooperation. Theoretical analysis of computational models, inspired by the changes introduced in the large platforms, supports the conjecture that the functioning of social networks can be shaped by algorithms capable of amplifying or weakening information. As a result, messages and news posted by certain users reach fewer followers, which could be considered a form of algorithmic censorship.
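The mechanism can be illustrated with a toy simulation (an illustrative sketch, not the actual model studied by Cebrian's group; all parameters and group labels are invented): every message receives an engagement score, some sources get an algorithmic boost, and the platform shows only the top-scoring fraction. Even a modest boost makes the unfavored sources nearly invisible.

```python
import random

random.seed(42)  # deterministic toy run

def simulate_feed(n_users=1000, boost=0.3, visible_fraction=0.2):
    """Toy ranking model: each user posts one message with a random score.
    'Favored' users get an algorithmic boost; only the top-scoring fraction
    of all messages is shown. Returns the visibility rate of each group."""
    half = n_users // 2
    favored = [random.random() + boost for _ in range(half)]
    others = [random.random() for _ in range(half)]
    scores = [(s, "favored") for s in favored] + [(s, "other") for s in others]
    scores.sort(reverse=True)
    shown = scores[: int(len(scores) * visible_fraction)]
    favored_shown = sum(1 for _, group in shown if group == "favored")
    other_shown = len(shown) - favored_shown
    return favored_shown / half, other_shown / half

fav_rate, other_rate = simulate_feed()
print(f"visibility of favored sources: {fav_rate:.0%}")
print(f"visibility of other sources:   {other_rate:.0%}")
```

With these (invented) numbers, the favored group ends up several times more visible than the rest, even though the underlying message quality is drawn from the same distribution: the ranking step alone produces the asymmetry.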
Concerned about the growing trend toward control of the Internet by large companies, Cebrian and his team have made a thorough analysis of publications in the field of Artificial Intelligence, and of the cross-references between this and other fields of science and the humanities, between 1950 and 2017. They found that, although the number of publications on AI has increased steadily, its mutual impact with other sciences seems to be decreasing. Just four fields (Computer Science, Mathematics, Geography and Engineering) maintain a cross-citation level above a randomized baseline, and just one (Computing) is increasing, although at a level below that reached during the seventies and eighties.
Another result of the analysis described in the article just mentioned is that research on AI is increasingly dominated by a few research institutions, and the most cited publications appear in a small number of journals and conferences. This decrease in diversity, which has reached 30% since 1980, affects authors, articles and citations, and suggests the existence of cross-citing research hubs with well-defined preferences. Moreover, the preponderance of large companies in these hubs (Google, Microsoft, Facebook) could change the goal of research: from finding solutions to technological problems important for human beings, to collecting customer data and selling it to advertisers, thus steering the purchasing impulses of society.
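One common way to quantify this kind of diversity is the normalized Shannon entropy of the citation distribution: 1.0 when citations are spread evenly across institutions, and values near 0 when a few hubs dominate. This is a minimal sketch of the idea, not the specific metric used in the article, and the institution counts below are invented for illustration.

```python
import math

def normalized_entropy(counts):
    """Shannon entropy of a count distribution, normalized to [0, 1].
    1.0 = perfectly even spread; near 0 = a few hubs receive almost all."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(probs)) if len(probs) > 1 else 0.0

# Invented example: citations received by eight institutions
even_spread = [120, 110, 100, 95, 105, 98, 102, 110]   # fairly even
hub_dominated = [900, 650, 400, 60, 40, 30, 25, 20]    # a few hubs dominate

print(f"diversity (even spread):   {normalized_entropy(even_spread):.2f}")
print(f"diversity (hub-dominated): {normalized_entropy(hub_dominated):.2f}")
```

The even distribution scores close to 1.0, while the hub-dominated one drops markedly, which is the sense in which a concentration of research in a few institutions registers as a loss of diversity.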
Let us look at one of the most important and worrisome conclusions of the article:
The gap between social science and AI research means that researchers and policymakers may be ignorant of the social, ethical and societal implications of new AI systems.
Martín López Corredoira
This decrease in diversity, combined with censorship of whatever does not fit the orthodox model, also affects other fields of science. Consider, for instance, the field of cosmological physics (cosmology). The publication venue with the highest impact is now arXiv, to the point that it is currently very difficult to publish there unless you belong to an important institution, and practically impossible if the article departs in any way from the ΛCDM model, the current standard in cosmology, as denounced in this book:
Nobody should have a monopoly of the truth in this universe. The censorship and suppression of challenging ideas against the tide of mainstream research, the blacklisting of scientists, for instance, is neither the best way to do and filter science, nor to promote progress in the human knowledge. The removal of good and novel ideas from the scientific stage is very detrimental to the pursuit of the truth. There are instances in which a mere unqualified belief can occasionally be converted into a generally accepted scientific theory through the screening action of refereed literature and meetings planned by the scientific organizing committees and through the distribution of funds controlled by "club opinions". It leads to unitary paradigms and unitary thinking not necessarily associated to the unique truth. 

The same post in Spanish
Thematic Thread on Natural and Artificial Intelligence: Previous Next
Thematic Thread on Science in General: Previous Next
Manuel Alfonseca
