Its destruction was a criminal act committed by a single person, and it took, I guess, only a few hours. Tortured on the orders of King Artaxerxes, Herostratus confessed that the sole purpose of his insane deed had been a craving for long-lasting fame. To prevent him from achieving that goal, Artaxerxes had him executed and imposed severe punishments on anyone who mentioned his name. As we can see, he lost that battle.
What happened to Herostratus not only demonstrates the sharp limits of political powers perceived as absolute, but also offers lasting lessons on the law of entropy, applicable to the physical and social worlds alike. It takes only hours and the effort of a single person to destroy a work that took centuries and thousands of men to build. By definition, building is a slow and difficult process, while destroying what has been built is not. Indeed, the greater the technological power, the greater the destructive potential of social entropy.
The story of Herostratus challenges the two principles that have explained progress for millennia, and the political ideas based on them: the increase of human power and its massive distribution. The twentieth century, with its wars and industrial genocides, proved this conclusively, establishing the obsolescence of the idea of automatic and inevitable progress; but matters can get worse. Only one hundred years ago, the mere idea of a single man having the power to wipe out all human life from the face of the earth was unthinkable. A century later, the blink of an eye in human history, that man already exists. The fact that he has been democratically chosen by the citizens of his country, the United States of America, to make decisions that affect the lives of seven billion human beings threatens the concept of democracy itself and suggests, as Einstein argued after Hiroshima and Nagasaki, the need to control global-impact technologies on a global scale. This is not a problem of the US, but an inevitable product of the obsolescence of nationalism, that is, of the idea that a world where destructive technologies reach global scale can be reasonably governed by two hundred sovereign and independent states. Twenty-first-century technologies and nineteenth-century political institutions: what could possibly go wrong?
The present brings us terrible news. The first is that the president of the country whose nuclear arsenal can destroy civilization as we know it is a “sovereignist”: he invokes the absolute right of his country to do whatever suits it best, without regard for any limitation imposed by other countries, let alone by international agencies. Whoever tries will be branded a “globalist” by sovereignists like Bolton and treated as a public enemy. The second piece of bad news is that that same man, Mr. Trump, has just withdrawn his country from the INF Treaty, signed by Gorbachev and Reagan in 1987 and considered one of the pillars of global security. The reaction of Putin, a sovereignist of long standing, much less powerful but also much more unscrupulous, freewheeling and smarter than Trump, came swiftly: Russia has also repudiated the INF. As a result, the security of the whole world, and especially of Europe, is set back more than three decades, to 1986. Chapeau!
Now, let’s go back to Herostratus. Here is the thing: if human power keeps growing and spreading massively, and if that process has already produced a human being with almost absolute destructive power, how long can it take for a second one to turn up? And then a third? And then the twenty-ninth? And then the fifty-fourth? How long, therefore, before a modern delusional Herostratus decides to immortalize his name by becoming the most important human being in history through a single act of destruction? Doesn’t the rise of North Korea’s Kim Jong-un anticipate this? And what will happen when the growth and spread of technological knowledge put weapons of mass destruction, not necessarily nuclear or global in reach but of great impact, within reach not only of terrorist states but of terrorist organizations? What do we believe these guys, who currently film themselves cutting journalists’ throats and driving trucks into people strolling along a boulevard at sunset, are going to do?
But the problem is much bigger. It comprises not only technologies created to destroy, like the atomic bomb, but also those created “for the good” whose effects are, or can be, ambiguous and disruptive. When the knowledge needed, for example, to clone a human being, to reverse the aging process, or to expand the brain’s storage and processing capacity by implanting cybernetic devices, all achievements we are close to attaining, is actually acquired, who will supervise its application? Will these technologies be available only to those who can pay for them? Who will be responsible for their huge global impact, including consequences such as the uncontrolled growth of the world’s population? Aren’t we, as Yuval Harari fears, facing the possibility of transforming today’s social divisions into a biological gap that would split humanity into a hyper-intelligent elite and a useless, unproductive and, consequently, socially irrelevant and disposable mass?
Where is the ancient paradigm of greater power and its wider distribution, the basis of progress for millennia, taking us now, in a context of ambiguous technologies that can be used for both creation and destruction, and of an accelerating technological progress with unpredictable consequences? What power, a nation state or a member of GAFA (Google, Apple, Facebook, Amazon), will be the first to develop an artificial intelligence exponentially superior to human intelligence? And if it were decided that, for the sake of limiting these hard-to-manage risks, which concern the survival of the world as we know it and of democracy itself, some kind of regulation is necessary on these and other matters, such as the massive use of robots and algorithms set to replace human work, who would be qualified to make those decisions and enforce them within a framework as narrow as that of the nation states created in the nineteenth century? Would it be of any use, say, to forbid human cloning, aging reversal or brain implants within the territory of a single country? How hard would it be for the citizens of countries governed by sovereignists to access these procedures by simply catching a plane?
In a world where tax havens proliferate, wouldn’t we immediately witness the emergence of rogue states where these technologies, forbidden only in some countries, could be used? How many international conflicts would arise, and where would the territorial ideas of the sovereignists end up, in a world undergoing dematerialization and virtualization? Finally, given that these are decisions whose impact affects every human being and whose regulatory scope is necessarily global, don’t they imply the need for a global federalism in which countries remain internally sovereign yet make global decisions jointly? And don’t they call for the gradual creation of democratic institutions in which all human beings can participate, that is, of a global democracy?