The Metaverse

subtitled: "The Convergence of Virtual Reality and Blockchain Technologies"

by piero scaruffi
(Copyright © 2021 Piero Scaruffi)


This primer includes the following articles:
  • Fundamentals
  • Philosophy
  • Blockchain Technology
  • A brief History of Metaverses
  • Alternative Worlds in Literature and Cinema
  • Utopia
  • The Zeitgeist from Cyborgs to Cybernauts
  • The Interface
  • The Future of Writing
  • Intelligent Metaverses

Fundamentals

A metaverse is a virtual space where large numbers of people can gather to play, work, socialize and trade. Potentially, the metaverse will become a digital/virtual 1-to-1 map of the real world. Your identity in the metaverse is an avatar that interacts with other avatars. (The term "avatar" is borrowed from ancient Indian religions, in which an "avatar" is a mortal incarnation of a deity on Earth). The difference between the traditional Internet and the metaverse is that "things" (user-generated content) have a location in the metaverse, just like in the real world (hence the metaverse is a "spatial Internet").

The term "metaverse" was introduced by Neal Stephenson in his science fiction novel "Snow Crash" (1992), coincidentally in the same year when Cynthia Dwork at IBM invented the proof-of-work algorithm (the foundation of blockchain technology), in the same year when the CAVE opened at the University of Illinois, and in the same year when Nicole Stenger presented the first immersive movie, "Angels" (1992).

The first thing to notice is that the metaverse is not only a space on the Internet, but it is also a "shared" space, a space that many "avatars" share. These avatars are not limited to accessing it but they actually shape it. The avatars create a shared space. Needless to say, this reflects what happens in societies where individuals create a town, their shared space.

Metaverses are community-building software.

User-generated content is a key feature of a metaverse, but, unlike in Web 2.0, it is the users' avatars, not the users themselves, that shape the metaverse by creating and trading content.

An important premise is that right now there isn't just one metaverse, the way there is only one Internet: there are many attempts at creating a metaverse, from the generation of Second Life (2003) to the blockchain-based generation of The Sandbox (2014) and Decentraland (2017).

A metaverse generally implements an economic system that is modeled on the economic system of the real world. Avatars can make things, sell things and buy things. Each metaverse has a kind of digital money that can be used for such trades, and generally this is a cryptocurrency based on blockchain technology.

The early attempts at molding a metaverse were made by games, typically games with ambitions of "virtual reality", although they rarely required the use of VR headsets. Of course, if one wants to truly replicate the experience of the three-dimensional real world on the two-dimensional screen, a sensory immersive experience would be a must; but most metaverses are content with simply replicating the processes and rituals of the real world, minus the bodily experience.

The underlying technology of the metaverse owes a lot to videogames. It is debatable whether we would have metaverses without the rise and progress of multi-user videogames, notably MUDs (multi-user dungeons) and MOOs (object-oriented MUDs). In fact, so far the difference between MUDs/MOOs and metaverses is mainly one of ambition: a metaverse typically wants to control all digital experiences, not just a game's set of rules.

The metaverse is inherently limited: the two-dimensional "flat web" of the screen on which the avatar "lives" is obviously a mere approximation of the three-dimensional real world, just like a circle can only approximate a sphere. Total sensory immersion is a given in the real world, whereas in a metaverse it can only be achieved through artifices and stratagems such as VR headsets and clever programming. The individual is a three-dimensional body of flesh within a three-dimensional universe whereas the avatar is a two-dimensional picture made of pixels on a screen. The (embodied) individual is not replaced by the (disembodied) avatar: the individual is given a disembodied "second life" in an alternative universe, the metaverse. No matter how closely it mimics the real universe, the metaverse is a completely different kind of universe. In fact the common laws of physics may cease to apply in a metaverse, replaced by alternative laws of physics (for example, in Second Life avatars can fly).

The big limitation of metaverses has always been that a metaverse requires vast computation power. A new model of computation is required to develop a large-scale metaverse. That model could be "computation on the chain"...


Philosophy

Historically, the metaverse of "Snow Crash" came out about a decade after cyberpunk literature had been inaugurated by William Gibson's novel "Neuromancer" (1984) and just one year after the debut of Tim Berners-Lee's World-wide Web. It was preceded by several narratives that toyed with the notion of simulation, from Daniel Galouye's "Simulacron-3" (1964) to Paul Verhoeven's film "Total Recall" (1990). The idea of simulated life spilled over from science fiction into philosophy and the humanities in general. The physicist Frank Tipler in his book "The Physics of Immortality" (1994) even "calculated" that evolution would end with a simulation of all the conscious beings who ever existed, i.e. the resurrection of all the dead (the "omega point" theorized by Pierre Teilhard in the 1930s).

The technical difference between William Gibson's cyberspace of the 1980s and Neal Stephenson's metaverse of the 1990s was that cyberspace was an alternative universe whereas the metaverse is tightly coupled to the real one. But the fundamental difference was one of mood: Gibson was a nostalgic existentialist lost in a vast and hostile habitat (the hacker as a hiker in the wilderness) compared with Stephenson's enthusiastic endorsement of a busy videogame-inspired hyper-reality (the avatar as an everyman going about an ordinary life). However, Gibson's cyberspace looked like the modern capitalistic world whereas Stephenson's looked like a chaotic anarchic medieval dystopia of warlords and plagues.

The 1990s witnessed a lively debate about the fact that information was being "dematerialized" by the Internet, even before the advent of Wikipedia (2001). The mathematicians involved in inventing the computer had long reached the same conclusion, although in different directions: Alan Turing (the universal machine), Norbert Wiener (cybernetics) and Claude Shannon (information theory) all published their main works before the debut of the first commercial electronic computers (1951). In the 1960s the young school of Artificial Intelligence envisioned "expert systems" that encapsulated human knowledge with no need for bodies and thereby "cloned" human experts (again, with no need for bodies). Even when, in the 1990s, Artificial Intelligence veered towards neural networks, the artificial neurons were one-dimensional numbers, i.e. mere approximations of the real three-dimensional neurons. Once computers started doing more sophisticated things than crunching numbers, notably with the invention of databases, the assumption implicit in the practice of computer science, even the most mundane, was that information was indeed disembodied and could flow seamlessly from one substrate to another, for example from the human mind to an electronic machine and vice versa, and of course from one machine to another, regardless of the machine's "body". It didn't take long to realize that self-regulating machinery wouldn't even need a human in the loop. In fact, at the same time that databases were evolving towards the world encyclopedia Wikipedia, a school of thought envisioned super-intelligent machines that would be better than humans at both learning and acting, a school of thought that developed via "Speculations Concerning the First Ultraintelligent Machine" (1965) by Jack Good (real name Isadore Jacob Gudak), Masahiro Mori's "The Buddha in the Robot" (1974), Hans Moravec's essay "Today's Computers, Intelligent Machines and Our Future" (1978), Ray Solomonoff's article "The Time Scale of Artificial Intelligence" (1985), Marvin Minsky's essay "Will Robots Inherit the Earth" (1994) and culminated with Ray Kurzweil's "The Singularity is Near" (2005), the book that popularized the notion of the "singularity". One can trace this mindset all the way back to the early days of electronic computers, when (in 1957) Herbert Simon declared that "there are now in the world machines that think, that learn, and that create - moreover, their ability to do these things is going to increase rapidly".

Katherine Hayles's book "How We Became Posthuman" (1999) came out in the same year as the Hollywood blockbuster "The Matrix" (1999), a mediocre remake of Rainer Werner Fassbinder's masterpiece "World on a Wire" (1973) but much more discussed by philosophers, sociologists, etc. Hayles' nightmare was "a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being". Hayles correctly described "how information lost its body", and then described the "post-human condition" (when information prevails over matter) as one in which the difference between bodily reality and virtual simulation becomes blurred.

Since then, the post-human condition has become the normal condition for a vast population of humans who are constantly plugged into the Internet, a condition that accelerated when the covid pandemic of 2020 forced millions of people to live and work in isolation at home; and the metaverse could be the natural culmination of the post-human condition.

On the other hand, with or without a body, humans seem to have a genetic propensity to "build". As Edward Casey discussed in "The Fate of Place" (1997), the ancient Greeks perceived the relationship between human and world in terms of "place" while the scientific revolution of Galileo, Descartes and Newton shifted the view towards the more abstract notion of "space" (Galileo's extraterrestrial space, Newton's absolute space, Descartes' "res extensa"). The philosophers of phenomenology returned to "place". Maurice Merleau-Ponty's "Phenomenology of Perception" (1945) placed the body at the center of experience: there is a world because there is a body, and the tools we use to interact with the world are prosthetic extensions of our body. Martin Heidegger's essay "Building Dwelling Thinking" (1951) defined "place" as both an artifact and a process: the process of "cultivating" it (the experience of living in it) is as significant as the outcome, the physical manifestation of living in it (the "construction"). The human condition is so tightly coupled with the notion of "place" that even the post-human condition requires the same notion to exist, although not necessarily in the same physical way.

Note that the hyper-audiovisuality of the digital era can lead either to wildly fantastic abstract worlds, whose beings don't look like humans and whose objects bear no resemblance to the objects of human civilization, or to photorealism, an accurate reproduction of the world as we know it. All metaverses so far have chosen the latter. The metaverse is typically a place that looks a lot like a real place.

"Building" is not only about building the material order of a place, the order of roads and houses: that is only the exterior building of a place. It is also about building the interior, i.e. the social order that inhabits the place. The metaverse enacts the replica of social order through the virtual cloning of ordinary activities. The social order of our material world is the product of a historical process that stretched over thousands of years, through wars, revolutions, cultural movements, fads, etc. The social order of a metaverse is an experiment in employing a different route. One of the fundamental chores required to all members of a community is to learn to coexist with the other members. "Universal freedom" is a contradiction in terms: your degree of freedom depends on the amount of freedom that you want to grant to the people around you, to your neighbors, coworkers, relatives, etc. The more freedom you give them, the less freedom you have; the more freedom you have, the less freedom they have. A community organizes around a delicate balance of degrees of freedom. This in turn depends on what Harold Garfinkel called the "observable-reportable" character of practical reasoning and action in his "Studies in Ethnomethodology" (1967): our ability to interpret the actions of others, to be good psychologists, and our ability to make it easy for others to interpret our actions. In this aspect the metaverse is not any different from a community in the real world, except that one ships the adolescential training and is propelled immediately into adulthood.

The metaverse can also build "experiments". The metaverse is not limited to being an escapist illusion: it can be, directly or indirectly, a method to design the future. David Kirby introduced the concept of "diegetic prototypes" in "The Future is Now" (2010). He argued that imaginary devices presented in films indirectly help to usher in new technologies because those films present them to a large audience as a) feasible, b) useful and c) harmless. In other words, they justify a business plan to research and build them. At the same time, Julian Bleecker with his essay "Design Fiction" (2009) had argued for a kind of design that relied on narratives of speculative, and often provocative, scenarios to explore possible futures: designing ideas, not only artifacts. Both Kirby and Bleecker basically advocated abandoning the Manichean "utopian/dystopian" depictions of possible futures and focusing instead on ways to critically explore possible futures before realizing them. If we were better at thought experiments, perhaps we would invent better worlds. The metaverse is a large-scale collection of diegetic prototypes and of design fictions, of speculative design objects that are not only imagined but even effectively deployed in the (virtual) world. Their simulated "materialization" can tell us something about the social impact of their future real materialization.

In fact, it will be interesting to capture the evolution of a metaverse in some equivalent of the photograph. A nascent branch of history is the one that deals with interpreting an era's society by studying the photographs of that era. A famous example of photographs that "tell" the story of an epoch is Zhensheng Li's photos of Mao's Cultural Revolution, published in "Red-Color News Soldier" (2003). The world knew and knows very little of what happened during that fateful decade (1966-76), but his photographs "immerse" us into Chinese ordinary life. In that case it was just one photographer depicting an era, but in general it is a whole generation of photographers who, indirectly, write the history of an era with their photographs of ordinary (as well as extraordinary) life. At the same time those photographs influence the way people perceive themselves. Photography has been a powerful force in shaping national, generational and cultural identities. The "photohistorian" (stealing the term from a journal published since 1989 by the Royal Photographic Society) can pick up aspects of history that elude the traditional historian. Ditto for the age of television and for the age of the Internet: just by looking at old TV shows and old web pages one can get a feeling of an era, and a "photohistorian" can organize and rationalize aspects of that era. Hopefully some technology will emerge that will allow us to take "screenshots" of the metaverse as it evolves, to document its evolution.

Plato's dialogue (or, better, monologue) "Timaeus" (4th century BC) opens with the memorable question: "What is that which always is and never becomes?" It is not the universe. Unlike the creation stories of most religions, in which a god (like Yahweh in the younger sections of the Bible) or gods (like the elohim in the oldest section of the Bible) create the universe from nothing, Plato thinks that a "demiourgos" imposed order on a preexistent chaos to generate our beautifully ordered universe, the "kosmos". What is and always will be is the model (the "paradeigma") that the demiurge "copied" to shape the universe. Each player of a metaverse is such a demiurge, trying to shape the metaverse according to an ideal model. The difference is that there are potentially millions of demiurges in the same metaverse, each trying to realize a different model. The demiurges must coexist and collaborate. Creating a universe is just the beginning. The real challenge is to co-create its future.


Blockchain Technology

Humans of the real world have invented governments, laws, tribunals, banks and various agencies to make transactions trustworthy. When you pay for a home, your country has set up a sophisticated procedure that makes you comfortable with the idea that someone will take your money and you will indeed get their home. Ditto when you buy a car or even if you just buy intangible assets like stocks. The individuals of a real-world society rely on a "centralized" system of legal procedures. Because it is not governed by a centralized government, a metaverse needs an alternative method to make people believe that the transactions performed by their avatars are safe; a metaverse needs a different way to establish "trust" between individuals. Luckily in 2008 someone invented blockchain technology, a technology that provides that kind of service in a decentralized community. (See my introduction to Blockchain Technology.) That's why, today, virtually all metaverses use a cryptocurrency based on blockchain technology.

The blockchain secures the virtual assets of avatars (including the identity of such avatars), i.e. ownership, and even enforces the proper execution of rules because the blockchain contains the very instructions for a deterministic implementation of a transaction (no need for a police force!). Cryptocurrencies built on the blockchain are programmable payment systems. The programming makes the network "trustless" and the absence of a central authority makes it "permissionless". From a financial point of view, blockchain technology acts as a clearing and settlement platform, while at the same time being the very infrastructure for the transfer/circulation of assets.
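
To make the mechanism concrete, here is a minimal sketch in Python of a hash-chained ledger with a toy proof-of-work puzzle. It only illustrates the general idea described above (tamper-evident blocks, no central authority); it is not the implementation of any actual cryptocurrency, and all names, the difficulty parameter and the transaction format are invented for the example.

    import hashlib, json, time

    def hash_block(block):
        # Deterministic hash of a block's contents (stable key order).
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def mine_block(transactions, prev_hash, difficulty=3):
        # Toy proof-of-work: find a nonce whose hash starts with `difficulty` zeros.
        timestamp = time.time()
        nonce = 0
        while True:
            block = {"time": timestamp, "tx": transactions, "prev": prev_hash, "nonce": nonce}
            digest = hash_block(block)
            if digest.startswith("0" * difficulty):
                return block, digest
            nonce += 1

    # A tiny chain: altering any earlier block changes its hash and breaks
    # every "prev" link after it, so tampering is easy to detect.
    genesis, genesis_hash = mine_block([], prev_hash="0" * 64)
    block1, block1_hash = mine_block(
        [{"from": "avatarA", "to": "avatarB", "amount": 10}], prev_hash=genesis_hash)
    print(block1_hash)

In a real proof-of-work network many independent nodes race to solve this puzzle and accept the longest valid chain, which is what replaces the trusted central intermediary.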

One thing to emphasize is that, in general, human societies foster collaboration. Human civilization progressed so quickly and dramatically thanks to the ability of humans to collaborate, sometimes on very large scales, from the pyramids to the computers. When we think of economic systems, we tend to think of competition: corporations ferociously competing with each other for supremacy, nations competing for resources and domination, individuals competing for higher salaries and positions. But the key to progress has always been collaboration, sometimes indirect, and rarely as publicized as competition.

Similarly, the blockchain fosters collaboration in the metaverse. The lack of a centralized authority makes collaboration virtually boundless. Alas, it can also foster collaboration of the undesired kind (criminal kind), but the positive side is that it fosters collaboration among complete strangers. The blockchain establishes trust between individuals regardless of who they are, where they live, what job they have, how much money they have, whether they are Christian or Muslim, men or women, elderly or teenagers. Collaboration on the blockchain is not geographically constrained. This doesn't sound too different from the existing Web 2.0 that has given us social media and, in general, systems of user-generated content. However, in the metaverse there is a cryptocurrency and there is a clear proof of ownership, and that makes the difference on the way to Web 3.0: users can automatically and systematically monetize what they create. (See my introduction to Web 3.0).

A fundamental phenomenon that is supercharging growth in the metaverse is the advent of "non-fungible tokens" (NFTs), which are making it a lot easier to sell all sorts of digital content. The trade of NFTs happens on the blockchain. (See my introduction to Blockchain Technology and in particular Cryptoart).
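
As a rough illustration of what a non-fungible token adds, here is a minimal Python sketch of an ownership registry in which every token is unique and has exactly one owner. It is only a toy analogue of what on-chain NFT standards (such as Ethereum's ERC-721) formalize, and all names and fields are invented for the example.

    class NFTRegistry:
        # token_id -> {"owner": ..., "uri": ...}; each token is unique
        # ("non-fungible"), unlike interchangeable units of a currency.
        def __init__(self):
            self.tokens = {}

        def mint(self, token_id, creator, metadata_uri):
            assert token_id not in self.tokens, "token already exists"
            self.tokens[token_id] = {"owner": creator, "uri": metadata_uri}

        def transfer(self, token_id, seller, buyer):
            record = self.tokens[token_id]
            assert record["owner"] == seller, "only the current owner can sell"
            record["owner"] = buyer

    registry = NFTRegistry()
    registry.mint("parcel-42", creator="avatarA", metadata_uri="https://example.org/parcel-42.json")
    registry.transfer("parcel-42", seller="avatarA", buyer="avatarB")

On an actual blockchain this registry is not kept by a single company but replicated and verified by the whole network, which is what makes the proof of ownership credible between strangers.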


A brief History of Metaverses

The first MUD had been created in 1980 by Roy Trubshaw and Richard Bartle at Essex University. In 1986 Lucasfilm launched "Habitat", a graphical MUD created by Randy Farmer and Chip Morningstar and running on Commodore 64 computers connected via dial-up lines. Habitat was a social virtual world in which each user was represented by an "avatar". In 1990 Pavel Curtis at Xerox PARC launched his computer game LambdaMOO. Technically speaking, it was a text-based MUD ("multi-user dungeon"), but it also worked as a chain of transmission between the era of MUDs and the era of "virtual worlds".

Meanwhile, there were places on the Internet where one could meet others. "The WELL" ("Whole Earth Lectronic Link"), created by Stewart Brand and Larry Brilliant in 1985 and tied to the counterculture of the San Francisco Bay Area, was such a place: a text-based world (preceding the World Wide Web by almost a decade), a virtual community of computer users structured in bulletin boards for online discussions. Annette Markham, in "Life Online" (1998), identified three ways in which users can perceive and use an online community: as tool, as place, and as way of being. These three categories are not exclusive, and instead belong to a continuum: the same user, depending on the day, might think of the virtual world as a way of being, a place to be visited or a tool for engagement. The WELL was precisely such a "place" as well as a way of being and a tool for engagement.

Neal Stephenson's "Snow Crash" came out in 1992 and coined the word "metaverse".

Ron Britvich's AlphaWorld (Boston, 1994), in which users could build their own structures, can be considered the first serious attempt at a metaverse. Jim Bumgardner's virtual world The Palace (California, 1995) was influenced by virtual reality but in a two-dimensional space, inspired also by comic books. Fujitsu's virtual world WorldsAway was created by a team led by Randall Farmer and launched in 1995 by Fujitsu Cultural Technologies. At this point there was enough momentum that Bruce Damer organized the conference "Earth to Avatars", held in San Francisco in 1996. The following year Damer published the book "Avatars! Exploring and Building Virtual Worlds on the Internet" (1997). The metaverse became popular while thinkers were focusing on "collective intelligence", especially after Pierre Levy's book "Collective Intelligence" (1994). Howard Rheingold's book "Smart Mobs" (2002) explored how technology could augment such collective intelligence. Those who remembered it also found analogies with Herbert Wells's concept of "world brain" (from a lecture of 1936 at the Royal Institution).

Despite these early conceptual attempts, Stephenson's metaverse remained science fiction until Web 2.0 happened and made it possible: the same technology that enabled social networks also enabled metaverses. Sampo Karjalainen's and Aapo Kyrola's virtual hotel Habbo Hotel (Finland, 2001), where players could design rooms, play games and trade goods, and Derek Liu's virtual society Gaia Online (Silicon Valley, 2003), influenced by Japanese manga and by the MMORPG Ragnarok, where players were represented by avatars and congregated in fora, laid the foundation for social networks because they enabled strangers to become online friends.

Will Harvey's and Jeffrey Ventrella's virtual world There.com (Silicon Valley, 2003) and Philip Rosedale's virtual world Second Life (Silicon Valley, 2003) came out at the same time that a philosopher, Nick Bostrom, was hinting that we may live in a simulation in his essay "Are You Living in a Computer Simulation?" (2003); and despite being dismissed by all reputable physicists, to this day plenty of philosophers discuss that hypothesis. At this point the idea of the simulation was competing with the idea of the singularity for popularity among futurists. Another thing that was becoming popular was Wikipedia (launched by Jimmy Wales and Larry Sanger in 2001), which created a whole new awareness about the "collective brain": in June 2003 the Wikimedia Foundation was founded, in October 2003 the first workshop of Wikipedians took place (in Germany) and by the end of that year Wikipedia boasted hundreds of thousands of articles in multiple languages.

However, a threat was looming over virtual worlds like Second Life: the combined rise of social networking software (Friendster launched in 2002, MySpace in 2003, Facebook in 2004), of texting on mobile devices, and of voice and video over IP (Skype in 2003, YouTube in 2005). Social networks introduced a competing model that seemed to exert a strong appeal on the generation of the 2000s: a curated version of their real life in this universe instead of an anonymous vicarious life in an alternate universe. Vanity over imagination. The social network became a game to get as many "likes" as possible. The model of social networking sites was a disembodied world of information that remains tightly coupled with the real embodied world. Social networking sites shared with the metaverse vision the property of grass-roots, bottom-up self-organization rather than the traditional top-down organization. At the same time the success of MMORPGs, from Sony's Everquest (1999) to Blizzard Entertainment's World of Warcraft (2004), imposed another paradigm shift: competition rather than collaboration. The generation of the 2000s seemed more interested in vanity and competition than in imagination and collaboration. Furthermore, virtual worlds paled in comparison with the fantasy worlds of videogames, for example Azeroth, introduced by Warcraft (1994) and refined in World of Warcraft, and Hyrule, introduced by Nintendo's The Legend of Zelda (1986) and transitioned to 3D in 1998. An MMORPG like Electronic Arts' The Sims Online (2002), which simulated a world economy, came very close to being a metaverse. David Baszucki's and Erik Cassel's Roblox (2006) and Markus Persson's Minecraft (2011) were particularly revolutionary because they enabled players to create their own games.

Thus came the "winter" of virtual worlds. Some tried to merge the two paradigms of virtual world and social network by creating platforms where players could hang out with real people in virtual reality, like Altspace (Bay Area, 2013) and High Fidelity (San Francisco, 2013), the latter conceived by Philip Rosedale.

If "Snow Crash" had launched the vogue of the metaverse in the 1990s, one could argue that Ernest Cline's novel "Ready Player One" (2011) restarted it in the 2010s.

The "winter" lasted until Sebastien Borget launched The Sandbox (Britain, 2014) and Ari Meilich and Esteban Ordano launched Decentraland (Argentina, 2017), worlds in which avatars could purchase parcels of virtual land (using Ethereum-based crypto tokens such as "mana" for Decentraland and "sand" for The Sandbox) and build on them. Other virtual worlds grounded on the Ethereum blockchain were Artur Sychov's Somnium Space (Britain, 2017) and Ben Nolan's CryptoVoxels (New Zealand, 2018). This generation introduced blockchain technology and cryptocurrencies in the metaverse so that players could buy, sell and trade. Upland (Silicon Valley, 2018), developed by Dirk Lueth, Idan Zuckerman and Mani Honigstein, allowed players to trade virtual properties linked to real-world properties, basically transporting the concept of NFTs into videogames.

Inevitably, videogame companies started aiming for the metaverse. First and foremost was Epic Games, a company founded by Tim Sweeney in 1991 and previously known mainly as the maker of one of the most popular game engines, the Unreal Engine (which debuted in 1998). In 2017 Epic introduced the videogame Fortnite. In "battle" mode, Fortnite was just a regular videogame, but in "party" mode it was a platform for non-gaming activities, ranging from pop-star concerts to social meetups, and in "creative" mode it even allowed players to invent their own islands. In 2021 Epic raised $1 billion from investors to fund its long-term vision of the metaverse.

At the end of 2020 Nvidia unveiled Omniverse, a collaboration tool for designers of 3D applications, but publicized as a metaverse. In 2021 Mark Zuckerberg publicized Facebook's transition towards a "metaverse company", but the virtual world Horizon (announced in 2019) had still not been released, and Horizon Workrooms, which launched in 2021, was another collaboration tool à la Omniverse.


Alternative Worlds in Literature and Cinema

The progenitors of the metaverse are the fictional universes devised by writers. One can start with Homer's poems as proto-metaverses, followed two thousand years later by Chretien de Troyes' five romances of the Arthurian cycle in the 12th century.

Fantasy worlds have been the settings for many novels, from the islands of Jonathan Swift's "Gulliver's Travels" (1726) to the children's worlds of Lewis Carroll's "Alice's Adventures in Wonderland" (1865) and Frank Baum's "The Wonderful Wizard of Oz" (1900), from the sci-fi visions pioneered by Edwin Abbott's "Flatland" (1884) and Clive Lewis' "Out of the Silent Planet" (1938) to the modern mythological settings of Mervyn Peake's "Gormenghast" series (1946-59) and JRR Tolkien's "The Lord of the Rings" (1954-55). Note that Frank Baum conceived of see-through glasses in his novel "The Master Key" (1901), predating augmented reality by almost a century. Stanley Weinbaum's story "Pygmalion's Spectacles" (1935), in which the protagonist is transported into a fictional world by a pair of goggles, predated virtual reality by half a century.

The most popular worlds in the age of the Internet were the ones introduced by George Martin's "A Game of Thrones" (1996), adapted into a TV series (2011-19) and a videogame (2012), and JK Rowling's Harry Potter series (1997-2007), adapted into several movies (2001-11). At the same time several novels featured someone living in a simulation, starting with Frederik Pohl's "The Tunnel under the World" (1955), Philip Dick's "Time Out of Joint" (1959), Stanislaw Lem's "Professor Corcoran" (1961) and Daniel Galouye's "Simulacron-3" (1964). Fast forward to the age of the metaverse, and the virtual worlds of literature had become much more sophisticated, for example the world of Aincrad in Reki Kawahara's web-only novels "Sword Art Online" (2002-08), later adapted into a print novel (2009), a manga (2010-12), an anime (2012) and a videogame (2013), and the world of OASIS (or the Ontologically Anthropocentric Sensory Immersive Simulation) in Ernest Cline's novel "Ready Player One" (2011). Tad Williams' tetralogy "Otherland" (1996-2001) felt like a metaverse version of Tolkien's "The Lord of the Rings".

Television and cinema had presented several imaginary worlds. The sci-fi ones, like Gene Roddenberry's TV series "Star Trek" (1966-69), were descendants of comic books such as "Buck Rogers" (1929, by Phil Nowlan and Dick Calkins) and "Flash Gordon" (1934, by Alex Raymond). The characters of Rainer Werner Fassbinder's film "World on a Wire" (1973), based on Galouye's "Simulacron-3", lived in a simulation, and the protagonist of Steven Lisberger's film "Tron" (1982) was a hacker trapped inside a computer.

Life is a reality television show in Peter Weir's film "The Truman Show" (1998). Clearly the most influential film was "The Matrix" (1999), the Hollywood remake of Fassbinder's "World on a Wire". Of all the elaborate sci-fi worlds of cinema one of the most impressive in the Internet age was created for Mamoru Oshii's film "Avalon" (2001). Virtual reality and many other futuristic technologies were ubiquitous in TV series such as Charlie Brooker's "Black Mirror" (2011) in Britain and Greg Daniels' "Upload" (2020) in the USA.


Utopia

Humans have been dreaming of alternative universes since time immemorial. Countless writers have written books that describe a utopia (a Greek word that means "no place"), typically to explain how people should live. The oldest utopia in the Western world is probably the Eden of the Tanakh (the "Old Testament"). In ancient Greece philosophers had competing visions of the principles for building the ideal state, and two wrote influential books: Plato's "Republic" (4th century BC) and Zeno's "Republic" (3rd century BC, a famous work but lost). During the Roman Republic and Empire it probably wasn't appropriate to dream of ideal states (other than the Roman one) but when the Roman Empire started disintegrating Augustine wrote about the "City of God" (5th century). Then another one thousand years elapsed with no major utopias, mainly because the "dark ages" annihilated prospects of peace and happiness.

The zeitgeist changed dramatically at the beginning of the Italian Rinascimento, when Italians became fascinated with the concept of the "ideal city". The movement was perhaps started by Leon Battista Alberti's treatise "On Architecture" (1452), which largely interpreted architecture according to the values of Plato's "Republic". Alberti's pupil Antonio di Pietro Averlino, aka Filarete, designed a new city for Francesco Sforza, then Duke of Milan, known as "Sforzinda", according to principles laid out in his "Trattato di Architettura" (1465). Several Italian paintings of an "ideal city" survive, all of them tentatively titled "The Ideal City": one in Baltimore, attributed to Fra' Carnevale and painted in the 1480s; one in Urbino, formerly attributed to Piero della Francesca but most likely by Luciano Laurana, the principal architect of the Palazzo Ducale of Urbino; and one in Berlin, painted in the 1490s, formerly attributed to Paolo Uccello, but more likely by Francesco di Giorgio Martini.

The discovery of America, widely publicized by the man after whom it is named, Amerigo Vespucci, in the letter "Mundus Novus" (1503), stoked the imagination of his contemporaries, and one decade later Thomas More published the book that coined the word, "Utopia" (1516). Ironically, at the same time that More in England was writing (in erudite Latin) about an ideal city, Machiavelli in Italy was writing (in ordinary Italian) about the exact opposite.

The 17th century witnessed an acceleration in utopian thinking, as demonstrated by Tommaso Campanella's book "City of the Sun" (1602) in Italy, Johann Valentin Andreae's book "Christianopolis" (1619) in Germany, and in England by Francis Bacon's book "New Atlantis" (1627), the commune of the Diggers (1649), and James Harrington's book "The Commonwealth of Oceana" (1656). The genre of the utopian novel was inaugurated by Gabriel Foigny's "La Terre Australe Connue" (1676), which was followed by Louis-Sebastien Mercier's "L'An 2440, Reve s'il en fut Jamais" (1771) and Etienne Cabet's "Voyage en Icarie" (1839).

The American and French revolutions, with their aspiration to rewrite the rules of society, probably increased the motivation to think of alternatives to the existing society. Robert Owen's experiment New Harmony (1825) in the USA and Charles Fourier's book "Le Nouveau Monde Industriel et Societaire" (1829) in France created the two most popular paradigms of the proto-socialist commune. In particular, Owen and Fourier inspired several utopian communities in the USA. Further impulse towards abandoning the modern city came from transcendentalism, an anti-materialist philosophy that encouraged a return to nature (the Transcendental Club was founded in 1836 in Boston by the likes of Waldo Emerson and Henry Thoreau). Albert Brisbane popularized Fourier's thought in the USA with his book "Social Destiny of Man" (1840). Utopian experiments of the time included George and Sofia Ripley's Brook Farm near Boston (1841-47), Charles Sears' and Nathan Starks' Phalanx in New Jersey (1843-54), and John Humphrey Noyes' Oneida in upstate New York (1848-81). Lev Tolstoy's "Anna Karenina" (1878) was as influential as Thoreau's "Walden" (1854) in promoting a return to nature.

Meanwhile in Europe the anarchist Pierre-Joseph Proudhon was distributing his pamphlet "What is Property?" (1840), which answered "property is theft", and Karl Marx and Friedrich Engels were publishing the "Communist Manifesto" (1848). The First International was formed in 1864 and the ephemeral Paris Commune was created in 1871. The forces of anarchism and communism converged towards a whole new category of more or less scientific utopia. The Russian anarchist Pyotr Kropotkin contributed "Fields, Factories, and Workshops" (1899).

All of these thinkers were criticizing the materialist and industrial society, although from different perspectives. They were joined in Britain by John Ruskin, particularly with the chapter "The Nature of Gothic" in the second volume of his "Stones of Venice" (1853), by Edward Carpenter, who distributed the pamphlet "Civilisation" (1889), and by William Morris, who published the utopian novel "News from Nowhere" (1890). Notably, Ebenezer Howard published the book "To-morrow" (1898), better known as "Garden Cities of To-morrow", the book that initiated the "garden city" movement in urban planning. He was influenced by the utopian novel "Looking Backward" (1888), published in the USA by Edward Bellamy, by Henry George's study "Progress and Poverty" (1879), and by the transcendentalists (he spent five years in the USA), anarchists and communists. All those critiques of the political and economic status quo converged in his vision of the "garden city".

That's when the architect as a visionary was born, or reborn. Tony Garnier's "La Cité Industrielle" (exhibited in 1904, published in 1917), the Italian futurists (Antonio Sant'Elia) and the Russian constructivists contributed to reimagining the city (if not the whole state), culminating perhaps in Bruno Taut's "The Dissolution of Cities" (1920), an indirect product of German expressionism. A direct line can be drawn between that book's title and Frank Lloyd Wright's book "The Disappearing City" (1932), in which the Chicago architect discussed his own ideal city, Broadacre City. Another pinnacle of utopian architecture was the Ville Contemporaine conceived by Charles-Edouard Jeanneret, better known as Le Corbusier, a concept presented in 1922 at the Salon d'Automne in Paris and described in the book "The Radiant City" (1935).

While utopia was at least being conceived, if not implemented, in Western Europe, in Russia it was dying: Tolstoy died in 1910 and Kropotkin in 1921, and Lenin had seized power in 1917 with his own version of utopia.

Herbert Wells, perhaps the first intellectual who deserved to be called a "futurist", penned the programmatic novel "A Modern Utopia" (1905) and then the sci-fi utopia "Men Like Gods" (1923), located in a parallel universe. There was still the scientifically-engineered utopia of "Walden Two" (1948) by the psychologist Burrhus Skinner, a novel meant to promote the tenets of behaviorism, but the 20th century was rather the century of anti-utopias, of dystopias, notably Jack London's "The Iron Heel" (1907), Franz Kafka's "The Trial" (1915), Yevgeny Zamyatin's "We" (1921), in which the government is the problem, Aldous Huxley's "Brave New World" (1932), in which people themselves are the problem, Rex Warner's "The Aerodrome" (1941), George Orwell's "1984" (1949), similar to "We" (and the source of "Big Brother"), Ray Bradbury's "Fahrenheit 451" (1953), Ayn Rand's "Atlas Shrugged" (1957), Anthony Burgess' "A Clockwork Orange" (1962), Arkady and Boris Strugatsky's "Hard to be a God" (1964), James Ballard's "Crash" (1973), Margaret Atwood's "The Handmaid's Tale" (1985), Phyllis James' "The Children of Men" (1992), Lois Lowry's "The Giver" (1993), David Foster Wallace's "Infinite Jest" (1996), Kazuo Ishiguro's "Never Let Me Go" (2005), Cormac McCarthy's "The Road" (2006), Vernor Vinge's "Rainbows End" (2006), Naomi Alderman's "The Power" (2016), etc. And some even more disturbing dystopias popped up in cinema: Fritz Lang's "Metropolis" (1927), Chris Marker's "La Jetee" (1962), Jean-Luc Godard's "Alphaville" (1965), John Frankenheimer's "Seconds" (1966), Franklin Schaffner's "Planet of the Apes" (1968), George Lucas's "THX-1138" (1971), Richard Fleischer's "Soylent Green" (1973), John Boorman's "Zardoz" (1973), Norman Jewison's "Rollerball" (1975), Michael Anderson's "Logan's Run" (1976), Andrei Tarkovsky's "Stalker" (1979), George Miller's "Mad Max" (1979), Ridley Scott's "Blade Runner" (1982), David Cronenberg's "Videodrome" (1983), Terry Gilliam's "Brazil" (1985), Paul Verhoeven's "Robocop" (1987), Kevin Reynolds's "Waterworld" (1995), Kathryn Bigelow's "Strange Days" (1995), Jean-Pierre Jeunet's "City Of Lost Children" (1995), Mamoru Oshii's "Ghost in the Shell" (1996), Andrew Niccol's "Gattaca" (1997), Peter Weir's "The Truman Show" (1998), continuing in the 21st century with Steven Spielberg's "Minority Report" (2002), Kar-wai Wong's "2046" (2004), Andrew Stanton's "WALL-E" (2008), Alex Garland's "Ex Machina" (2015), Yorgos Lanthimos's "The Lobster" (2015), etc.


The Zeitgeist from Cyborgs to Cybernauts

The 1980s were very much the decade of the cyborg (the body augmented with electronic organs or limbs), not of the avatar. Futurists were more intrigued by the potentialities of extending the human body than by the potentialities of living in an alternative universe. It was the decade of Stelarc's robotic prosthesis "Third Hand" (1980) and James Cameron's film "The Terminator" (1984). That intellectual mood was perhaps best represented by Donna Haraway's "A Cyborg Manifesto" (1985), in which she correctly pointed out that Darwin in the 19th century had blurred the distinction between human and animal and then in the 20th century computers had blurred the distinction between natural and artificial. That trend culminated with cyborgs that blurred the distinction between body and nonbody. However, these intellectuals didn't think of avatars that blur the distinction between reality and fantasy. (To be fair, Lynn Hershman created an avatar in the real world: between 1972 and 1979 she lived an ordinary life as "Roberta Breitmore", a fictitious person). The focus was on the individual (being extended into a cyborg), not on the whole society (being increasingly extended into a computer-controlled smart city).

Because they were focusing on the individual instead of the society as a whole, many thinkers missed the decline of socializing that started at least as far back as the invention of email (1972) on Unix. Email allowed people to keep in touch much more frequently but marked the decline (and eventual death) of the handwritten letter. If it is debatable how much linguistic skills declined because of email (after all, people wrote more often, so they practiced more often, even though the emails were typically shorter and full of abbreviations), there is consensus that the "depth" of communication was reduced. Emails went to more people and more often, but tended to be more superficial. Quantity generally comes at the expense of quality. Precisely because they were sending many more emails, users were being less careful about what they were writing, and this generated a pandemic of anxiety related to rude emails and misunderstandings. Furthermore, email and then texting, and then videochats, implemented the transition from the physical movement of people to the virtual movement of information, thereby dramatically reducing the opportunities for physical encounters. Then came videogames, which turned the computer into a combination of hermit-like reclusion and opiate-like addiction, although they created their own subcultures and communities. (Multi-user online gaming became a social lifeline). During the 20th century, the radio set and then the television set had taken over the fireplace's role as the magnetic core of domestic sociality, but the television set became obsolete in the 2000s, at least for the younger generations who obtain their entertainment and news from personal devices like the laptop and the smartphone. Then in the 2000s came social networking and chat systems like Facebook and WeChat with their shallow social life. Facebook had a "Like" button but not a "dislike" button, implying an almost drug-like quality of digital friendship. Friendship became a commodity. Platforms like Facebook started making money out of the user's social life: a user's social life became someone else's business model. At the same time the social networking platform became a vanity platform on which the most common activity was to post "selfies" that glamorized the user's own life: a platform born for social networking became a platform for the solitary cult of personality. (Don't blame this phenomenon on the platform: the tendency of tourists to take pictures of themselves in front of just about anything predates Facebook). The ultimate form of large-scale vanity and self-cult of personality was live video streaming, the equivalent of a reality TV show but with the protagonist being the creator, at the same time making and being the show. Digital social networking became an interesting experiment in self-representation and self-perception.

The metaverse is a place in which to socialize all the time, but it is not in the real world. The rise of the metaverse signals a need that videogames and social networks could not satisfy. It is all vicarious, but the avatar is neither lonely nor vain. While in live streaming (and to some extent also in social networks) you are naked in front of everybody, in a virtual world you hide yourself, you become someone else. Your condition is the same as the condition of the player of a multi-user videogame, except that there is an actual "life" to talk about.

The neuroscience of sleep seemed to imply that "we" routinely live alternative existences. For example, Allan Hobson in "The Chemistry of Conscious States" (1994) argued that two chemical systems inside the brain regulate the waking and the dreaming experiences: respectively, the "aminergic" and the "cholinergic" systems. Our conscious and unconscious identity swings between these two end points. There are universes that we all involuntarily experience, many times in our life: dreams. During sleep our "avatar" is thrown into these universes that are often fantastical and sometimes terrifying.

Perhaps the popularity of Buddhist meditation among high-tech Silicon Valley visionaries (from Steve Jobs of Apple to Jack Dorsey of Twitter), too, helped demystify the notion of entering a metaverse. Vipassana meditation in particular, the oldest Buddhist meditation practice, popularized by Joseph Goldstein's and Jack Kornfield's book "The Path of Insight Meditation" (1995), trains the "user" to achieve an alternative state of mind, like a metaverse of sensations and no reactions. In 2007 Google launched a meditation program called "Search Inside Yourself" which then became the Search Inside Yourself Leadership Institute. In 2009 Soren Gordhamer started the annual Wisdom 2.0 conference in San Francisco. Books like Daniel Ingram's "Mastering the Core Teachings of the Buddha" (2008) and Jay Michaelson's "Evolving Dharma" (2013) became bestsellers among software engineers, who flocked to Jack Kornfield's Spirit Rock meditation retreat north of San Francisco. Polls showed a phenomenal increase in the number of people who meditate in the USA. Meditation apps on smartphones were downloaded millions of times.

During the covid pandemic, many activities (from work to study) moved online thanks to videoconferencing platforms like "Zoom", and their users kept moving in and out of a metaverse of sorts, the metaverse where they met coworkers, customers, teachers, classmates and so on, a metaverse juxtaposed to the physical universe that was reduced to an apartment or a home.

One reason why the metaverse resonates with the public is that we already live in a metaverse of sorts. Sci-fi writer Philip Dick asked in 1978: "What is real? Because unceasingly we are bombarded with pseudo-realities manufactured by very sophisticated people using very sophisticated electronic mechanisms". But he had not seen anything yet. What came after the opening of the Internet to the public was astronomically more invasive than anything that Dick had seen on television in the 1970s. We already live in a metaverse, but it's a metaverse made of ads, banners, pop-up windows, and all sorts of distracting (and often brainwashing) experiences, a metaverse controlled by corporations that want to hijack our attention span. (Personally, i also find the videos that start automatically very annoying, whether they are commercials or not, and, honestly, even most pictures, especially when i'm just searching for simple information). The whole apparatus of redundant decoration around a piece of information is part of this unwanted metaverse forced on Internet users. Radio and television entertainment got flooded very quickly with commercials, and the Internet has simply provided an even more efficient platform to reach consumers in every corner of the world and reach them multiple times a day. Platforms like YouTube exploded the amount of time that one has to spend watching commercials: if television was showing a commercial every 20-30 minutes, virtually any video on YouTube starts with a commercial, even if the video is only a few seconds long. (The creator of the video typically gets little or none of the money paid for the commercials that are displayed before and during the video). We live in the age (foreseen by Baudrillard) in which the advertisement has become longer than the show. This artificial world of commercials has also invented a fantastic way to spy on us: the "cookie", which websites can deploy on any device that connects to the Internet. In 1994 Lou Montulli, working at Netscape, invented the Internet "cookie", a piece of information that a website asks the browser to store on the user's computer, initially used to find out whether visitors to the Netscape website had already visited the site before. The "cookie" has become the accepted method for websites to record the user's browsing activity, i.e. to "spy" on the users. Tracking cookies map a user's online life with increasing accuracy. A website can even drop cookies that belong to its advertisers, the "third-party cookies", so that the websites spying on your online life are not the ones you voluntarily accessed but the ones that paid for advertising space on those websites. These cookies are used to customize the adverts based on your online life, so that the marketing campaign follows you, the unsuspecting cybernaut, as you visit different websites. Radio and television never had the power to customize advertisements for each individual viewer. It is a no-brainer on the web. This greedy metaverse views the cybernaut only as a consumer. The cybernaut travels cyberspace shackled to a billboard that keeps posting adverts continuously refined based on the stations of the journey and continuously suggesting new destinations. This metaverse is constructed by corporations for the purpose of extracting the maximum amount of money from the cybernauts who venture into it. It is a shopping universe superimposed on the universe of digital information, and powered by a vast apparatus of surveillance.
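
For readers who want to see the mechanism, here is a minimal Python sketch of how a third-party tracking cookie works. The domain name, the cookie name and the profile structure are invented for the example; real ad networks are of course far more sophisticated.

    from http.cookies import SimpleCookie
    import uuid

    # First response from an ad server embedded in site A: assign a tracking id.
    # "ads.example" is an invented third-party domain, not the site the user visited.
    cookie = SimpleCookie()
    cookie["tracker_id"] = str(uuid.uuid4())
    cookie["tracker_id"]["domain"] = "ads.example"
    cookie["tracker_id"]["path"] = "/"
    cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365   # persist for a year
    print(cookie.output())   # e.g. Set-Cookie: tracker_id=...; Domain=ads.example; Max-Age=31536000; Path=/

    # On every later page (site B, site C...) that embeds the same ad server,
    # the browser sends the cookie back, so the server can link the visits
    # into one browsing profile and target its advertisements accordingly.
    profile = {}   # tracker_id -> list of pages visited
    def log_visit(tracker_id, page_url):
        profile.setdefault(tracker_id, []).append(page_url)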

Maybe people are ready for a metaverse where they can feel free and not surveilled.


The Interface

The philosophy of modern computing has focused on the network but maybe it should have focused on the interface. Computing is, by definition, interactive: a human user needs some calculations to be performed by the machine and therefore there's an input (a request) and an output (a result). The interface is the way this process of input/output is carried out. Originally, there was little interest in shaping the interface to reflect the application: the user interface was designed to maximize the user's ability to express their desiderata and to minimize the time required to do so (later also to minimize the chances of mistakes). Originally, the command line was good enough: the user interacted with the machine by writing "commands" in a cryptic language whose grammar was limited to the "verb + object" construct (e.g., "delete filename"). However, the interface is really the counterpart of the interaction: the human user was talking to the interface, not to the machine (whose workings have become more and more obscure with each new generation of interfaces).
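
As a toy illustration of that "verb + object" grammar, here is a minimal Python sketch of such a command-line loop; the verbs and their handlers are invented for the example, not the command set of any real shell.

    import os

    def delete(path): os.remove(path)          # the "delete filename" of the text
    def show(path): print(open(path).read())   # another invented verb

    COMMANDS = {"delete": delete, "show": show}

    def repl():
        while True:
            line = input("> ").strip()
            if not line:
                continue
            if line == "quit":
                break
            verb, _, obj = line.partition(" ")
            handler = COMMANDS.get(verb)
            if handler:
                handler(obj)
            else:
                print("unknown command:", verb)

    if __name__ == "__main__":
        repl()

The point is that even this primitive interface is already a language, and every later generation of interfaces is a richer language layered on top of the same machine.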

As Don Norman said: "The real problem with interface is that it is an interface. Interfaces get in the way. I don't want to focus my energies on interface" (2002). Nicholas Negroponte wrote similarly that the goal of interface design should be to "make it go away".

The role of the interface is actually bigger. The metaverse depends on its interface just like the real world depends on the way it looks and feels.

Neal Stephenson's essay "In the Beginning Was the Command Line" (1999) points out that the graphical user interface has created a layer (or even multiple layers) of abstraction between the human user and the actual functioning of the computer, between what the user wants and what the computer does. The "interface" has become increasingly sophisticated, in theory mimicking the way humans interact, but in practice it has moved the user farther away from the machine. The original interface was simply a set of switches on the panel of the computer's console. Then came the punched cards that separated the programmer from the computer that was being programmed. Then came the command line and a language to communicate orders to the computer, and of course we knew that the computer could not "read" such command lines: there's a layer of software that translates them into on and off "switches". Then came the "GUI" that created the metaphor of the virtual "desktop". Anything you do with the GUI gets translated into commands, which means that the GUI adds another inscrutable layer between you and the machine. And the GUI kept expanding, moving us further and further away from the command line and from the actual "switches" that still exist somewhere, buried under layers of software.

Steven Jones pointed out that the interface has become a highly recursive phenomenon (2008): the user who browses the web has to deal with the interface provided by the operating system (MacOS, Windows, iOS, Android, Linux...), then with the interface provided by the browser, then with the interface provided by the specific website that the user accesses (and each website has a different interface). And that's not to mention the physical device: the experience is different if one uses a smartphone, a smartwatch, a tablet, a laptop or a desktop.

There was no obvious line connecting functionality, interface and aesthetic. The funny thing is that a straight line has emerged that works in the opposite direction, from aesthetic to interface to functionality.

Lev Manovich's "The Language of New Media" (2001) relates the so-called "new media" (which really means "computer-based media") to the visual cultures of the past (to the "old media", the pre-computer media), and shows how new media fit in the lineage of visual aesthetic that begins with the invention of perspective by the Italian Rinascimento. A straight line connects Rinascimento painting, photographic camera, TV monitor and computer screen, the same way that a straight line connects Johannes Gutenberg's printing press, Francesco Rampazetto's Scrittura Tattile (1575), Christopher Sholes' QWERTY typewriter (1874) and the computer keyboard. New media mostly adapt conventions of old media to computer-based technology. What is truly unique about "new media" are the interface and the database. The "user interface" was from the beginning the prosthetic extension that allowed computer users to access cyberspace (a cyberspace that for five decades consisted of independent, separated databases).

The interface was, first and foremost, a language, a language to interact with a machine, and then, starting with Ivan Sutherland's Sketchpad (1963), it became a visual language in a virtually infinite coordinate system. The Graphical User Interface (GUI), first demonstrated in 1968 by Douglas Engelbart in San Francisco and first implemented in 1973 at Xerox PARC on the Alto desktop computer (it was commercialized only in 1981 with the Xerox Star), turned the computer into a virtual world because it made it possible to conceptualize an environment and ways to explore it. The GUI "simulated" our interaction with the natural environment. By clicking on an "icon", the user was transported into a program, and many such programs used a GUI themselves, therefore launching their own representational space, their own virtual world. The GUI turned the screen into a complex representational space. Matthew Kirschenbaum in "Interface, Aesthetics, and Usability" (2004) noticed another lineage, connecting the GUI with its overlapping "windows" to the artistic collages of dadaism, futurism and cubism at the beginning of the 20th century. The art critic Clement Greenberg in 1959 wrote that "collage was a major turning point in the whole evolution of modernist art in this century". By analogy, the GUI was a major turning point in the evolution of cyberspace. The digital scanner (1957) and later the digital camera (1990) created a bridge between the real world and the virtual world, opening the channel over which we can transport the real world into the virtual world. As digital content grew more complex in the age of the personal computer, it became important to find ways to "search" and to "navigate" cyberspace.

Brenda Laurel in "Computers as Theatre" (1991) suggested that interface design should learn from drama theory: make the experience as "dramatic" and emotional as a theatrical drama. A user should "feel" the interface, not just use it. The exact opposite happened in that year.

In 1991 Tim Berners-Lee launched the World Wide Web, building it on top of the largest network in existence, the Internet. Surprisingly, the early browsers and the early search engines (notably Google) did not fully take advantage of the GUI, as if they didn't know how to represent visually the enormous amount of data that they were supposed to interface; but that fact also highlighted a transformation in the function of the user interface, a slow but constant alteration in the balance of power between user and computer. The interface, which was born as a language for the user to deliver commands to the computer, was increasingly becoming a channel for the computer to deliver content to the user. It was as if the exploration of and interaction with cyberspace, now organized in a recursive web of webs, could no longer rely on a visual representation squeezed into the small monitor of the user.

Around 1996 Silicon Valley startups like Pointcast and Marimba popularized "push" technology: Pointcast gathered information from the Web and then displayed it on personal computers, which conceptually was the exact opposite of "surfing" the Web. Marimba pioneered the model of subscription-based software distribution, so that a computer user could automatically get updates to the applications running on that computer. Push technology created a new way for technology to communicate with humans: via "notifications", which became pervasive and invasive starting in 2009 when Apple introduced them on its iPhones. Notifications further changed the dynamics of the user's interaction with the device because the user was no longer the one deciding when to interact with the device: it was the device deciding when the user was supposed to interact, just like it was the "newsfeed" deciding which news the user was supposed to read.

Pushed to the extreme, the science of human-machine interfaces becomes the quest for creating artificial life forms: it is tempting to think of computational artifacts as "naturally" interactive, and therefore similar to pets, if not humans. The personification of machines happened independently of the experiments of artificial intelligence: users routinely curse the computer as if the computer were a stupid or stubborn or evil person. The MIT Media Lab researcher Pattie Maes in 1995 gave a talk titled "Interacting with Virtual Pets and other Software Agents" that described a future in which society is made of both real and virtual life forms. The virtual ones, today better known as "bots", will actively interact with the real ones. She argued that there was a real need for these artificial life forms because "the digital world is too overwhelming for people to deal with, no matter how good the interfaces we design". In other words, the interface to the digital world must evolve towards artificial life forms or humans will no longer be able to interface with the digital world created by machines. The personification of interfaces is already visible in the myriad different interfaces that the user has to deal with: each application presents a different, often mindboggling, interface with its own "personality". Knowing how to find a statement on your bank's website doesn't help you find a statement on another bank's website, because the path, the screen layout and the titles can be completely different. At the same time her boss Nicholas Negroponte was writing in "Being Digital" (1995) that the goal of the interface should be to "know you, learn about your needs": again, an artificial life form, an artificial person as the interface. And this artificial "person" will be increasingly in control of the interaction.

If that's the "mainstream" story of the interface, there's another, parallel story, which begins with "Pong" (1972), the first major arcade videogame, and with the Magnavox Odyssey of 1972, the first videogame console. Arcade games, as innocent as they looked, marked an important reversal of the trend: they brought the player closer to the machine because there was a bodily component to playing the game. There was still an interface, but the interface was meant to challenge the physical skills of the player. The videogame arcade of the 1980s spawned a new kind of athlete, who was both the equivalent of a sport athlete and the equivalent of a chess player, both body and mind, immersed like the sport athlete in the embodied theater of the physical space of the game (e.g. the stadium) while immersed like the chess player in the disembodied theater of the virtual space of the game (the combinatorial space of chess moves). These machines were powered by computers but, because they were self-contained machines, they were perceived as a completely different artifact compared with multipurpose computers. Except for being coin-operated and for requiring a power outlet, an arcade machine or a videogame console belonged to the same functional category as a deck of cards or a tennis racquet: its function was directly accessed by the player. Last but not least, the player was in control of the interaction. A parallel history of human-computer interaction (and of simulation) originated with videogames, and this history progressed while the big story of human-computer interaction continued to steal the limelight.

These two modes of interacting with the machine both collide and complement each other in the metaverse: the user is confronted by the usual multi-layer interface but the user's avatar interacts directly and bodily with the world.

Readings on the Interface:
Johnson, Steven: "Interface Culture: How New Technology Transforms the Way We Create and Communicate" (1997)
Kirschenbaum, Matthew: "Interface, Aesthetics, and Usability" in The Oxford Companion to Digital Humanities (2004)
Jones, Steven: "The Meaning of Video Games" (2008)
Laurel, Brenda: "Computers as Theatre" (1991)
Manovich, Lev: "The Language of New Media" (2001)
Negroponte, Nicholas: "Being Digital" (1995)
Norman, Don: "The Design of Everyday Things" (2002)
Stephenson, Neal: "In the Beginning Was the Command Line" (1999)


The Future of Writing

The function of a novelist has always been to create a world and then escort the reader into that world. The future of "storytellers" in the age of metaverses could be to create a digital world and then escort people into that world; and then socialize with them. The "readers" will "experience" the story in the metaverse by becoming part of the story, by joining the characters in the story. This will create a different kind of bond between "writer" and "reader" (between producer and consumer of the experience).

The word "my" can be ambiguous and misleading. For example, when we describe a place as "my hometown", we use "my" in the sense of "where i was born" or "where i ended up spending my formative years", not in the sense of "i created it". My bicycle is "mine" because i bought it; but my website is "mine" because i made it. "My home", "my life", "my world" will mean something different in the metaverse.

Every parent knows that her or his children are not the most beautiful nor the most intelligent in the world, but they are her/his children, children that she/he raised. The creators/demiurges will have a similar feeling for the virtual worlds that they will have created.

Everything that happens in one's life is a story, and everything in the metaverse will be a story. The difference is the place where "everything" happens: in real life it generally happens in places that we have limited freedom to choose and shape. "Life" in the metaverse will not be about going to a place that preexists but about going to the places that we design.


Intelligent Metaverses

Artificial Intelligence creates "artificial beings", and Virtual Reality creates an "artificial world". The beings that inhabit a virtual world are avatars of real people, but they could also be independent beings that exist only in that world, robots that exist not in hardware but in software. You could create a virtual world and populate it with artificial people that interact with your avatar and with the avatars of your friends. Artificial Intelligence can code and shape the personality of an artificial person: a young salesman who in the evening plays in a rock band; a Buddhist girl who dropped out of university and memorizes Chinese classics that she recites at the neighborhood park; a retired airline pilot who paints landscapes from the window of his apartment; etc. Artificial Intelligence will "power" such an artificial person so that s/he behaves just like a real person. You will not know which are avatars of real people and which are artificial people.

Secondly, Artificial Intelligence could be used to "power" your own avatar. My avatar in a metaverse disappears when i am not playing. A.I. could keep it "alive" even when i am not playing. This new generation of avatars could be "autonomous" avatars who learn your personality and then continue living in the metaverse even when you "switch off" and return to real life. You could wear augmented-reality glasses to watch what your avatar is doing in the metaverse or you could simply ignore it and catch up days later. Your avatar will live in the virtual world while you live in the real world. Every time that you plug into the virtual world again, you will regain control over your avatar. The avatar will learn from you how to behave (what kind of person you want it to be), and you will learn from your avatar's progress what that behavior leads to (what that kind of person does). Maybe at some point the roles will get reversed: the avatar will "teach" you what to do in the real world. That's assuming that Artificial Intelligence gets intelligent enough to simulate the human mind.

David Hanson's company Hanson Robotics has been building robots that simulate a real person's personality since 2003, most famously Sophia (2016), which was granted citizenship by Saudi Arabia in 2017, and the (failed) android/avatar of the Russian billionaire Dmitry Itskov (founder of the 2045 Initiative, which aims to achieve immortality). Sophia is mostly a testament to how far A.I. still is from creating any form of intelligence (let alone a human one). In 2020 OpenAI's system GPT-3, capable of answering ordinary questions in ordinary language, represented a major improvement in A.I. and renewed speculation that some day a system like GPT-3 will power psychologically realistic avatars.

When a person dies, the avatar of that person may continue living forever in the metaverse. In fact, the avatar of a person can live in multiple worlds. Grandchildren can interact with the avatar of a grandparent they never met. The dead person will be dead but the avatar will keep evolving. In 2014 there was already a startup selling this kind of service (Marius Ursache's Eterni.me).


The Metaverse as Homotopy

A homotopy is a continuous deformation of a map into another map.
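
In standard notation (a generic textbook formulation, not tied to any of the works cited below): given two continuous maps f, g : X → Y between topological spaces, a homotopy from f to g is a continuous map H that interpolates between them.

% A homotopy H continuously deforms the map f into the map g
% as the parameter t runs from 0 to 1: at t=0 we see f, at t=1 we see g.
\[
H \colon X \times [0,1] \to Y, \qquad H(x,0) = f(x), \quad H(x,1) = g(x) \quad \text{for all } x \in X .
\]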

In 1908 Bertrand Russell published the paper "Mathematical Logic as Based on the Theory of Types" to repair a flaw in classical logic, i.e. to outlaw some antinomies that occur in set theory (like the famous 1901 paradox of the set of all sets that are not members of themselves). In the Theory of Types, objects are classified by types, and types express properties, which makes it possible to reason about their objects. Alonzo Church in "A Set of Postulates for the Foundation of Logic" (1933) expanded the Theory of Types into a rigorous formal system. Several decades later, Per Martin-Löf in "An Intuitionistic Theory of Types" (1972) proposed a constructive, intensional type theory as an alternative to traditional logic (which makes no distinction about how one has reached a conclusion).
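
As a minimal illustration (standard notation, not drawn from any of the papers cited here): in such a type theory one does not merely assert that a proposition is true, one exhibits an object of the corresponding type, and the typing discipline itself rules out ill-formed statements like "the set of all sets that are not members of themselves".

% The basic judgment forms: "A is a type", "a is an object (proof) of type A",
% "a and b are equal objects of type A".
\[
A \ \mathsf{type} \qquad a : A \qquad a = b : A
\]
% A proof of the implication A -> B is literally a function taking proofs of A to proofs of B:
\[
\frac{\Gamma,\, x : A \ \vdash\ b : B}{\Gamma \ \vdash\ \lambda x.\, b : A \to B}
\]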

Steve Awodey and Michael Warren invented Homotopy Type Theory with their paper "Homotopy Theoretic Models of Identity Types" (2007). Homotopy Type Theory, which interprets type theory through the lens of homotopy theory (a branch of algebraic topology), views types as spaces rather than sets, tokens as points rather than elements of sets, and equalities as paths. For example, the identity of two objects of the same type can be understood as the existence of a path from one point to the other within the space of their type. Types are treated intensionally rather than extensionally: two types that are identical in terms of content are not identical if they are defined differently (e.g. "the number that follows 2" and "the number that precedes 3"). There is a type associated with each mathematical proposition, and the tokens of a type are "certificates" (or "witnesses" or "proofs") of the truth of that proposition. The theory comes with a proof technique called "path induction" (the elimination rule for the identity type) which facilitates automatic proof verification. A proof consists of a sequence of applications of rules that transform tokens, beginning with the tokens of the premises and ending with a token of the conclusion. In 2009 Vladimir Voevodsky added "univalence" to Homotopy Type Theory: the "univalence axiom" states that "identity is equivalent to equivalence", i.e. that isomorphic things can be identified. All mathematical entities can be described in the language of tokens and types, which means that Homotopy Type Theory can be used as a foundation for the whole of mathematics.
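
As a rough sketch of the two ingredients just mentioned (simplified from the standard notation of the HoTT literature, not a quotation from the papers above): path induction says that to prove a statement about all paths p : x = y it suffices to prove it for the trivial path refl, and univalence says that a path between two types is the same thing as an equivalence between them.

% Path induction (the eliminator J): given a family C over paths and a proof c of C
% at every trivial path refl_x, J extends c to all paths p : x = y.
\[
\mathsf{J}(c) \ :\ \prod_{x,y : A}\ \prod_{p : x =_A y} C(x,y,p),
\qquad \mathsf{J}(c)(x,x,\mathsf{refl}_x) \equiv c(x)
\]
% The univalence axiom: identity of types is equivalent to equivalence of types.
\[
(A =_{\mathcal{U}} B) \ \simeq\ (A \simeq B)
\]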

To be continued...


See also my History of Virtual and Augmented Reality, my Thoughts on Virtual Reality, and my Timelines of VR/AR.
A bibliography is included at the end of Thoughts on Virtual Reality.
Back to the index