On 28 October 2021, Facebook changed its corporate name to Meta and staked its claim to become the company of the Metaverse.
This came on the same day that its global head of safety, Antigone Davis, gave evidence to the UK Parliament’s joint select committee on the Online Safety Bill, a meeting I chaired. After weeks of competition lawsuits brought in America by the Federal Trade Commission and state-level attorneys general, followed by a series of revelations about the company’s poor user safety record from the whistleblower Frances Haugen, Facebook’s announcement looked more like a PR manoeuvre than a genuine change of strategy.

The launch of Meta says a lot about how Mark Zuckerberg sees the relationship between his company and its users. The word “meta” derives from ancient Greek, meaning “beyond”, and has come to describe something that is self-aware or self-reflective. Will the new Meta be like Facebook, but more so? Given that Facebook is an engagement-based advertising business that increases revenues by getting people to use its services more often and for longer, it would be safe to assume Meta is its next big play to dominate the attention economy.
Crash landing
The Metaverse Zuckerberg wants to create is an extended reality experience, developed from virtual reality concepts we have already seen, like Second Life. That means the company will essentially be aiming to get people to spend more of their valuable time in a fake world, rather than the real one. The term Metaverse was first used by Neal Stephenson in his 1992 sci-fi novel Snow Crash, which has long been a favourite among the Palo Alto tech crowd, including Google co-founder Sergey Brin. The novel presents a dystopian future in the US, where elected government has been replaced by corporate franchises run by criminal gangs. The rich live in gated communities, while young people live in cramped housing and work as delivery couriers, anticipating the modern gig economy. Its Metaverse offers a virtual experience almost indistinguishable from the physical world, and an escape from that world’s drudgery.
Self-replicating information
A central idea of Snow Crash is that, long before modern media was invented, information spread like a virus, infecting humans with new beliefs and ideas. In Stephenson’s book, his lead character, Hiro Protagonist, tells us: “No matter how smart we get, there is always this deep irrational part that makes us potential hosts for self-replicating information.”

While these forces have always existed, social media platforms have created opportunities to spread disinformation, conspiracy theories, hate and extremism like never before. Groups and pages can be created overnight that are soon interacting with millions of users. The algorithms of companies like Facebook promote content based on engagement, rather than any assessment of the value or safety of that content. The artificial intelligence (AI) tools they have created – in theory, to identify harmful content – do not seem capable of doing the job without a much higher level of human intervention. The Wall Street Journal recently reported, based on internal documents leaked from Facebook, that the company’s own engineers believed AI was only picking up about 5 per cent of hate speech on the platform. An extended reality world where people are more cut off than ever before from real human interaction, a world shaped and controlled by algorithms, is a real cause for concern.
Safety valves
These issues, and the responsibility that technology companies have for the safety of their users, have been central to the inquiry the joint committee on the Online Safety Bill has been leading. We cannot leave the resolution of these issues to the companies alone; we need independent regulation of high-risk, big technology platforms. If we want to keep people safe, we need to be able to apply the rule of law consistently to all areas of human experience. If we fail, we may deliver to our children the dystopian world of a Snow Crash metaverse.