Peter Franklin: It’s Effective Altruists v effective accelerationists – slugging it out over the future of AI | Conservative Home

Peter Franklin is an Associate Editor of UnHerd.

Do you ever get the feeling that our leaders don’t really care about us? Yes? Well, I don’t blame you. But if you think things are bad now, then just consider politics in the eighteenth century.

Back then, politicians didn’t have to care about us because few men and precisely no women had the vote. Parliament was there to represent the elites, not the people — and the civil service was a system of sinecures based on the exploitation of artificial monopolies. Furthermore, in the absence of true democracy, the political parties, in as much as they existed at all, were concerned with causes and loyalties far removed from popular concerns.

No matter how arrogant and out-of-touch they might seem, the politicians of our time and place could not get away with the unashamed haughtiness of the Georgian era.

But big business is another matter. In this milieu, the movers-and-shakers still strut their stuff like periwigged aristocrats. And nowhere is this hauteur more unchallenged than in the tech sector.

Indeed, the more advanced the technology, the lesser the ability of the public to make a meaningful contribution to key decisions. A case in point is the regulation of artificial intelligence (AI) and especially artificial general intelligence (AGI) — which might prove to be the most important invention of the 21st century.

A useful definition of AGI is a computational system that can equal or exceed human capabilities in most economically useful tasks. Just to be clear, nothing like this exists yet, but many experts in bog-standard AI think that we’re on the brink of a breakthrough. And that has led to an ideological split at the highest levels of the (mostly US-based) tech industry.

Until recently, this was a rarefied debate — about as easy for non-experts to follow as the finer points of the Tory-Whig rivalry were for the average English cowherd. But, earlier this month, something happened to bring the conflict to wider attention. The drama centred on OpenAI — which has been described as the world’s most important company.

Controlled by a not-for-profit organisation, but generously funded by Microsoft and other investors, OpenAI has already made waves. Earlier this year, it amazed the world with GPT-4 — an AI system that can generate complex and meaningful text in response to natural-language questions and instructions. Though professional writers can still churn out better copy, GPT-4’s output is genuinely impressive.

The excitement around GPT-4 has propelled Sam Altman, the CEO of OpenAI, to the front rank of Silicon Valley superstars. At the age of 38, he is the Bill Gates or Steve Jobs of his generation. It therefore came as a huge shock when, on the 17th of this month, he was sacked by the OpenAI board.

Of course, these things happen — for instance, when Steve Jobs parted ways with Apple in 1985. However, Altman’s departure was as if Jobs had been sacked in 2007: i.e., after he’d returned to Apple and had successfully developed the first-generation iPhone.

OpenAI’s move wasn’t just surprising; it was seemingly inexplicable. Indeed, what happened next was an investor and employee rebellion in which Altman was restored to his position and the board reconstituted.

So what could explain this extraordinary tale of coup and counter-coup? Surely the top brass in the world’s most important company can’t have been so trivially-minded as to indulge in mere office politics?

Most likely, the answer is the precise opposite. As others have noted, those who wanted Altman out were connected to the Effective Altruism or “EA” movement. EA is heavily influenced by utilitarian philosophy — which seeks the greatest good (or happiness) for the greatest number of people (or sentient beings). People associated with this movement include the philosopher Peter Singer, the Facebook co-founder Dustin Moskovitz and, less happily, the disgraced crypto-king, Sam Bankman-Fried.

Some EA adherents, especially those in Silicon Valley, are worried that if Artificial General Intelligence is achieved, humanity could be enslaved or destroyed by its new creation. Even if the risk is very low, from a utilitarian point of view the negative consequences are so great as to justify extreme controls on the development of the technology and, if necessary, total cessation.

However, the Effective Altruists are countered by a rival faction who call themselves “effective accelerationists” (incidentally, they don’t seem keen on capital letters). If you see anyone on social media who includes the abbreviation “e/acc” after their name — for instance, the tech legend Marc Andreessen — this is what it means.

The accelerationists also believe that AGI is coming — but they want it as soon as possible, given the potential of the technology to improve the human condition. Furthermore, with all the non-AI related risks to humanity’s continued survival — from nuclear war to climate change to pandemic disease — the e/acc position is that we can improve our odds by getting some super-intelligence on our side.

So rival philosophies, playing for high stakes, with some powerful names on both sides. This would certainly be a high-minded motive for the tussle over Sam Altman (who leans towards the accelerationist side of the debate). After all, what does a spot of corporate turbulence matter compared to the future of the human race?

The only trouble is that this debate is so high-minded — and high-powered — as to be above our heads. And by that I don’t just mean the general public, but most of the media and just about all of our politicians. What could be the most important issue of our time is being contested in the absence of anything resembling democracy.

In this respect we really have returned to the politics of the eighteenth century in which loosely defined rival parties battle it out with about as many voters involved as the average rotten borough. So is there anything that British politicians can do to bring the decision-making process back within the democratic realm?

Rishi Sunak was certainly right to show international leadership through his AI Safety Summit, but much more needs to be done. For instance, as Garvan Walsh argues on this site, the UK has a golden opportunity to take the lead on AI regulation — because both the EU and the US are making a right old hash of it. We also need an ambitious industrial policy capable of making the most of our own AI sector, which though much smaller than its American and Chinese counterparts, is still significant.

Perhaps most importantly, the UK government should proceed on the basis of ideological neutrality. At this stage, it’s impossible to say objectively which of the rival AI factions is right. Indeed, both sides could be wrong, because both the Effective Altruists and the accelerationists believe that we’re already on the path to Artificial General Intelligence.

Yet, as things stand, there is zero proof that the progress being made by OpenAI and its rivals gets us any closer to AGI, which could depend on an entirely different set of principles.

Clearly, the accelerationists want AGI to happen, and the Effective Altruists want to be in charge if it does. But we should be prepared for the possibility both factions are chasing fairies.
