
Elon Musk has made headlines again, this time by proposing a pause of at least six months on the training of AI systems more powerful than GPT-4.
The rationale is two-fold.
1. AI Safety. There is a view that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. So the idea would be to pause the most sophisticated AI research while we figure out how to make it “safe”.
2. The need for a democratic discourse about AI’s profound impact on humanity. For example: should we automate away all the jobs and develop nonhuman minds that might eventually replace us?
Elon’s aims are noble, but the plan doesn’t work
So the idea is to “slow down” while we get our bearings and make things “safe”. And that makes sense if the whole world agrees.
But there is absolutely no chance that this will happen.
AI research is currently the world’s most important arms race. And China is showing no signs of slowing down.
So if we press pause for six months, all that will happen is that we hand China a six-month lead.
And we cannot afford to give competing regimes an AI advantage.
What to do instead?
Short-term:
We should identify the Systemically Important AI Labs (SIAILs) and immediately subject them to government oversight.
There are likely fewer than a dozen labs that currently have the expertise and resources to push the boundaries of AI. And we know who they are. Governments should start working with them immediately. As in, next week.
We don’t need them to stop their work. We do need them to be accountable.
AI has the potential to be as powerful (and to create as much havoc) as the most powerful tools we have created, such as nuclear power and global banking. So it needs to be regulated in the same way.
As a starting point, SIAILs should report to government departments (and – in time – regulators) on how they ensure that their models are accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
We don’t need to wait years for this. We should start now. Go go go.
Medium-term:
We do need a debate on the future of AI. This debate has two primary strands.
1. AI Safety
We need to codify how AI is developed and deployed. And how best to regulate it. You need a license to run a bank or a nuclear power plant. It would only be natural to require one to operate a god-like artificial intelligence. We shouldn’t wait three years for a framework. It would be better to develop a v1 of a regulatory framework now and then improve it over time.
2. Socio-economic
Because AI can “automate all the work”, it has the potential to set us free. It can mean liberation from drudgery. But if ownership of the most powerful AI tools is concentrated in the hands of a few, it can also mean destitution for the have-nots. Governments should take stakes in the most powerful AI companies. Not through expropriation, but through sovereign wealth funds. Because we need mechanisms to ensure that the fruits of AI benefit all.
AI has more transformative potential than anything else in our lifetime. We must ensure we use it for good. Dreams of a global “pause button” are wishful thinking. Instead, we need rapid and concerted efforts to direct progress on AI.
Hopefully, our elected leaders will rise to the occasion.