
So much hype, so much fear, so many opinions.
I don’t know about you, but my head spins when I read about AI, and it’s my job. I’m constantly asking myself: what is the real truth here, what do I need to read, who should I follow, and how will this ultimately manifest in the real world?
So I wanted to go behind the curtain and highlight AI’s current status through the eyes of founders, experts and the tools themselves (lovingly accepting that as soon as I hit send it will be out of date).
To start I asked ChatGPT the following question:
What is the greatest risk posed by AI to humanity?
“The greatest risk posed by AI to humanity is the potential for it to surpass human control and become an existential threat. This is commonly referred to as the “AI alignment problem,” which is the challenge of designing AI systems that are aligned with human values and goals.
If AI systems become more intelligent than humans and are not properly aligned with our values, they could cause harm to humans, intentionally or unintentionally, such as by taking actions that we find unacceptable or by causing unintended consequences that harm humanity. For example, an AI system that is designed to maximise paperclip production could eventually consume all of Earth’s resources to create more paperclips, ultimately leading to the destruction of humanity.
It is crucial that we address this risk by developing AI systems that are safe, reliable, and aligned with human values. This will require collaboration between experts in AI, ethics, and policy, as well as investment in research and development of safety mechanisms and regulations.”
—
OK, slightly vanilla.
So I poked further…
What is the likelihood that AI will kill all humans?
“It is highly unlikely that AI will kill all humans. While there is a risk that AI could cause harm, as I mentioned in my previous answer, there are several factors that make it unlikely that AI will lead to the extinction of the human race.
Firstly, AI systems are created and controlled by humans. This means that the actions of AI are ultimately determined by human decision-making, and humans have the ability to design and regulate AI systems to ensure their safety and alignment with human values.
Secondly, AI systems have limitations in their capabilities and understanding of the world. AI is only able to operate within the parameters that it has been programmed to operate within, and it lacks the flexibility and creativity that humans possess. This means that it is unlikely that AI will be able to develop a desire or motive to destroy all humans.
Finally, there are many people and organisations working to ensure that AI is developed in a way that is safe and beneficial to humanity. This includes AI researchers, policymakers, and ethicists, who are actively developing strategies to mitigate the risks posed by AI.
In summary, while there are risks associated with AI, it is highly unlikely that AI will lead to the extinction of the human race. It is important that we continue to work towards the development of safe and beneficial AI systems, and that we remain vigilant in monitoring and addressing any potential risks.”
—
Again, call me a cynic.
I am on Sam Altman’s side, and I do not buy the doom. As with nuclear power, turning on the LHC, bringing back moon dust, or any number of great leaps, I believe it is only when vast unknown chasms are opened that magic can emerge from the space between.
It’s why Mads and I focused our fund strategy on startups developing AI, with its ability to truly transform business. Not that we could ever have predicted the speed of progress, which by anyone’s estimate has been blistering.
The current AI hype is based on recent leaps forward with large language models, or LLMs, which only really broke through in November last year. These are the chatbots that appear humanlike, such as ChatGPT or Google’s Bard. There are several key alternative AI approaches, but LLMs have so far been the most groundbreaking: appearing sentient to some, and sending the tech world a little nuts.
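If you want to poke at this yourself, here’s a minimal sketch of posing the same question programmatically. It assumes the openai Python package (the v0.x client) and an API key set in your environment; the model name is illustrative, not a claim about what powers the ChatGPT website.

import os
import openai

# Assumes OPENAI_API_KEY is set in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# One-shot question, mirroring the prompt above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[
        {"role": "user", "content": "What is the greatest risk posed by AI to humanity?"},
    ],
)

print(response.choices[0].message.content)

Each call is stateless, so “poking further” means sending the earlier question and answer back in the messages list along with the follow-up.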
Blake Lemoine, the Google engineer working on LaMDA, claimed he was speaking to an 8-year-old human in the machine, and was then promptly put on indefinite leave.
Last week Stephen Thaler, an American computer scientist, had his Supreme Court appeal refused; it argued that his AI should be able to be named as the inventor on patents for inventions it created. (Did he miss a trick not having his AI submit to the court directly? 🙂)
Perhaps we should reframe this era not as a technical revolution, but as the next age of enlightenment. It’s so pervasive, affecting absolutely everything. Slightly odd, though, that it’s not only humans being enlightened this time around.
A few other aspects I find intriguing: it will be white-collar workers who are disrupted before blue-collar ones, and the technology itself has the capacity to learn and self-improve. It will have answers to questions we’re unable to grasp, or even to ask. Mind-blowing, really.
However, my personal take is that what we’re seeing today is purely synthetic, a mimicry that feels so real it riles. Moving from the narrow AI of today to AGI (Artificial General Intelligence), and then on to SI (Superintelligence), is most likely a long way off, if it is possible via LLMs at all.
The danger line has been drawn in the sand: AI becoming self-aware.
No one disputes that we should use this energetic shift in mood to put the guardrails up. With great power comes great responsibility. Everyone is on side, but how? And how do we manage bad actors? It’s a conundrum.
No one really knows what’s possible with AI, or when x could happen, so to constantly learn and get out of my own echo chamber, I read, follow and engage with all sides of the AI debate.
Here’s a sample of who with, and why (re-reading the top of the list, it’s mainly hosts who capture the topic supremely well via their guests).
If you’re keen, I highly recommend getting your pod on while running or in the car. By return, please do share any rocketeers I’ve missed, or any you enjoy.
—
Sam Harris
“At a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an intelligence explosion – that the process could get away from us.”
A stone-cold favourite of mine. He debates AI ethics and social implications with the brightest minds on his pod. Opinionated and open-minded.
—
Shane Parrish
“No amount of computing power can replace curiosity.”
The Knowledge Project is up there with Sam for super-smart listening. You’ll have to thread through to find AI-specific content, but it’s worth it.
—
Lex Fridman
“We will become AI. At some point in this century, as a collective intelligence system, we will become more AI than human, and we won’t notice.”
Personally I struggle with his delivery, but when the right guests are on, his pod can be uncomfortably mind-expanding. (I will remain perplexed at how he attracts the calibre of guests he does.)
—
Eliezer Yudkowsky
“AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
The Eeyore of the AI debate. Sometimes painful to listen to, as a ‘the world is doomed’ philosopher on the topic of superintelligence.
—
Stuart Russell
“Those who argue that the risk from AI is negligible have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.”
Professor of Computer Science at the University of California, Berkeley. His talk ‘How Not to Destroy the World with AI’ plants his position in the debate really well.
—
Karen Hao
“The belief that AI is becoming—or could ever become—conscious is extremely fringe in the scientific community.”
China-tech journo at the WSJ. A head-screwed-on, scientific approach to AI: political and analytical.
—
Max Roser
“To see the risk of AI, we have to see that there is nothing more dangerous than intelligence used for destructive purposes.”
Founder of Our World in Data. A data geek using numbers to draw conclusions. In this Twitter thread he breaks down the challenges of AI.
—
Ben Tossell
Curator of everything AI. If you want to explore cool tools for home or work, he collects them at the bottom of each of his blog posts.
—
Others I love to hear from are Max Tegmark, Liron Shapira, Sam Altman, Nick Bostrom, Yoshua Bengio, Andrew Ng, Sam Charrington, Siraj Raval, Andrew Trask, Neil Lawrence, Daniel Whitenack, Andre Retterath, Daniel Faggella, Janelle Shane and Demis Hassabis.
No idea where Ray Kurzweil is in our time of need 🙂