Technology

DeepMind co-founder: AI must not ‘move fast and break things’


By Zoe Kleinman

Technology editor

Demis Hassabis, the co-founder of Google DeepMind, one of the UK’s biggest AI firms, says the booming industry should not follow the same path adopted by the older tech giants.

“Move fast and break things” was a motto coined by Meta boss Mark Zuckerberg, the creator of Facebook.

It was intended to encourage rapid innovation and company growth.

But for some it became a phrase that symbolised big tech firms acting rashly and causing disruption.


“I don’t think we should move fast and break things, you know, the typical Silicon Valley mantra. It has been extremely successful in building massive companies and providing us with lots of great services and applications… but AI is too important,” Mr Hassabis said.

“There’s a lot of work that needs to be done to ensure that we understand [AI systems] and we know how to deploy them in safe and responsible ways.”

Setting out the AI risks

The British tech leader was speaking to the BBC on the eve of the UK’s AI safety summit.

He believes that the threats posed by artificial intelligence fall broadly into three categories:

  • the risk of AI generating misinformation and deepfakes, and displaying bias
  • the deliberate misuse of the tech by bad actors
  • the technical risks of a future breed of artificial general intelligence becoming more powerful: how to keep it under control and make sure it sticks to the right goals

“They all require different solutions at different time scales but we should start work on them all now,” he said.

DeepMind has built an AI program called AlphaFold, which has the potential to advance the discovery of new medicines by predicting the structure of almost every protein in the human body.

An earlier program called AlphaGo beat one of the world’s top players of the ancient Chinese strategy game Go in a match held in 2016.

The player later retired from the game, saying “there is an entity that cannot be defeated”.

Safety summit

Tech bosses are pushing for governments to regulate the rapidly evolving technology.

Over the next two days, around 100 world leaders, tech bosses, academics and AI researchers are gathering at the UK’s Bletchley Park campus, once home to the codebreakers who helped secure victory during World War Two.

They will take part in discussions about how best to maximise the benefits of artificial intelligence – such as discovering new medicines and being put to work on potential climate change solutions – while minimising the risks.

The summit will focus on extreme threats posed by frontier AI, the most advanced forms of the tech which Mr Hassabis described as the “tip of the spear”. The summit’s priorities include the threat of bio-terrorism and cyber attacks.

International delegates include US Vice President Kamala Harris and European Commission President Ursula von der Leyen. China is also sending a representative.

There has been some criticism that the guest list is dominated by US giants including ChatGPT creator OpenAI, Anthropic, Microsoft, Google and Amazon – as well as Tesla and X (formerly Twitter) owner Elon Musk. Prime Minister Rishi Sunak will livestream a conversation with Mr Musk on X on Thursday evening.

Others have questioned whether announcements earlier this week from both the US and the G7 specifically about AI safety had overshadowed the event – but Mr Hassabis said the UK could still play “an important role” in shaping discussions.

‘Kind of sci-fi’

Image caption: Aidan Gomez, the founder of Cohere, says “Terminator scenarios” are “kind of sci-fi”

Aidan Gomez, the founder of Cohere, has come to the UK from Toronto for the summit. His firm was valued at $2bn in May 2023.

He said he believed there were more immediate threats than the “doomsday Terminator scenario” which he described as “kind of sci-fi”.

“In my personal opinion, I wish we would focus more near-term where there’s concrete policy work to be done,” he said.

“The technology is not ready to, for instance, prescribe drugs to patients, where an error could cost a human life.

“We really need to preserve human presence and oversight of these systems… we need regulation to help us steer and guide the acceptable use of this technology.”

SOURCE: BBC