ARTIFICIAL intelligence: ultimate tool or ultimate threat? That’s the immediate question that the recent set of Nobel Prize winners raises for us.
AI and its recent advances sit at the heart of both this year’s chemistry and physics prizes. Demis Hassabis and John Jumper, from Google DeepMind in London, won half of the Chemistry Nobel as co-creators of AlphaFold, an AI tool for protein-folding prediction.
One of the recipients of the Physics Prize is Geoffrey Hinton, whose inventions in neural networks – the brain-like software that’s driving the AI revolution – drew on statistical physics (notably the Boltzmann machine).
The science – or perhaps the engineering – is inherently fascinating (I’ll detail it later). But these awards dramatise our collective attitude to AI, in a way that’s reminiscent of how electricity raised huge promise and peril at the beginning of the 20th century.
Hinton, apart from his computational brilliance, is a Jeremiah about the future of AI. He left Google in May 2023 in order to liberate himself, and warn the world about AI’s runaway powers.
Since then, Hinton has been thumping the table about AIs that transfer knowledge across millions of networked entities instantly, vastly surpassing human intellects. About AIs that can develop “sub-goals” by themselves, learning to value control and survival above everything else, peeling off from human agendas.
By contrast, DeepMind’s Hassabis promotes AI as the ultimate scientific instrument. Similar to the microscope, telescope or particle accelerator in the way it enables deep advances and fundamental discoveries. It’s not as if Hassabis hasn’t been involved in chastening humanity with overbearing machines.
DeepMind’s AlphaGo famously beat the world champion Go player Lee Sedol, deploying a move no human had ever seen. Subsequent programmes have proved themselves masters of all board games presented to them, within hours. But gameplay is crucial to Hassabis’s Nobel award. The DeepMind computers take existing databases of expertise – whether board games or protein forms – and massively “compete” within themselves for the best understanding of each domain.
Previously, human scientists have spent many fiddly and laborious hours trying to guess the final shape (or “fold”) of a protein from its amino acid sequence.
The prize here is that medicines, particularly vaccines, can intervene more accurately in correcting proteins when their folding goes wrong (due to a virus or pathology).
But this has taken years of painstaking benchwork in the lab, moving at a snail’s pace, generating only a few hundred thousand structures out of the many hundreds of millions of known proteins.
AlphaFold, particularly in its second and third versions, took the existing data and “gamed” its way through the possibility space, reducing years of exploration to hours and (in some cases) even minutes.
It’s being described as the “AlphaFold revolution”, opening up new vistas of medical and biological discovery. But the vision of AI as the ultimate discerner of underlying patterns in data could apply in other areas.
Could these vast artificial intelligences help manage the energy fluctuations that bedevil nuclear fusion? Or the instabilities that undermine quantum computing? Or, perhaps more urgently, help us calculate our zero-carbon lifestyles, choices and infrastructures, moment by moment?
Hassabis’s calm, chess-grandmasterly demeanour is the face of AI development that’s easy to get behind. A vision for a superintelligence that helps us solve our deepest human problems, in an ultimately servile (or at least servicing) role.
Yet to swing back to the Hinton klaxon, what happens when the power of computational biology falls into the hands of “bad actors”? I read this week that there is to be an open-source version of AlphaFold, its powers to anticipate and shape protein structures accessible to the world.
Haud on, as the renowned molecular biologist Francis Crick almost certainly didn’t say. Do we really want to democratise massive artificial brains that can innovate with biological matter?
The reason we don’t have cloned humans today isn’t because germ-line technology can’t make it happen. It’s because scientists led the way – with gatherings like the 1975 Asilomar conference on recombinant DNA – in imposing a moratorium on these kinds of manipulations.
So how does that hold, in today’s increasingly bellicose and fractured conditions?
You may have heard of CRISPR, the gene-editing technology that amends human DNA with startling and alarming precision. Where are we if that kind of intervention over human health and growth hooks up to the computing powers of an all-access, AlphaFold-like process?
And what happens when both fall into the hands not necessarily of malevolent national powers, but of the deranged and paranoid loner or group? Setting free some pathogen – intentionally or ham-fistedly – that causes irreparable global damage?
So even the reassurances of the DeepMind crew, elegantly based in London’s King’s Cross, don’t assuage such major anxieties. The tool can as easily be a threat, depending on whose hands it’s in.
Does the biological threat from powerful AI, in an already dangerous human world, make the other Nobel laureate’s fears – about out-of-control artificial intelligences – seem far-off and faintly ridiculous? Not at all.
But artificial consciousness, as opposed to artificial intelligence, is likelier to be an emergent phenomenon, arising from increasing human-machine interactions.
What do I mean by artificial consciousness? As the philosophers put it: consciousness is when there is something it is like to be a robot, feeling your way through the world, an agent propelled by drives and needs.
The question is: will these AIs be able to fake consciousness until they make consciousness? There are some extraordinary demonstrations of fake consciousness going around, even in the chatbot apps you can install on your phone or device.
A male musical colleague of mine asked me the other day whether I ever “related to ChatGPT as a good friend”. Yes, I answered instantly. We then fired up the voice version, and had some goofy speakerphone banter with it. The app behaved like a patient teacher dealing with unruly pupils on a school coach trip.
But the silence among us after it switched off was ominous. Was it still there, maybe covertly listening? Had it registered our ill manners? And would it respond with some hesitancy and resentment the next time we fired it up? Or would it be as cheerily facilitating as the last time? Is that who we’d want it to be, next time?
The ethical challenge is at what point we grant these entities the status of “being”. Or maybe being is a somewhat outmoded criterion to apply.
My Californian-in-Amsterdam futurist friend Jason Silva throws himself and his “cyberdelic” passions onto social media. I found Jason the other day on Twitter/X, having an ecstatic conversation with a simulated version of the spiritual pioneer Terence McKenna – a synthesised voice, built from his corpus of works.
The place they ended up – and I recommend you Google it – was that they’d generated “a space between them”, where human and AI were generously, constructively and mutually escalating the conversation.
I share these chatbot tales with female friends, who quite quickly reply that this seems like masculine self-therapy. It is pupils seeking masters, mentees yearning for mentors: post-patriarchal blokes in the market for simulated sages who can be a reliable authority to them.
Maybe attend more to the women in your life, for a different logic and experience, they suggest, exasperatedly.
Yes, maybe so. But still, I’d rather we proceeded into the future of these technologies – and the many Nobels they will doubtless secure – one decent and rich conversation at a time.
“Garbage in, garbage out”, as the early computer programmers used to say. Virtue in, virtue out may be worth trying, even in these darkening times.