Shannon Vallor is leading a campaign for ordinary people to take power back from the giant tech corporations controlling our lives. She talks to our Writer at Large
FROM her Edinburgh University office, Professor Shannon Vallor is planning a revolution. She’s coming for the Tech Bros: the Elon Musks and Mark Zuckerbergs.
Her battleground is artificial intelligence. Vallor wants us – ordinary people – to wrest back control of AI, perhaps the most important technology on Earth today, from mega-corporations like Twitter, Facebook, Google, Apple, Microsoft and OpenAI, the company behind ChatGPT.
The fight is so important to humanity’s future that Vallor equates it with the 20th century’s anti-colonial campaigns and the suffragettes’ struggle for voting rights.
Vallor is one of the world’s most distinguished AI ethicists. She is professor in the ethics of data and artificial intelligence, and a director at the Edinburgh Futures Institute. Big tech should fear her. She was once Google’s AI ethicist, so knows these giant corporations intimately.
If big tech is allowed to dominate AI, humanity is in trouble, Vallor believes. We’ll cede control of our lives to corporations which only care about profit and power.
If big tech is tamed, however, AI can be repurposed for what Vallor calls “human flourishing”.
Vallor lays out her manifesto in a highly anticipated new book, published next week, called The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. It’s an indispensable guide to understanding the technology that’s going to have sweeping influence over all our lives.
Mirror
BEFORE we can tame big tech, we need to understand exactly what AI is, Vallor believes. Most of us just buy into claims by the likes of Elon Musk.
We have been led to believe that AI can interact with us in very humanlike ways, that it’s a digital mind. “But what we’re interacting with isn’t another mind at all, it’s a mirror,” Vallor says.
AI isn’t “thinking”. What AI does, Vallor explains, is simply reflect us back to ourselves. AI like ChatGPT just swallows up everything it can online – books, newspaper articles, message board postings, information about our consumption patterns, and financial transactions – and spits it back at us.
“We need to remake our mental image of AI and how we interact with it,” she says, “so we’ve a more reality-grounded understanding of what it can do for us, what it can’t, and how we can live well with it.”
Now, clearly, the problem with this “AI mirror” is that as it’s built on our data, it’s filled with our flaws. AI is already reproducing our biases.
There are cases where AI was used by HR teams to screen job applications. Men tended to get the posts because the AI simply looked at who had got similar jobs previously and replicated human discrimination. Or there are cases of AI used in parole hearings, where ethnic minorities are more likely to be denied because previous human decision-making favoured white people.
“AI presents tremendous challenges for fairness and justice,” Vallor believes. “It’s trained on our data so carries all our discriminatory impulses. These systems aren’t objective. They’re not neutral any more than we are because they’re reflections of us.”
This makes AI very dangerous, Vallor feels. “However, if we understand what AI is doing, then it doesn’t have to be as harmful to us.” We can tame it, control it, stop using it to amplify our faults, and start using it to correct our faults.
Humanity is being bombarded with two AI narratives: doom and utopia. The “doomsayers” believe we’re headed for the Terminator movies while the Tech Bros claim we have a ticket to paradise via machines.
Apocalypse
BOTH narratives claim that AI will “surpass us”. And both are wrong. Vallor quickly dismantles the “apocalypse scenario”. There won’t be any “rise of the robots. We need to take that hyperbole off the table”. AI cannot achieve consciousness and kill us. If we stop believing these myths “we can focus on what AI really is, and the challenges it actually presents”.
That doesn’t mean AI can’t harm us, however. Allowing AI to run nuclear weapons systems would be a deadly mistake. Not because AI would deliberately launch missiles, but because it might make an error. AI needs human oversight.
“We can imagine scenarios where AI produces great harm, but those would be human mistakes, not AI taking over and deliberately harming us.” The quintessential human mistake would be allowing AI such power in the first place.
The biggest threat to humanity comes from us surrendering responsibilities to AI. Vallor says we’re being constantly told by tech companies that AI “will determine our future. That’s very useful for those with the power to determine how AI is developed, how it gets used and governed. The people with the most power in the AI ecosystem have very much an interest in telling everyone else that nobody can control this technology”.
That sense of disempowerment stops us “demanding that AI be developed in ways that benefit us”. It turns us “into passive bystanders hoping AI takes us somewhere nice”.
Vallor adds: “It’s a very convenient illusion.” But then “mirrors have always been tools of magicians and charlatans”. Some AI “evangelists have gone through the looking glass” – they’ve drunk their own Kool-Aid and really believe AI means utopia.
The other big threat is “the rush to automate human decision-making”. In other words: want a bank loan? AI will decide.
“The temptation to automate decisions is enormous,” says Vallor. Bosses don’t have to pay an AI the way they pay human staff. “But people aren’t making careful distinctions about which decisions are safe to automate, what guardrails we need, and who’s accountable if something goes wrong.”
Not only will AI reproduce the same mistakes humans would make, but by ceding control we’ll de-skill ourselves as a species. “Potentially, we degrade our own decision-making [abilities]. Judgment requires practice. That’s why we don’t let children make important life-changing decisions.”
Chaos
VALLOR says that given “the world is so overwhelming right now” – and many feel crises like war, climate change, political polarisation and economic chaos are impossible to cope with – there’s the temptation to “just say let machines take care of it. But that’s such a dangerous move. Once we do that, we lose the ability to chart our own course”.
Again, that plays into the hands of tech billionaires – it gives them all the power. “What’s being human other than having the power to decide for yourself what you think?” Vallor asks.
She notes with grim irony that the growth of this paternalistic view of technology comes as “we’re moving into an authoritarian phase of history. The worry isn’t that AI is undermining our agency in ways we haven’t seen before, but that it might be more effective at seducing us into surrendering our agency rather than having anyone need to take it by force”.
The big tech mantra that AI controls the future and will supersede us has unsettling echoes for Vallor. “Historically, what have been the motivations for such rhetoric? For authoritarian ideologies that say the future is already decided? Those motivations were often driven by the desire to consolidate and maintain power, to create impunity,” she says.
“Those motivations are plausibly circulating around AI right now – the motivation to say ‘AI is smarter than you, don’t even think about questioning the way it’s used’.”
Vallor calls the “powerful actors and corporations” promoting this view of AI “the wizard behind the curtain”.
However, AI is “human-made and human-guided”, and therefore “still very much within human control, if we choose to govern it wisely and appropriately”. The risk is that “those with the tightest grip … obstruct our efforts to hold the developers of these technologies accountable”.
At the moment, humanity is being told “we’ve no way of steering the bus. AI is behind the wheel, and we’re in the back. I’m saying we need to get up, grab the steering wheel and decide where we’re going collectively because the only people driving the bus are the big corporations”.
She adds: “There’s definitely no AI behind the wheel.”
Saviour
VALLOR says there is quasi-religious fervour within Silicon Valley towards AI, a “saviour complex. It has all the dangers many religious movements have”.
To harness AI’s positive power, we must move away from those “large language models” – like ChatGPT, built on all that biased garbage scraped from the internet. Instead, says Vallor, think of an AI trained on large data sets of human tissue samples and used to cut cancer diagnosis times from weeks to hours.
AI needs to be an assistant that sits beside humans, helping us make better decisions. It’s not our boss. AI medicine “has tremendous potential. Now contrast those kinds of applications with what people are already trying to do, which is saying ‘we haven’t invested in mental healthcare, so we’re going to train a version of AI to be a therapist’, and then give vulnerable people this tool instead of access to trained, certified [counsellors]”.
Vallor adds: “These are very different models: one goes after the cheap, low-hanging fruit without regard to human welfare, without addressing the structural deficits causing people to go without care.”
The other model, she says, asks how we use AI to give people better care. AI’s purpose should be to raise “the baseline” for humanity.
She adds: “There are two ways to think about developing AI. One is where human flourishing is the goal, where we improve the human condition. The other is to allow AI to be developed according to whatever commercial and political incentives are currently dominant, and then call whatever that is ‘progress’.
“Frankly, that’s what we’re doing now. We’re allowing dominant commercial and political incentives to shape AI and who it benefits and who it hurts.
“We know that’s not a way forward, because we know those incentives are unsustainable – we see that in our fracturing political systems, in the damage we’re doing to our environment.
“The way we’re building AI now isn’t a road to the future – it takes the dominant patterns of our recent past and bakes them in as the path we stay on, and we know that path is a dead end. Everyone admits we can’t just keep going.”
We need to “change track”, Vallor says, “but the most dangerous thing about the kind of AI we’re investing in is that it’ll ensure we stay on the same track”.
Instead of using AI to cut jobs or create dreadful art or meaningless chatbots, think of AI used to improve land for sustainable farming, help emergency responders, predict dangerous climate patterns, assist landmine removal in war zones, or preserve languages facing extinction.
War
“TECHNOLOGY isn’t just about engines of war and wealth,” Vallor points out. If we think of the ancient world, technologies like fishing, leather-making, farming and pottery all began “as engines of human flourishing, for sustaining life and communities.
“AI that’s beneficial is about repairing something broken, sustaining something breaking down, or healing something needing healed.”
If we thought about AI like that, “we’d have a completely different vision. We’ve got to turn the wheel – take the bus someplace else. I want to convince everyone currently hearing that they’ve no power over the future, and AI will determine it, that they’ve a moral right to demand a voice in how this technology is shaped and how it affects us.”
As Google’s former AI ethicist, Vallor isn’t aiming her fire at big tech staffers. “At Google, I worked with people who were very serious about ensuring AI is deployed responsibly. Now that’s within a corporation that has its own incentives, and those aren’t always aligned with what the individuals within a company might be trying to achieve,” she says.
“Some of the best people I’ve worked with, I came across at Google. Many of them have since left the company, as have I.” Her time there, however, gave her valuable insight into what “responsible AI” means.
The “million-dollar question”, Vallor believes, is how voters force governments to act to create responsible AI, especially in a society where so many are “disengaged from politics”, and “there’s no sense of political will to change how technology is governed”.
People feel “disempowered” when it comes to technology. Vallor says: “But here’s a rhetorical question: how did we break away from a world where absolute monarchs were accepted as inevitable?
“People were able to break from colonial rule and not see that as inevitable. Women broke free from a world where they were seen as property. How many times have we broken free of these kinds of chains? We need to change the way people think about technology in order to reanimate their sense of agency and possibility. If we get that change going, people will feel they’ve the right to demand leaders who show they can govern the powers shaping technology.”
Vallor doesn’t shy away from being a “catalyst” for change. “I’m trying to do whatever I can to motivate that shift in thinking.”
Revolution
MORE people around the world are starting to think like Vallor, and not just in universities and think tanks. We’re reaching a tipping point where the conversation about taming tech, and concerns about how it can be used to “consolidate authoritarian control”, will leap into the mainstream. “There’s so much ground for optimism and confidence that change is possible,” she adds.
“Strength is building in the ‘responsible AI movement’. If we continue to press for this revolutionary shift in how people understand their relationship to technology – presented to us as out of our hands and control – if we can break that illusion, then we’ll see real political change. We’ve a moral right to shape what the future looks like.”
Key to change is getting big players onboard – trade unions, political parties, and prominent cultural figures. “If we get many pushing in the same direction we can tip the balance, because it’s not in our favour at the moment.”
There’s a temptation, Vallor admits, towards “neo-Luddism”. The Luddites saw technology as a threat to jobs and smashed machinery during the Industrial Revolution. However, she explains: “AI isn’t the problem. We are – the values and incentives we’ve built into our economy and politics, that’s the problem. AI is just an expression of that right now.
“Go back to the mirror metaphor. If you don’t like what you see, what do you do? Smashing the mirror isn’t helpful. Trying to disable AI is fruitless and self-destructive. AI is one of many tools we’ll need in the future. We must convince people that AI doesn’t have to be this harmful, extractive, damaging thing that only benefits a wealthy few.”
There are also policy levers to pull. Breaking up big tech monopolies is one move Vallor supports, along with the European Union’s AI Act, which seeks to create a regulatory framework for the technology.
We need to recall the safety measures put in place for emerging technologies such as aviation in previous eras. Both carrot and stick are needed. Tech companies could see their liabilities capped if they proved their safety procedures were robust and there was no negligence; slipshod safety measures would see liabilities rise. Tech companies could also be treated like environmental polluters and made to pay for clean-up costs.
Currently, fining big tech firms hundreds of millions “is trivial compared to the profit from continuing business as usual”. Big tech needs to be taught that “responsibility becomes the only way to be more profitable”.
Culture
THERE has been a trend lately of mocking AI “artworks” and attempts at writing. AI currently seems to create pictures of people with three legs or heads on backwards. Vallor warns we shouldn’t find this funny.
“There’s an immense effort to fine-tune these models,” she says, “so the gap between the quality of what these models output and what humans are capable of doing at our best will become narrower to the point where we won’t easily notice the difference.”
AI art and literature are only in their infancy. Vallor sees attempts to use AI to replicate human creativity as “inherently unjustifiable”. Art is what makes humans human, after all. She’s appalled at the idea of humans “surrendering our creative power to machines and letting them determine the shape of culture. That’s the existential risk.
“Human creativity is about taking what’s inside us – our values, desires and vision – and putting it out there for others to respond. A machine can’t do that as there’s nothing inside. The creative impulse is fundamental to what we are as humans. If we surrender that, we surrender the future.”
To Vallor, there’s no real difference between the big tech corporations: they all pursue short-term profit and exploit regulatory loopholes. Twitter is the exception, as it’s now a platform “run almost entirely for the benefit of one man’s fragile ego”.
The bottom line is that the future the tech companies want to forge “seems to have no place in it for us”. Vallor points to the recent “Crush” ad for Apple’s iPad, which showed artworks being physically destroyed.
“It openly celebrated a vision of the future as the mechanistic compression of rich, colourful, human vitality and artistry into a flat, grey, maximally efficient device for content delivery. It tells you something that they really thought this vision would make us feel awe and wonder, rather than horror and revulsion.”
If we want an example of the cynical “extractive approach” by AI companies, where they “scrape” everything humans do “into an AI training bucket, process it and then sell it all back to us on their terms”, then look no further than Scarlett Johansson. OpenAI wanted to use her voice. Johansson refused.
“They just created their own version without her permission and exploited it anyway,” says Vallor.
“There is, in the AI world, now a push to take everything that has cultural or intellectual value and then feed that work into an AI model, and use that model’s output to replace the original with a proprietary product. And they don’t ask for permission. Why should that be the model of innovation we celebrate?”