Utopia or dystopia? Time to grab the AI steering wheel
by Aruna Sathanapally
Overshadowed by rolling events in the Middle East, for 24 hours last week Canberra hosted one of the world’s most powerful tech leaders.
I spoke to Dario Amodei, chief executive of AI giant Anthropic, in the midst of that flying visit, in front of a group of parliamentarians, experts, and media, in the Great Hall of Parliament.
I have found Amodei’s explanations of the trajectory of artificial intelligence, technology that he has played an outsized role in developing, perplexing.
Earlier this year, he published an essay setting out the risks that AI presents, sounding a warning bell that our window to shield ourselves is now.
It is rare – and somewhat confusing – to have an innovator crying out for regulation even as they build their metaphorical nuclear bomb. As I asked Amodei himself – if the risks are as he fears, why are we even doing this?
A cynic might suggest that both the utopian and dystopian futures on offer are helping build the hype that has led to a surge of capital, a pipeline of data infrastructure, and hopes across the private sector for a productivity boom.
The truer answer is likely to be that Amodei believes deeply in the promise of technological breakthroughs to genuinely lift human welfare, while at the same time seeing that AI’s Chernobyl moment could come first.
In our discussion, he was clearest about the risks that this technology poses in the hands of authoritarian governments, with its potential for mass surveillance and control. Given Anthropic’s stoush with the Trump administration, this is not a theoretical issue.
It is becoming harder to ignore the transformative potential of generative AI tools. Strides in programming capability have been dramatic. Uptake in language-based professions, such as the law, has been swift. But concerns about accuracy and reliability persist. Borrowing words from American writer Kelsey Piper, using the technology feels simultaneously like having a genius at your beck and call, and yelling at your printer.
Last week, Amodei urged us not to base our assessments on what we see the technology do today, but to look instead at the rate of change. Try something today, then again in three months. And then consider where we could be in just a few years.
We should not baulk at powerful new discoveries, unless there is good reason to do so. Technology drives productivity growth. Without a major productivity boost, we face hard decisions about how to meet the needs and wants of a growing and ageing population, and how to survive in a more expensive world. Even if only the conservative estimates of the productivity growth spurred by AI eventuate, that would make a material difference.
Equally, we should not naively assume that technological advances will automatically deliver better lives or a better society without active steps by humans to shape the technology and the institutions that surround it. The future of AI, in the words of MIT economist David Autor, is not an exercise in forecasting; it is an exercise in design.
Cue Amodei’s request that we treat the development of the technology with seriousness and immediacy. If we want this technology to lift living standards, history tells us we need to grab the steering wheel.
In the Australian context, this means confronting at least three issues.
First, how do we build and diffuse safe and reliable AI tools that improve people’s lives? How do we harness this technology to solve our biggest challenges, to enhance human cognitive work rather than simply replace it? For example, demand for labour is growing fastest in healthcare and aged care because we have one of the longest life expectancies in the world. We stand to gain from technology that makes it easier to get timely services to prevent and manage chronic health conditions, and helps healthcare workers to concentrate their precious time on the highest-value activities.
Second, how can Australia best influence global development of AI regulation to ensure safety and alignment? This is a collective action problem. We need to stand up for international co-operation and shared rules, including steps to mitigate catastrophic risk. AI needs infrastructure – chips, energy, and (on present technology) cooling. AI expert Janet Egan has set out how Australian data centres that run on renewable energy could crowd out dirtier alternatives, and increase Australia’s influence and resilience in the years ahead.
Third, how do we ensure that the gains from this new technology do not end up concentrated in the hands of the few, as AI disrupts the channels through which economic gains are broadly shared (education, employment, tax)?
We need to be thinking now about how to prepare and support an adaptable workforce, today and in the future. And we need our tax and transfer system to fit the times: to better capture returns to capital, to be less reliant on wages and salaries, and to reckon with the fact that JobSeeker is too low to be an effective safety net in a labour market shock.
It won’t be easy to come up with answers in the face of uncertain and fast-paced change. But without action on each of these fronts, public appetite – and social licence – for the technology cannot be taken for granted.