When AI gets real

A coastal property stand-off recently landed on my kitchen table. A friend’s parents own a modest beach bungalow in an ageing community—the sort of place where brick pathways meander through palms like a 1970s Gold Coast development frozen in time, and not much has changed since Gough Whitlam was prime minister. While gleaming high-rises have sprouted nearby over the decades, the original buildings have persisted, protected by a rather brilliant governance structure requiring 80% owner agreement for any sale. Anyone who’s ever chaired a body corporate meeting knows that getting 80% of Australians to agree on anything is about as likely as finding a cold, empty spot on Bondi Beach in January.

But recently, a property developer made headway by targeting individual owners with attractive offers, then tabling a surprisingly generous proposal for the entire complex. The place erupted into debate—Was the offer fair dinkum? How might negotiations unfold? Who stood to benefit most?

To assist my friend’s parents through this property puzzle, I turned to ChatGPT 4.5—not the free version that fumbles like a first-grade footballer, but the premium ‘pro’ tier that costs about the same as a decent bottle of Lisa McGuigan Silver Pinot Grigio each week. This version includes a ‘deep research’ capability allowing the AI to spend up to half an hour exploring online sources before synthesising findings. I requested an evaluation of the offer and, astonishingly, received a comprehensive analysis within three minutes. Over the course of our conversation, I refined my questions while the AI adjusted its assessment accordingly.

The verdict? The offer undervalued the property. The AI had uncovered comparable nearby sales commanding higher prices, including one property that had been zoned upwards post-purchase, dramatically increasing its development potential and true market value. The negotiation dynamics proved particularly fascinating—the AI outlined how developers might secure majority ownership to control the body corporate, then implement burdensome rules or special levies designed to pressure remaining owners to sell. Yet this strategy created vulnerability: ‘They’ll own half a non-redevelopable complex—meaning their investment sits in limbo’, it observed. ‘Their financing partners will grow increasingly nervous’. If just 21% of owners maintained their resolve, they could force developers to ‘bleed cash’ until a more generous offer materialised.
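The holdout arithmetic here is simple enough to check yourself. Below is a minimal sketch in Python; the 100-unit complex and the strict ‘at least 80% of owners must vote yes’ reading of the rule are illustrative assumptions on my part, not details from the actual body corporate.

```python
import math

def min_blocking_owners(total_units: int, approval_threshold: float) -> int:
    """Smallest holdout bloc that can veto a sale requiring
    approval_threshold (e.g. 0.8 for 80%) owner agreement."""
    votes_needed = math.ceil(total_units * approval_threshold)  # yes-votes required for the sale to pass
    return total_units - votes_needed + 1  # one holdout more than the sale can absorb

# Hypothetical 100-unit complex under the 80% rule.
print(min_blocking_owners(100, 0.80))  # -> 21
```

Note the direction of the effect: the higher the bar for a sale, the smaller the minority that can block it, which is exactly why that 21% figure carries such leverage.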

I forwarded this analysis to my friend’s parents with qualified enthusiasm. A property solicitor might have provided more nuanced counsel—but not in three minutes, and certainly not for $200. The AI’s analysis contained a few factual errors regarding property dimensions, but it corrected them immediately when I pointed them out. While I regularly use ChatGPT for various tasks—teaching myself about scientific concepts, configuring an old computer for a neighbour’s six-year-old’s robotics projects, even experimenting with fan fiction based on profiles I’ve written—this property consultation felt fundamentally different. Here was an AI helping solve a genuine, complex, financially significant problem with remarkable practicality and business acumen. The system demonstrated a savviness I’d previously associated exclusively with human experience. Despite following AI developments closely for nearly two years, this moment landed differently. Strewth, I thought. ‘This isn’t theoretical anymore—it’s properly arrived’.

The complicated dance of technological scepticism

Most Australians don’t know quite how seriously to take artificial intelligence. This ambivalence stems partly from the technology’s novelty and partly from the deafening hype surrounding it. Resisting the sales pitch makes sense when forecasting technological futures remains notoriously challenging. But the contrarian dismissal that inevitably follows overblown promises doesn’t necessarily illuminate matters either. In 1879, The New York Times published a front-page article titled ‘EDISON’S ELECTRIC LIGHT—CONFLICTING STATEMENTS AS TO ITS UTILITY’. The paper quoted a distinguished engineer—president of the Stevens Institute of Technology—who objected to ‘trumpeting the result of Edison’s experiments in electric lighting as a wonderful success’. His scepticism wasn’t unreasonable; inventors had failed to create functional light bulbs for decades. His anti-hype position would have proven correct in countless other situations.

AI hype has spawned two distinctive forms of counter-narrative. The first suggests the technology will soon reach its ceiling: perhaps AI will continue struggling with forward planning or explicit logical reasoning rather than intuitive pattern-matching. According to this perspective, we require additional breakthroughs before achieving what researchers term ‘artificial general intelligence’ or AGI—roughly human-equivalent intellectual capability and autonomy. The second counter-narrative emphasises real-world implementation challenges: even if remarkably intelligent AI helps design a superior electrical grid, convincing people to build it represents an entirely different challenge. This view holds that progress inevitably encounters bottlenecks that—to some people’s relief—will moderate AI’s integration into our social fabric.

These perspectives sound persuasive and encourage a comfortable wait-and-see attitude. Yet they find little support in ‘The Scaling Era: An Oral History of AI, 2019-2025’, a comprehensive and revealing collection of interview excerpts with AI insiders compiled by podcaster Dwarkesh Patel. This twenty-four-year-old interviewing phenomenon has built an impressive audience by posing detailed technical questions that most commentators wouldn’t know how to formulate. In ‘The Scaling Era’, Patel weaves multiple interviews into a cohesive narrative of AI’s trajectory. The title references the ‘scaling hypothesis’—the notion that simply making AI systems larger creates substantially greater intelligence. The evidence increasingly suggests this approach works.
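In practice, the scaling hypothesis rests on empirical ‘scaling laws’: measured test loss falls as a smooth power law of model size. The sketch below illustrates the shape of that relationship only; the constants are placeholders in the spirit of the published scaling-law papers, not fitted values from any particular lab.

```python
def predicted_loss(params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Toy scaling law: loss falls as a power law in parameter count,
    L(N) = (N_c / N) ** alpha. Constants are illustrative placeholders."""
    return (n_c / params) ** alpha

# Loss declines smoothly and predictably as models grow tenfold at a time.
for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.3f}")
```

The striking thing is not any particular number but the smoothness: bigger has, so far, reliably meant better, which is much of what gives insiders their confidence.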

Virtually no one interviewed in ‘The Scaling Era’—from corporate leaders like Mark Zuckerberg to frontline engineers and analysts—anticipates AI development plateauing. Quite the opposite: nearly everyone notes its surprisingly rapid improvement, with many predicting AGI could emerge by 2030 or earlier. Nor does societal complexity appear to discourage most experts. Many researchers express confidence that the next generation of AI systems, likely arriving within months, will enable widespread adoption of automated cognitive labour, initiating technological acceleration with profound economic and geopolitical implications.

The text-based nature of AI chatbots has made it relatively straightforward to envision applications in writing, legal work, education, customer service and other language-centred domains. Yet this isn’t necessarily where AI developers focus their primary attention. ‘One of the first jobs to be automated is going to be an AI researcher or engineer’, Leopold Aschenbrenner, formerly an alignment researcher at OpenAI, tells Patel. Aschenbrenner—Columbia University’s valedictorian at nineteen in 2021, who mentions studying economic growth ‘in a previous life’—explains that if technology companies assemble teams of AI ‘researchers’, and those researchers identify methods to enhance AI intelligence, the result could trigger an intelligence-feedback loop. ‘Things can start going very fast’, Aschenbrenner warns. Automated researchers might expand into fields like robotics; if one nation establishes a lead over others in such capabilities, he argues, this ‘could be decisive in, say, military competition’. He suggests we might eventually confront scenarios where governments contemplate launching missiles at data centres apparently approaching ‘superintelligence’—AI substantially smarter than humans. ‘We’re basically going to be in a position where we’re protecting data centres with the threat of nuclear retaliation’, Aschenbrenner concludes. ‘Maybe that sounds kind of crazy’.
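It is worth seeing why ‘very fast’ is the natural consequence of that loop. The toy model below is purely illustrative (every constant is an assumption invented for the example), but it captures the compounding structure Aschenbrenner is pointing at: automated researchers accelerate the very research that improves the researchers.

```python
# Toy model of an intelligence-feedback loop. All numbers are made up.
capability = 1.0   # research output relative to today's human teams
feedback = 0.5     # fraction of each gain reinvested as faster research

for year in range(1, 6):
    capability *= 1 + feedback * capability  # gains compound on themselves
    print(f"year {year}: {capability:.1f}x today's research output")
```

On these invented numbers, output improves modestly for a couple of years and then explodes; change the feedback constant and the curve changes character entirely, which is precisely where the interviewees disagree.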

This represents the most extreme scenario—but even conservative projections remain striking. Economist Tyler Cowen adopts a comparatively measured view: he favours the ‘life is complicated’ perspective and suggests our world contains numerous problems that remain unsolvable regardless of computational intelligence. He notes that researcher numbers have already increased globally—’China, India, and South Korea recently brought scientific talent into the world economy’—without creating profound, science-fiction-level technological transformation. Instead, Cowen anticipates AI might usher in innovation comparable to mid-twentieth century developments, when, as Patel characterises it, humanity progressed ‘from V2 rockets to the Moon landing in a couple of decades’. This might appear relatively restrained—and compared to Aschenbrenner’s forecast, it certainly is. However, consider what those decades delivered: nuclear weapons, satellites, jet travel, the Green Revolution, computers, open-heart surgery, and the discovery of DNA’s structure.

Ilya Sutskever, former chief scientist at OpenAI, offers perhaps the most guarded perspective in the book; when Patel asks when he anticipates AGI’s arrival, Sutskever responds, ‘I hesitate to give you a number’. Patel therefore approaches the question differently, asking Sutskever how long AI might remain ‘very economically valuable, let’s say, on the scale of airplanes’ before automating substantial portions of the economy. Sutskever, finding middle ground between Cowen and Aschenbrenner, suggests this transitional, AI-as-airplanes phase might constitute ‘a good multiyear chunk of time’ that, in retrospect, ‘may feel like it was only one or two years’. Perhaps this resembles the period between 2007, when Apple introduced the iPhone, and approximately 2013, when smartphone ownership reached one billion people—except this time, the newly ubiquitous technology will possess sufficient intelligence to help us invent even more technologies.

The technology we cannot ignore

It’s tempting to treat these perspectives as occupying their own separate reality, like watching a preview for a film you’ll probably skip. After all, who truly knows what lies ahead? But actually, we understand quite a lot. AI already discusses and explains numerous subjects at doctoral level, predicts protein folding, programs computers, inflates cryptocurrency values, and much more. We can confidently predict significant improvement over coming years—while people continuously discover applications affecting how we live, work, discover, build and create. Questions persist regarding the technology’s ultimate potential and whether, philosophically speaking, it genuinely ‘thinks’ or demonstrates creativity. Nevertheless, our mental model of the next decade or two must recognise that no plausible scenario exists where AI fades into irrelevance. The question concerns degrees of technological acceleration.

Even CSIRO’s Data61, our national science agency’s digital research arm, has cited projections that AI could contribute around $22 trillion to the global economy by 2030. Here in Australia, some estimates suggest the technology could add as much as $4 trillion to our economy over the next fifteen years, fundamentally transforming industries from mining to healthcare. These aren’t science-fiction figures—they’re headline projections from credible research institutions. When Atlassian’s Mike Cannon-Brookes starts investing heavily in AI startups alongside traditional software ventures, savvy business leaders take notice.

‘Degrees of technological acceleration’ might sound like an abstract concern for research scientists or Silicon Valley entrepreneurs sipping flat whites while contemplating disruption. Yet it fundamentally represents a political matter with implications for every Australian business, educational institution, and family kitchen table conversation. Ajeya Cotra, senior adviser at Open Philanthropy, articulates a ‘dream world’ scenario featuring slower AI acceleration. In this world, ‘the science is such that it’s not that easy to radically zoom through levels of intelligence’, she tells Patel. If the ‘AI-automating-AI loop’ develops gradually, she explains, ‘then there are a lot of opportunities for society to both formally and culturally regulate’ artificial intelligence applications.

Cotra recognises this might not materialise. ‘I worry that a lot of powerful things will come really quickly’, she admits. The plausibility of concerning scenarios places AI researchers in an awkward position. They believe in the technology’s potential and resist diminishing it; they harbour legitimate concerns about contributing to some version of an AI catastrophe; and they remain fascinated by speculative possibilities. This combination pushes AI discourse toward extremes. (‘If GPT-5 looks like it doesn’t blow people’s socks off, this is all void’, Jon Y, who produces the YouTube channel ‘Asianometry’, tells Patel. ‘We’re just ripping bong hits’.)

Either AI fails, or it reinvents our world; in neither case, this framing suggests, need non-specialists participate. The result is a cognitive dissonance reminiscent of how Australians sometimes approach bushfire planning: we acknowledge the threat intellectually but postpone meaningful preparation until we smell smoke. Consequently, despite AI’s arrival, its implications remain primarily conceptualised by technical experts. Artificial intelligence will affect everyone from Macquarie Street policymakers to mum-and-dad small business owners in Wagga Wagga, yet an AI politics has barely materialised. Understandably, civil society remains preoccupied with political and social crises centred on Donald Trump; it appears to have limited bandwidth for the technological transformation about to engulf us. If we don’t engage with it, however, those creating the technology will single-handedly determine how it reshapes our lives.

These individuals possess undeniable brilliance—intellectual horsepower that would impress even the most hardened University of Melbourne computer science professor. No disrespect intended, but they aren’t representative of broader society. They possess particular skills, affinities and values shaped by specific cultural and professional environments. Their psychological orientation toward technology—what we psychologists might term their ‘technological self-schema’—differs markedly from most Australians’. In one of the book’s most revealing moments, Patel asks Sutskever what he plans to do after AGI emerges. Won’t he feel dissatisfied living in some post-scarcity ‘retirement home’? ‘The question of what I’ll be doing or others will be doing after AGI is very tricky’, Sutskever responds. ‘Where will people find meaning?’ He continues:

My sense is that people might actually be spending a lot of time interacting with the AI systems that were created, because the AI systems will be like people, except they’ll be a lot smarter. They won’t have certain human flaws. So my sense is that, over time, people will find a lot of meaning in interacting with these systems because these systems will make them better on the inside.

Would most people—those outside computer science who haven’t devoted their careers to creating AI—believe they might discover life’s purpose through conversing with one? Would most people think machines will make them ‘better on the inside’? These perspectives aren’t inherently unreasonable (and might, surprisingly, prove accurate). But this doesn’t mean such worldviews should guide our technological future.

The challenge is that articulating alternative visions—perspectives that forcefully express what we want from AI and what we reject—requires serious, broadly humanistic intellectual work spanning politics, economics, psychology, art, and religion. Time for this work is rapidly diminishing. Those outside AI development must join the conversation now.

What qualities do we value in people and society? Where should AI assist us, and where should it remain uninvolved? Will we judge AI a success or a failure if it replaces schools with screens? What about substituting itself for established institutions—universities, governments, professions? If AI becomes a friend, confidant, or romantic partner, which boundaries does it cross, and why? How might it affect our cognitive development, interpersonal relationships, and collective decision-making processes? Psychological research on human-computer interaction suggests that our relationships with intelligent machines involve complex attribution processes and emotional responses that merit deeper exploration (Nass & Moon, 2000).

Perhaps AI’s success might be measured by how effectively it restores balance to our politics and stability to our lives, or by how it strengthens institutions it might otherwise undermine. Perhaps its failure appears in how thoroughly it diminishes the value of human minds and freedom. The psychological concept of ‘technological self-efficacy’—our confidence in mastering and directing technological tools—becomes particularly relevant as systems grow increasingly autonomous.

For Australian organisations from the Commonwealth Bank to Woolworths, the coming transformation demands strategic foresight beyond quarterly planning cycles. Business leaders must develop what organisational psychologists call ‘anticipatory awareness’—the capacity to envision and prepare for disruptive change before it materialises fully.

Regardless, controlling AI requires debating and establishing new human values that, until now, we haven’t needed to specify. Otherwise, we surrender our future to individuals primarily concerned with whether their technology functions, and how quickly.

