
AI dey cheap pass human labour, Sam Altman warn as new AI model leak show big cyber risk

OpenAI CEO Sam Altman don warn say artificial intelligence dey become cheaper than human workers for many jobs. For one interview with Forbes, Altman talk say AI systems dey use less energy and time to do tasks wey humans dey do before. He explain say even though e cost plenty money to build and train AI, to run am later dey cheaper. Altman talk say the cost to run AI models, especially for the ‘inference’ stage when dem dey generate output, don already dey lower than the energy wey humans need to do the same kind of intellectual work.

Altman push back against common comparisons wey dey weigh AI training costs against human effort. He note say human intelligence sef require decades of biological ‘training’ and energy consumption over lifetime. His main argument na say for per-unit basis of intellectual output, AI systems dey become highly efficient, and that efficiency go still improve more. This mean say companies fit begin use AI instead of people for many basic tasks, especially entry-level office work.

Even though this sound like bad news, Sam Altman no believe say all jobs go vanish. Instead, he talk say jobs go change. Some roles go disappear, but many go update, and new jobs go still create. We don see this kind thing before with new technology. For example, jobs like social media managers or app developers no dey exist years ago. However, jobs wey involve repetitive thinking or simple tasks fit dey affected first.

Altman also mention say India na one of the fastest-growing markets for AI. Many developers and companies for India don already dey use AI tools, especially for coding. But the scale of usage bring commercial reality. To deliver AI services dey more expensive than traditional internet services, meaning companies need to find ways to make markets like India financially viable at scale.

Meanwhile, another AI company wey be Anthropic don begin test new AI model wey dey more capable than any wey dem don release before. The company talk say the new model represent “a step change” in AI performance and na “the most capable we’ve built to date.” Descriptions of the model wey dem call “Claude Mythos” or “Capybara” bin dey for publicly-accessible data cache before Fortune magazine see am.

The leaked document show say Anthropic dey especially worry about the model cybersecurity implications. Dem note say the system dey “currently far ahead of any other AI model in cyber capabilities” and “it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.” This mean say hackers fit use the model to run large-scale cyberattacks.

Anthropic talk say because of this risk, dem plan for the model release go focus on cyber defenders: “We’re releasing it in early access to organizations, giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.” The company acknowledge say “human error” for one of their external content management system tools cause draft content to dey accessible.

The leak also include information about upcoming invite-only retreat for CEOs of European companies wey go hold for UK, and wey Anthropic CEO Dario Amodei go attend. The two-day retreat dey describe as “intimate gathering” to engage in “thoughtful conversation” for one 18th-century manor-turned-hotel-and-spa for English countryside.

For Disney side, new CEO Josh D'Amaro face multiple challenges just one week after he take over. Watershed deal wey Disney strike with OpenAI late last year dissolve suddenly when the tech company announce say dem dey close down their Sora video generator app. That end wetin suppose be three-year $1 billion partnership, under wey some 200 Disney characters from Star Wars, Marvel, and other brands go populate short-form AI-generated videos for Disney+.

OpenAI decision come as shock to Disney executives, wey learn say Sora go shut down just 30 minutes after dem don meet with OpenAI about the video generator future. One anonymous source call OpenAI decision “big rug-pull.” OpenAI CEO Sam Altman dey reportedly plan strategy shift to refocus on business fundamentals and more streamlined product lineup.

Also for Tuesday, Epic Games—the video-game developer of Fortnite fame—announce say dem dey lay off 1,000 employees after updates to their hit signature product fail to translate to higher engagement. This na bad news for D’Amaro wey be chief architect of Disney $1.5 billion investment in Epic, announce for 2024. The deal give Disney large equity stake and call for creation of entirely new digital universe build around Disney characters and stories.

D’Amaro also inherit reputational fire at ABC, the network wey Disney own. Last week, ABC cancel already-filmed 22nd season of The Bachelorette amid domestic violence allegations wey dem direct at Taylor Frankie Paul, wey suppose be the star of this season. Disney stock don dip more than 4% over the past week, something wey underline the challenge behind D’Amaro vision of technology as growth engine.

For philosophical side, Peter Thiel, co-founder and chairman of defense analytics firm Palantir, attempt to map modern history of artificial intelligence onto Judeo-Christian end-times posture. He employ literary lens of the Book of Revelation, where St. John bear witness to core end-times prophecies wey define Christian cosmology. Thiel describe AI developers as having “conjured up a demon in whose existence they claim not to believe.”

Amid ongoing generative AI revolution, most people dey concern about job loss, autonomy or misinformation, but for many prominent tech titans, their chief concern don turn the end of the world as we know am—either apocalyptic or utopian. Founders and CEOs from Sam Altman to Dario Amodei don prophesy say AI go change the world for ways we no even fit fathom.

Sam Altman don make clear his transcendent beliefs around ChatGPT. For 2023, when OpenAI market cap na just 6% of wetin e dey today, he testify before U.S. Senate, urge lawmakers to pass thoughtful regulation wey go mitigate chief risk of AI overpowering humanity. Later, for summers of 2024 and 2025, amid more anti-regulatory political paradigm, Altman write series of manifestos where he paint picture of world wey no resemble our grandparents own.

For interview with Tucker Carlson, Elon Musk sardonically raise more pessimistic, even apocalyptic, concerns about the technology, claim say there na “non-trivial” chance AI fit cause “civilization destruction.” For late 2025, Musk double down, talk say, “Mark my words, AI is far more dangerous than nukes… If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course.”

For January 2026 essay, Anthropic CEO Dario Amodei claim say “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” For September 2025 DC summit, he add am more bluntly say “there’s a 25% chance that things go really, really badly.”

Bill Gates, Geoffrey Hinton, Peter Thiel and dozen other tech titans don contribute their thoughts to this discourse. For now, e go good to know how urgent or true these predictions really be, so we fit pressure-test the credibility of these alarmist tech titans. Technologists go debate whether or not state of the art really dey as dangerous as dem dey make am out to be.

Case in point: for late 2025, Silicon Valley flood government with $150 million, dey lobby for ban on state-level AI regulation. Dem cite core themes of burdensome regulations wey chill innovation for face of growing global techno-arms race. This anti-regulatory insistence appear to dey work, as policy evidence show Trump administration don afford Silicon Valley significant self-regulatory power.

Observing their behavior for the aggregate, dem insist say AI get capacity to existentially threaten humanity, yet dem crucially circumvent democratic institutions—emphasizing these existential threats alongside implication say only dem hold power to tame this powerful technology. Why Silicon Valley want to simultaneously convince us say (a) AI na this transcendent, transhumanist technology wey fit end world, and (b) we need hands-off-the-technology paradigm?

The aim na to render AI ungovernable for public imagination—too powerful to regulate, too important to restrain—leaving these companies free to spend, build and chase new financial highs while question of economic recklessness go unasked. Some of these Silicon Valley tech titans fit genuinely believe say their inventions truly transcend human comprehension, and some others fit no even buy their own talk.

Much like their insistence on deregulation, this transcendent rhetoric na best understood as being necessary under present profit ambitions. Frustrating to investors, tech leaders never point to any hard evidence of genuine economic benefits wey go justify their lofty revenue projections—so wetin fill that void? Transcendent rhetoric wey keep investors optimistic, news cycle fixated and users dey come back.

The mirage of doomsday rhetoric match the mirage for balance sheet. These companies dey balloon for size off ambitious valuations from investor sentiment, then invest, loan and trade that capital with one another—closed-loop (likely) bubble propped up by ambitious long-term revenue potentials. Right now, investors dey willing to pay around $50 (sometimes as high as $100) for every $1 of revenue wey these tech titans bring in.
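To show wetin e mean to pay $50 or $100 for every $1 of revenue, see small arithmetic sketch. The multiples and the $13 billion revenue figure na from this article; the function name na just for illustration:

```python
# Sketch of the price-to-sales arithmetic the article describes:
# market value implied by paying N dollars per $1 of annual revenue.

def implied_valuation(annual_revenue_usd: float, price_to_sales: float) -> float:
    """Market value implied by a given price-to-sales multiple."""
    return annual_revenue_usd * price_to_sales

revenue = 13e9  # OpenAI's reported ~$13 billion revenue for 2025 (per the article)
low = implied_valuation(revenue, 50)    # = $650 billion
high = implied_valuation(revenue, 100)  # = $1.3 trillion
print(f"Implied market value: ${low/1e9:.0f}B to ${high/1e12:.1f}T")
```

E show why small drop for investor sentiment fit wipe out hundreds of billions for paper value, even if revenue no change at all.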

Existence and success of this entire system, trillions of dollars of market cap, depend on (a) continued optimism from Silicon Valley investors and (b) AI companies get long-term path to realize these revenue goals. Right now, small evidence suggest say Silicon Valley dey prepared to deliver. Take OpenAI as prime example: after dem recently restructure as for-profit company, dem release highly ambitious revenue goals for coming years.

According to company projections, OpenAI expect to grow revenue from $13 billion for 2025 to nearly $200 billion for 2030, with annual negative net cash flows of up to $50 billion (for 2028) attributable to data center construction. HSBC estimate say the company still no go profitable even into next decade, and say dem dey fall short by $207 billion of the money wey dem need to realize their vision for more computing power and data centers.
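Quick back-of-envelope sketch of the growth rate wey these projections imply. The $13 billion and $200 billion figures na from the article; the helper function na just illustration:

```python
# Compound annual growth rate (CAGR) needed to move from one revenue
# level to another over a fixed number of years.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate such that start * (1 + rate)**years == end."""
    return (end / start) ** (1 / years) - 1

# Article figures: $13B in 2025 growing to ~$200B by 2030.
rate = implied_cagr(13e9, 200e9, 2030 - 2025)
print(f"Implied growth: {rate:.0%} per year")  # roughly 73% every year for five years
```

In other words, the projections assume say revenue go nearly double every single year till 2030, wey be very aggressive target for any company of that size.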

For essence, company need business model wey reconcile ambitions for profitability and data center capacity with underperforming top-line revenue. For all the talk wey Sam Altman dey do about AI ushering in era of unfathomable prosperity, e curious why his most recent strategic move na to release Sora 2 with short-form video format and social media interface.

OpenAI also announce move to make more intimate/sexually explicit content accessible to adult users and for January 2026 dem announce say dem go begin sell ad space on free version of ChatGPT interface. All these announcements suggest urgent strategic divergence from supposedly world-changing AI toward more monetizable products, grounded for cultivation of user attention.

Why Altman decide to introduce ads, restructure as for-profit and release hellish version of TikTok after years of prophesying AI heaven? Because OpenAI dey screwed, not the world. For November 2025 podcast appearance, OpenAI CEO Sam Altman respond to doubts about revenue viability with exasperated “Enough.” Seated for front of two framed images wey depict human space travel, Sam Altman appear defensive, as many comments note.

After minute-long rant wey call into question OpenAI revenue viability, contrasting their current $13 billion for revenue with ambitious $1.4 trillion for spend commitments, Altman cut off host with exasperated “Brad, if you want to sell your shares, I’ll find you a buyer… enough.” Perhaps tendency to prophesy apocalypse or utopia na better understood as just being good business when under otherwise ordinary, but urgent, financial circumstances.


Do you have a news tip for NNN? Please email us at editor @ nnn.ng


Halimah Adamu
https://nnn.ng/
Halimah Adamu na reporter for NNN. NNN dey publish hot-hot tori for Nigeria and around di world for naija pidgin language so dat every Nigerian go fit follow national news, no mata dia level of school. NNN dey only publish tori wey be true-true, wey get credibility, wey dem fit verify, wey get authority, and wey dem don investigate well-well.