OpenAI’s Altman, Ethereum’s Buterin Outline Competing Visions for AI’s Future

This week, two of tech’s most influential voices offered contrasting visions of artificial intelligence development, highlighting the growing tension between innovation and safety.

OpenAI CEO Sam Altman revealed Sunday evening, in a blog post about his company’s trajectory, that OpenAI has tripled its user base to over 300 million weekly active users as it races toward artificial general intelligence (AGI).

“We are now confident we know how to build AGI as we have traditionally understood it,” Altman said, claiming that in 2025, AI agents could “join the workforce” and “materially change the output of companies.”

Altman says OpenAI is headed beyond AI agents and AGI, with the company beginning work on “superintelligence in the true sense of the word.”

A timeframe for the delivery of AGI or superintelligence is unclear. OpenAI did not immediately respond to a request for comment.

But hours earlier on Sunday, Ethereum co-creator Vitalik Buterin proposed using blockchain technology to create global failsafe mechanisms for advanced AI systems, including a “soft pause” capability that could temporarily restrict industrial-scale AI operations if warning signs emerge.

Crypto-based security for AI safety

Buterin’s proposal centers on “d/acc,” or decentralized/defensive acceleration. In the simplest sense, d/acc is a variation on e/acc, or effective accelerationism, a philosophical movement espoused by high-profile Silicon Valley figures such as a16z’s Marc Andreessen.

Buterin’s d/acc also supports technological progress but prioritizes developments that enhance safety and human agency. Unlike effective accelerationism (e/acc), which takes a “growth at any cost” approach, d/acc focuses on building defensive capabilities first.

“D/acc is an extension of the underlying values of crypto (decentralization, censorship resistance, open global economy and society) to other areas of technology,” Buterin wrote.

Looking back at how d/acc has progressed over the past year, Buterin described how a more cautious approach toward AGI and superintelligent systems could be implemented using existing crypto mechanisms such as zero-knowledge proofs.

Under Buterin’s proposal, major AI computers would need weekly approval from three international groups to keep running.

“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices,” Buterin explained.

The system would work like a master switch in which either all approved computers run, or none do—preventing anyone from making selective enforcements.
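The all-or-nothing property Buterin describes can be sketched in a few lines of Python. This is a hypothetical illustration, not his design: HMAC stands in for the public-key signatures (and optional zero-knowledge proofs of on-chain publication) a real deployment would use, and all names, keys, and the weekly window are assumptions for the sketch.

```python
import hmac
import hashlib
import time

EPOCH_SECONDS = 7 * 24 * 3600  # weekly approval window (assumed cadence)

# Stand-in secret keys for the three international signing groups.
GROUP_KEYS = [b"group-a-key", b"group-b-key", b"group-c-key"]

def current_epoch(now: float) -> int:
    """Number of the current weekly approval period."""
    return int(now // EPOCH_SECONDS)

def sign_epoch(key: bytes, epoch: int) -> bytes:
    # The signed message contains only the epoch number, never a device ID,
    # so one published set of signatures authorizes every device at once —
    # there is no way to approve some machines while pausing others.
    msg = str(epoch).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def may_run(signatures: list[bytes], now: float) -> bool:
    """A device keeps running only if all three groups signed this epoch."""
    epoch = current_epoch(now)
    expected = [sign_epoch(k, epoch) for k in GROUP_KEYS]
    return len(signatures) == len(GROUP_KEYS) and all(
        hmac.compare_digest(s, e) for s, e in zip(signatures, expected)
    )

now = time.time()
sigs = [sign_epoch(k, current_epoch(now)) for k in GROUP_KEYS]
print(may_run(sigs, now))      # all three groups signed -> True
print(may_run(sigs[:2], now))  # one signature missing -> False
```

Because the signatures expire with the epoch, withholding next week’s signatures pauses every approved machine simultaneously — the “master switch” behavior described above.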

“Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers,” Buterin noted, describing the system as a form of insurance against catastrophic scenarios.

In any case, OpenAI’s explosive growth since 2023—from 100 million to 300 million weekly users in roughly two years—shows how rapidly AI adoption is progressing.

Reflecting on OpenAI’s evolution from an independent research lab into a major tech company, Altman acknowledged the challenges of building “an entire company, almost from scratch, around this new technology.”

The proposals reflect broader industry debates around managing AI development. Proponents have previously argued that implementing any global control system would require unprecedented cooperation between major AI developers, governments, and the crypto sector.

“A year of ‘wartime mode’ can easily be worth a hundred years of work under conditions of complacency,” Buterin wrote. “If we have to limit people, it seems better to limit everyone on an equal footing and do the hard work of actually trying to cooperate to organize that instead of one party seeking to dominate everyone else.”

Edited by Sebastian Sinclair
