Why Altman and Musk pose a problem for Washington
Where did you follow the OpenAI drama over the weekend?
For most people, the chaos of former OpenAI CEO Sam Altman’s ouster unfolded — and is still unfolding — largely on X.
For all the controversy around the platform formerly known as Twitter, it’s still as central to conversations in Silicon Valley as it was in its early days. (You could argue that Elon Musk’s takeover has performed a sort of version rollback, returning it to an early-Obama-era Wild West feel and a laser focus on internecine techie politics.)
But the collision of Musk’s Twitter takeover and the chaos at OpenAI reveals something even bigger than social media’s shifting tectonic plates — the extent of the society-shaping power wielded by a very small cadre of Silicon Valley titans.
One of Sam Altman’s co-founders at OpenAI was, after all… Elon Musk. Musk turned his back on the project because of a complicated disagreement with Altman over their progress compared to Google DeepMind, as well as his own personal beliefs about AI development, and has now launched his own AI venture.
Meanwhile, Greg Brockman, the OpenAI president ousted alongside Altman, was the first CTO of Stripe, which raised early money from Musk and fellow OpenAI co-founder Peter Thiel. Musk also used his huge X following to intervene personally in the OpenAI drama this weekend, tweeting directly at Ilya Sutskever, the board member and chief scientist who Musk says “has a good moral compass and does not seek power.”
And if you think OpenAI’s governance shakeup is a chance to shake off these ties, think again. New OpenAI CEO Emmett Shear is a visiting partner at Y Combinator, the startup accelerator where Sam Altman was once president. It still serves as a Silicon Valley business and social hub, and two of its co-founders were Jessica Livingston and Trevor Blackwell, also — you guessed it — OpenAI co-founders. Still following?
With “how to govern AI” still topic A in the Washington policy space (or close to it), the blowup at the most high-profile AI company shines a light on a particularly thorny challenge for regulators trying to shape the future.
Individual personalities — and individual fortunes — matter far more in the world of Silicon Valley startups than they do in corporate America’s more consensus-oriented, traditional bureaucracies. Once, industrial names like Morgan or Rockefeller or Ford drove national policy from their boardroom chairs, a version of America we might have thought we’d put to rest. Not in tech: Today we take it for granted that Bezos, Zuckerberg and Musk are more or less synonymous with their corporate empires. (Perhaps blame Steve Jobs, the charismatic Apple founder and world-shaper who looms large over them all in the minds of business builders.)
Big organizations move slowly, and respond to rules. Startup titans, not so much. It’s extremely difficult to imagine more established tech giants like IBM or Microsoft changing their business model on a personal whim or passion, as with Musk’s free speech crusade for Twitter, Mark Zuckerberg’s sudden commitment to the metaverse, or Altman’s belief in human-like AI superintelligence.
OpenAI, in particular, was intended to serve a greater mission under its unconventional nonprofit structure, but it’s become clear just how much the company is shaped by a single person, its ousted CEO. Samuel Hammond, a senior economist at the Foundation for American Innovation and a blogger focusing on AI and governance, says the company has a “cultish and borderline messianic employee culture, as shown in their willingness to all resign in solidarity,” citing social media reports that Altman personally interviewed every new hire at the company, a philosophy he once advocated for in a blog post.
He described to me how Sam Altman’s personal beliefs have come to define the company, and therefore the larger existential debates around the potential existence (and risk) of superhuman “AGI,” or artificial general intelligence.
“Over the last year, Altman reoriented OpenAI to be even more mission driven, changing their core values to emphasize that anything that didn’t advance AGI was ‘out of scope,’” Hammond said.
The lesson, not just for America but for humanity writ large, is that a very small group of people have come to wield total, personalized control over many of the systems shaping society’s present and future, whether Musk’s social media platform or Altman’s intelligence machines.
Regulators and critics have proposed strategies for reining in that influence, from the European Union’s elaborate regulatory regime to Federal Trade Commission Chair Lina Khan’s belief in antitrust enforcement to some proposals to emulate Silicon Valley governance itself.
None have yet succeeded. Personal rule in Silicon Valley had major, well-documented ramifications for the era of startup culture that was dominated by app-based social and connectivity companies like Facebook or Uber. It will have even larger ones in the AI era, where, realistic or not, the discourse is characterized by arguments about the very fate of humanity.
Washington’s consumer-protection chiefs are saying “I told you so” about the past week’s chaos at OpenAI.
Speaking with Morning Money’s Sam Sutton, Consumer Financial Protection Bureau Director Rohit Chopra said Sam Altman’s ouster is a sobering reminder of the massive power a small cadre of tech bigwigs have over society-shaping products. (Ahem.)
“There is a race to develop the foundational AI models. There will probably not be tons of those models. It may in fact be a natural oligopoly,” Chopra said. “The fact that Big Tech companies are now overlapping with the major foundational AI models is adding even more questions about what we do to make sure that they do not have outsized power.”
Commodity Futures Trading Commission Commissioner Christy Romero chimed in on how it’s not “quite known what all the risks are” with regard to AI. Chopra, too, warned of the technology’s financial risks, saying its use could “actually lead to very procyclical effects that would magnify tremors into much larger financial quakes.”
The slugfest between Europe’s biggest economies and the European Union over the AI Act could prove fatal to that bill’s prospects.
POLITICO’s Gian Volpicelli reported on the clash, with France, Germany and Italy arguing that the most powerful AI systems should be exempt from the upcoming law so the bloc has a better chance of developing its own equivalents to systems like the American-made ChatGPT.
The three countries wrote a paper declaring Europe’s need for a “regulatory framework which fosters innovation and competition, so that European players can emerge and carry our voice and values in the global race of AI,” something one European negotiator told Gian amounts to “a declaration of war.”
Negotiators are now racing to meet a Dec. 6 deadline for the bill’s text, with an upcoming European Parliament election in June 2024 potentially jeopardizing the bill’s chance of passage.
- Who is Ilya Sutskever, the board member who fired Sam Altman?
- 95 percent of OpenAI’s employees have now threatened to leave the company.
- Research funding might require its own scientific revolution.
- Palantir won a massive contract to handle the U.K. NHS’ data.
- Elon Musk’s lawsuit against Media Matters contains some unflattering evidence.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/