Why AI is such a hard problem for DC
Presented by Spectrum for the Future
POLITICO’s AI & Tech Summit yesterday brought together legislators, entrepreneurs, and policy wonks to hash out exactly where American governance stands with respect to this transformative new technology.
So… what came of it?
Like most serious policy discussions, the chatter Wednesday was split between what AI means for America’s leadership globally, and what policymakers can do here at home to enhance it. (Read the big takeaways from most individual panels here.)
Geopolitically, the main issues seem clear: competition with China, and establishing global standards for AI that counter the authoritarian use of technology. But when it comes to what to tackle first in Washington, the answer is murkier. Let’s take a look at some of the summit’s biggest moments to get a better picture:
ON CHINA: Palmer Luckey, the flamboyant founder of the defense contractor Anduril Industries, distilled years of anxiety and tightrope-walking among global tech giants over a theoretical conflict with China into a pithy appeal to companies not yet sold on economic nationalism.
“How stupid will you feel if you build a company that assumes the geopolitical situation with China stays the same or improves?” Luckey asked, positing a scenario where an invasion of Taiwan or other geopolitical tensions force strong American sanctions or even military action.
“You won’t be able to look back and say, ‘Who could have predicted this, nobody saw this coming.’ When you’re a new company you can choose to decouple yourself from China, you can choose to make things in other countries, there are other options for most products,” he said.
Even just 10 years ago, a fully globalized, China-heavy supply chain for advanced technology like the microchips used to power advanced AI systems seemed like a permanent feature of the economic and tech landscape. Now America is actively restricting China’s access to futuristic chip technology, with more controls likely on the way. The shape of the digital future can change, very fast.
One former official of the Office of the U.S. Trade Representative warned that stopping China from getting its hands on the most advanced U.S. microchips might not be enough, as the country makes a huge push to invest in “legacy chips” not impacted by current trade restrictions. Lakshmi Raman, the CIA’s director for artificial intelligence, warned that China is growing its AI tools in “every which way” to launch a suite of AI-powered cyberattacks and disruptions.
…AND IN WASHINGTON: What are legislators and regulators actually going to do about it?
Most of the talk around AI legislation at yesterday’s summit was about its impact in the U.S. — and an audience poll showed attendees worry more about the existential risks of AI than about global competition.
Whatever the concern, lawmakers didn’t offer much reassurance in the form of proposals. Instead, they said they’re still working on basic questions. “Are we going to do a broad-based approach with a new agency? Potentially like the EU has done? Or are we going to adopt a sectoral approach, where we empower our existing sectoral regulators to regulate AI within their sectoral spaces?” said Rep. Jay Obernolte (R-Calif.), vice chair of the Congressional Artificial Intelligence Caucus.
Sen. Todd Young (R-Ind.) said he thinks it’s “very likely that we’ll pass at least some narrow pieces of an AI regulatory regime” in the current Congress, but he was vague on the details, aside from saying Americans should be more concerned about the use of automated weapons than in-app AI entertainment. Rep. Ted Lieu (D-Calif.) said any AI legislation is facing an extra obstacle in the current shutdown fight in Congress. Sen. Ed Markey (D-Mass.) called for tightening AI safety measures for children and teens, just after he asked Meta to delay its AI chatbots until their effects on young people were studied, as first reported by POLITICO’s Rebecca Kern.
Ironically, all this talk from Washington revealed exactly how much Silicon Valley is still in the driver’s seat when it comes to writing our AI future, at least for the moment. As POLITICO’s Daniella Cheslow wrote after the summit’s close:
“With AI regulation still fluid, industry players are making their own suggestions, and regulators are relying in part on their goodwill… Several technologists made comments that showed they are operating in a regulation vacuum,” citing among other things a VP for chipmaker Qualcomm saying “quite a few of us have our own set of guardrails.”
Twitter’s slow-but-steady evolution away from its former self continued last night, as Elon Musk announced he disbanded a team focused on stamping out disinformation during elections.
“Oh you mean the ‘Election Integrity’ Team that was undermining election integrity?” Musk wrote in an X post. “Yeah, they’re gone.” The Information reported that several staffers in Ireland working on fighting disinformation were fired this week. (The Information also reported that Musk previously said he would expand the team.)
The move is fully in keeping with Musk’s overall laissez-faire philosophy on speech, showing a tendency to err on the side of allowing false and hateful messages on the platform in the spirit of open discourse. It also comes hot on the heels of a warning from European Union officials that rampant false information, especially on X, is plaguing the Slovakian elections, just days before the votes are cast.
Which jersey are you wearing in the (rhetorical) AI wars?
In an op-ed for the New York Times, two technologists break down the debate over AI development and policy into three main camps: “Doomsayers,” obsessed with the potential existential risk of AI; “Reformers,” progressives more concerned about how it might entrench existing inequalities in society; and “Warriors,” foreign policy hawks who see it as a tool of competition with China.
“These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view,” write Bruce Schneier and Nathan Sanders.
The authors say we should do more than just follow how these groups jockey for power. “Look past the immediate claims and actions of the players to the greater implications of their points of view,” they write. “This isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.”
- Meet the slate of AI “personalities” Meta is now weaving through its apps.
- A little-known company is ensuring that the digital future arrives computationally fast.
- Google is now allowing teens to sign up for its generative AI search experience.
- A professor is now urging legal protections for… our thoughts.
- Read a (relatively) rare pro-generative AI take from an author of books.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Source: https://www.politico.com/