
Musk Warns of Killer AI — While He and the Rest of Silicon Valley Cash In on AI That Kills

The bitter courtroom brawl between Elon Musk and Sam Altman captivating the tech industry this week revolves in no small part around fears that the artificial intelligence technologies both men are building could spiral out of control and exterminate humanity. Such far-off scenarios obscure the fact that the AI tech companies are selling is being enlisted to kill today.

Musk’s break with OpenAI, which he co-founded in 2015, is in a sense a lawsuit about safety. He contends that Altman betrayed the company’s original nonprofit mission of safely and responsibly pursuing artificial intelligence for the public benefit by converting it into the revenue-maximizing behemoth it has become. According to Musk, the stakes of this are existential for the human race: “It could kill us all,” he testified on Tuesday. “We don’t want to have a ‘Terminator’ outcome.”

The AI safety community frequently invokes these dystopian scenarios both to warn the public about the technology's risks and to implicitly boast of its great power. While such a science-fiction future may lie ahead, these warnings overlook the deadly present. Artificial intelligence is already targeting humans with the blessing of Musk and his rivals.

Musk and others who caution about an uprising of sentient killer machines are anticipating the emergence of “artificial general intelligence,” an ill-defined form of superior machine reasoning that may never come to pass. But their fear that AI could kill us all is less hypothetical for those living in places targeted by the Trump administration’s global wars. In Iran, for instance, Anthropic’s Claude AI model “suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance,” according to the Washington Post.

“There’s a real danger of Skynet-like outcomes even without a Skynet-style takeover.”

“The risks of integrating frontier AI into the nation’s most lethal capabilities are already existential, both for civilians swept up in the violence and destruction of AI-enabled wars, and rank-and-file troops that have to live with the consequences of potentially unsafe weapons they can’t control,” Amos Toh, senior counsel at the Brennan Center’s Liberty and National Security Program, told The Intercept. “Existing AI models are already pushing policymakers and militaries toward nuclear escalation — there’s a real danger of Skynet-like outcomes even without a Skynet-style takeover.”

Silicon Valley has widely embraced AI military contracts despite its stated worries over lethal AI. Amazon, OpenAI, Musk’s xAI, and Microsoft all earn money selling large language model services to the Pentagon. Even Anthropic, accused of “betrayal” by War Secretary Pete Hegseth and declared a national supply chain risk for mounting the mildest of objections to the Pentagon’s terms, is still keen to participate in the national kill chain. “Anthropic has much more in common with the Department of War than we have differences,” CEO Dario Amodei wrote in a blog post a week after the United States bombed an elementary school in Iran, killing more than 100 children.

Google offers a telling illustration of the industry’s increasing coziness with selling AI to the military. Following a 2018 employee revolt over Project Maven, a contract to help target Pentagon airstrikes, CEO Sundar Pichai pledged his company would swear off the business of killing. He wrote in a company blog post that Google would not pursue deals that could cause harm, including applications whose “principal purpose or implementation is to cause or directly facilitate injury to people.” He added: “These are not theoretical concepts, they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

After watching AI help wage a war that has already killed over 1,700 Iranian civilians, Google this week sent a clear message: We want in. In a deal that makes explicit the extent to which company leadership has abandoned its AI principles, Google agreed to provide AI services to the Pentagon that allow for “classified workloads,” sensitive military work that encompasses tasks like intelligence analysis and targeting airstrikes, The Information reported.

Executives say they’re terrified of the technology killing by accident, while wholly supportive of using it to kill on purpose.

According to the tech news outlet, the deal allows the U.S. military to use Google’s AI models for “any lawful government purpose” — a carveout that could allow any uses the administration deems legal. Take, for example, the Trump administration’s Operation Southern Spear, the ongoing aerial assassination program against civilian boats accused of drug trafficking that has killed more than 180 people to date. The campaign has been widely condemned as illegal under both international and U.S. law, but the administration has deemed its own actions legal through a Department of Justice memo that remains secret. On Friday, the Pentagon announced additional “lawful operational use” deals with Nvidia, Microsoft, and Amazon as well.

The Google contract reportedly includes a toothless and unenforceable provision gesturing at concerns over autonomous weapons and surveillance. “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight,” the clause reportedly states.

“‘Don’t regulate us or it’ll kill innovation.’ … The reality of Google’s work with the military is it’s part of a tech-military ecosystem that’s killing people today.”

“When I worked at Google, they would spend a lot of time punting into the future, promising a future that would never come,” said William Fitzgerald, a former Google employee who helped organize the 2018 worker-led campaign against the Maven contract. “‘Don’t regulate us or it’ll kill innovation.’ The talking point is the same today. The reality of Google’s work with the military is it’s part of a tech-military ecosystem that’s killing people today.”

Google spokesperson Kate Dreyer did not respond to questions about the contract’s language, instead touting how the company’s military work applies “to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure.”

There is little evidence the people in charge find this technology enticing because of its diplomatic translation prowess. In a January address to Musk’s employees at SpaceX, another Pentagon contractor, Hegseth explained how “an embrace of AI” would make the military “more lethal.”

Musk and Altman, though foes at the moment, can at least find common ground in their support of Hegseth. Musk, a longtime defense contractor, similarly wraps himself in the flag, tweeting in 2023, “I will fight for and die in America.” Altman, who once expressed skepticism toward military work, now frames OpenAI’s mission in terms of patriotic nationalism. (In 2024, The Intercept sued OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT. The case is ongoing.)

Between Musk’s courtroom visions of the apocalypse and Google’s plunge into classified workloads, the week’s news illustrates the disjointed state of AI industry ethics, where executives say they’re terrified of the technology killing by accident, while wholly supportive of using it to kill on purpose.

Though AI executives clearly find this a lucrative revenue stream, some of the people who actually built the technology do not. Andreas Kirsch, a research scientist at Google’s pioneering DeepMind laboratory, which produced much of the work on which xAI and Anthropic rely, responded to this week’s news with dismay: “I’m speechless at Google signing a deal to use our AI models for classified tasks. Frankly, it is shameful,” he wrote on X. Alex Turner, a DeepMind colleague of Kirsch’s, described the contract in a single word: “Shameful.”
