Palantir Technologies CEO Alex Karp says that the US government should be working on producing AI weapons despite valid concerns.
In a New York Times op-ed, Karp drew comparisons to the Manhattan Project and said that bold action similar to that taken by Oppenheimer and his team was vital to national interests. Palantir produces target selection, mission planning, and satellite reconnaissance software tools that are currently used by the US military.
The company has already integrated AI into its Artificial Intelligence Platform (AIP) which has both commercial and defense applications.
The challenge for companies like Palantir is that developing AI weapons requires talented engineers, and those engineers are hard to find.
Karp bemoaned the reluctance of many Silicon Valley engineers to work on defense projects, saying that their moral reasoning was misguided and even unpatriotic. Compounding his frustration are the lengths regulators are going to in an effort to put the brakes on AI development.
While acknowledging the very real known and unknown dangers, Karp said the US should be at the forefront of AI weapon development because its enemies did not share its reticence.
“Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed,” he said.
Weapons are designed to be dangerous, yet we don't fully understand just how dangerous AI weapons could be. Karp nevertheless insists that creating them is essential to keeping the West safe.
He quoted Thomas Schelling, a Harvard and Yale economics and game theory professor, who said, “To be coercive, violence has to be anticipated…The power to hurt is bargaining power. To exploit it is diplomacy — vicious diplomacy, but diplomacy.”
Karp says that the AI threat we should be more concerned about is how to keep AI in check if it is allowed to integrate with the power grid, air traffic control, or defense and intelligence networks.
That valid concern doesn't really seem to support his argument, though. Stating the obvious about AI as a weapon, he said, "The ability of software to facilitate the elimination of an enemy is a precondition for its value to the defense and intelligence agencies with which we work."
We’re concerned about letting AI loose on power grids or air traffic control because we worry about what might happen if operators lose control of it.
Creating powerful AI weapons to combat your enemies only offers protection if you get to decide who the enemy is. When the weapon has the potential to make that determination itself, we're all in trouble.