Google says its AI won't be used for weapons, surveillance


Google said Thursday that it would not let its artificial intelligence (AI) tools be used for deadly weapons or surveillance.

The tech giant made the pronouncement while unveiling its new AI principles, adding that it would continue to contract with the government and military.

“These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” Google CEO Sundar Pichai wrote in a post.

“We recognize that such powerful technology raises equally powerful questions about its use. As a leader in AI, we feel a deep responsibility to get this right,” he continued.

The company outlined seven principles for how it uses AI, including avoiding “creating or reinforcing unfair bias” and proceeding “where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”

Pichai explained that Google would not use its AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” nor to support “technologies that gather or use information for surveillance violating internationally accepted norms of human rights.”

The principles' release comes after massive backlash against Google's Project Maven, an AI drone warfare program the company runs under contract with the Pentagon.

The company announced last week that it will not renew its Project Maven contract amid pressure from employees and backlash from outside groups like the Tech Workers Coalition, a group of tech industry workers and labor and community organizers.

More than 4,000 Google employees signed a petition protesting Google’s contract, and some staffers resigned over it.