Google's weaponized AI hypocrisy problem


Google has an employee revolt and a hypocrisy problem on its hands. The company is against using artificial intelligence (AI) for our Department of Defense. Project Maven was designed to ease the tremendous burden on humans who sit in front of computer screens and review massive amounts of drone video for actionable insights.

Google apparently isn’t against working with the Chinese government on its development of AI. The memo that foreshadowed Google’s withdrawal from Project Maven was authored by Dr. Fei-Fei Li, the chief scientist for AI at Google Cloud. Her words were captured in an email obtained by The New York Times.

“Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”


Apparently that doesn’t apply when Google opens a new AI center in China. The same Dr. Li said in the announcement: “Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.”

That’s the same "vibrant" community that routinely shares AI research with its military.

According to Bob Work, former deputy secretary of defense and architect of Project Maven, “anything that’s going on in that center is going to be used by China’s military,” because of China’s strategy of “civil-military fusion,” under which every major Chinese company and university has an obligation to share information with the Chinese government.

A couple of weeks ago, I had a far-ranging discussion with Bob Work. We touched on Project Maven, China and Google. Although that’s a subject for a future column, it’s clear this issue is of enormous magnitude and importance to our ability to defend our country.

President Xi of China has left no doubt as to his country’s intentions. In July 2017, China released an aggressive plan that seeks to grow its AI industry to $59 billion by 2025. Its aim is clear, as are its targets: the United States, Google and Microsoft.

Russia and China have fewer moral qualms than the United States about using AI in their weapons systems. Last year, Russian President Vladimir Putin predicted that “whoever becomes the leader in the [artificial intelligence] sphere will become ruler of the world.” It doesn’t take a tremendous leap of logic to figure out that Russia and China would gladly look to claim that mantle.

According to an article from the Center for a New American Security, “AI is a high-level priority within China’s national agenda for military-civil fusion, and this strategic approach could enable the PLA to take full advantage of private sector progress in AI to enhance its military capabilities.”

Report after report, article after article, analysis after analysis makes it clear that China will use commercially developed AI for military purposes. I’m still waiting for the employees at the new Google AI center in China to write a similar petition demanding that Google withdraw from work that benefits the People’s Liberation Army. But you won’t hear it. Dissent isn’t allowed, especially when large corporate profits are involved.

The irony in another Google escapade is also worth noting. More than 1,400 employees recently signed a letter protesting Google’s work on a censor-friendly search engine. The letter, obtained by The New York Times, highlighted “Google’s apparent willingness to abide by China’s censorship requirement,” which in turn raises “urgent moral and ethical issues.” The employees went on to note: “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”

I wonder if that includes searching for what really happened in Tiananmen Square, where, by some estimates, more than 10,000 student pro-democracy activists were killed by the People’s Liberation Army. There’s no outrage from Google employees over that? Yet not a single person has been killed using AI (at least that we know of), and Google decides to punt Project Maven into the trash bin under the guise of social responsibility.

Again, I’m still waiting for a letter from Google employees in China raising the same “ethical and moral” issues with the development of AI that will one day make its way into lethal autonomous weapons systems (LAWS). The first step is conventional weapons.

There are indications that China has begun incorporating “AI technologies into its next-generation conventional missiles and missile defense intelligence, surveillance, and reconnaissance systems to enhance their precision and [lethality].”

Does this mean the American people are against developing AI that would eventually be used in warfare? If you believe Google is the only bellwether, you might think so. But a recent Brookings Institution survey found something a little different.

“Thirty percent of adult internet users believe AI technologies should be developed for warfare, 39 percent do not, and 32 percent are unsure, according to a survey undertaken by researchers at the Brookings Institution. However, if adversaries already are developing such weapons, 45 percent believe the United States should do so, 25 percent do not, and 30 percent don’t know.”

I wonder if you can search for this in China.

Morgan Wright is an expert on cybersecurity strategy, cyberterrorism, identity theft and privacy. He previously worked as a senior advisor in the U.S. State Department Antiterrorism Assistance Program and as senior law enforcement advisor for the 2012 Republican National Convention. Follow him on Twitter @morganwright_us.