Robots are in our future. Will policymakers keep denying that?

You’d be forgiven for worrying, given how many smart people have been warning lately that robots are taking all our jobs. Bill Gates calls for the government to tax machines to offset the costs of job losses. Stephen Hawking warns that artificial intelligence (AI) could end the human species. Elon Musk says we need to merge with AI or become obsolete.

While economists and technologists generally agree that AI can lead to job losses in some industries, at least in the short run, there’s significant disagreement about the longer-term net effects of automation on employment. Rather than base public policy on speculative fears, policymakers should recognize AI’s potential to empower workers and enhance their productivity, including in key sectors like cybersecurity.

When IBM’s Deep Blue computer beat chess grandmaster Garry Kasparov in 1997, some predicted it would mean the end of chess. Instead, the game evolved to accommodate teams that combine human and machine players. These “centaur” chess teams now regularly outperform either AI or humans alone.

Indeed, the end goal of most AI applications — from voice recognition to email spam detection — has always been to augment human abilities, not to supplant them. Rather than pitting man against machine, AI is ushering in a new age of man plus machine. According to consultancy McKinsey & Co., robotics, machine learning and artificial intelligence will combine to raise global labor productivity by 0.8 percent to 1.4 percent a year between now and 2065.

Among these changes, AI is revolutionizing the way cybersecurity professionals do their jobs. By referring potential problems to employees and helping them prioritize threats, these technologies promise a fundamental shift toward effective AI-human pairings in cybersecurity. At a recent hearing before the Senate Commerce Committee, IBM Vice President of Threat Intelligence Caleb Barlow testified about the company’s efforts in the emerging field of “cognitive security,” which deploys the AI-based program Watson to analyze data and find patterns and anomalies.

The average organization faces about 200,000 cybersecurity threat alerts each day, overwhelming the experts who try to screen them. By flagging the signals that indicate real threats and marrying machines’ data-driven insights with human intuition and decision making, artificial intelligence can help security teams better allocate their resources and efforts.

ForAllSecure, a company that applies AI to find and patch known vulnerabilities in existing software, won a $2 million prize last year in a competition sponsored by the Defense Advanced Research Projects Agency. Since 44 percent of malware targets vulnerabilities that are between two and four years old, an automatic patching system could make an organization significantly more secure. In the Pentagon’s pilot “bug bounty” program, a company called HackerOne used automated tools to help ethical hackers discover flaws in the department’s public-facing websites.

Crucially, these breakthroughs may help alleviate the shortage of qualified workers in cyber defense. More than 200,000 U.S. cybersecurity jobs are currently vacant, and job postings have spiked 74 percent since 2010, according to Bureau of Labor Statistics data. Cisco estimates the number of unfilled cybersecurity openings at around 1 million worldwide, and it could top 2 million by 2019.

The scary future we should guard against is one in which we never enjoy the breakthroughs these human-AI teams could make. IBM recently teamed up with the XPRIZE Foundation to run a global competition among human-machine teams to find solutions to systemic problems in areas like education, healthcare and the environment. Policymakers will have to decide whether to let AI transform the way we work, or to slow its development in the name of protecting traditional jobs or cushioning workers.

Anne Hobson is a technology policy fellow at the R Street Institute.

Daniel Oglesby is a research assistant at the R Street Institute.


The views expressed by contributors are their own and are not the views of The Hill.