Hackback in black


Imagine that a random car is periodically driving across your front yard, leaving tire treads and gouges on your otherwise pristine lawn. How would you handle it? You might set up a surveillance camera to capture an image of the license plate and driver and share it with the police. You might install a fence. You might even consider scattering boxes of nails across your lawn to puncture the intruding car’s tires, if you judged the risk worth it despite the legal consequences and the chance that a guest, or a child retrieving an innocently misdirected ball, might be harmed in the process. But you probably wouldn’t hire a group of contractor “lawn ninjas” wielding deadly weapons to hide in the bushes, follow the car to its destination, install video and audio surveillance devices on it, slash its tires, and smash its windows.

The law does not treat these four responses as functional equivalents: the first two are likely entirely reasonable, the third requires safety precautions, and the fourth involves multiple crimes and dangerous vigilantism. These distinctions are well-founded in brick-and-mortar criminal and civil law – and they should carry over into the digital world of information security.

Yet proposed “hackback” legislation, the Active Cyber Defense Certainty Act (“ACDC Act”), has lost sight of these legal distinctions. The ACDC Act would create an exception to the Computer Fraud and Abuse Act (“CFAA”) for various retaliatory acts by the “victim” of a “cyberattack” against the person the victim believes perpetrated the computer intrusion. While it is understandable that companies want to actively defend their systems from intrusions, this approach is problematic: it condones security vigilantism, underestimates the difficulty of correct attribution, and ignores underlying problems in the CFAA.

In essence, as drafted, the bill would encourage private conduct that usurps the role of law enforcement. It also compounds the problem of security vigilantism by underestimating the technical difficulty of correct attack attribution. Attribution in internet-mediated criminality is complex, but even our primitive lawn-ninja hypothetical illustrates some of the challenges. The driver of the car may not be its owner. The driver might be a car thief, or a malicious neighbor who periodically rents a car for purposes of property destruction. Destroying and bugging the car in retaliation may mean destroying property belonging to someone other than the perpetrator of the lawn intrusion.

It is common for sophisticated attackers to insert attribution “buffers” that make internet-mediated attacks harder to trace. For example, in 2016, Mirai, a botnet of hundreds of thousands of compromised Internet of Things consumer devices such as DVRs and webcams, was used in a distributed denial of service (“DDoS”) attack that disrupted major websites such as Twitter and Reddit. The consumers whose devices were compromised and leveraged in the botnet were innocent third parties caught up in the attack. Mirai highlights the types of attribution challenges the bill glosses over: when the allegedly attacking devices are attached to businesses, homes or even the bodies of consumers, the functional result of an aggressive “hackback” regime will be the re-victimization of innocent, technologically unsophisticated third parties.

As we connect billions of (often vulnerable) devices to the internet – autonomous cars, internet-connected medical devices, hospital systems and the like – misattribution and un-nuanced attribution plus “hackback” is a recipe for catastrophe. It could result in “hackbacks” that brick – render inoperable – innocent consumers’ DVRs, internet cameras, routers, smart appliances and even body-attached medical devices.

Moreover, if a company is the victim of a major security compromise due to its own lax security practices, it may have more incentive to try to cover up these governance failures through overzealous and poorly-considered “hackbacks.” In other words, legalizing “hackbacks” may create a world in which consumers must bear the risk of bad security practices and of being collateral damage in private internet conflicts.

Finally, the ACDC Act obscures the underlying problems of the CFAA instead of correcting them. We do not need to authorize vigilantism to improve security. We do need to update the strained statutory framework of the CFAA to better mesh with today’s technological security reality, and to give agencies additional tools and resources to enforce existing law. We need stronger security defenses, but the better approach is one that encourages security research by clarifying the CFAA with brighter lines, not one that passes a law opening new pathways for retaliatory hacking with little oversight.

Terrell McSweeny is an FTC Commissioner and Andrea Matwyshyn is a Northeastern University Law Professor.

