Before we regulate Big Tech, let’s make sure we don’t hurt national security
With the laudable goals of promoting competition and outlawing unfair tactics in the online world, Congress is considering antitrust bills that would regulate how companies such as Apple, Amazon, Facebook (Meta), Google (Alphabet) and Microsoft treat other businesses that use their digital platforms. Regardless of the extent of regulation, everyone recognizes that the platforms must be kept free from malware and spyware.
But some of the provisions in the bills might inadvertently undercut the ability to do that. Endangering cybersecurity on our major social media and commercial online platforms, which are critical to almost every aspect of our personal and commercial lives, is a national security risk.
We should be even more wary about these risks now in view of President Biden’s statement in late March that Russia was “exploring options for potential cyberattacks” — an unprecedented cyber warning by a commander in chief. Apart from the Russian cyber threat, we’re already struggling in our cyber battle against the spy agencies and ransomware gangs operating in China, North Korea and Iran – with American schools, hospitals and businesses suffering data thefts and ransomware attacks every day. We need to make sure our nation’s cyber infrastructure is in a better position to fend off increasingly sophisticated attacks.
Yet there’s concern that the two bills recently reported out by the Senate Judiciary Committee – the American Innovation and Choice Online Act (S.2992) and the Open App Markets Act (S.2710) – could in fact increase cybersecurity risks. The bills would restrict some operating abilities of the Big Tech platforms and force them to redesign their systems, for example, to allow business users (which could include foreign entities) to have more open access to the platform’s proprietary software, thus theoretically encouraging interoperability and cutting down the competitive benefits of that software.
The bills also seek to block certain steps the platforms take to enforce their rules, thus giving business users greater operating freedom on the platforms; and to shield such users’ own data from analysis by the platforms, thus preventing the platforms from taking unfair competitive advantage of their users’ data.
While those abilities can be abused by the platform owners, they are also, at least in some respects, essential. That’s because in order to prevent the installation of malware or spyware, to discover technical vulnerabilities that could be exploited by foreign countries or to uncover and stop disinformation on their platforms, the platforms in many cases rely on the very abilities that would be banned.
Mandating that a third party has the right to connect and operate seamlessly with a platform’s own systems could, for example, mean that the platform couldn’t scan for or block malicious code; the platform does indeed have to “discriminate” against bad software. Yet, depending on the interpretation of the bills’ provisions, that might be unintentionally outlawed. Obviously, curtailing a platform’s ability to prevent a computer virus from infecting that platform or its users, or allowing disinformation to be posted and disseminated, can’t be good for our national security.
To be fair, the bills contain exceptions allowing some business practices to continue where necessary for cybersecurity. But it’s not clear that the exceptions are comprehensive enough, or will be properly interpreted by the regulatory agencies and the courts, to ensure that online platforms will indeed be able to shield themselves from cyber maliciousness. We shouldn’t have to take that risk, however small.
This isn’t to say that Big Tech should be immune from scrutiny or regulation. Indeed, revelations over the past few years about some of the capabilities and operations of several companies have highlighted some intrusive and abusive practices that most people think should be banned.
So, there is a reasonable public policy argument for the belated imposition of some restrictions on an industry that was very deliberately allowed to grow without restraint and indeed with special legal protections (such as Section 230 of the Communications Decency Act, which exempted online platforms from liability for what their users posted).
Yet the way Congress is seeking to impose these restrictions creates potential problems. Because a given technological practice or ability could be used as readily for good purposes (say, creating additional or safer functionality for users) as for bad ones (unfairly blocking users’ ability to make selections or forcing them to use more expensive options), it’s hard to lay down a rule that simply bans a particular practice or ability.
Moreover, the bills use antitrust tools that aren’t well suited to solving complex problems that call for fine distinctions in business practice and online speech. Regulating technology to achieve subjective outcomes – such as more and fair competition – is intrinsically difficult and made even more challenging by its constant innovation. By contrast, regulating for objective outcomes – such as reducing the number of deaths due to impure drugs or automobile crashes – is more straightforward. So given the acknowledged difficulties of legislating in this area, it’s especially important to ensure that proposed rules don’t have unintended consequences.
The reality is that, for social and political reasons, some regulation of Big Tech is appropriate and seemingly inevitable. Consequently, we are in a debate between proponents who claim that the online platforms have exaggerated the risks of the bills and minimized the value of the security exceptions, and opponents who say that well-meaning but overbroad legislation could cause more harm than it cures.
There’s some merit to both positions. But when national security is at stake, it’s hard to see why we should take the risk, especially when there is an easy way of minimizing that risk by undertaking what should have been done at a much earlier stage of the legislative process: a national security review of the proposed legislation.
When the cyber criminals and spy agencies of countries such as Russia, China, North Korea and Iran discover a computer network they want to get into, they probe the network looking for ways to get in. Similarly, network owners do “penetration testing” on their own systems in the hope of discovering vulnerabilities before those cyber adversaries do. We know what those countries are going to do when they confront networks that will be forced to comply with any of these proposed bills, namely look for ways to exploit the new rules.
Thus, we should do our own “penetration testing” first by establishing some review by the federal government of exactly what vulnerabilities are likely to be created, even if unintentionally, by any law in this area before it becomes effective. This could be done, for example, by asking the national cyber director to coordinate an expedited review with the Departments of Justice, Homeland Security and Defense, the Office of the Director of National Intelligence and the intelligence and other relevant committees in Congress, reporting on potential technical problems with the bills and possible solutions.
The risk of not doing so is neither trivial nor one we should take; and the only risk of doing such a review is a short delay. In this case, that’s the right risk to take.
Glenn S. Gerstell is a senior adviser at the Center for Strategic & International Studies. He served as general counsel of the National Security Agency from 2015 to 2020.