Do federal agencies make the grade on cybersecurity?

March may be spring break for students, but for government agencies, it's time for their annual cybersecurity report cards. Last week, the Office of Management and Budget (OMB) issued scores and grades for federal agencies' cybersecurity efforts for 2015. For most agencies, the results were not good. Out of the 24 large agencies evaluated, the General Services Administration was the only agency to earn an A grade (a 91, on a scale of zero to 100). This was a significant decrease from 2014, when eight agencies earned A grades.

While most will focus on the top-level scores, there are three important takeaways.

First, the government needs to modernize its data collection process. Today, chief information security officers in large multinational businesses are able to provide cybersecurity metrics and measurements to senior executives in real time. Board members regularly consume these metrics during quarterly meetings. They know their jobs are on the line if the company doesn't execute properly on its cybersecurity initiatives.

The OMB must move toward automated reporting of agency data in order to continuously evaluate the effectiveness of agency cybersecurity programs. Today, the OMB relies on an antiquated collection process that prevents it from providing more frequent updates to members of Congress. Without more frequent measurements, it is hard for senior agency officials and members to conduct robust oversight.

Modernizing the data collection process will help address the second point: The government needs to reassess the accuracy of the data that it is collecting. BitSight Technologies (of which I am vice president of business development) observes indicators of compromise, examples of poor hygiene and poor user practices taking place on government networks every day. Some of our real-time observations conflict with the data manually reported by agencies to the OMB.

For example, agencies are graded on their automated capability to detect and block unauthorized software and to filter Web pages so that objectionable content is unavailable to the user. This capability should prevent employees from downloading unwanted, malicious software programs that could result in breaches. Though several agencies receive very high grades in these categories, we have observed those same agencies downloading large quantities of unwanted software such as adware and grayware, as well as potentially infected applications obtained through peer-to-peer file sharing.

Agencies are also graded on the percentage of email systems that can identify "spoofed" incoming or outgoing messages. This reduces the likelihood that an employee will be tricked into opening a message that appears to come from a trusted source. Though many agencies state that they are implementing anti-spoofing technologies on 100 percent of email traffic, we observe that some of these agencies have actually implemented ineffective controls or have not implemented them at all.
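Anti-spoofing controls of this kind are commonly published as DNS records such as SPF and DMARC. One way a control can be "implemented" on paper yet ineffective in practice is a DMARC record whose policy tag is `none`, which only monitors spoofed mail rather than blocking it. A minimal sketch of that distinction, using hypothetical example records:

```python
# Classify a DMARC DNS TXT record by its enforcement policy.
# A policy of "none" only monitors spoofed mail; "quarantine" or
# "reject" actually diverts or blocks it.
def dmarc_policy(record: str) -> str:
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    return tags.get("p", "missing").strip().lower()

def is_enforcing(record: str) -> bool:
    return dmarc_policy(record) in ("quarantine", "reject")

# Hypothetical records for illustration only:
strict = "v=DMARC1; p=reject; rua=mailto:reports@example.gov"
weak = "v=DMARC1; p=none; rua=mailto:reports@example.gov"

print(is_enforcing(strict))  # True
print(is_enforcing(weak))    # False: published, but not blocking
```

Both agencies in this sketch could truthfully report a DMARC record on 100 percent of mail traffic, yet only the first is actually protected.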

The difference between our observations and manually reported data is not necessarily the result of an intentional misrepresentation. Instead, it suggests that having an automated collection process is critical to ensuring that policymakers have the most accurate, objective data possible.

Third, the government must create more useful metrics to better evaluate the effectiveness of an agency's program. Some of the metrics the government currently uses are not terribly useful in assessing an agency's security effectiveness. For instance, in the category of "Malware Defense," agencies are judged on whether they have deployed intrusion prevention and antivirus technology. But what if the technology is deployed yet improperly configured? This could leave the network unprotected. Similarly, the report cites as a strong metric that "100% of agencies completed Indicators of Compromise scans by July 31, 2015." While these scans are certainly a best practice, a better performance metric would have focused on how many machines were fixed after compromise was identified.
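The outcome-oriented metric suggested above can be stated precisely: of the machines an indicators-of-compromise scan flagged, what fraction were actually remediated? A minimal sketch, using hypothetical host names:

```python
# Remediation rate: fraction of machines flagged by an
# indicators-of-compromise scan that were subsequently fixed.
def remediation_rate(flagged: set, remediated: set) -> float:
    if not flagged:
        return 1.0  # nothing was flagged, so nothing needed fixing
    return len(flagged & remediated) / len(flagged)

# Hypothetical scan results for illustration:
flagged = {"host-01", "host-02", "host-03", "host-04"}
remediated = {"host-01", "host-03"}

print(f"{remediation_rate(flagged, remediated):.0%}")  # 50%
```

Under this metric, completing the scan counts for nothing by itself; only closing out the findings moves the number.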

Better metrics exist. Most experts agree that measuring the time from initial breach to detection/resolution is the most important and relevant cybersecurity metric. This one metric measures an organization's capability to identify, detect, respond and recover. Many private-sector security professionals use this "golden" metric in board-level reporting. The Obama administration recently listed breach detection and incident response time as one of its five priorities for cybersecurity. Future OMB reports should incorporate these and other "timeliness" metrics in order to truly evaluate an agency's performance.
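The "golden" metric above reduces to simple arithmetic once incident timestamps are recorded: the elapsed time from initial breach to detection, averaged across incidents. A minimal sketch with hypothetical timestamps:

```python
from datetime import datetime

# "Golden" metric: mean elapsed time (in hours) from initial
# breach to detection, across a list of incidents.
def mean_hours_to_detect(incidents):
    deltas = [
        (detected - breached).total_seconds() / 3600
        for breached, detected in incidents
    ]
    return sum(deltas) / len(deltas)

# Hypothetical (breached, detected) timestamp pairs:
incidents = [
    (datetime(2015, 3, 1, 8, 0), datetime(2015, 3, 4, 8, 0)),    # 72 h
    (datetime(2015, 6, 10, 0, 0), datetime(2015, 6, 11, 12, 0)), # 36 h
]

print(mean_hours_to_detect(incidents))  # 54.0
```

Tracking how this number trends quarter over quarter is what makes it useful in board-level (or congressional) reporting.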

Another area where metrics are necessary is that of third-party contractors and business associates. As we've learned repeatedly from cyberattacks on the Office of Personnel Management (OPM), Target and other Fortune 1000 companies, third-party vendors can pose significant risk to organizational security. Although tens of thousands of third-party contractors hold sensitive data or perform services for the government, agencies collect no metrics on the cybersecurity performance of those contractors. The OMB should address this gaping hole in future reports.

Olcott is vice president of business development for BitSight Technologies. He previously served as legal adviser to the Senate Commerce Committee and as counsel to the House of Representatives Homeland Security Committee.