Another lesson is that systems should operate even when degraded. “You have to assume—and people have a difficult time with this, because it takes the security blanket away—that you are being penetrated at this moment,” notes Keith Rhodes, chief technology officer for Qinetiq North America's services and solutions group. “If we try to protect everything all the time, we will fail. Guns, guards, gates and the Maginot Line” will not be enough, he says.
Targets need to define “what the critical information is,” Rhodes says. “I need to understand that there are things in my order of battle that I have to give up. I'll always lose information—but what information do I really, really not want to lose?”
Jensen points to two contrasting historical examples from World War II and afterward: “We cracked the German code and the Germans never found out about it. But the Rosenbergs gave away the atom bomb. One was compromised and the other was not.” The difference may have been culture rather than technology.
That information may be critical because it concerns classified technology, but it may also be critical because (like ALIS) its unimpeded flow is essential to operations. “You need a small collection of people to state what is critical, then define alternate paths in a time of stress,” Rhodes says.
Despite guards and backups, however, the biggest threat “is the careless user or the lazy system administrator,” says Alex Cochran, director of cyber and signals intelligence analysis for BAE Systems. As at DRC and Qinetiq, that realization is driving the development of security training aimed at the mass of computer users.
“My greatest shortfall in my Army career was having officers who understood planning and could integrate cyber with it,” notes Cochran. Most training courses, adds BAE's director of tradecraft advancement, Robert Tomes, “are focused on ones and zeroes, information assurance, malware and so on. What's missing are the rest of the folks who write code, policymakers, risk analysts and operational planners.”
As the BAE experts observe, studies have shown that fewer than 5% of cyber-threat warnings in industry originate with corporate security departments. For that reason, the company has launched training programs aimed at users—in a sense, converting as many people as possible into “cyber-reservists” who understand the threat well enough to resist and report attacks. Those users are also active on the Internet day to day, which meshes with another emerging cyberdefense concept: the need to carry the battle “outside the castle walls” to detect hostile activity.
“The operators and users are another sensor,” says Qinetiq's Rhodes. “If they see a system anomaly and they have not been educated, they say, 'that's just how the system runs.'” Training, he says, should tell the user, “there is information that is important here. Keep your eyes open for this or that scenario.” He compares such training to scenes from the movie The Matrix: “There may be something on the screen that looks like gibberish, until you're trained to see it.”
Both Rhodes and Jensen warn that social media are powerful tools for “spearphishing” exploits, in which an insider is identified, profiled and targeted with email that appears to come from a routine correspondent but carries a malware payload. “It's not going away,” says Rhodes. “And I'm no lawyer, but the price of employment is not going to be the passwords to all your social media accounts.”
Qinetiq's training, Rhodes says, “teaches people to take the same care in social media as they do going on vacation—the equivalent of stopping the paper and holding the mail.” A message that an employee is out of town to visit a subcontractor, plus a location message, plus a LinkedIn profile and Google Maps, could “point to a nondescript building in the middle of nowhere” and clue an adversary to the existence of a secret program.