Saturday, January 24, 2015

The Next Version of

Longtime TaoSecurity Blog readers are likely to remember me mentioning a Web site that returns nothing more than

uid=0(root) gid=0(root) groups=0(root)

This content triggers a Snort intrusion detection system alert, due to the signature

alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; fast_pattern:only; classtype:bad-unknown; sid:2100498; rev:8;)

You can see the Web page in Firefox, and the alert in Sguil, below.

A visit to this Web site is a quick way to determine if your NSM sensor sees what you expect it to see, assuming you're running a tool that will identify the activity as suspicious. You might just want to ensure your other NSM data records the visit, as well.
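If you want to check the match logic offline, the rule's content pattern is easy to reproduce: |28| and |29| in Snort's content syntax are hex escapes for the "(" and ")" bytes. Here is a minimal sketch (the helper function is mine, not part of Snort):

```python
# Sketch: reproduce the content match from Snort sid 2100498. In the rule,
# content:"uid=0|28|root|29|" means the literal bytes uid=0(root), with |28|
# and |29| as hex escapes for "(" and ")".
SNORT_PATTERN = b"uid=0\x28root\x29"  # i.e., b"uid=0(root)"

def triggers_id_check_alert(response_body: bytes) -> bool:
    """Return True if a payload contains the content the rule matches on."""
    return SNORT_PATTERN in response_body

# The body served by the test site:
body = b"uid=0(root) gid=0(root) groups=0(root)"
print(triggers_id_check_alert(body))                   # True
print(triggers_id_check_alert(b"uid=1000(richard)"))   # False
```

If your sensor sees the visit but no alert fires, this sort of offline check helps separate a signature problem from a visibility problem.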

Site owner Chas Tomlin emailed me today to let me know he's adding some new features to the site. You can read about them in this blog post. For example, you could download a malicious .exe, or other files.

Chas asked me what other sorts of tests I might like to see on his site. I'm still thinking about it. Do you have any ideas?

Friday, January 23, 2015

Is an Alert Review Time of Less than Five Hours Enough?

This week, FireEye released a report titled The Numbers Game: How Many Alerts are too Many to Handle? FireEye hired IDC to survey "over 500 large enterprises in North America, Latin America, Europe, and Asia" and asked director-level and higher IT security practitioners a variety of questions about how they manage alerts from security tools. In my opinion, the following graphic was the most interesting:

As you can see in the far right column, 75% of respondents report reviewing critical alerts in "less than 5 hours." I'm not sure if that is really "less than 6 hours," because the next value is "6-12 hours." In any case, is it sufficient for organizations to have this level of performance for critical alerts?

In my last large enterprise job, as director of incident response for General Electric, our CIO demanded 1 hour or less for critical alerts, from time of discovery to time of threat mitigation. This means we had to do more than review the alert; we had to review it and pass it to a business unit in time for them to do something to contain the affected asset.

The strategy behind this requirement was one of fast detection and response to limit the damage posed by an intrusion. (Sound familiar?)

Also, is it sufficient to have fast response for only critical alerts? My assessment is no. Alert-centric response, which I call "matching" in The Practice of Network Security Monitoring, is only part of the operational campaign model for a high-performing CIRT. The other part is hunting.

Furthermore, it is dangerous to rely on the accuracy of alert severity ratings. It's possible a low or moderate alert is more important than a critical alert. Who classified the alert? Who wrote it? There are a lot of questions to be answered.

I'm in the process of doing research for my PhD in the war studies department at King's College London. I'm not sure if my data or research will be able to answer questions like this, but I plan to investigate it.

What do you think?

Try the Critical Stack Intel Client

You may have seen in my LinkedIn profile that I'm advising a security startup called Critical Stack. If you use Security Onion or run the Bro network security monitoring (NSM) platform, you're ready to try the Critical Stack Intel Client.

Bro is not strictly an intrusion detection system that generates alerts, like Snort. Rather, Bro generates a range of NSM data, including session data, transaction data, extracted content data, statistical data, and even alerts -- if you want them.

Bro includes an intelligence framework that facilitates integrating various sources into Bro. These sources can include more than just IP addresses; this Bro blog post explains some of the options.

The Critical Stack Intel Client makes it easy to subscribe to over 30 threat feeds for the Bro intelligence framework. The screen capture below shows some of the feeds:

Visit the site and follow the wizard to get started. Basically, you begin by creating a Collection, a container for the threat intelligence you want. Next you select the threat intelligence Feeds you want to populate your Collection. Finally you create a Sensor, which is the system where you will deploy the threat intelligence Collection. When you're done, you have an API key that your client will use to access the service.

I wrote a document explaining how to move beyond the wizard and test the client on a sensor running Bro -- either Bro by itself, or as part of the Security Onion NSM distro.

The output of the Critical Stack Intel Client will be new entries in an intel.log file, stored with other Bro logs.
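Under the hood, the Bro intelligence framework consumes tab-separated input files. The client manages these for you, but as a rough sketch of the format (the #fields header and Intel::* indicator types follow the framework's documentation; the indicators themselves are made-up examples):

```python
# Sketch: write a minimal Bro intelligence framework input file.
# The indicators below are invented for illustration only.
rows = [
    ("192.0.2.10",      "Intel::ADDR",   "example-feed", "test address"),
    ("bad.example.com", "Intel::DOMAIN", "example-feed", "test domain"),
]

with open("example.intel", "w") as f:
    # Bro's input framework expects a tab-separated #fields header line.
    f.write("#fields\tindicator\tindicator_type\tmeta.source\tmeta.desc\n")
    for indicator, itype, source, desc in rows:
        f.write("\t".join((indicator, itype, source, desc)) + "\n")
```

When Bro observes an indicator from a loaded file, it records the hit in intel.log alongside its other logs.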

If Bro is completely new to you, I discuss how to get started with it in my latest book The Practice of Network Security Monitoring.

Please take a look at this new free software and let me know what you think.

Thursday, January 22, 2015

Notes on Stewart Baker Podcast with David Sanger

Yesterday Steptoe and Johnson LLP released the 50th edition of their podcast series, titled Steptoe Cyberlaw Podcast - Interview with David Sanger. Stewart Baker's discussion with New York Times reporter David Sanger (pictured at left) begins at the 20:15 mark. The interview was prompted by the NYT story NSA Breached North Korean Networks Before Sony Attack, Officials Say. I took the following notes for those of you who would like some highlights.

Sanger has reported on the national security scene for decades. When he saw President Obama's definitive statement on December 19, 2014 -- "We can confirm that North Korea engaged in this attack [on Sony Pictures Entertainment]." -- Sanger knew the President must have had solid attribution. He wanted to determine what evidence had convinced the President that the DPRK was responsible for the Sony intrusion.

Sanger knew from his reporting on the Obama presidency, including his book Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power, that the President takes a cautious approach to intelligence. Upon assuming his office, the President had little experience with intelligence or cyber issues (except for worries about privacy).

Obama had two primary concerns about intelligence, involving "leaps" and "leaks." First, he feared making "leaps" from intelligence to support policy actions, such as the invasion of Iraq. Second, he worried that leaks of intelligence could "create a groundswell for action that the President doesn't want to take." An example of this second concern is the (mis)handling of the "red line" on Syrian use of chemical weapons.

In early 2009, however, the President became deeply involved with Olympic Games, reported by Sanger as the overall program for the Stuxnet operation. Obama also increased the use of drones for targeted killing. These experiences helped the President overcome some of his concerns with intelligence, but he was still likely to demand proof before taking actions.

Sanger stated in the podcast that, in his opinion, "the only way" to have solid attribution is to be inside adversary systems before an attack, such that the intelligence community can see attacks in progress. In this case, evidence from inside DPRK systems and related infrastructure (outside North Korea) convinced the President.

(I disagree that this is "the only way," but I believe it is an excellent option for performing attribution. See my 2009 post Counterintelligence Options for Digital Security for more details.)

Sanger would not be surprised if we see more leaks about what the intelligence community observed. "There's too many reporters inside the system" to ignore what's happening, he said. The NYT talks with government officials "several times per month" to discuss reporting on sensitive issues. The NYT has a "presumption to publish" stance, although Sanger held back some details in his latest story that would have enabled the DPRK or others to identify "implants in specific systems."

Regarding the purpose of announcing attribution against the DPRK, Sanger stated that deterrence against the DPRK and other actors is one motivation. Sanger reported meeting with NSA director Admiral Mike Rogers, who said the United States needs a deterrence capability in cyberspace. More importantly, the President wanted to signal to the North Koreans that they had crossed a red line. This was a destructive attack, coupled with a threat of physical harm against movie goers. The DPRK has become comfortable using "cyber weapons" because they are more flexible than missiles or nuclear bombs. The President wanted the DPRK to learn that destructive cyber attacks would not be tolerated.

Sanger and Baker then debated the nature of deterrence, arms control, and norms. Sanger stated that it took 17 years after Hiroshima and Nagasaki before President Kennedy made a policy announcement about seeking nuclear arms control with the Soviet Union. Leading powers don't want arms control, until their advantage deteriorates. Once the Soviet Union's nuclear capability exceeded the comfort level of the United States, Kennedy pitched arms control as an option. Sanger believes the nuclear experience offers the right set of questions to ask about deterrence and arms control, although all the answers will be different. He also hopes the US moves faster on deterrence, arms control, and norms than shown by the nuclear case, because other actors (China, Russia, Iran, North Korea, etc.) are "catching up fast."

(Incidentally, Baker isn't a fan of deterrence in cyberspace. He stated that he sees deterrence through the experience of bombers in the 1920s and 1930s.)

According to Sanger, the US can't really discuss deterrence, arms control, and norms until it is willing to explain its offensive capabilities. The experience with drone strikes is illustrative, to a certain degree. However, to this day, no government official has confirmed Olympic Games.

I'd like to thank Stewart Baker for interviewing David Sanger, and I thank David Sanger for agreeing to be interviewed. I look forward to podcast 51, featuring my PhD advisor Dr Thomas Rid.

Thursday, January 15, 2015

FBI Is Part of US Intelligence Community

Are you surprised to learn that the FBI is officially part of the United States Intelligence Community? Did you know there's actually a list?

If you visit the Intelligence Community Web site, you can learn more about the IC. The member agencies page lists all 17 organizations.

The FBI didn't always emphasize an intelligence role. The Directorate of Intelligence appeared in 2005 and was part of the National Security Branch, as described here.

Now, as shown on the latest organizational chart, Intelligence is a peer with the National Security Branch. Each has its own Executive Assistant Director. NSB currently houses a division for Counterterrorism, a division for Counterintelligence, and a directorate for Weapons of Mass Destruction.

You may notice that there is a Cyber Division within a separate branch for "Criminal, Cyber, Response, and Services." If the Bureau continues to stay exceptionally engaged in investigating and countering cyber crime, espionage, and sabotage, we might see a separate Cyber branch at some point.

The elevation of the Bureau's intelligence function was a consequence of 9-11 and the Intelligence Reform and Terrorism Prevention Act of 2004.

If you want to read a book on the IC, Jeffrey Richelson publishes every few years on the topic. His sixth edition dates to 2011. I read an earlier edition, and noticed his writing is fairly dry.

Mark Lowenthal's book is also in its sixth edition. I was able to find my review of the fourth edition, if you want my detailed opinion.

In general these books are suitable for students and participants in the IC. Casual readers will probably not find them exciting enough. Reading them and related .gov sites will help keep you up to date on the nature and work of the IC, however.

With this information in mind, it might make more sense to some why the FBI acted both as investigator for recent intrusions and as a spokesperson for the IC.

Cass Sunstein on Red Teaming

On January 7, 2015, FBI Director James Comey spoke to the International Conference on Cyber Security at Fordham University. Part of his remarks addressed controversy over the US government's attribution of North Korea as being responsible for the digital attack on Sony Pictures Entertainment.

Near the end of his talk he noted the following:

We brought in a red team from all across the intelligence community and said, “Let’s hack at this. What else could be explaining this? What other explanations might there be? What might we be missing? What competing hypothesis might there be? Evaluate possible alternatives. What might we be missing?” And we end up in the same place.

I noticed some people in the technical security community expressing confusion about this statement. Isn't a red team a bunch of hackers who exploit vulnerabilities to demonstrate defensive flaws?

In this case, "red team" refers to a group performing the actions Director Comey outlined above. Harvard Professor and former government official Cass Sunstein explains the sort of red team mentioned by Comey in his new book Wiser: Getting Beyond Groupthink to Make Groups Smarter. In this article published by Fortune, Sunstein and co-author Reid Hastie advise the following as one of the ways to avoid groupthink and improve decision making:

Appoint an adversary: Red-teaming

Many groups buy into the concept of devil’s advocates, or designating one member to play a “dissenting” role. Unfortunately, evidence for the efficacy of devil’s advocates is mixed. When people know that the advocate is not sincere, the method is weak. A much better strategy involves “red-teaming.”

This is the same concept as devil’s advocacy, but amplified: In military training, red teams play an adversary role and genuinely try to defeat the primary team in a simulated mission. In another version, the red team is asked to build the strongest case against a proposal or plan. Versions of both methods are used in the military and in many government offices, including NASA’s reviews of mission plans, where the practice is sometimes called a “murder board.”

Law firms have a long-running tradition of pre-trying cases or testing arguments with the equivalent of red teams. In important cases, some law firms pay attorneys from a separate firm to develop and present a case against them. The method is especially effective in the legal world, as litigators are naturally combative and accustomed to arguing a position assigned to them by circumstance. A huge benefit of legal red teaming is that it can help clients understand the weaknesses of their side of a case, often leading to settlements that avoid the devastating costs of losing at trial.

One size does not fit all, and cost and feasibility issues matter. But in many cases, red teams are worth the investment. In the private and public sectors, a lot of expensive mistakes can be avoided with the use of red teams.

Some critics of the government's attribution statements have ignored the fact that the FBI took this important step. An article in Reuters, titled In cyberattacks such as Sony strike, Obama turns to 'name and shame', adds some color to this action:

The new [name and shame] policy has meant wresting some control of the issue from U.S. intelligence agencies, which are traditionally wary of revealing much about what they know or how they know it.

Intelligence officers initially wanted more proof of North Korea's involvement before going public, according to one person briefed on the matter. A step that helped build consensus was the creation of a team dedicated to pursuing rival theories - none of which panned out.

If you don't trust the government, you're unlikely to care that the intelligence community (which includes the FBI) red-teamed the attribution case. Nevertheless, it's important to understand the process involved. The government and IC are unlikely to release additional details, unless and until they pursue an indictment similar to the one against the PLA and five individuals from Unit 61398 last year.

Thanks to Augusto Barros for pointing me to the new "Wiser" book.

Tuesday, January 13, 2015

Does This Sound Familiar?

I read the following in the 2009 book Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making by Gary Klein. It reminded me of the myriad ways operational information technology and security processes fail.

This is a long excerpt, but it is compelling.

== Begin ==

A commercial airliner isn't supposed to run out of fuel at 41,000 feet. There are too many safeguards, too many redundant systems, too many regulations and checklists. So when that happened to Captain Bob Pearson on July 23, 1983, flying a twin-engine Boeing 767 from Ottawa to Edmonton with 61 passengers, he didn't have any standard flight procedures to fall back on.

First the fuel pumps for the left engine quit. Pearson could work around that problem by turning off the pumps, figuring that gravity would feed the engine. The computer showed that he had plenty of fuel for the flight.

Then the left engine itself quit. Down to one engine, Pearson made the obvious decision to divert from Edmonton to Winnipeg, only 128 miles away. Next, the fuel pumps on the right engine went.

Shortly after that, the cockpit warning system emitted a warning sound that neither Pearson nor the first officer had ever heard before. It meant that both the engines had failed.

And then the cockpit went dark. When the engines stopped, Pearson lost all electrical power, and his advanced cockpit instruments went blank, leaving him only with a few battery-powered emergency instruments that were barely enough to land; he could read the instruments because it was still early evening.

Even if Pearson did manage to come in for a landing, he didn't have any way to slow the airplane down. The engines powered the hydraulic system that controlled the flaps used in taking off and in landing. Fortunately, the designers had provided a backup generator that used wind power from the forward momentum of the airplane.

With effort, Pearson could use this generator to manipulate some of his controls to change the direction and pitch of the airplane, but he couldn't lower the flaps and slats, activate the speed brakes, or use normal braking to slow down when landing. He couldn't use reverse thrust to slow the airplane, because the engines weren't providing any thrust. None of the procedures or flight checklists covered the situation Pearson was facing.

Pearson, a highly experienced pilot, had been flying B-767s for only three months -- almost as long as the airplane had been in the Air Canada fleet. Somehow, he had to fly the plane to Winnipeg. However, "fly" is the wrong term. The airplane wasn't flying. It was gliding, and poorly. Airliners aren't designed to glide very well -- they are too heavy, their wings are too short, they can't take advantage of thermal currents. Pearson's airplane was dropping more than 20 feet per second.

Pearson guessed that the best glide ratio speed would be 220 knots, and maintained that speed in order to keep the airplane going for the longest amount of time. Maurice Quintal, the first officer, calculated that they wouldn't make it to Winnipeg. He suggested instead a former Royal Canadian Air Force base that he had used years earlier. It was only 12 miles away, in Gimli, a tiny community originally settled by Icelanders in 1875. So Pearson changed course once again.

Pearson had never been to Gimli but he accepted Quintal's advice and headed for the Gimli runway. He steered by the texture of the clouds underneath him. He would ask Winnipeg Central for corrections in his heading, turn by about the amount requested, then ask the air traffic controllers whether he had made the correct turn. Near the end of the flight he thought he spotted the Gimli runway, but Quintal corrected him.

As Pearson got closer to the runway, he knew that the airplane was coming in too high and too fast. Normally he would try to slow to 130 knots when the wheels touched down, but that was not possible now and he was likely to crash.

Luckily, Pearson was also a skilled glider pilot. (So was Chesley Sullenberger, the pilot who landed a US Airways jetliner in the Hudson River in January of 2009. We will examine the Hudson River landing in chapter 6.) Pearson drew on some techniques that aren't taught to commercial pilots. In desperation, he tried a maneuver called a sideslip, skidding the airplane forward in the way ice skaters twist their skates to skid to a stop.

He pushed the yoke to the left, as if he was going to turn, but pressed hard on the right rudder pedal to counter the turn. That kept the airplane on course toward the runway. Pearson used the ailerons and the rudder to create more drag. Pilots use this maneuver with gliders and light aircraft to produce a rapid drop in altitude and airspeed, but it had never been tried with a commercial jet. The sideslip maneuver was Pearson's only hope, and it worked.

When the plane was only 40 feet off the ground, Pearson eased up on the controls, straightened out the airplane, and brought it in at 175 knots, almost precisely on the normal runway landing point. All the passengers and the crewmembers were safe, although a few had been injured in the scramble to exit the plane after it rolled to a stop.

The plane was repaired at Gimli and was flown out two days later. It returned to the Air Canada fleet and stayed in service another 25 years, until 2008. It was affectionately called "the Gimli Glider."

The story had a reasonably happy ending, but a mysterious beginning. How had the plane run out of fuel? Four breakdowns, four strokes of bad luck, contributed to the crisis.

Ironically, safety features built into the instruments had caused the first breakdown. The Boeing 767, like all sophisticated airplanes, monitors fuel flow very carefully. It has two parallel systems measuring fuel, just to be safe. If either channel 1 or channel 2 fails, the other serves as a backup.

However, when you have independent systems, you also have to reconcile any differences between them. Therefore, the 767 has a separate computer system to figure out which of the two systems is more trustworthy. Investigators later found that a small drop of solder in Pearson's airplane had created a partial connection in channel 2. The partial connection allowed just a small amount of current to flow -- not enough for channel 2 to operate correctly, but just enough to keep the default mode from kicking in and shifting to channel 1.

The partial connection confused the computer, which gave up. This problem had been detected when the airplane had landed in Edmonton the night before. The Edmonton mechanic, Conrad Yaremko, wasn't able to diagnose what caused the fault, nor did he have a spare fuel-quantity processor. But he had figured out a workaround. If he turned channel 2 off, that circumvented the problem; channel 1 worked fine as long as the computer let it.

The airplane could fly acceptably using just one fuel-quantity processor channel. Yaremko therefore pulled the circuit breaker to channel 2 and put tape over it, marking it as inoperative. The next morning, July 23, a crew flew the plane from Edmonton to Montreal without any trouble.

The second breakdown was a Montreal mechanic's misguided attempt to fix the problem. The Montreal mechanic, Jean Ouellet, took note of the problem and, out of curiosity, decided to investigate further. Ouellet had just completed a two-month training course for the 767 but had never worked on one before. He tinkered a bit with the faulty Fuel Quantity Indicator System without success. He re-enabled channel 2; as before, the fuel gauges in the cockpit went blank. Then he got distracted by another task and failed to pull the circuit breaker for channel 2, even though he left the tape in place showing the channel as inoperative. As a result, the automatic fuel-monitoring system stopped working and the fuel gauges stayed blank.

A third breakdown was confusion about the nature of the fuel gauge problem. When Pearson saw the blank fuel gauges and consulted a list of minimum requirements, he knew that the airplane couldn't be flown in that condition. He also knew that the 767 was still very new -- it had first entered into airline service in 1982. The minimum requirements list had already been changed 55 times in the four months that Air Canada had been flying 767s. Therefore, pilots depended more on the maintenance crew to guide their judgment than on the lists and manuals.

Pearson saw that the maintenance crews had approved this airplane to keep flying despite the problem with the fuel gauges. Pearson didn't understand that the crew had approved the airplane to fly using only channel 1. In talking with the pilot who had flown the previous legs, Pearson had gotten the mistaken impression that the airplane had just flown from Edmonton to Ottawa to Montreal with blank fuel gauges. That pilot had mentioned a "fuel gauge problem." When Pearson climbed into the cockpit and saw that the fuel gauges were blank, he assumed that was the problem the previous pilot had encountered, which implied that it was somehow acceptable to continue to operate that way.

The mechanics had another way to provide the pilots with fuel information. They could use a drip-stick mechanism to measure the amount of fuel currently stored in each of the tanks, and they could manually enter that information into the computer. The computer system could then calculate, fairly accurately, how much fuel was remaining all through the flight.

In this case, the mechanics carefully determined the amount of fuel in the tanks. But they made an error when they converted that to weight. This error was the fourth breakdown.

Canada had converted to the metric system only a few years earlier, in 1979. The government had pressed Air Canada to direct Boeing to build the new 767s using metric measurements of liters and kilograms instead of gallons and pounds -- the first, and at that time the only, airplane in the Air Canada fleet to use the metric system. The mechanics in Montreal weren't sure about how to make the conversion (on other airplanes the flight engineer did that job, but the 767 didn't use a flight engineer), and they got it wrong.

In using the drip-stick measurements, the mechanics plugged in the weight in pounds instead of kilograms. No one caught the error. Because of the error, everyone believed they had 22,300 kg of fuel on board, the amount needed to get them to Edmonton, but in fact they had only a little more than 10,000 kg-less than half the amount they needed.

Pearson was understandably distressed by the thought of not being able to monitor the fuel flow directly. Still, the figures had been checked repeatedly, showing that the airplane had more fuel than was necessary. The drip test had been repeated several times, just to be sure.

That morning, the airplane had gotten approval to fly from Edmonton to Montreal despite having fuel gauges that were blank. (In this Pearson was mistaken; the airplane used channel 1 and did have working fuel gauges.) Pearson had been told that maintenance control had cleared the airplane.

The burden of proof had shifted, and Pearson would have to justify a decision to cancel this flight. On the basis of what he knew, or believed he knew, he couldn't justify that decision. Thus, he took off, and everything went well until he ran out of fuel and both his engines stopped.

== End ==
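The fourth breakdown is easy to check with simple arithmetic. A quick sketch, using the 22,300 kg figure from the excerpt and the standard pounds-per-kilogram conversion factor:

```python
# Sketch: the crew entered a fuel weight computed in pounds into a system
# expecting kilograms. 22,300 is the figure from the excerpt; 2.20462 lb/kg
# is the standard conversion factor.
LB_PER_KG = 2.20462

believed_kg = 22300                   # what everyone thought was on board
actual_kg = believed_kg / LB_PER_KG   # the "kilograms" were really pounds

print(round(actual_kg))               # 10115 -- "a little more than 10,000 kg"
```

Less than half the fuel needed, exactly as the excerpt describes.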

This story is an example of why one cannot build "unhackable systems." I also believe it demonstrates that operational and decision-based failures will continue to plague technology. It is no use building systems that theoretically "have no vulnerabilities" so long as people operate those systems and make decisions based on their output.

If you liked this post, I've written about engineering disasters in the past.

You can buy the book that published this story.

Friday, January 09, 2015

Edward Tufte ReTweeted My Blog Post

If you've been a TaoSecurity Blog reader for a while, you may remember how the writing and speaking of Edward Tufte changed the way I taught classes and delivered presentations.

I wrote about the Tufte class I attended in June 2008 in my post The Best Single Day Class Ever.

Since then I've written a few other Tufte posts here. The most recent post, from 2012, was Netanyahu Channels Tufte at United Nations. I explained how Prime Minister Netanyahu literally drew a red line on a diagram during a speech to the world.

Today in my Twitter feed I saw that Edward Tufte himself reTweeted my link to that 2012 story. I am so thrilled that he read it, and presumably knows that his work changed my professional life and how I interact with audiences. Thank you sir.

And yes, this does sound like a "fan boy" post. I still recommend you take his one day course, whenever it's offered nearby. I see he will be in the DC area 31 March - 2 April 2015.

Thursday, January 08, 2015

Daniel Ellsberg on Secrets

Daniel Miessler just wrote a post about his attitude toward attribution. I'm not going to comment about it, but I wanted to provide the source of the story he mentioned, along with the specific excerpt. It's from Secrets by Daniel Ellsberg.

Kevin Drum posted the same excerpt in 2010, but I'm going to print it here for my reference.

As an intro, Ellsberg was working for RAND, and approached Henry Kissinger at a party in 1968. Ellsberg begins:

    "Henry, there's something I would like to tell you, for what it's worth, something I wish I had been told years ago. You've been a consultant for a long time, and you've dealt a great deal with top secret information. But you're about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret.

    "I've had a number of these myself, and I've known other people who have just acquired them, and I have a pretty good sense of what the effects of receiving these clearances are on a person who didn't previously know they even existed. And the effects of reading the information that they will make available to you.

    "First, you'll be exhilarated by some of this new information, and by having it all — so much! incredible! — suddenly available to you. But second, almost as fast, you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn't, and which must have influenced their decisions in ways you couldn't even guess. In particular, you'll feel foolish for having literally rubbed shoulders for over a decade with some officials and consultants who did have access to all this information you didn't know about and didn't know they had, and you'll be stunned that they kept that secret from you so well.

    "You will feel like a fool, and that will last for about two weeks. Then, after you've started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information, which is much more closely held than mere top secret data, you will forget there ever was a time when you didn't have it, and you'll be aware only of the fact that you have it now and most others don't....and that all those other people are fools.

    "Over a longer period of time — not too long, but a matter of two or three years — you'll eventually become aware of the limitations of this information. There is a great deal that it doesn't tell you, it's often inaccurate, and it can lead you astray just as much as the New York Times can. But that takes a while to learn.

    "In the meantime it will have become very hard for you to learn from anybody who doesn't have these clearances. Because you'll be thinking as you listen to them: 'What would this man be telling me if he knew what I know? Would he be giving me the same advice, or would it totally change his predictions and recommendations?' And that mental exercise is so torturous that after a while you give it up and just stop listening. I've seen this with my superiors, my colleagues....and with myself.

    "You will deal with a person who doesn't have those clearances only from the point of view of what you want him to believe and what impression you want him to go away with, since you'll have to lie carefully to him about what you know. In effect, you will have to manipulate him. You'll give up trying to assess what he has to say. The danger is, you'll become something like a moron. You'll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours."

    ....Kissinger hadn't interrupted this long warning. As I've said, he could be a good listener, and he listened soberly. He seemed to understand that it was heartfelt, and he didn't take it as patronizing, as I'd feared. But I knew it was too soon for him to appreciate fully what I was saying. He didn't have the clearances yet.

I appreciate this text on several levels. Having been cleared since 1991, and having been trained as a professional military intelligence officer, I understand the powers and limitations of classified information.

If anyone claims superior knowledge only because their source is classified, you must beware. That person is falling into one of Ellsberg's traps.

This is a very subtle point. What I'm saying is this: if you read any document, and give it more credibility simply because it is marked (S), then you are failing to appreciate the problems inherent in many parts of the intelligence community and its consumer base. (This 2013 story called it the secrecy heuristic and warned about the problem after conducting scientific experiments to measure it.)

On the other hand, if you see any sort of "secret" (i.e., non-public) report, and you trust the producer of the intelligence, then you recognize that any handling markings are there to keep the information out of the hands of the adversary. The classification level or "secrecy" does not inherently provide a reliability or trustworthiness ranking.

Note the terms I highlighted. A report is a product of an intelligence process. It is only as good as all of those elements. This is why trust is the key issue in the attribution debate.

Also note that this warning applies to information that is not strictly "classified" by government entities. It could apply to any sort of non-public information.

I have more to say on this topic, but this is my fourth post today.

On a short related note, I didn't invent the term "Sony truther." I read it in Gizmodo's December 24th story Meet the Sony Hack Truthers and Tweeted about it that day.

Attribution and Declassifying Current Satellite Imagery

I listened to a great Webinar by Rick Holland today about digital threat intelligence. During the talk he mentioned the precedent of declassifying satellite imagery as an example of an action the government could take with respect to "proving" DPRK attribution.

Rick is a former military intelligence analyst like me, and I've had similar thoughts this week. They were heightened by this speech excerpt from FBI Director James Comey yesterday:

[F]olks have suggested that we have it wrong. I would suggest—not suggesting, I’m saying—that they don’t have the facts that I have—don’t see what I see—but there are a couple things I have urged the intelligence community to declassify that I will tell you right now.

I decided to look online for events where the US government declassified satellite imagery in order to support a policy decision. I am excluding cases where the government declassified imagery well after the event. I'm including a few cases where satellites were not yet operational, so air-breathing reconnaissance assets took the photos. Based on that examination I formed these conclusions.

First, high-end satellite imagery is like signals intelligence (SIGINT) against hard targets: both are near the apex of protected sources and methods, and both are expensive to develop, deploy, and maintain. If spy satellite photos are released, they are often "degraded" to hide their actual resolution.

Second, the US IC doesn't declassify information very often. When you read about "declassified satellite imagery," it's likely you are seeing photographs taken by commercial satellites like Digital Globe. I found numerous examples online, with supposedly "declassified imagery" bearing commercial logos.

Third, when the US IC does declassify information, it usually withholds the source. If a source is mentioned, the method least likely to hinder future collection is cited as the origin. In other words, the IC may have a source inside a foreign government, and a source who corroborated the information after defecting to a US embassy. If the US decides to reveal the intelligence provided by both sources, and feels the need to cite its origin, the IC will cite the defector. The foreign government already knows about the defector, but hopefully will remain unaware of the spy still in its midst.

Finally, as publicly stated, the US intelligence community considers North Korea to be a "very hard target." This 2011 Bloomberg article spells out the problems getting information about the DPRK. That means that if the US IC has ways to gather intelligence on the DPRK, those are some of the most important sources and methods to the entire IC. They are not going to burn those sources and methods to try (and fail) to satisfy a few dozen critics posting Tweets or blog posts.

Declassifying satellite imagery is a decent public example of the intelligence "gain-loss" decision that the IC and administration must make. They have historically been exceptionally reluctant to reveal sources and methods. I expect that if the FBI releases more information on their DPRK case, it is more likely to be associated with a criminal maneuver, similar to the PLA indictment of May 2014.


Incentives for Breaking Operational Security?

Thanks to Adam Segal for posting a link to a fascinating Wall Street Journal piece titled Sony Hackers May Have Left Deliberate Clues, Expert Says. From the story by Jeyup S. Kwaak:

Apparent slip-ups by the hackers of Sony Pictures that have helped convince U.S. investigators the hackers are North Koreans have a precedent, and may even have been deliberate to win domestic kudos, according to a top cybersecurity expert and former senior North Korean official.

The head of a group of hacking experts that have analyzed previous suspected North Korean cyberattacks on South Korea said a record of a North Korean Internet address was also left in a 2013 attack on Seoul because a detour through Chinese servers was briefly suspended, exposing the origin of the incursion...

Choi Sang-myung, who is also an adviser to Seoul’s cyberwarfare command, said... [w]hile it was impossible to prove whether the hackers left evidence by mistake or on purpose, that they didn’t fully cover their tracks could mean North Koreans wanted to be known...

That theory is supported by Jang Jin-sung, a former official in North Korea’s propaganda unit, who says North Korean hackers likely have an incentive to leave some evidence because officials often secure promotions after a successful attack against enemies.

“People fiercely compete to prove their loyalty” after an order is given, he said. “They must leave proof that they did it.”

These are fascinating comments, from people who understand the DPRK hacking scene better than critics of the FBI attribution statements.

This theory shows that DPRK intruders may have had incentives for breaking operational security ("OPSEC" or "opsec"), and that they were not merely "sloppy," as FBI Director Comey suggested yesterday.

In the 2013 Mandiant APT1 report I suggested the following language be used:

These actors have made poor operational choices, facilitating our research and allowing us to track their activities.

In the case of Chinese PLA Unit 61398, I think the OPSEC failures were unintentional. Others theorized differently, but the Chinese have fewer incentives to reveal themselves. They want information, above any other consideration. They would rather not have victims know who is stealing their trade secrets, commercial data, and sensitive information.

Intrusions into critical infrastructure, confirmed in an open hearing in November 2014 by NSA Director Mike Rogers, might be a different case. If a nation state is trying to signal power to an adversary, it will want the adversary to know the perpetrator.

In the case of the DPRK intrusions, North and South Korean sources explain that DPRK hackers have tangible incentives to reveal their identities. Apparently hacking for the government is one ticket to a marginally better life in North Korea, as reported by Newsweek.

All of this demonstrates that technical indicators are but one element of attribution. Personal incentives facing the individual intrusion operators, not just national ones, should be part of the attribution equation too.

Remember to read Attributing Cyber Attacks by my KCL professor Thomas Rid, and fellow PhD student Ben Buchanan, for the best modern report available on attribution issues.


Update: Thanks to Steven Andres for pointing out a link mistake.

Happy 12th Birthday TaoSecurity Blog

Today, 8 January 2015, is the 12th birthday of TaoSecurity Blog!

I wrote my first post on 8 January 2003 while working as an incident response consultant for Foundstone. Kevin Mandia was my boss. Today I am Chief Security Strategist at FireEye, still working for Kevin Mandia. (It's a small world.)

With 2945 posts published, I am still blogging -- but much less. Why the drop over the years? I "blame" my @taosecurity Twitter account. With almost 30,000 followers, easy posting from mobile devices, and greater interactivity, Twitter is an addictive platform.

Second, blogging used to be the primary way I could share my ideas with the community. These days, speaking and writing are a big part of my professional duties. I try to track these reports here.

Third, time is precious, and blogging often takes a back seat. I'd rather spend time with my family, research my PhD, work with start-ups, collaborate with think tanks, and so on.

However, I still plan to keep blogging in 2015. Twitter's only a 140-character platform, and some days I have the time and inclination to share a few thoughts beyond what I've said or written for work.

Thanks again to Google for providing me this free platform for the past 12 years.