Wednesday, December 23, 2015

A Brief History of the Internet in Northern Virginia

Earlier today I happened to see a short piece from the Bloomberg Businessweek "The Year Ahead: 2016" issue, titled The Best Places to Build Data Centers. The text said the following:

Cloud leaders including Amazon.com, Microsoft, Google, IBM, and upstart DigitalOcean are spending tens of billions of dollars to construct massive data centers around the world. Microsoft alone puts its total bill at $15 billion. There are two main reasons for the expansion: First, the companies have to set up more servers near the biggest centers of Internet traffic growth. Second, they increasingly have to wrestle with national data-privacy laws and customer preferences, either by storing data in a user’s home country, or, in some cases, avoiding doing just that.

The article featured several maps, including the one at left. It notes data centers in "Virginia" because "the Beltway has massive data needs." That may be true, but it does not do justice to the history of the Internet in Northern Virginia (NoVA), nor does it explain why there are so many data centers in NoVA. I want to briefly note why there is so much more to this story.

In brief, there are so many data centers in NoVA because, 25 years or so ago, early Internet companies located in the area and also decided to connect their networks in NoVA. Key players included America Online (AOL), which built its headquarters in Loudoun County in the early 1990s. About the same time, in 1992, Internet pioneers from several local companies decided to connect their networks and build what became known as MAE-East. A year later, the National Science Foundation awarded a contract designating MAE-East as one of four Network Access Points. Later in the 1990s Equinix arrived and contributed to the growth in data center and network connectivity that continues through the present.

Essentially, NoVA demonstrated real-life "network effects" -- with networks cross-connecting to each other in Ashburn and Loudoun County, it made sense for new players to gain access to those connections. Companies built data centers there because the network connections offered the best performance for their customers. The "Beltway" and its "massive data needs" were not the reason.

If you would like to know more, I recommend reading Andrew Blum's book Tubes: A Journey to the Center of the Internet. Yes, Blum is referring to those "tubes," which he investigates via in-person visits to notable Internet locations and refreshing historical research. Along the way, Blum charts the growth of NoVA as an Internet hub, in some ways, "the" Internet hub.

Thursday, December 10, 2015

Domain Creep? Maybe Not.

I just read a very interesting article by Sydney Freedberg titled DoD CIO Says Spectrum May Become Warfighting Domain. That basically summarizes what you need to know, but here's a bit more from the article:

Pentagon officials are drafting new policy that would officially recognize the electromagnetic spectrum as a “domain” of warfare, joining land, sea, air, space, and cyberspace, Breaking Defense has learned. 

The designation would mark the biggest shift in Defense Department doctrine since cyberspace became a domain in 2006. With jamming, spoofing, radio, and radar all covered under the new concept, it could potentially bring new funding and clear focus to an area long afflicted by shortfalls and stovepipes.

The new electromagnetic spectrum domain would be separate from cyberspace, although there’s considerable overlap between the two... 

But the consensus among officials and experts seems to be that the electromagnetic spectrum world — long divided between electronic warriors and spectrum managers — is so technologically complex and bureaucratically fragmented by itself it must be considered its own domain, without trying to conflate it with cyberspace.

My initial reaction to this move is mixed. History and definitions provide some perspective.

Prior to this story, one of the big differences between the civilian and military views of "cyberspace" was the military's more expansive view.

The formerly classified National Military Strategy for Cyberspace Operations, published in 2006, defined cyberspace as

A domain characterized by the use of electronics and the electromagnetic spectrum to store, modify, and exchange data via networked systems and associated physical infrastructures. (emphasis added)

The NMS-CO in a sense embedded cyberspace within EMS. That document also signaled DoD's formal recognition of cyberspace as a domain. By associating EMS with cyberspace, DoD thought of cyberspace in larger terms than its civilian counterparts did. In addition to activities involving computers, cyberspace now theoretically incorporated electronic warfare and other purely military functions with little or no relationship to civilian activities.

Army Doctrine Reference Publication No. 3-0, published in 2012, introduced the term "cyber electromagnetic activities" (CEMA). It defined CEMA as

Activities leveraged to seize, retain, and exploit an advantage over adversaries and enemies in both cyberspace and the electromagnetic spectrum, while simultaneously denying and degrading adversary and enemy use of the same and protecting the mission command system. Cyber electromagnetic activities consist of cyberspace operations, electronic warfare, and electromagnetic spectrum operations.

This Army publication separated cyberspace from EMS and created "CEMA" as an umbrella over both.

The more recent Joint Publication 3-12R, published in 2013, drops explicit mention of the EM spectrum. It defines cyberspace as

A global domain within the information environment consisting of the interdependent networks of information technology infrastructures and resident data, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.

With the definitions and their evolution out of the way, consider what it means for cyberspace to be separate from EMS.

In my opinion, cyberspace has always been more about the content, and less about the infrastructure. In other words, it's the information that matters, not necessarily the containers. I first appreciated this distinction when I was stationed at Air Intelligence Agency, where we helped publish Air Force Doctrine Document 2-5: Information Operations in August 1998. Page 3 states

The Air Force believes information operations include actions taken to gain, exploit, defend, or attack [GEDA] information and information systems. (emphasis added)

Note that the document doesn't use the term "cyber" very much. When describing information warfare, it states

Information warfare involves such diverse activities as psychological operations, military deception, electronic warfare, both physical and information (“cyber”) attack, and a variety of defensive activities and programs.

In any case, the "GEDA" concept stuck with me all these years. I think the focus on the information, rather than the infrastructure, is conceptually useful. Consider: would there be "cyberspace" if it contained no information? The answer might be yes, but would anyone care to use it? It's the information that makes "cyberspace" what it is, I believe.

In this light, separating out the physical aspect of EMS seems reasonable. However, what does that mean for other physical aspects of manipulating information? EMS seems most tangible when considering radio and other radio frequency (RF) topics. How does that concept apply to cables, servers, or other devices? Are they part of EMS? Do they "stay" with "cyberspace"?

Finally, I am a little worried that a reason for creating EMS as a sixth domain could be that it is "technologically complex and bureaucratically fragmented," as described in the article excerpt. "Creating" a military domain should not be done to solve problems of complexity or bureaucracy. Domains should be used as constructs to improve the clarity of thinking around warfighting, at least in the military world.

Addendum: When reading Joint Publication 3-13: Information Operations for this post, I saw the following figure:


It is one way to show that DoD considers Information Operations to be a much larger concept than you might expect. IO is often neglected in the "cyber" discussions, but with the ideas concerning EMS, IO might be hot again.

Saturday, December 05, 2015

Not So Fast! Boyd OODA Looping Is More Than Speed

The name "John Boyd" and the term "OODA Loop" are probably familiar to many of the readers of this blog. I've mentioned one or the other in 2006, 2007, 2009 (twice), and 2014. Boyd was a fighter pilot in the Korean war and revolutionized thinking on topics like fighter design and military strategy. His OODA loop -- an acronym for Observe, Orient, Decide, Act -- is the contribution that escaped from the military sphere into other fields of thought. In a world that has finally realized prevention eventually fails, the need for a different strategy is being appreciated.

I've noticed an increasing number of vendors invoke Boyd and his OODA loop as an answer. Unfortunately, they fixate on the idea of "speed." They believe that victory over an adversary results from operating one's OODA loop faster than an opponent. In short, if we do something faster than the adversary, we win and they lose. While there is some value to this approach, it is not representative of Boyd's thought and misses key elements of his contribution.

Before continuing I'd like to mention a recent talk on OODA within the security community that didn't fall into the "speed rules" trap. At the last Security Onion conference, Martin Holste presented Security Event Data in the OODA Loop Model. His spoken remarks reflected the issues I raise in this post, and as a hint, his Prezi material includes statements like "At higher levels, OODA speed is less important than accurate mental models." I was glad to see Martin avoid the speed trap in his talk!

The best reference for gaining a deep appreciation for Boyd's strategic thought is the book Science, Strategy and War: The Strategic Theory of John Boyd by Frans P.B. Osinga. I have the Kindle and paperback editions. The Kindle version is readable, but you may have trouble with some of the tables and figures.

The following is a selection of quotes from the book, re-ordered, highlighted, and lightly edited to capture the author's message on properly appreciating Boyd and OODA.

[T]he common view that the OODA loop model, interpreted as an argument that victory goes to the side that can decide most efficiently, falls short of the mark in capturing the meaning and breadth of Boyd’s work...

The first misconception about the OODA loop concerns the element of speed. The rapid OODA looping idea suggests a focus on speed of decision making, and ‘out-looping’ the opponent by going through consecutive OODA cycles faster. This is not incorrect, indeed, Boyd frequently suggested as much, [however]...

Whereas rapid OODA looping is often equated with superior speed in decision making, Boyd employs the OODA loop model to show how organisms evolve and adapt.

[U]ncertainty as the key problem organisms and organizations have to surmount...

One may react very fast to unfolding events, but if one is constantly surprised nevertheless, apparently one has not been able to turn the findings of repeated observations and actions into a better appreciation of the opponent, i.e. one has not learned but instead has continued to operate on existing orientation patterns...

[T]he abstract aim of Boyd’s method is to render the enemy powerless by denying him the time to mentally cope with the rapidly unfolding, and naturally uncertain, circumstances of war, and only in the most simplified way, or at the tactical level, can this be equated with the narrow, rapid OODA loop idea...

This points to the major overarching theme throughout Boyd’s work: the capability to evolve, to adapt, to learn, and deny such capability to the enemy...

It is not absolute speed that counts; it is the relative tempo or a variety in rhythm that counts. Changing OODA speed becomes part of denying a pattern to be recognized...

The way to play the game of interaction and isolation is [for our side] to spontaneously generate new mental images that match up with an unfolding world of uncertainty and change...

In order to avoid predictability and ensuring adaptability to a variety of challenges, it is essential [for our side] to have a repertoire of orientation patterns and the ability to select the correct one according to the situation at hand while denying the opponent the latter capability...

[In Boyd's words, one should] "operate inside [an] adversary’s observation-orientation-decision-action loops to enmesh [the] adversary in a world of uncertainty, doubt, mistrust, confusion, disorder, fear, panic, chaos . . . and/or fold adversary back inside himself so that he cannot cope with events/efforts as they unfold...

[We should ask ourselves] how do we want our posture to appear to an adversary, i.e., what kind of mental picture do we want to generate in his mind?

Designing one’s defense on this basis is obviously quite a departure [from current methods].

My take on these points is the following: Boyd's OODA loop is more about affecting the adversary than it is about one's own operations. Our side should take actions to target the adversary's OODA loop such that his cycle becomes slower than ours, due to the adversary's difficulty in properly matching his mental images of the world with what is actually happening in the world. On our side, we want to be flexible and nurture a variety of mental images that better match what is happening in the world, which will enable more efficient OODA loops.

In brief, the OODA concept has a speed component, but it is much more about coping with perceptions of reality, on the part of the adversary and ourselves. This approach can be used offensively and defensively.

What might this look like in the security world? That is worth one or more future posts.

If you'd like to learn more, in addition to Osinga's book I recommend reading Boyd: The Fighter Pilot Who Changed the Art of War by Robert Coram and listening to the Pattern of Conflict videos on YouTube. All 14 parts occupy about 6 1/2 hours. I recommend extracting them to audio format and listening to them on a long drive or flight. I listened to them driving to and from the aforementioned Security Onion conference. There is really no substitute for listening to the master at work. It brings the books to life when you have Boyd's voice and mannerisms playing in your mind.

Saturday, November 28, 2015

Seven Tips for Personal Online Security

Last year I wrote Seven Tips for Small Business Security, but recently I decided to write this new post with a different focus. I realized that some small businesses are in many ways indistinguishable from individuals, such that advice for personal online security is more appropriate for them. In other words, some businesses are scaled such that one or a few people are the entire business. In that spirit, I offer the following suggestions for individuals and these small businesses.

1. Protect your email. Email is the number one resource most of us possess, for three reasons. First, imagine that you forget your password to just about any Web site. How do you recover it? It's likely you request a password reset, and you get an email. Now, if you no longer control your email, an attacker can reset your passwords and take control of your Web accounts. How does an attacker know what accounts you own? That is answered by the second key to email: content. A quick check of your emails will reveal the organizations with which you do business. The content can also provide means to access other accounts. The third reason email is so critical is that it is essentially your online identity. An attacker can use your email to impersonate you and try to gain access to those that trust you.

So, how should you protect your email? I offer four recommendations. First, select a provider who gives you plenty of insight into how your account is used. Would you get an alert when someone logs into your account from a foreign country, for example? Second, select a provider who offers two-factor authentication. This means you can choose to log in with more than just a username and password. Third, select a provider who has experience with confronting and defeating intruders, and who takes actions to continuously improve their security. For consumers, I prefer Gmail. Of course, I am not a fan of being monetized by Alphabet and Google, but the trade-off is worth it for most of us.

My last recommendation is to limit what you store in email. Don't transmit or store sensitive information, like your personally identifiable information (Social Security number, etc.), in your email. As a thought experiment, imagine what it would look like to have your email published online. What would be the consequences? Try to address those concerns by removing such content from your email.

2. If you don't need it, delete it. This general rule applies to applications and data. If you don't need Java or Flash or other applications on your PC, phone, or tablet, remove them. The less software on your device, the better. For data, be judicious about what you store in digital form. Anything stored on a device or in the cloud can be read, copied, changed, or deleted by an attacker. My post “If you can’t protect it, don’t collect it” offers more on this topic.

3. Patch the software you keep. If you use Windows, run a modern version such as Windows 7 or newer, and install patches regularly, for the operating system and applications. On Windows it can be tough to identify just what needs to be updated. A free tool that can help is SUMo, the Software Update Monitor. Download the "lite" version and run it to see what needs to be updated. Pay attention to applications from Adobe, like Flash, Reader, and such. Remember tip 2!
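
If you want a rough idea of what is installed before reaching for a tool, here is a minimal sketch that assumes Python on Windows and uses only the standard winreg module. It enumerates one registry view (32-bit entries live under a separate Wow6432Node key) and is meant as an illustration of the inventory step, not a description of how SUMo works.

    import winreg

    UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

    # Walk the uninstall registry key and print whatever name and version
    # values are present; entries without a display name are skipped.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as app:
                    name = winreg.QueryValueEx(app, "DisplayName")[0]
                    version = winreg.QueryValueEx(app, "DisplayVersion")[0]
                    print(name, version)
            except OSError:
                continue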

4. Run a modern Web browser. For general consumers, the best Web browser in my opinion is Google Chrome. Make sure it is set to auto-update so you are running the latest version. Install an ad-blocker like Adblock Plus.

5. Back up your data. Research and implement a way to back up the data on your devices. This can be a complicated issue. For example, you may keep sensitive data on your laptop or PC, and you fear putting it in the cloud. One way to address that concern is to store that data in encrypted form on your laptop or PC, such that when it is stored in the cloud it is also encrypted.

Some may argue that certain cloud providers will encrypt your data for you, so why encrypt it locally first? My answer: if an attacker gains access to your cloud backup username and password, he can access your cloud backup provider and download your data, regardless of whether the cloud provider encrypts it or not. If the attacker finds your most sensitive data encrypted within the cloud backup, that means he needs to beat the encryption you applied on your own. Like all the measures in this post, nothing is foolproof. However, introducing challenges to the adversary is the key to security.
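
To make the "encrypt locally first" idea concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package is installed; the file names are hypothetical, and real backup tools handle key management far more carefully than this.

    from cryptography.fernet import Fernet

    # Generate a key once and keep it somewhere safe, ideally offline;
    # if you lose the key, you lose the data.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt the sensitive file and write only the ciphertext into the
    # folder that syncs to your cloud backup provider.
    with open("tax-records.pdf", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("synced-folder/tax-records.pdf.enc", "wb") as f:
        f.write(ciphertext)

    # To restore later:
    # plaintext = Fernet(key).decrypt(open("synced-folder/tax-records.pdf.enc", "rb").read())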

Furthermore, don't confuse cloud storage with backup. If you store data in Google Drive, or other locations, don't consider that a backup. I recommend adding a real backup provider to your configuration.

On a related note, enable full-device encryption on devices you are likely to lose. This applies most likely to your phone and tablet. The danger you are trying to mitigate here is physical loss or theft of your device. Be sure you enable a numeric PIN such that a thief can't simply log into your lost or stolen device. I am also a fan of services that let you remotely locate your lost or stolen device, such that you can either find it or wipe it at a distance.

6. Buy Apple phones and tablets and keep them up-to-date. This looks like a blatant advertisement for Apple, but I promise you I am not an Apple fanboy. The fact of the matter is that Apple iPhones and iPads, when running the latest versions of the iOS software, provide the best combination of features and security available to the general consumer. They are the easiest to operate and update; updating iOS and the installed apps is exceptionally easy. Furthermore, the best metric we have regarding software security shows that exploits for iOS devices cost far more than those for other software or platforms. This means it is tougher for intruders to break into devices running iOS.

7. Consider a password manager, but not for every Web site. Nothing is (or should be) absolute in security. Password managers are applications that assist users with storing, supplying, and even generating usernames and passwords for Web sites and other applications. They are an improvement over using the same username and password at multiple Web sites. However, when using a password manager, you run the risk that a flaw in that manager could be used by an attacker to access your usernames and passwords! It sounds like a tough situation, but in general the benefits of the password manager outweigh the risks. If you choose a password manager, select one that offers two-factor authentication, such that accessing your usernames and passwords requires you to enter a numeric code. Also, don't put your most sensitive accounts in the manager. For example, in deference to point 1, don't store your email username and password in the manager.
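
As an aside, the "generating" part is easy to illustrate. The fragment below is a minimal sketch using Python's standard secrets module to produce the kind of long, random password a manager would create for you; it is not any particular product's algorithm.

    import secrets
    import string

    # Build a 20-character password from letters, digits, and punctuation
    # using a cryptographically strong random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)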

Bonus: Be vigilant. Wherever you can introduce alerts about how your accounts and data are being used, enable them. For example, does your credit card offer the option to email you when a purchase is made? Perhaps you only care about overseas purchases, or purchases above a certain amount, or at gas stations. The point is to put your service providers to work for you, such that they give you information that informs your security posture. If you learn of a suspicious event and react in time, you can potentially limit or eliminate the damage through swift personal response.

There are many other considerations for individuals, especially with respect to resisting targeted attacks. I didn't address resisting social engineering, phishing, and the like, but I believe that is well-covered elsewhere. To counter the general opportunistic attacker, these are the steps I would recommend to individuals and small businesses.

Wednesday, October 28, 2015

A Different Spin on the Air War Against IS

Sunday evening 60 Minutes aired a segment titled Inside the Air War. The correspondent was David Martin, whose biography includes the fact that he served as a naval officer during the Vietnam War. The piece concluded with the following exchange and commentary:

On the day we watched the B-1 strike, that same bomber was sent to check out a report of a single ISIS sniper firing from the top of a building.

Weapons officer: The weapon will time out directly in between the two buildings.

This captain was one of the weapons officers in the cockpit.

David Martin: B-1 bomber.

Weapons officer: Yes sir.

David Martin: All that technology.

Weapons officer: Yes sir.

David Martin: All that fire power. One sniper down on the ground.

I thought the captain's next words were right on target:

Weapons officer: Sir, I think if it was you or me on the ground getting shot at by that sniper we would take any asset available to make sure we were no longer getting, you know, engaged by that sniper. So, if I get a call and they say they're getting shot at, and there's potential loss of friendly life, I am absolutely gonna drop a weapon on that sniper.

It's clear that Mr. Martin was channeling the Vietnam experience of heavily trained pilots flying multi-million dollar airplanes, dropping millions of dollars of bombs on the Ho Chi Minh trail, trying to stop porters carrying supplies on their backs and on bicycles. I understand that concern and I share it. However, I'd like to offer another interpretation.

The ability to dynamically retask in-air assets is a strength of American airpower. This episode involved retasking a B-1 that had already completed its primary mission. Putting that asset to use again alleviated the need to launch another aircraft.

However:

By the time the B-1 arrived overhead the sniper was gone.

Weapons officer: What we did, however, find though was a tunnel system. So, in this case we dropped weapons on all the entry points that were associated with that tunnel.

Six 500-pound bombs.

Weapons officer: It was actually a perfect shack on the target.

This could be interpreted as a failure, because the sniper wasn't killed. However, in another example of retasking and dynamic intelligence, the B-1 was able to destroy a tunnel system. This again prevented the launch of another aircraft to accomplish that new mission.

These are features of the 60 Minutes story that were not conveyed by the on-air narrative, but which I observed based on my Air Force experience. They don't change the strategic questions concerning the role of airpower in theatre, but it is important to recognize the flexibility and dynamism demonstrated in these incidents.

Monday, October 19, 2015

South Korea Signs Up to Cyber Theft Pledge

On Friday the Obama administration secured its second win toward establishing a new norm in cyberspace. The Joint Fact Sheet published by the White House includes the following language:

"no country should conduct or knowingly support cyber-enabled theft of intellectual property, trade secrets, or other confidential business information with the intent of providing competitive advantages to its companies or commercial sectors;" (emphasis added)

This excerpt, as well as other elements of the agreement, mirrors language I covered in my Brookings piece To Hack, Or Not to Hack? I recommend reading that article to get my full take on the importance of this language, including the bold elements.

It's likely many readers don't think of South Korea as an economic threat to the US. While South Korean operations are conducted at a fraction of the scale of their Chinese neighbors', ROK spies remain busy. In January Shane Harris wrote a great story titled Our South Korean Allies Also Hack the U.S.—and We Don’t Seem to Care. It contains gems like the following:

From 2007 to 2012, the Justice Department brought charges in at least five major cases involving South Korean corporate espionage against American companies. Among the accused was a leading South Korean manufacturer that engaged in what prosecutors described as a “multi-year campaign” to steal the secret to DuPont’s Kevlar, which is used to make bulletproof vests...

All of the cases involved corporate employees, not government officials, but the technologies that were stolen had obvious military applications. South Korean corporate spies have targeted thermal imaging devices and prisms used for guidance systems on drones...

But South Korea has gone after commercial tech, as well. A 2005 report published by Cambridge University Press identified South Korea as one of five countries, along with China and Russia, that had devoted “the most resources to stealing Silicon Valley technology.”

I commend the administration for securing a "cyber theft pledge" from another country. Whether it will hold is another issue. Just today there is reporting claiming that China is still targeting US companies in order to benefit Chinese companies. I believe it is too soon to make a judgment.

I'm also watching to see which countries besides the US approach China, asking for similar "cyber theft pledges." With President Xi visiting the UK soon, will we see Prime Minister Cameron ask that China stop stealing commercial secrets from UK companies?

On a related note, I've encountered several people recently who were not aware of the excellent annual Targeting US Technologies report series by the US Defense Security Service. They are posted here. The most recent was published in August 2015.

Monday, October 05, 2015

For the PLA, Cyber War is the Battle of Triangle Hill

In June 2011 I wrote a blog post with the ever-polite title China's View Is More Important Than Yours. I was frustrated with the Western-centric, inward-focused view of many commentators, who put themselves at the center of debates over digital conflict, neglecting the possibility that other parties could perceive the situation differently. I remain concerned that while Western thinkers debate war using Western, especially Clausewitzian, models, Eastern adversaries, including hybrid Eastern-Western cultures, perceive war in their own terms.

I wrote in June 2011:

The Chinese military sees Western culture, particularly American culture, as an assault on China, saying "the West uses a system of values (democracy, freedom, human rights, etc.) in a long-term attack on socialist countries...

Marxist theory opposes peaceful evolution, which... is the basic Western tactic for subverting socialist countries" (pp 102-3). They believe the US is conducting psychological warfare operations against socialism and consider culture as a "frontier" that has extended beyond American shores into the Chinese mainland.

The Chinese therefore consider control of information to be paramount, since they do not trust their population to "correctly" interpret American messaging (hence the "Great Firewall of China"). In this sense, China may consider the US as the aggressor in an ongoing cyberwar.

Today, thanks to a Tweet by Jennifer McArdle, I noticed a May 2015 story featuring a translation of a People's Daily article. The English translation is posted as Cybersovereignty Symbolizes National Sovereignty.

I recommend reading the whole article, but the following captures the spirit of the message:

Western hostile forces and a small number of “ideological traitors” in our country use the network, and relying on computers, mobile phones and other such information terminals, maliciously attack our Party, blacken the leaders who founded the New China, vilify our heroes, and arouse mistaken thinking trends of historical nihilism, with the ultimate goal of using “universal values” to mislead us, using “constitutional democracy” to throw us into turmoil, use “colour revolutions” to overthrow us, use negative public opinion and rumours to oppose us, and use “de-partification and depoliticization of the military” to upset us.

This article demonstrates that, four years after my first post, there are still elements, at least in the PLA, who believe that China is fighting a cyber war, and that the US started it.

I thought the last line from the PLA Daily article was especially revealing:

Only if we act as we did at the time of the Battle of Triangle Hill, are riveted to the most forward position of the battlefield and the fight in this ideological struggle, are online “seed machines and propaganda teams”, and arouse hundreds and thousands in the “Red Army”, will we be able to be good shock troops and fresh troops in the construction of the “Online Great Wall”, and will we be able to endure and vanquish in this protracted, smokeless war.

The Battle of Triangle Hill was an engagement during the Korean War, with Chinese forces fighting American, South Korean, Ethiopian, and Colombian forces. Both sides suffered heavy losses over a protracted engagement, although the Chinese appear to have lost more and viewed their attrition strategy as worthwhile. It's ominous that this PLA editorial writer decided to cite a battle between US and Chinese forces to communicate his point about online conflict, but it should make it easier for American readers to grasp the seriousness of the issue in Chinese minds.

Saturday, October 03, 2015

Personal Info Stolen? Seven Response Steps

Yesterday on Bloomberg West, host Emily Chang reported on a breach that affected her personally identifiable information (PII). She asked what she should do now that she is a victim of data theft. This is my answer.

First, I recommend changing passwords for any accounts associated with the breached entities.

Second, if you used the same passwords from the breached entities at unrelated sites, change passwords at those other sites.

Third, if any of those entities offer two-factor authentication, enable it. This likely involves getting a code via text message or using an app that generates codes.
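
For the curious, those code-generating apps typically implement the TOTP standard (RFC 6238). The following is a minimal Python sketch of the idea, using only the standard library; the secret shown is a made-up example, not one tied to any real account.

    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, digits=6, period=30):
        # Decode the shared secret the site provides during 2FA enrollment.
        key = base64.b32decode(secret_base32.upper())
        # Count 30-second intervals since the Unix epoch.
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation per RFC 4226, keeping the last six digits.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret; prints a 6-digit code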

Fourth, read Brian Krebs' post How I Learned to Stop Worrying and Embrace the Security Freeze. It's a personal decision whether to go all the way and enable a security freeze. At a minimum, I recommend that everyone who has been a PII or credit data theft victim enable a "fraud alert." Why? It's free, and you can sign up online with one credit bureau and the others will enable it as well. The downside is that it expires 90 days later, unless you re-enable it. So, set a reminder in your calendar app to renew before the 90 days expire.

Fifth, create a schedule to periodically check your credit reports. Theft victims usually get credit monitoring for free, but everyone should take advantage of AnnualCreditReport.com, the FTC-authorized place to order credit reports, once per year, for free. For example, get one bureau's report in January, a second in May, the third in September, and repeat with the first the next January.

Sixth, visit your credit, investing, and bank Web sites, and enable every kind of monitoring and alerting you can handle. I like to know about every purchase, withdrawal, deposit, etc. via email. Also keep a close eye on your statements for odd purchases.

Last, secure your email. Email is the key to your online existence. Use a provider that takes security seriously and provides two-factor authentication.

Good luck!

Tuesday, September 29, 2015

Attribution: OPM vs Sony

I read Top U.S. spy skeptical about U.S.-China cyber agreement based on today's Senate Armed Services Committee hearing titled United States Cybersecurity Policy and Threats. It contained this statement:

U.S. officials have linked the OPM breach to China, but have not said whether they believe its government was responsible.

[Director of National Intelligence] Clapper said no definite statement had been made about the origin of the OPM hack since officials were not fully confident about the three types of evidence that were needed to link an attack to a given country: the geographic point of origin, the identity of the "actual perpetrator doing the keystrokes," and who was responsible for directing the act.

I thought this was interesting for several reasons. First, does DNI Clapper mean that the US government has not made an official statement regarding attribution for China and OPM because all "three types of evidence" are missing, or do we have one, or perhaps two? If that is the case, which elements do we have, and not have?

Second, how specific is the "actual perpetrator doing the keystrokes"? Did DNI Clapper mean he requires the Intelligence Community to identify a named person, such that the IC knows the responsible team?

Third, and perhaps most importantly, contrast the OPM case with the DPRK hack against Sony Pictures Entertainment. Assuming that DNI Clapper and the IC applied these "three types of evidence" for SPE, that means the attribution included the geographic point of origin, the identity of the "actual perpetrator doing the keystrokes," and the identity of the party directing the attack, which was the DPRK. The DNI mentioned "broad consensus across the IC regarding attribution," which enabled the administration to apply sanctions in response.

For those wondering if the DNI is signalling a degradation in attribution capabilities, I direct you to his statement, which says in the attribution section:

Although cyber operations can infiltrate or disrupt targeted ICT networks, most can no longer assume their activities will remain undetected indefinitely. Nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.

I was pleased to see the DNI refer to the revolution in private sector and security intelligence capabilities.

Sunday, September 13, 2015

Good Morning Karen. Cool or Scary?

Last month I spoke at a telecommunications industry event. The briefer before me showed a video by the Hypervoice Consortium, titled Introducing Human Technology: Communications 2025. It consists of a voiceover by a 2025-era Siri-like assistant, speaking to her owner, "Karen." The assistant describes what's happening with Karen's household. Fifteen seconds into the video, the assistant says:

The report is due today. I've cleared your schedule so you can focus. Any attempt to override me will be politely rebuffed.

I was already feeling uncomfortable with the scenario, but that is the point at which I really started to squirm. I'll leave it to you to watch the rest of the video and report how you feel about it.

My general conclusion was that I'm wary of putting so much trust in a platform that is likely to be targeted by intruders, such that they can manipulate so many aspects of a person's life. What do you think?

By the way, the briefer before me noted that every vision of the future appears to involve solving the "low on milk problem."

Monday, September 07, 2015

Are Self-Driving Cars Fatally Flawed?

I read the following in the Guardian story Hackers can trick self-driving cars into taking evasive action.

Hackers can easily trick self-driving cars into thinking that another car, a wall or a person is in front of them, potentially paralysing it or forcing it to take evasive action.

Automated cars use laser ranging systems, known as lidar, to image the world around them and allow their computer systems to identify and track objects. But a tool similar to a laser pointer and costing less than $60 can be used to confuse lidar...

The following appeared in the IEEE Spectrum story Researcher Hacks Self-driving Car Sensors.

Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles...

Petit acknowledges that his attacks are currently limited to one specific unit but says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.” 

I had the following reactions to these stories.

First, it's entirely possible that self-driving car manufacturers know about this attack model. They might have decided that it's worth producing cars despite the technical vulnerability. For example, there is no defense in WiFi against jamming of the RF spectrum. There are also non-RF jamming methods to disrupt WiFi, as detailed here. Nevertheless, WiFi is everywhere, but lives usually don't depend on it.

Second, researcher Jonathan Petit appears to have tested an IBEO Lux lidar unit and not a real self-driving car. We don't know, from the Guardian or IEEE Spectrum articles at least, how a Google self-driving car would handle this attack. Perhaps the vendors have already compensated for it.

Third, these articles may undermine one of the presumed benefits of self-driving cars: that they are supposed to be safer than human drivers. If self-driving car technology is vulnerable to an attack not found in driver-controlled cars, that is a problem.

Fourth, does this attack mean that driver-controlled cars with similar technology are also vulnerable, or will be? Are there corresponding attacks for systems that detect obstacles on the road and trigger the brakes before the driver can physically respond?

Last, these articles demonstrate the differences between safety and security. Safety, in general, is a discipline designed to improve the well-being of people facing natural, environmental, mindless threats. Security, in contrast, is designed to counter intelligent, adaptive adversaries. I am predisposed to believe that self-driving car manufacturers have focused on the safety aspects of their products far more than the security aspects. It's time to address that imbalance.

Friday, August 14, 2015

Top Ten Books Policymakers Should Read on Cyber Security

I've been meeting with policymakers of all ages and levels of responsibility during the last few months. Frequently they ask "what can I read to better understand cyber security?" I decided to answer them collectively in this quick blog post.

By posting these, I am not endorsing everything they say (with the exception of the last book). On balance, however, I think they provide a great introduction to current topics in digital security.

  1. Cybersecurity and Cyberwar: What Everyone Needs to Know by Peter W. Singer and Allan Friedman
  2. Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon by Kim Zetter
  3. @War: The Rise of the Military-Internet Complex by Shane Harris
  4. China and Cybersecurity: Espionage, Strategy, and Politics in the Digital Domain by Jon R. Lindsay, Tai Ming Cheung, and Derek S. Reveron
  5. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World by Bruce Schneier
  6. Spam Nation: The Inside Story of Organized Cybercrime-from Global Epidemic to Your Front Door by Brian Krebs
  7. Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It by Marc Goodman
  8. Chinese Industrial Espionage: Technology Acquisition and Military Modernisation by William C. Hannas, James Mulvenon, and Anna B. Puglisi 
  9. Cyber War Will Not Take Place by Thomas Rid
  10. The Practice of Network Security Monitoring: Understanding Incident Detection and Response by Richard Bejtlich (use code NSM101 to save 30%; I prefer the print copy!)

Enjoy!

Friday, August 07, 2015

Effect of Hacking on Stock Price, Or Not?

I read Brian Krebs' story Tech Firm Ubiquiti Suffers $46M Cyberheist just now. He writes:

Ubiquiti, a San Jose based maker of networking technology for service providers and enterprises, disclosed the attack in a quarterly financial report filed this week [6 August; RMB] with the U.S. Securities and Exchange Commission (SEC). The company said it discovered the fraud on June 5, 2015, and that the incident involved employee impersonation and fraudulent requests from an outside entity targeting the company’s finance department.

“This fraud resulted in transfers of funds aggregating $46.7 million held by a Company subsidiary incorporated in Hong Kong to other overseas accounts held by third parties,” Ubiquiti wrote. “As soon as the Company became aware of this fraudulent activity it initiated contact with its Hong Kong subsidiary’s bank and promptly initiated legal proceedings in various foreign jurisdictions. As a result of these efforts, the Company has recovered $8.1 million of the amounts transferred.”

Brian credits Brian Honan at CSO Online with noticing the disclosure yesterday.

This is a terrible crime that I would not wish upon anyone. My interest in this issue has nothing to do with Ubiquiti as a company, nor is it intended as a criticism of the company. The ultimate fault lies with the criminals who perpetrated this fraud. The purpose of this post is to capture some details for the benefit of analysis, history, and discussion.

The first question I had was: did this event have an effect on the Ubiquiti stock price? The FY fourth quarter results were released at 4:05 pm ET on Thursday 6 August 2015, after the market closed.

The "Fourth Quarter Financial Summary: listed this as the last bullet:

"GAAP net income and diluted EPS include a $39.1 million business e-mail compromise ("BEC") fraud loss as disclosed in the Form 8-K filed on August 6, 2015"

I assume the Form 8-K was published simultaneously with the earnings.

Next I found the following in this five-day stock chart.


5 day UBNT Chart (3-7 August 2015)

You can see the gap down from Thursday's closing price, on the right side of the chart. Was that caused by the fraud charge?

I looked to see what the financial press had to say. I found this Motley Fool article titled Why Ubiquiti Networks, Inc. Briefly Fell 11% on Friday, posted at 12:39 PM (presumably ET). However, this article had nothing to say about the fraud.

Doing a little more digging, I saw Seeking Alpha caught the fraud immediately, posting Ubiquiti discloses $39.1M fraud loss; shares -2.9% post-earnings at 4:24 PM (presumably ET). They noted that "accounting chief Rohit Chakravarthy has resigned." I learned that the company was already lacking a chief financial officer, so Mr. Chakravarthy was filling the role temporarily. Perhaps that contributed to the company falling victim to the ruse. Could Ubiquiti have been targeted for that reason?

I did some more digging, but it looks like the popular press didn't catch the issue until Brian Honan and Brian Krebs brought attention to the fraud angle of the earnings release, early today.

Next I listened to the archive of the earnings call. The call was a question-and-answer session, rather than a statement by management followed by Q and A. I listened to analysts ask about head count, South American sales, trademark names, shipping new products, and voice and video. Not until the 17 1/2 minute mark did an analyst ask about the fraud.

CEO Robert J. Pera said he was surprised no one had asked until that point in the call. He said he was embarrassed by the incident and it reflected "incredibly poor judgement and incompetence" by a few people in the accounting department.

Finally, returning to the stock chart, you see a gap down, but recovery later in the session. The market seems to view this fraud as a one-time event that will not seriously affect future performance. That is my interpretation, anyway. I wish Ubiquiti well, and I hope others can learn from their misfortune.

Update: I forgot to add this before hitting "post":

Ubiquiti had FY fourth quarter revenues of $145.3 million. The fraud represents a significant portion of that number. If Ubiquiti had earned ten times that in revenue, or more, would the fraud have required disclosure?

The disclosure noted:

"As a result of this investigation, the Company, its Audit Committee and advisors have concluded that the Company’s internal control over financial reporting is ineffective due to one or more material weaknesses."

That sounds like code for a Sarbanes-Oxley issue, so I believe they would have reported anyway, regardless of revenue-to-fraud proportions.

Tuesday, July 21, 2015

Going Too Far to Prove a Point

I just read Hackers Remotely Kill a Jeep on the Highway - With Me in It by Andy Greenberg. It includes the following:

"I was driving 70 mph on the edge of downtown St. Louis when the exploit began to take hold...

To better simulate the experience of driving a vehicle while it’s being hijacked by an invisible, virtual force, Miller and Valasek refused to tell me ahead of time what kinds of attacks they planned to launch from Miller’s laptop in his house 10 miles west. Instead, they merely assured me that they wouldn’t do anything life-threatening. Then they told me to drive the Jeep onto the highway. “Remember, Andy,” Miller had said through my iPhone’s speaker just before I pulled onto the I-40 on-ramp, “no matter what happens, don’t panic.”

As the two hackers remotely toyed with the air-conditioning, radio, and windshield wipers, I mentally congratulated myself on my courage under pressure. That’s when they cut the transmission.

Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.

At that point, the interstate began to slope upward, so the Jeep lost more momentum and barely crept forward. Cars lined up behind my bumper before passing me, honking. I could see an 18-wheeler approaching in my rearview mirror. I hoped its driver saw me, too, and could tell I was paralyzed on the highway.


“You’re doomed!” Valasek shouted, but I couldn’t make out his heckling over the blast of the radio, now pumping Kanye West. The semi loomed in the mirror, bearing down on my immobilized Jeep.

I followed Miller’s advice: I didn’t panic. I did, however, drop any semblance of bravery, grab my iPhone with a clammy fist, and beg the hackers to make it stop...

After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment." (emphasis added)

I had two reactions to this article:

1. It is horrifying that hackers can remotely take control of a vehicle. The auto industry has a lot of work to do. It's unfortunate that it takes private research and media attention to force a patch (which has now been published). Hopefully a combination of Congressional attention, product safety laws, and customer pressure will improve the security of the auto industry before lives and property are affected.

2. It is also horrifying to conduct a hacking "experiment" on I-40, with vehicles driving at 60 or more MPH, carrying passengers. It's not funny to put lives at risk, whether they are volunteers like the driver/author or other people on the highway.

Believing it is OK reflects the same juvenile thinking that motivated another "researcher," Chris Roberts, to apparently "experiment" with live airplanes, as reported by Wired and other news outlets.

Hackers are not entitled to jeopardize the lives of innocent people in order to make a point. They can prove their discoveries without putting others, who have not consented to be guinea pigs, at risk.

It would be a tragedy if the first death by physical-digital convergence occurs because a "security researcher" is "experimenting" in order to demonstrate a proof of concept.

Tuesday, June 30, 2015

My Security Strategy: The "Third Way"

Over the last two weeks I listened to and watched all of the hearings related to the OPM breach. During the exchanges between the witnesses and legislators, I noticed several themes. One theme presented the situation facing OPM (and other Federal agencies) as the following choice:

You can either 1) "secure your network," which is very difficult and going to "take years," due to "years of insufficient investment," or 2) suffer intrusions and breaches, which is what happened to OPM.

This struck me as an odd dichotomy. The reasoning appeared to be that because OPM did not make "sufficient investment" in security, a breach was the result.

In other words, if OPM had "sufficiently invested" in security, they would not have suffered a breach.

I do not see the situation in this way, for two main reasons.

First, there is a difference between an "intrusion" and a "breach." An intrusion is unauthorized access to a computing resource. A breach is the theft, alteration, or destruction of that computing resource, following an intrusion.

It therefore follows that one can suffer an intrusion, but not suffer a breach.

One can avoid a breach following an intrusion if the security team can stop the adversary before he accomplishes his mission.

Second, there is no point at which any network is "secure," i.e., intrusion-proof. It is more likely one could operate a breach-proof network, but that is not completely attainable, either.

Still, the most effective strategy is a combination of preventing as many intrusions as possible, complemented by an aggressive detection and response operation that improves the chances of avoiding a breach, or at least minimizes the impact of a breach.

This is why I call "detection and response" the "third way" strategy. The first way, "secure your network" by making it "intrusion-proof," is not possible. The second way, suffer intrusions and breaches, is not acceptable. Therefore, organizations should implement a third way strategy that stops as many intrusions as possible, but detects and responds to those intrusions that do occur, prior to their progression to breach status.

My Prediction for Top Gun 2 Plot

We've known for about a year that Tom Cruise is returning to his iconic "Maverick" role from Top Gun, and that drone warfare will be involved. A few days ago we heard a few more details in this Collider story:

[Producer David Ellison]: There is an amazing role for Maverick in the movie and there is no Top Gun without Maverick, and it is going to be Maverick playing Maverick. It is I don’t think what people are going to expect, and we are very, very hopeful that we get to make the movie very soon. But like all things, it all comes down to the script, and Justin is writing as we speak.

[Interviewer]: You’re gonna do what a lot of sequels have been doing now which is incorporate real use of time from the first one to now.

ELLISON and DANA GOLDBERG: Absolutely...

ELLISON:  As everyone knows with Tom, he is 100% going to want to be in those airplanes shooting it practically. When you look at the world of dogfighting, what’s interesting about it is that it’s not a world that exists to the same degree when the original movie came out. This world has not been explored. It is very much a world we live in today where it’s drone technology and fifth generation fighters are really what the United States Navy is calling the last man-made fighter that we’re actually going to produce so it’s really exploring the end of an era of dogfighting and fighter pilots and what that culture is today are all fun things that we’re gonna get to dive into in this movie.

What could the plot involve?

First, who is the adversary? You can't have dogfighting without a foe. Consider the leading candidates:

  • Russia: Maybe. Nobody is fond of what President Putin is doing in Ukraine.
  • Iran: Possible, but Hollywood types are close to the Democrats, and they will not likely want to upset Iran if Secretary Kerry secures a nuclear deal.
  • China: No way. Studios want to release movies in China, and despite the possibility of aerial conflict in the East or South China Seas, no studio is going to make China the bad guy. In fact, the studio will want to promote China as a good guy to please that audience.
  • North Korea: No way. Prior to "The Interview," this was a possibility. Not anymore!

My money is on an Islamic terrorist group, either unnamed, or possibly Islamic State. They don't have an air force, you say? This is where the drone angle comes into play.

Here is my prediction for the Top Gun 2 plot.

Oil tankers are trying to pass through the Gulf of Aden, or maybe the Strait of Hormuz, carrying their precious cargo. Suddenly a swarm of small, yet armed, drones attack and destroy the convoy, setting the oil ablaze in a commercial and environmental disaster. The stock market suffers a huge drop and gas prices skyrocket.

The US Fifth Fleet, and its Chinese counterpart, performing counter-piracy duties nearby, rush to rescue the survivors. They set up joint patrols to guard other commercial sea traffic. Later the Islamic group sends another swarm of drones to attack the American and Chinese ships. This time the enemy includes some sort of electronic warfare-capable drones that jam US and Chinese GPS, communications, and computer equipment. (I'm seeing a modern "Battlestar Galactica" theme here.) American and Chinese pilots die, and their ships are heavily damaged. (By the way, this is Hollywood, not real life.)

The US Navy realizes that its "net-centric," "technologically superior" force can't compete with this new era of warfare. Cue the similarities with the pre-Fighter Weapons School, early Vietnam situation described in the first scenes at Miramar in the original movie. (Remember, a 12-1 kill ratio in Korea, 3-1 in early Vietnam due to reliance on missiles and atrophied dogfighting skills, back to 12-1 in Vietnam after Top Gun training?)


The US Navy decides it needs to bring back someone who thinks unconventionally in order to counter the drone threat and resume commercial traffic in the Gulf. They find Maverick, barely hanging on to a job teaching at a civilian flight school. His personal life is a mess, and he was kicked out of the Navy during the first Gulf War in 1991 for breaking too many rules. Now the Navy wants him to teach a new generation of pilots how to fight once their "net-centric crutches" disappear.

You know what happens next. Maverick returns to the Navy as a contractor. Top Gun is now the Naval Strike and Air Warfare Center (NSAWC) at NAS Fallon, Nevada. The Navy retired his beloved F-14 in 2006, so there is a choice to be made about what aircraft awaits him in Nevada. I see three possibilities:

1) The Navy resurrects the F-14 because it's "not vulnerable" to the drone electronic warfare. This would be cool, but they aren't going to be able to fly American F-14s due to their retirement. CGI maybe?

2) The Navy flies the new F-35, because it's new and cool. However, the Navy will probably not have any carrier-capable F-35Cs ready to fly for filming. CGI again?

3) The Navy flies the F/A-18 Super Hornet. This is most likely, because producers could film live operations as they did in the 1980s.

Beyond the aircraft issues, I expect themes involving relevance as one ages, re-integration with military culture, and possibly friction between members of the joint US-China task force created to counter the Islamic threat.

In the end, thanks to the ingenuity of Maverick's teaching and tactics, the Americans and Chinese prevail over the Islamic forces. It might require Maverick to make the ultimate sacrifice, showing he's learned that warfare is a team sport, and that he really misses Goose. The Chinese name their next aircraft carrier the "Pete Mitchell" in honor of Maverick's sacrifice. (Forget calling it the "Maverick" -- too much rebellion for the CCP.)

I'm looking forward to this movie.

Saturday, June 27, 2015

Hearing Witness Doesn't Understand CDM

This post is a follow-up to my earlier post on CDM. Since that post, I have been watching hearings on the OPM breach.

On Wednesday 24 June a Subcommittee of the House Committee on Homeland Security held a hearing titled DHS’ Efforts to Secure .Gov.

A second panel (starts in the Webcast around 2 hours 20 minutes) featured Dr. Daniel M. Gerstein, a former DHS official now with RAND, as its sole witness.

During his opening statement, and in his written testimony, he made the following comments:

"The two foundational programs of DHS’s cybersecurity program are EINSTEIN (also called EINSTEIN 3A) and CDM. These two systems are designed to work in tandem, with EINSTEIN focusing on keeping threats out of federal networks and CDM identifying them when they are inside government networks.

EINSTEIN provides a perimeter around federal (or .gov) users, as well as select users in the .com space that have responsibility for critical infrastructure. EINSTEIN functions by installing sensors at Web access points and employs signatures to identify cyberattacks.

CDM, on the other hand, is designed to provide an embedded system of sensors on internal government networks. These sensors provide real-time capacity to sense anomalous behavior and provide reports to administrators through a scalable dashboard. It is composed of commercial-off-the-shelf equipment coupled with a customized dashboard that can be scaled for administrators at each level." (emphasis added)

All of the text in bold is false. CDM is not "identifying [threats] when they are inside government networks." CDM is not "an embedded system of sensors on internal government networks" looking for threat actors.

Why does Dr. Gerstein so misunderstand the CDM program? The answer is found in the next section of his testimony, reproduced below.

"CDM operates by providing

          federal departments and agencies with capabilities and tools that identify
          cybersecurity risks on an ongoing basis, prioritize these risks based upon
          potential impacts, and enable cybersecurity personnel to mitigate the
          most significant problems first. Congress established the CDM program
          to provide adequate, risk-based, and cost-effective cybersecurity and
          more efficiently allocate cybersecurity resources." (emphasis added)

The indented section is reproduced from the DHS CDM Website, as footnoted in Dr. Gerstein's statement.

The answer to my question of misunderstanding involves two levels of confusion.

The first level of confusion is a result of the CDM description, which confuses risks with vulnerabilities. Basically, the CDM description should say vulnerabilities instead of risks. CDM, now known as Continuous Diagnostics and Mitigation, is a "find and fix flaws (i.e., vulnerabilities) faster" program.

In other words, the CDM description should say:

"CDM gives federal departments and agencies with capabilities and tools that identify cybersecurity vulnerabilities on an ongoing basis, prioritize these vulnerabilities based upon potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first."

The second level of confusion is a result of Dr. Gerstein confusing risks with threats. It is clear that when Dr. Gerstein reads the CDM description and its mention of "risks," he thinks CDM is looking for threat actors. CDM does not look for threat actors; CDM looks for vulnerabilities. Vulnerabilities are flaws in software or configuration that make it possible for intruders to gain unauthorized access.
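
To make the distinction concrete, here is a minimal illustrative sketch in Python contrasting the two kinds of records. The structures and values are invented for illustration; they are not actual CDM or intrusion detection output.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class VulnerabilityFinding:
    """What a CDM-style tool reports: a flaw in an asset you own."""
    host: str
    flaw: str        # e.g., a missing patch or a weak configuration
    severity: str

@dataclass
class ThreatDetection:
    """What a detection and response operation produces: an actor's activity."""
    observed: datetime
    source: str      # the intruder's infrastructure
    target: str
    behavior: str    # what the intruder actually did

# CDM would surface the first record; only a detection capability
# (sensors plus analysts) would surface the second.
finding = VulnerabilityFinding("hr-web-01", "unpatched web framework", "high")
detection = ThreatDetection(datetime(2015, 6, 24, 3, 12), "203.0.113.7",
                            "hr-web-01", "outbound beaconing to a suspected C2 server")
print(finding)
print(detection)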

As I wrote in my CDM post, we absolutely need the capability to find and fix flaws faster. We need CDM. However, do not confuse CDM with the operational capability to detect and remove threat actors. CDM could be deployed across the entire Federal government, but it would be an accident if a security analyst noticed an intruder using a CDM tool.

Essentially, the government needs to implement My Federal Government Security Crash Program to detect and remove threat actors.

It is critical that staffers, lawmakers, and the public understand what is happening, and not be lulled into a false sense of security due to misunderstanding these concepts.

Saturday, June 20, 2015

The Tragedy of the Bloomberg Code Issue

Last week I Tweeted about the Bloomberg "code" issue. I said I didn't know how to think about it. The issue is a 28,000+ word document, enough to qualify as a book, that's been covered by news outlets like the Huffington Post.

I approached the document with an open mind. When I opened my mailbox last week, I didn't expect to get a 112-page magazine devoted to explaining the importance of software to non-technical people. It was a welcome surprise.

This morning I decided to try to read some of the issue. (It's been a busy week.) I opened the table of contents, shown at left. It took me a moment, but I realized none of the article titles mentioned security.

Next I visited the online edition, which contains the entire print version and adds additional content. I searched the text for the word "security." These are the results:

Security research specialists love to party.

“I have been asked if I was physical security (despite security wearing very distinctive uniforms),” wrote Erica Joy Baker on Medium.com who has worked, among other places, at Google.

Can we not rathole on Mailinator before we talk overall security?

We didn’t talk about password length, the number of letters and symbols necessary for passwords to be secure, or whether our password strategy on this site will fit in with the overall security profile of the company, which is the responsibility of a different division. 

Ditto many of the security concerns that arise when building websites, the typical abuses people perpetrate.

“First, I needed to pass everything through the security team, which was five months of review,” TMitTB says, “and then it took me weeks to get a working development environment, so I had my developers sneaking out to Starbucks to check in their code. …”

In Fortran, and I ask to see your security clearance.

If you're counting, that's eight instances of "security" in seven sentences. There's no mention of "software security." There's a small discussion about "e-mail validation," but it's printed to show how broken software development meetings can be.

Searching for "hack" yields two references to "Hacker News" and this sentence talking about the perils of the PHP programming language:

Everything was always broken, and people were always hacking into my sites.

There is one result for "breach," but it has nothing to do with security incidents. The only time the word "incident" appears is in a sentence talking about programming conference attendees behaving badly.
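
If you want to repeat the search programmatically rather than with a browser's find function, here is a minimal Python sketch. It assumes the article text has been saved locally; the filename below is made up.

from collections import Counter

KEYWORDS = ["security", "hack", "breach", "incident"]

# "what_is_code.txt" is a hypothetical local copy of the article text.
with open("what_is_code.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Simple substring counts, matching a browser-style search.
counts = Counter({word: text.count(word) for word in KEYWORDS})
for word, total in counts.most_common():
    print(word, total)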

In brief, a 112-page magazine devoted to the importance of software has absolutely nothing useful to say about software security. Arguably, it says nothing at all about software security.

When someone communicates, what he or she doesn't say can be as important as what he or she does say.

In the case of this magazine, it's clear that software security is not on the mind of the professional programmer who wrote the issue. It's also not a concern of the editor or any of the team that contributed to it.

From what I have seen, that neglect is not unique to Bloomberg.

That is the tragedy of the Bloomberg code issue, and it remains a contributing factor to the decades of breaches we have been suffering.

Friday, June 19, 2015

Air Force Enlisted Ratings Remain Dysfunctional

I just read Firewall 5s are history: Quotas for top ratings announced in Air Force Times. It describes an effort to eliminate the so-called "firewall 5" policy with a new "forced distribution" approach:

The Air Force's old enlisted promotion system was heavily criticized by airmen for out-of-control grade inflation that came with its five-point numerical rating system. There were no limits on how many airmen could get the maximum: five out of five points [aka "firewall 5"]. As a result nearly everyone got a 5 rating.

As more and more raters gave their airmen 5s on their EPR [Enlisted Performance Report], the firewall 5 became a common occurrence received by some 90 percent of airmen. And this meant the old EPR was effectively useless at trying to differentiate between levels of performance...

Under the new system, [Brig. Gen. Brian Kelly, director of military force management policy] said in a June 12 interview at the Pentagon, the numerical ratings are gone — and firewall 5s will be impossible...

The quotas — or as the Air Force calls them, "forced distribution" — will be one of the final elements to be put in place in the service's massive overhaul of its enlisted promotion process, which has been in the works for three years...

Only the top 5 percent, at most, of senior airmen, staff sergeants and technical sergeants who are up for promotion to the next rank will be deemed "promote now" and get the full 250 EPR points...

The quotas for the next tier of airmen — who will be deemed "must promote" and will get 220 out of 250 EPR points — will differ based on their rank. Kelly said that up to 15 percent of senior airmen who are eligible for promotion to staff sergeant can receive a "must promote" rating, and up to 10 percent of staff sergeants and tech sergeants up for promotion to technical and master sergeant can get that rating, and the accompanying 220 points.

The next three ratings — "promote," "not ready now" and "do not promote" — will each earn airmen 200, 150 and 50 points, respectively. But there will be no limit on how many airmen can get those ratings. (emphasis added)

I am not an expert on the enlisted performance rating system. In some ways, I think the EPR is superior to the corresponding system for officers, because enlisted personnel take tests whose scores influence their promotion potential.

However, reading this story reminded me of my 2012 post How to Kill Teams Through "Stack Ranking", which cited a Vanity Fair article about Microsoft's old promotion system:

[Author Kurt] Eichenwald’s conversations reveal that a management system known as “stack ranking” — a program that forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor — effectively crippled Microsoft’s ability to innovate.

“Every current and former Microsoft employee I interviewed — every one — cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees,” Eichenwald writes.

This sounds uncomfortably like the new Air Force enlisted "forced distribution" system.

I was also reminded of another of my 2012 posts, Bejtlich's Thoughts on "Why Our Best Officers Are Leaving", which stressed the finding that

[V]eterans were shocked to look back at how “archaic and arbitrary” talent management was in the armed forces. Unlike industrial-era firms, and unlike the military, successful companies in the knowledge economy understand that nearly all value is embedded in their human capital. (emphasis added)

I am sure the Air Force is doing what it thinks is right by changing the EPR system. However, it's equivalent to making changes in a centrally planned economy, without abandoning central planning.

It's time for the Air Force, and the rest of the military, to discard their centrally planned, promote-the-paper (instead of the person), involuntary assignment process.

In its place I recommend one that openly and competitively advertises and offers positions; gives pay, hiring, and firing authority to the local manager; and adopts similar aspects of sound private sector personnel management.

Today's knowledge economy demands that military personnel be treated as unique individuals, not industrial age interchangeable parts. Our military talent is one of the few competitive advantages we possess over peer rivals. We must not squander it with dysfunctional promotion systems.


Saturday, June 13, 2015

Redefining Breach Recovery

For too long, the definition of "breach recovery" has focused on returning information systems to a trustworthy state. The purpose of an incident response operation was to scope the extent of a compromise, remove the intruder if still present, and return the business information systems to pre-breach status. This is completely acceptable from the point of view of the computing architecture.

During the last ten years we have witnessed an evolution in thinking about the likelihood of breaches. When I published my first book in 2004, critics complained that my "assumption of breach" paradigm was defeatist and unrealistic. "Of course you could keep intruders out of the network, if you combined the right controls and technology," they claimed. A decade of massive breaches has demonstrated that preventing all intrusions is impossible, given the right combination of adversary skill and persistence and a lack of proper defensive strategy and operations.

We now need to move beyond treating breach recovery as solely a technical and computing problem. Every organization needs to think about how to recover the interests of its constituents, should the organization lose their data to an adversary. Data custodians need to change their business practices such that breaches are survivable from the perspective of the constituent. (By constituent I mean customers, employees, partners, vendors -- anyone dependent upon the practices of the data custodian.)

Compare the following scenarios.

If an intruder compromises your credit card, it is fairly painless for a consumer to recover. The financial penalty is $50 or less. The bank or credit card company handles replacing the card. Credit monitoring and related services are generally adequate for limiting damage. Your new credit card is as functional as the old credit card.

If an intruder compromises your Social Security number, recovery may not be possible. The financial penalties are unbounded. There is no way to replace a stolen SSN. Credit monitoring and related services can only alert citizens to derivative misuse, and the victim must do most of the work to recover -- if possible. The citizen is at risk wherever other data custodians rely on SSNs for authentication purposes.

This SSN situation, and others, must change. All organizations that act as data custodians must evaluate the data in their control, and work to improve the breach recovery status of their constituents. For SSNs, this means eliminating reliance on their secrecy as a means of authentication. This will be a massive undertaking, but it is necessary.

It's time to redefine what it means to recover from a breach, and put constituent benefit at the heart of the matter, where it belongs.

Wednesday, June 10, 2015

My Federal Government Security Crash Program

In the wake of recent intrusions into government systems, multiple parties have been asking for my recommended courses of action.

In 2007, following public reporting on the 2006 State Department breach, I blogged When FISMA Bites, Initial Thoughts on Digital Security Hearing, and What Should the Feds Do. These posts captured my thoughts on the government's response to the State Department intrusion.

The situation then mirrors the current one well: outrage over an intrusion affecting government systems, China suspected as the culprit, and questions regarding why the government's approach to security does not seem to be working.

Following that breach, the State Department hired a new CISO who pioneered the "continuous monitoring" program, now called "Continuous Diagnostics and Mitigation" (CDM). That CISO eventually left State for DHS, and brought CDM to the rest of the Federal government. He is now retired from Federal service, but CDM remains. Years later we're reading about another breach at the State Department, plus the recent OPM intrusions. CDM is not working.

My last post, Continuous Diagnostic Monitoring Does Not Detect Hackers, explained that although CDM is a necessary part of a security program, it should not be the priority. CDM is at heart a "Find and Fix Flaws Faster" program. We should not prioritize closing and locking doors and windows while there are intruders in the house. Accordingly, I recommend a "Detect and Respond" strategy first and foremost.

To implement that strategy, I recommend the following three-phase approach. All phases can run concurrently.

Phase 1: Compromise Assessment: Assuming the Federal government can muster the motivation, resources, and authority, the Office of Management and Budget (OMB), or another agency such as DHS, should implement a government-wide compromise assessment. The compromise assessment involves deploying teams across government networks to perform point-in-time "hunting" missions to find, and if possible, remove, intruders. I suspect the "remove" part will be more than these teams can handle, given the scope of what I expect they will find. Nevertheless, simply finding all of the intruders, or a decent sample, should inspire additional defensive activities, and give authorities a true "score of the game."
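
Hunting is much more than indicator matching, but as a minimal, purely illustrative Python sketch of one narrow slice of such a sweep, consider comparing file hashes collected from hosts against a set of known-bad indicators. The hostnames and hashes below are invented.

# Toy point-in-time sweep: flag hosts whose collected file hashes match
# known-bad indicators. Real hunting draws on far richer data and analysis.
known_bad = {"0123456789abcdef0123456789abcdef"}   # hypothetical indicator

collected = {
    "agency-host-01": ["d41d8cd98f00b204e9800998ecf8427e"],
    "agency-host-02": ["0123456789abcdef0123456789abcdef"],
}

for host, hashes in collected.items():
    hits = known_bad.intersection(hashes)
    if hits:
        print(host, "possible compromise, matched", hits)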

Phase 2: Improve Network Visibility: The following five points include actions to gain enhanced, enduring, network-centric visibility on Federal networks. While network-centric approaches are not a panacea, they represent one of the best balances between cost, effectiveness, and minimized disruption to business operations.

1. Accelerate the deployment of Einstein 3A, to instrument all Federal network gateways. Einstein is not the platform to solve the Federal government's network visibility problem, but given the current situation, some visibility is better than no visibility. If the inline, "intrusion prevention system" (IPS) nature of Einstein 3A is being used as an excuse for slowly deploying the platform, then the IPS capability should be disabled and the "intrusion detection system" (IDS) mode should be the default. Waiting until the end of 2016 is not acceptable. Equivalent technology should have been deployed in the late 1990s.

2. Ensure DHS and US-CERT have the authority to provide centralized monitoring of all deployed Einstein sensors. I imagine bureaucratic turf battles may have slowed Einstein deployment. "Who can see the data" is probably foremost among agency worries. DHS and US-CERT should be the home for centralized analysis of Einstein data. Monitored agencies should also be given access to the data, and DHS, US-CERT, and agencies should begin a dialogue on who should have ultimate responsibility for acting on Einstein discoveries.

3. Ensure DHS and US-CERT are appropriately staffed to operate and utilize Einstein. Collected security data is of marginal value if no one is able to analyze, escalate, and respond to the data. DHS and US-CERT should set expectations for the amount of time that should elapse from the time of collection to the time of analysis, and staff the IR team to meet those requirements.

4. Conduct hunting operations to identify and remove threat actors already present in Federal networks. Now we arrive at the heart of the counter-intrusion operation. The purpose of improving network visibility with Einstein (for lack of an alternative at the moment) is to find intruders and eliminate them. This operation should be conducted in a coordinated manner, not in a whack-a-mole fashion that facilitates adversary persistence. This should be coordinated with the "hunt" mission in Phase 1.

5. Collect metrics on the nature of the counter-intrusion campaign and devise follow-on actions based on lessons learned. This operation will teach Federal network owners lessons about adversary campaigns and the unfortunate realities of the state of their enterprise. They must learn how to improve the speed, accuracy, and effectiveness of their defensive campaign, and how to prioritize countermeasures that have the greatest impact on the opponent. I expect they would begin considering additional detection and response technologies and processes, such as enterprise log management, host-based sweeping, modern inspection platforms with virtual execution and detonation chambers, and related approaches.
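
As a minimal sketch of one such measurement, the Python below computes the collection-to-analysis delay from point 3, using invented timestamps and an assumed one-hour target. A real program would track many more measures, such as dwell time per intrusion and time to containment.

from datetime import datetime, timedelta

# Hypothetical (collected, analyzed) timestamp pairs for a day of alerts.
alerts = [
    (datetime(2015, 6, 10, 9, 0),  datetime(2015, 6, 10, 9, 45)),
    (datetime(2015, 6, 10, 11, 30), datetime(2015, 6, 10, 14, 5)),
]

TARGET = timedelta(hours=1)   # assumed expectation for time to analysis

delays = [analyzed - collected for collected, analyzed in alerts]
misses = sum(1 for d in delays if d > TARGET)
mean_delay = sum(delays, timedelta()) / len(delays)
print("mean delay:", mean_delay, "target misses:", misses)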

Phase 3: Continuous Diagnostics and Mitigation, and Related Ongoing Efforts: You may be surprised to see that I am not calling for an end to CDM. Rather, CDM should not be the focus of Federal security measures. It is important to improve Federal security through CDM practices, such that it becomes more difficult for adversaries to gain access to government computers. I am also a fan of the Trusted Internet Connection program, whereby the government is reducing the number of gateways to the Internet.

Note: I recommend anyone interested in details on this matter see my latest book, The Practice of Network Security Monitoring, especially chapter 9. In that chapter I describe how to run a network security monitoring operation, based on my experiences since the late 1990s.