First principles - red team blue team operations.
By Rick Howard
Sep 14, 2020

CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies, and technologies that senior cybersecurity executives wrestle with on a daily basis.



Note: This is the eleventh essay in a planned series that discusses the development of a general purpose cybersecurity strategy using the concept of first principles to build a strong and robust infosec program.

In 1986, I was a young captain in the U.S. Army. The powers that be assigned me as the signal officer to a field artillery battalion for the express purpose of helping the unit be successful at its National Training Center rotation. This was many years before the first Gulf War (January 1991), before we sent troops to Kosovo (1999), and long before the madness of 9/11 kicked in and its resulting 20-plus years of combat operations. In this time window, life in the military was pretty sweet. Nobody was shooting at us; President Reagan loved us; and we were flush with cash. The big baddies on the international front were the Soviets, and all the military experts thought that the next big military confrontation was going to be between the Bear and Uncle Sam in a massive tank-on-tank battle in Europe à la Tom Clancy’s Red Storm Rising.

The National Training Center in Fort Irwin, California, a thousand square miles of formidable desert terrain, is where the military went to train for operations of that scale. And it is where I got my first taste of Red Team/Blue Team operations, or in military parlance, OPFOR (Opposing Force) exercises. Instead of going to the field and pretending that an enemy was attacking you, the Army actually put an opposing force in the field to fight against you using the tactics of the Soviet Army. It was, and is, a brilliant idea: practice your skills and make mistakes in a highly controlled but pressure-filled environment so that you don’t make those mistakes when lives are on the line. Catch me in the bar at the next in-person DEFCON conference and I will share war stories about how the OPFOR ran circles around us for two weeks, basically kicking our ass at every turn.

Red-teaming and the road to sainthood: thinking like the adversary.

The Roman Catholic Church may have invented the concept in 1587 when Pope Sixtus V assigned the job of Devil’s advocate during the beatification process of St. Lawrence Justinian (1381-1456). The Advocatus Diaboli was to be the opposing force, the red team, to make sure that, according to Ellen Lloyd of Ancient Pages,

“no person received the honors of sainthood recklessly and too fast. Every potential weakness or objection to the saints’ canonization was raised and evaluated in order to ensure that only those who were truly worthy would be raised to the dignity of the altars.”

President Reagan used the concept as early as 1982 by forming a red team panel designed to anticipate every conceivable way the Soviets might try to get around arms control agreements. After the 1988 bombing of Pan Am 103, a Presidential Commission directed the FAA in 1990 to create a red team to replicate typical terrorist tactics, techniques, and procedures. As chairman of the nine-member Commission to Assess the Ballistic Missile Threat to the United States in 1998, Donald Rumsfeld used a red team approach to examine the same data available to the intelligence community and identify alternative scenarios.

In 2003, I was the commander of the Army’s Computer Emergency Response Team (ACERT), and we had discovered that the Chinese government was all over our networks like white on rice. It kind of felt like NTC all over again, with the Chinese running circles around us in multiple operations around the world. We lumped all of that activity together under a military umbrella code name: TITAN RAIN. I love cool-sounding code names. I think it is the reason I love cybersecurity so much. We have cool names for everything. This time though, it wasn’t an exercise and it wasn’t a dusty desert environment. It was our unclassified day-to-day working electronic network, the NIPRNET. In response, we, the blue team, built a defensive plan to counter the TITAN RAIN offensive campaign plan. Before we deployed it though, we wanted to test it. We emulated the entire NIPRNET on a cyber range in San Antonio, deployed the blue team’s defensive plan on it, and told our in-house red team to use TITAN RAIN’s tactics, techniques, and procedures to break through. When the red team couldn’t get it done, the Army leadership gave us a green light to deploy the blue team’s defensive plan on the NIPRNET.

Kriegsspiel and the military practice of pitting blue forces against red forces.

The origin of the red team and blue team names to indicate adversary and good-guy activity, respectively, isn't a random choice. We have the Prussian Army to thank for that. According to Peter Attia over at Medium,

“In the early 19th century, the Prussian army adopted war games to train its officers. One group of officers developed a battle plan, and another group assumed the role of the opposition, trying to thwart it. Using a tabletop game called Kriegsspiel (literally “wargame” in German), resembling the popular board game Risk, blue game pieces stood in for the home team—the Prussian army—since most Prussian soldiers wore blue uniforms. Red blocks represented the enemy forces—the red team—and the name has stuck ever since.”

Since the Prussians wore blue uniforms in real life and invented Kriegsspiel, the blue team has been the good guys ever since. I personally think that the red team blue team description is more closely related to the other famous Milton Bradley board game about war, Stratego, but if Peter thinks that Risk is a closer match, I’m not going to fight him on it.

Red-teaming and the related origins of pentesting.  

Fast forward again from the Prussians in the 1820s to the 1960s, when mainframe computers started to come online. It didn’t take long for computer experts to realize that these machines were vulnerable to abuse. Early designers of mainframes didn’t conceive of anything close to a threat model. They were still mostly concerned with getting the 1s and 0s moving in the right direction. At maybe the first cybersecurity conference ever, hosted by the System Development Corporation in California in 1965, 15,000 mainframe operators from around the world discussed all the ways in which these new machines could be “penetrated” by unsavory people. By the late 1960s and early 1970s, elite computer operators were passing around a paper authored by Dr. Willis Ware and others, known as the Ware report, that according to Edward Hunt at the College of William and Mary,

“... showed how spies could actively penetrate computers, steal or copy electronic files and subvert the devices that normally guard top secret information. The study touched off more than a decade of quiet activity by elite groups of computer scientists working for the Government who tried to break into sensitive computers. They succeeded in every attempt.”

These were the first penetration testers. In 1971, the U.S. Air Force contracted James Anderson to run tiger teams against its MULTICS operating system, the precursor to UNIX. His 1972 after-action report described a methodology to penetrate and compromise those systems that is fundamentally the basis for all penetration testing even today.

In 2020, the big difference between penetration testers and red teamers is that, in general, penetration testers are supposed to find any flaw in the system, similar to the Devil’s advocate, and they are allowed to pursue any courses of action that present themselves during the test. They are trying to reduce the attack surface by finding previously unknown weaknesses. In this regard, conducting penetration tests falls under the zero trust strategy umbrella. Network defenders aren’t trying to stop a specific adversary with a penetration test. They are actively trying to find holes in the deployed defensive posture.

Red teamers, on the other hand, generally follow known adversary attack campaigns. For example, according to the Mitre ATT&CK framework, the adversary group known as Cobalt Spider uses 31 attack techniques and five software tools to compromise its victims. A red team that attempts to verify that an organization’s network is protected against Cobalt Spider can use only those 31 attack techniques and five software tools, nothing else. It is similar to my TITAN RAIN days back in the Army. Since they are specifically looking to make sure that the Cobalt Spider attack campaign won’t be successful against their own networks, these red teaming operations would fall under the intrusion kill chain strategy umbrella.
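To scope an engagement that way, the red team first needs the list of techniques and software that ATT&CK attributes to the group. Below is a minimal sketch, assuming MITRE's public STIX bundle in the mitre/cti GitHub repository and the group name "Cobalt Group" (MITRE's entry for Cobalt Spider); since MITRE updates the framework regularly, the counts the script prints may not match the 31 and five cited above.

```python
# Minimal sketch: list the ATT&CK techniques and software attributed
# to one adversary group, from MITRE's public STIX data (assumed URL).
import json
import urllib.request

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

with urllib.request.urlopen(ATTACK_URL) as resp:
    objects = json.load(resp)["objects"]

by_id = {obj["id"]: obj for obj in objects}

# Find the intrusion-set object for the group we want to emulate.
group = next(obj for obj in objects
             if obj["type"] == "intrusion-set"
             and obj.get("name") == "Cobalt Group")

# Walk the "uses" relationships to collect the group's techniques
# (attack-pattern objects) and software (malware and tool objects).
techniques, software = set(), set()
for rel in objects:
    if (rel["type"] == "relationship"
            and rel.get("relationship_type") == "uses"
            and rel["source_ref"] == group["id"]):
        target = by_id.get(rel["target_ref"], {})
        if target.get("revoked") or target.get("x_mitre_deprecated"):
            continue  # skip entries MITRE has retired
        if target.get("type") == "attack-pattern":
            techniques.add(target["name"])
        elif target.get("type") in ("malware", "tool"):
            software.add(target["name"])

print(f"{group['name']}: {len(techniques)} techniques, "
      f"{len(software)} software tools")
for name in sorted(techniques):
    print(" -", name)
```

Everything the red team does in the exercise then has to trace back to one of the names on that printed list.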

The blue team is the normal day-to-day internal infosec team. In addition to their normal day job of protecting their organization, they also take on the additional task of trying to detect and prevent the red team from successfully emulating Cobalt Spider. Sometimes, network defenders call this opposing force exercise a purple team operation, the red and the blue mixed together. By adding this blue team element to the red team operations, the internal infosec team gains a couple of additional benefits. The first big one is that the blue team gets to exercise its incident response capability against a real adversary. Then, when the exercise is over, they get to ask the adversary what they did in response to the blue team’s efforts. You don’t get that opportunity in the real world when Cobalt Spider really comes knocking. A second benefit is the individual training opportunity for the newbies and mid-tier analysts on the infosec team. They can sit in the SOC all day long watching alerts fly by their screens and learn very little. But put them on a red team blue team exercise and just watch how fast their cyber expertise grows. That kind of training is invaluable.
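One simple way to capture that benefit at the end of an exercise is a technique-coverage scorecard: compare the ATT&CK technique IDs the red team actually executed against the ones the blue team detected. The sketch below is a hypothetical example rather than any standard tool, and the technique IDs are illustrative placeholders, not a real Cobalt Spider emulation plan.

```python
# Hypothetical purple team scorecard: which executed techniques did
# the blue team catch, and which slipped through undetected?
red_team_executed = {"T1566.001", "T1204.002", "T1059.001", "T1071.001"}
blue_team_detected = {"T1566.001", "T1059.001"}

caught = red_team_executed & blue_team_detected
missed = red_team_executed - blue_team_detected

coverage = len(caught) / len(red_team_executed)
print(f"Detection coverage: {coverage:.0%}")
print("Missed techniques to prioritize:", ", ".join(sorted(missed)))
```

The missed list becomes the blue team's prioritized to-do list for new detections before the next exercise.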

The concept of red teaming has been around since at least the 1500s. It hit the IT space in the form of penetration testing in the 1960s and 1970s, just as mainframe computers started to become useful to governments and the commercial space. Ever since, we have used penetration tests to reduce the attack surface of our computers and networks in a zero trust kind of way. In the early 2000s, the idea of a combined red team blue team exercise, or purple team exercise if you prefer, became popular as a way to test our defenses against known adversary attack campaigns in an intrusion kill chain kind of way. This also had the added benefits of exercising our incident response teams and accelerating the training of our newbie and mid-tier analysts in the SOC. Red team blue team operations are an essential item in the infosec tool kit and will greatly improve our chances of reducing the probability of a material impact to the business due to a cyber event.

Red-team/blue-team timeline.

The roots of red teaming run very deep: from the Roman Catholic Church's "Office of the Devil's Advocate," to the Kriegsspiel of the Prussian General Staff, to the secretive AMAN organization, Israel's Directorate of Military Intelligence.

Early 19th Century.

The Prussian army adopted a war game called Kriegsspiel (literally “wargame” in German) to train its officers. One group of officers developed a battle plan, and another group assumed the role of the opposition, trying to thwart it. Blue game pieces stood in for the home team—the Prussian army—since most Prussian soldiers wore blue uniforms. Red blocks represented the enemy forces—the red team—and the name has stuck ever since.

World War II.

British Field Marshal Bernard Montgomery relied upon junior officers to study German Field Marshal Erwin Rommel in Africa and Europe and then evaluate the Allies' plans from Rommel's perspective.

Cold War: early 1970s.

The US Navy established the SSBN Security Program to identify potential vulnerabilities that the Soviet Union might exploit. In "SSBN," the SS denotes submarine, the B denotes ballistic missile, and the N denotes that the submarine is nuclear-powered.

Falsification in the philosophy of science: 1972.

Philosopher of science Karl Popper wrote “In science we need to form parties, as it were, for and against any theory that is being subjected to serious scrutiny.” 

National Security Decision Directive on red-teaming for arms control: 1982.

President Ronald Reagan signed a National Security Decision Directive to create a permanent "Red Team" to challenge US verification capabilities, assumptions, and policies in order to anticipate how, in what ways, and for what purposes the Soviets might try to avoid compliance with the provisions of arms control agreements.

National Security Decision Directive on red-teaming, again, for arms control, recognizing evasion and deception: 1984.

President Ronald Reagan signed a National Security Decision Directive to create a Red Team review panel to consider and anticipate possible Soviet noncompliance, concealment, and deception activity. 

Red-teaming for terrorist TTPs: 1990.

The Presidential Commission on the bombing of Pan Am 103 directed the FAA to create a red team to replicate typical terrorist tactics, techniques, and procedures.

Deliberate cultivation of opposing perspectives: 1998.

Donald Rumsfeld chaired the Commission to Assess the Ballistic Missile Threat to the United States, which examined the same data available to the intelligence community but identified alternative paths adversaries might take and came to different conclusions about the threat.

TRADOC's Red Franchise: 1999.

The US Army established a Red Franchise organization within its Training and Doctrine Command (TRADOC) to guide Army training, concept and force development, experimentation, and transformation.

"Alternative analysis," post-9/11: 12 September 2001.

Around midnight, then-Director of Central Intelligence George Tenet decided to form a group of contrarian thinkers to challenge conventional wisdom in the intelligence community and mitigate the threat of additional surprises through “alternative analysis.” 

Defense Science Board recommends red-teaming, 2003.

Red teams in the military got a boost after a 2003 Defense Science Board study recommended increasing the use of red teams to help guard against the shortcomings that led up to 11 September 2001.

The Army Directed Studies Office, 2004.

Largely in response to the 2003 Defense Science Board recommendations, the Army stood up its Service-level red team, the Army Directed Studies Office (ADSO).

Red Team University's first graduating class, 2006.

The first class graduated from the Red Team University course at Fort Leavenworth's University of Foreign Military and Cultural Studies, as the war in Iraq entered its fourth year.

CIA's shocking Red Cell, 2012.

Then-CIA Director Gen. David Petraeus directed the Red Cell to “take on our most difficult challenges” and “shock us.”

Recommended reading.

“2020 Red and Blue Team Survey Reveals Positive Trends,” by Sam Humphries, Exabeam.

“3 Situations That Call for a Red Team,” by Lisa Earle McLeod, Huffington Post, 23 November 2013.

“Cobalt Group,” Mitre ATT&CK Framework, MITRE, 23 June 2020.

“Cybersecurity Red Team Versus Blue Team — Main Differences Explained,” by Sara Jelen, SecurityTrails Blog, 7 December 2018.

“Devil’s Advocate – Ancient Phrase Traced To The Roman Catholic Church,” by Ellen Lloyd, AncientPages.com, 19 November 2018.

“Establishment of National Security Council Arms Control Verification Committee - National Security Decision Directive Number 65,” by President Ronald Reagan, the White House, 10 November 1982.

“Guide to Red Team Operations,” by Raj Chandel, Hacking Articles, 5 August 2019.

“Helpful Red Team Operation Metrics,” by Cedric Owens, Medium, 2 March 2020.

“Inside the CIA Red Cell: How an experimental unit transformed the intelligence community,” by Micah Zenko, Foreign Policy, 30 October 2015.

“Kriegsspiel – How a 19th Century Table-Top War Game Changed History,” by MilitaryHistoryNow.com, 19 April 2019.

“Red Storm Rising,” by Tom Clancy, Putnam Adult, 1986.

“Red team,” Military Wikia.org.

“Red Team U. creates critical thinkers,” by John Milburn, Associated Press, 18 May 2007.

“Red Team Vs Blue Team Testing for Cybersecurity,” by Zbigniew Banach, Netsparker, 14 November 2019.

“Red Team: How to Succeed By Thinking Like the Enemy,” by Micah Zenko, Basic Books, 3 November 2015.

“Red Team: How to Succeed By Thinking Like the Enemy,” by Micah Zenko, Council on Foreign Relations, 1 November 2015.

“Red Teaming: How Your Business Can Conquer the Competition by Challenging Everything,” by Bryce G. Hoffman, Crown Business, 16 May 2017.

“Red Teams: Strengthening through challenge,” by LtCol Brendan Mulvaney, Marine Corps Gazette, July 2012.

“Second public hearing of the National Commission on Terrorist Attacks Upon the United States,” statement of Bogdan Dzakovic to the National Commission on Terrorist Attacks Upon the United States, 22 May 2003.

“Security in the Computing Environment: A Summary of the Quarterly Seminar, Research Security Administrators - June 17, 1965,” by Robert L. Dennis, System Development Corporation, for the Defense Documentation Center, Defense Supply Agency, 18 August 1966.

“Soviet Noncompliance With Arms Control Agreements - National Security Decision Directive Number 121,” by President Ronald Reagan, the White House, 14 January 1984.

“The Difference Between Red, Blue, and Purple Teams,” by Daniel Miessler, 4 April 2020.

“The History Of Penetration Testing,” by Ryan Fahey, Infosec Institute.

“The importance of red teams,” by Peter Attia, Medium, 24 May 2020.

“The Origins and Development of the National Training Center 1976-1984,” by Anne W. Chapman, Office of the Command Historian, US Army Training and Doctrine Command, 1992.

“The Red Team,” Chief - Arms Control Intelligence Staff, CIA, 17 January 1984.

“The Role and Status of DoD Red Teaming Activities,” by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, September 2003.

“Titan Rain - how Chinese hackers targeted Whitehall,” by Richard Norton-Taylor, The Guardian, 4 September 2007.

“US Government Computer Penetration Programs and the Implications for Cyberwar,” by Edward Hunt, IEEE Annals of the History of Computing, IEEE Computer Society, 2012.

“Where does red teaming break down?” by David Spark, Allan Alford, and Dan DeCloss, “Defense in Depth” podcast, 3 September 2020.