Rick Howard: In 1986, I was a young captain in the U.S. Army. The powers that be assigned me, a communications officer, to a field artillery battalion for the express purpose of helping the unit succeed at its National Training Center rotation. This was many years before the first Gulf War in January of 1991, before we sent troops to Kosovo in 1999 and long before the madness of 9/11 kicked in and its resulting 20-plus years of combat operations. In this time window, life in the military was pretty sweet. Nobody was shooting at us. President Reagan loved us. And we were flush with cash. The big baddies on the international front were the Soviets, and all the military experts on both sides thought that the next big military confrontation was going to be between the Bears and Uncle Sam in a massive tank-on-tank battle in Europe, a la Tom Clancy's "Red Storm Rising."
Rick Howard: The National Training Center in Fort Irwin, Calif., a thousand square miles of formidable desert terrain, is where the military went to train for operations of that scale, and it is where I got my first taste of red team-blue team operations - or in military parlance, OPFOR or Opposing Force exercises. What that meant was that instead of going to the field and pretending that an enemy was attacking you, the Army actually put an opposing force in the field to fight against you, using the tactics of the Soviet army. It was and is a brilliant idea. Practice your skills and make mistakes in a highly controlled but pressure-filled environment so that you don't make those mistakes when lives are on the line. Catch me in the bar at the next in-person DEFCON conference, and I will share war stories about how the OPFOR ran circles around us for two weeks, basically kicking our ass at every turn.
Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. On this show, I'm talking about red team-blue team operations.
Rick Howard: The Roman Catholic Church may have invented the concept in 1587, when Pope Sixtus V assigned the job of devil's advocate during the beatification process of St. Lawrence Justinian. The advocatus diaboli, in Latin, was to be the opposing force, the red team, to make sure that - according to Ellen Lloyd of Ancient Pages, and I quote - "no person received the honors of sainthood recklessly and too fast. Every potential weakness or objection to the saints' canonization was raised and evaluated to ensure that only those who were truly worthy would be raised to the dignity of the altars," end quote.
Rick Howard: President Reagan used the concept as early as 1982 by forming a red-team panel designed to anticipate every conceivable way the Soviets might try to go around the arms control treaty. After the 1988 bombing of Pan Am Flight 103, a 1990 presidential commission directed the FAA to create a red team to replicate typical terrorist tactics, techniques and procedures. As chairman of the nine-member Commission to Assess the Ballistic Missile Threat to the United States in 1998, Donald Rumsfeld used a red-team approach to examine the same data available to the intelligence community to identify alternative scenarios.
Rick Howard: In 2003, I was the commander of the Army's Computer Emergency Response Team, or ACERT. We had just discovered that the Chinese government was all over our networks, like white on rice. It kind of felt like NTC all over again, with the Chinese running circles around us in multiple operations around the world. We lumped together all of that Chinese hacker activity under a military umbrella code name called Titan Rain. I love cool-sounding code names. I think it's the reason I love cybersecurity so much. We have cool names for everything. This time, though, it wasn't an exercise, and it wasn't a dusty desert environment; it was our unclassified day-to-day working electronic network called the NIPRNET. In response, we - the blue team - built a defensive plan to counter the Titan Rain offensive campaign plan. Before we deployed it, though, we wanted to test it. We emulated the entire NIPRNET on a cyber range in San Antonio, deployed the blue team's defensive plan on it and told our in-house red team to use Titan Rain's tactics, techniques and procedures to break through. When the red team couldn't get it done, the Army's leadership gave us the green light to deploy the blue team's defensive plan on the NIPRNET.
Rick Howard: The origin of the red team and blue team names to indicate adversary and good-guy activity, respectively, isn't a random choice. We have the Prussian Army to thank for that. According to Peter Attia over at Medium, quote, "In the early 19th century, the Prussian Army adopted war games to train its officers. One group of officers developed a battle plan, and another group assumed the role of the opposition trying to thwart it. Using a tabletop game called 'Kriegsspiel' - literally, war game in German - resembling the popular board game 'Risk,' blue game pieces stood in for the home team, the Prussian Army, since most Prussian soldiers wore blue uniforms. Red blocks represented the enemy forces, the red team, and the name has stuck ever since," end quote. Since the Prussians wore blue uniforms in real life and they invented "Kriegsspiel," the blue team became the good guys forevermore. I personally think that the red team-blue team description is more closely related to the other famous Milton Bradley board game about war called "Stratego." But if Peter thinks that "Risk" is a closer match, I'm not going to fight him on it.
Rick Howard: Fast-forward again from the Prussians in the 1820s to the 1960s, when mainframe computers started to come online. It didn't take long for computer experts to realize that they were vulnerable to abuse. Early designers of mainframes didn't conceive of anything close to a threat model. They were still mostly concerned with getting the ones and zeros moving in the right direction. At maybe the very first cybersecurity conference ever, hosted by the System Development Corporation in California in 1965, 15,000 mainframe operators from around the world discussed all the ways in which these new machines could be - and I'm using air quotes here - "penetrated" by unsavory people. By the late 1960s and early 1970s, elite computer operators were passing around a paper authored by Dr. Willis Ware and others - called the Willis paper - that, according to William Hunt at the College of William and Mary, quote, "showed how spies could actively penetrate computers" - there's that penetration word again - "steal or copy electric files and subvert the devices that normally guard top-secret information. The study touched off more than a decade of quiet activity by groups of computer scientists working for the government who tried to break into sensitive computers. They succeeded in every attempt," end quote. These were the first penetration testers.
Rick Howard: In 1971, the U.S. Air Force contracted James Anderson to run tiger teams against its Multics operating systems. Multics was a precursor to Unix. His 1972 after-action report described a methodology to penetrate and compromise those systems, which is fundamentally the basis for all penetration testing even today.
Rick Howard: In 2020, the big difference between penetration testers and red teamers is that, in general, the penetration testers are supposed to find any flaw in the system, similar to the devil's advocate, and they are allowed to pursue any course of action that presents itself during the test. They are trying to reduce the attack surface by finding previously unknown weaknesses. In this regard, conducting penetration tests falls under the zero-trust strategy umbrella. Network defenders are not trying to stop a specific adversary with a penetration test; they are actively trying to find holes in the already deployed defensive posture.
Rick Howard: Red teamers, on the other hand, generally follow known adversary attack campaigns. For example, according to the Mitre ATT&CK framework, the adversary group known as Cobalt Spider - there's another cool code name - used 31 attack techniques and five software tools to compromise its victims. A red team that attempts to verify that an organization's network is protected against Cobalt Spider can use only those 31 attack techniques and five software tools and nothing else. It is similar to my Titan Rain days back in the Army. Since they are specifically looking to make sure that the Cobalt Spider attack campaign won't be successful against their own networks, these red-teaming operations fall under the intrusion kill chain strategy umbrella.
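The scoping idea described above - a red team constrained to only the techniques and tools attributed to the adversary being emulated - can be sketched in a few lines of Python. This is a minimal illustration, not a real tool; the technique IDs and tool names are hypothetical placeholders, not actual Cobalt Spider attributions from ATT&CK.

```python
# Hypothetical sketch: restrict a red-team exercise to one adversary's
# attributed TTPs. In practice, the allow-lists would be populated from
# the MITRE ATT&CK group page for the actor being emulated; the IDs and
# tool names below are placeholders for illustration only.

ADVERSARY_TECHNIQUES = {"T1566.001", "T1059.001", "T1105"}  # placeholder IDs
ADVERSARY_TOOLS = {"toolA", "toolB"}                        # placeholder names

def in_scope(action):
    """An action is allowed only if both its technique and its tool are
    attributed to the adversary being emulated."""
    return (action["technique"] in ADVERSARY_TECHNIQUES
            and action["tool"] in ADVERSARY_TOOLS)

proposed_actions = [
    {"name": "spearphish", "technique": "T1566.001", "tool": "toolA"},
    {"name": "kerberoast", "technique": "T1558.003", "tool": "toolB"},  # not attributed
]

# Filter the proposed campaign down to the adversary's known playbook.
plan = [a for a in proposed_actions if in_scope(a)]
print([a["name"] for a in plan])  # only the spearphish action survives
```

The point of the filter is the same as the Titan Rain exercise described earlier: the red team proves or disproves that one specific adversary's campaign would succeed, rather than probing for any flaw at all.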
Rick Howard: The blue team is the day-to-day internal infosec team. In addition to their normal day job of protecting their organization, they also take on the additional task of trying to detect and prevent the red team from successfully emulating Cobalt Spider. Sometimes network defenders call this opposing force exercise a purple team operation, the red and blue mixed together. By adding this blue-team element to the red-team operations, the internal infosec team gains a couple of additional benefits. The first big one is that the blue team gets to exercise its incident response process against a real adversary. Then, when the exercise is over, they get to ask the adversary what it did in response to the blue team's efforts. You don't get that opportunity in the real world when Cobalt Spider really comes knocking. A second benefit is the individual training opportunity for the newbies and mid-tier analysts on the infosec team. They can sit in the SOC all day watching alerts fly by their screens and learn very little, but put them on a red team-blue team exercise and watch how fast their cyber expertise grows. That kind of training is invaluable.
Rick Howard: The concept of red teaming has been around since at least the 1500s. It hit the IT space in the form of penetration testing in the 1960s and 1970s, just as mainframe computers started to become useful for governments and the commercial space. Ever since, we use penetration tests to reduce the attack surface of our computers and networks in a zero-trust kind of way. In the early 2000s, the idea of a combined red team-blue team exercise, or purple team exercise if you prefer, became popular to test our defenses against known adversary attack campaigns in an intrusion kill chain kind of way. This also had the added benefits of exercising our incident response teams and accelerating the training of our newbie and mid-tier analysts in the SOC. Red team-blue team operations are an essential item in the infosec toolkit and will greatly improve our chances of reducing the probability of a material impact to the business due to a cyber event.
Rick Howard: And that's a wrap. If you agree or disagree with anything I've said, hit me up on LinkedIn or Twitter, and we can continue the conversation there. Next week, I've invited our pool of CyberWire experts to sit around the hash table with me and discuss their red team-blue team operations. Don't miss that. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.