Jailbreak Brewing hosted the latest of its security summits at its home in Laurel, Maryland on May 20, 2016. The topic this time around was Internet-of-things security, and the presenters—all industry experts—addressed automotive vulnerability research, the history of industrial control system malware (and its uses in the wild), wireless vulnerabilities, the use of OSINT to inform vulnerability research, hacking security cameras, and the way forward for testing IoT systems.
Car hacking: advice to researchers.
Craig Smith, author of The Car Hacker’s Handbook and founder of OpenGarages (“a community of performance tuners, mechanics, security researchers and artists”), is a security industry veteran and a specialist in reverse engineering. He opened the Jailbreak Security Summit with an account of the current state of play in automobile hacking.
He began by noting the difficulty of analyzing complex CAN traffic. (“CAN” refers to the controller area network, a vehicle bus standard designed to enable communication among controllers and other devices without the use of a host computer. Contemporary cars may well have between 50 and 100 controllers communicating through the CAN bus.)
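On Linux, where much of this research happens, the kernel’s SocketCAN interface exposes CAN traffic as fixed 16-byte frames, so a first pass at analyzing a capture reduces to unpacking those structures. A minimal sketch (the arbitration ID and data bytes here are invented for illustration):

```python
import struct

# Linux SocketCAN can_frame layout: 32-bit CAN ID (with flag bits),
# 8-bit data length code, 3 padding bytes, then up to 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")

def parse_can_frame(raw: bytes):
    """Unpack one 16-byte SocketCAN frame into (arbitration_id, data)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id & 0x1FFFFFFF, data[:dlc]   # mask off error/RTR/EFF flag bits

# Example: a frame with ID 0x7E8 carrying three data bytes.
raw = struct.pack("<IB3x8s", 0x7E8, 3, bytes([0x10, 0x14, 0x49]) + b"\x00" * 5)
arb_id, payload = parse_can_frame(raw)
```

A sniffer is then little more than a loop reading 16 bytes at a time from a raw CAN socket and feeding each chunk through a parser like this one.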
One might, Smith said, consider three classes of tools in approaching automotive hacking. On-board diagnostic (OBD) dongles, used for self-diagnosis and reporting, hold little interest for the hacker, and general automotive tools only slightly more. The really interesting tools are the dealership tools: they modify systems, hold the security tokens needed to make firmware changes, and interact with the vehicle on a basis of mutual trust.
He described an approach to car hacking that lends itself to significant automation: sniffing setup, recording UDS (Unified Diagnostic Services) traffic, and emulation of a vehicle. He demonstrated simulation and attack modes, and showed a capability to recognize VINs. A new tool, CANiverse (a web interface for documenting and sharing how your vehicle works), is among the developments making car hacking easier. You can, Smith noted, be up and running with hacking and fuzzing in minutes.
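UDS, the standardized diagnostic protocol (ISO 14229) that such tools record and replay, is simple enough to sketch at the byte level: service 0x22 (ReadDataByIdentifier) with data identifier 0xF190 requests the VIN, and a positive response echoes the service byte plus 0x40. A minimal sketch, with a simulated ECU response and an invented VIN:

```python
# UDS (ISO 14229) ReadDataByIdentifier: service 0x22; 0xF190 is the
# standard data identifier for the VIN.
READ_DATA_BY_ID = 0x22
DID_VIN = 0xF190

def build_read_did(did: int) -> bytes:
    # Service byte followed by the two-byte data identifier, big-endian.
    return bytes([READ_DATA_BY_ID]) + did.to_bytes(2, "big")

def parse_read_did(resp: bytes):
    # A positive UDS response echoes the service byte plus 0x40.
    if resp[0] != READ_DATA_BY_ID + 0x40:
        raise ValueError("negative or unexpected response")
    return int.from_bytes(resp[1:3], "big"), resp[3:]

request = build_read_did(DID_VIN)   # three bytes: 0x22 0xF1 0x90
# Simulated ECU response carrying an invented 17-character VIN:
response = bytes([0x62, 0xF1, 0x90]) + b"1FTFW1ET5DFC10312"
did, vin = parse_read_did(response)
```

In practice these request/response pairs ride over the CAN bus inside ISO-TP transport frames, which is what makes recording and replaying them against a vehicle emulator so automatable.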
Reflecting on the state of the vulnerability research art, Smith thought that industry skills might well be applied to ways of checking vehicles on the road, to data collection, and (especially) to infotainment systems and their linkages to other systems. He noted that CAN has a number of desirable features in its design. It’s built to facilitate quick updating, and this is good particularly insofar as it enables systems to be rapidly fixed and upgraded without the necessity for costly and imperfectly effective vehicle recalls. (And he noted that encryption probably isn’t the way to go for CAN security.) On balance, wireless updates, he thought, are a benefit. Given that firmware updates are preferable to recalls, Smith urged vulnerability researchers to work with manufacturers. Specifically, he advised researchers to get to know, and to work with, manufacturers’ developers. They, and not (for example) corporate counsel, are the best points of access for those interested in contributing to the safety and security of the automotive Internet-of-things.
ICS malware: its history and prospects.
Rob Caldwell, a manager in Mandiant’s Industrial Control Systems (ICS) consulting practice, gave the conference a look into how various families of malware have targeted ICS.
Caldwell asked the audience’s indulgence for talking about industrial control systems as opposed to the Internet-of-things more generally conceived. Industrial control systems automate the control or monitoring of some industrial process. But the Internet-of-things and ICS have been converging, Caldwell pointed out, for some time—consider, for example, the increased deployment of smart meters by utilities. And so ICS lessons are more relevant than ever for the larger Internet-of-things.
When we consider malware, we should remember that there’s some motivation behind any attack that uses it. These motives, Caldwell pointed out, include simple nuisance, data theft, cybercrime, hacktivism, or, ultimately, destruction of systems or data. There’s plenty of old malware still rolling around as an untargeted nuisance (Conficker, for example, continues to circulate). But targeted attacks, hitting one specific company or an entire industry vertical, are more interesting. Such attacks are not opportunistic. With that in mind, he reviewed the history of ICS security, which divides conveniently into four eras. The first opened with Stuxnet in 2010. The era of vulnerability discovery followed, running from 2011 to 2013. This was succeeded by 2014’s era of weaponization, followed in its turn by a period of full realization of attack capability, beginning in 2015 and continuing into the present.
Caldwell began with Stuxnet, “a special snowflake” in his characterization. Stuxnet had two attack paths: its famous centrifuge destruction, and its less often considered but equally interesting man-in-the-middle operator spoofing.
Stuxnet packaged four Windows 0-days and focused on replication. It hid within a Windows rootkit, and would also inject itself into anti-virus processes. On the ICS side, the malware used a programmable logic controller (PLC) rootkit whose design was very specific to certain Siemens PLCs. (Caldwell recommended Ralph Langner’s paper “To Kill a Centrifuge” as an excellent source of background on Stuxnet.) Stuxnet was designed to reach the centrifuges’ PLCs, whence it varied centrifuge spin rates to induce, eventually, catastrophic failure.
The next significant ICS malware to appear was Havex (a.k.a. “Peacepipe”). Seen in “Energetic Bear’s” campaign against European ICS companies, Havex inserted itself into programs engineers in targeted enterprises would use. It displayed typical malware functionality: Havex would spawn, scan networks for open OPC servers, build a report, and send that report to its command-and-control. This enabled the threat actor to understand what, physically, an industrial process was doing. It’s unknown whether this was pre-attack reconnaissance or direct theft of intellectual property. It’s also unknown whether Havex was the work of a nation-state. A little-known attack on a German steel mill in December 2014 may have been an end result of Havex reconnaissance. German authorities have been successfully tight-lipped about the incident, but the attack appears to have prevented operators from shutting down a blast furnace.
Sandworm Team’s BlackEnergy has appeared in association with a number of attacks. Sandworm is probably Russian, although how closely it is connected to the Russian government remains unknown. BlackEnergy paid most of its attention to NATO and to countries that were either former members of the Warsaw Pact (like Poland) or former Soviet republics (like Ukraine and other countries in the Near Abroad). As its name suggests, this cyber-espionage tool focused on the energy sector, where it paid close attention to human-machine interface vulnerabilities. Its most famous use was in the December 2015 Ukrainian grid hack (more on which in a moment).
Turning to destructive hacks, Caldwell listed the attacks on Saudi Aramco, the various Cryptolocker incursions into medical systems, and the Sony hack. “Shamoon should have tipped us off” to the trend toward destruction as a goal. Used against Saudi Aramco, Shamoon had three components: dropper, wiper, and reporter. The dropper was probably delivered by email. The wiper (which Shamoon “did a fairly respectable job of hiding”) had a configurable date and time to trigger. The reporter actually fired before the wiper. Comparing Shamoon to Stuxnet, Caldwell described Stuxnet as “a guided missile” designed to attack Iran’s Natanz uranium enrichment facility. Shamoon was also highly targeted, but in execution, not design. The attackers fired Shamoon at Saudi Aramco during Ramadan, foreseeably a time of somewhat relaxed vigilance. Shamoon was successful in destroying data: it took five months to rebuild Saudi Aramco's networks. Shamoon also blocked operator insight into processes.
The Ukraine grid hack, to which Caldwell returned at the end of his presentation, seemed motivated by Russian interests in Ukraine. BlackEnergy and Killdisk were the principal means by which the attackers achieved their results. They stole credentials, probably with BlackEnergy, then pivoted to substation control systems. The attack was well and closely planned, even to the point of including a telephony denial-of-service effort to impede responders’ attempts to restore power.
The Ukraine incident won’t, Caldwell pointed out, be the last of its kind. Just last month the Board of Water & Light in Lansing, Michigan, was phished with ransomware. Network segmentation protected the control systems, but the business systems were hit hard. Fundamentally, he concluded, we need better visibility into industrial control systems. Too often, our first notification of an attack is its physical consequence, and that’s too late.
Embedded devices are going to be wireless devices: what this means for security.
Matt Knight, a software engineer and security researcher with Bastille Networks, is a member of the RFStorm threat research team, which works on vulnerabilities in the wireless interfaces that connect the Internet-of-things (IoT).
IoT devices are, Knight observed, basically embedded devices. They have basic CPUs, often run on batteries, and are hard to reach but easy to install. They need, typically, wireless connectivity. Cellular connectivity is here to stay, but 2G connectivity is on its way out. AT&T, for example, will deprecate 2G service in January 2017, and the other major providers have similar plans. Given that 2G networks are about to sunset, providers are competing to develop the infrastructure necessary to replace them. This infrastructure will need to handle the rapidly growing segment of connected embedded devices that make up the Internet-of-things, and its machine-to-machine connections.
Low-power wide area networks (LPWAN) have become an attractive option: they offer broad geographic coverage (with ranges measured in miles), they have low power requirements for their endpoints, and they promise cost-effective use of unlicensed portions of the electromagnetic spectrum. LPWAN technology is similar to cellular, but it’s optimized for the IoT. SIGFOX, Senet, and Actility are some of the major players in LPWAN. LPWAN devices are battery-conscious—SIGFOX advertises ten years’ service from a AA battery, for example. They employ conservative duty cycling, very spare datagrams, and high rate-limitation.
It’s very expensive to buy spectrum in restricted space; upstarts are effectively priced out of the bidding for unused broadcast spectrum. But LPWAN, for which no expensive license is necessary, is an attractive alternative: anyone can set up a gateway and communicate “for miles.” After describing the basics of LPWAN operations, Knight noted that two LPWAN technologies, LoRa and SIGFOX, are proliferating rapidly with the backing of large industry consortia. They constitute emerging standards, and, like any emerging standard, they offer a “rich attack surface” for security researchers to explore. He concluded with a long analysis of LoRa, and suggested that researchers might profitably devote their attention to LoRa’s unique PHY layer, exploring it through software-defined radio.
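LoRa’s duty-cycle and battery arithmetic falls straight out of its PHY: a symbol lasts 2^SF / BW seconds, so each step up in spreading factor roughly doubles airtime. A sketch of the time-on-air calculation published in Semtech’s LoRa modem design guide (CRC assumed on; the parameter defaults here are chosen for illustration):

```python
import math

def lora_airtime_ms(payload_len, sf=7, bw_hz=125_000, cr=1,
                    preamble=8, explicit_header=True, ldro=False):
    """Approximate LoRa time-on-air in ms, per Semtech's design-guide formula."""
    t_sym = (2 ** sf) / bw_hz * 1000.0            # symbol duration, ms
    ih = 0 if explicit_header else 1              # implicit-header flag
    de = 1 if ldro else 0                         # low-data-rate optimization
    n_payload = 8 + max(
        math.ceil((8 * payload_len - 4 * sf + 28 + 16 - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# Higher spreading factors trade airtime (and battery) for range:
fast = lora_airtime_ms(12, sf=7)    # tens of milliseconds
slow = lora_airtime_ms(12, sf=12)   # roughly a second
```

This is also why LPWAN endpoints are so aggressively rate-limited: at high spreading factors, even a 12-byte datagram occupies the channel for close to a second.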
Attacking wireless protocols: recon and exploitation via the KillerBee framework.
Ryan Speers, Director of Applied Research at Ionic Security, took the symposium through IEEE 802.15.4/ZigBee scanning, sniffing, injection, denial-of-service, and cryptographic attacks. ZigBee and 802.15.4 aren’t, he stressed, synonymous, but the terms are often used interchangeably, especially since the ZigBee protocol incorporates 802.15.4.
Speers demonstrated that, contrary to common belief, you don’t need low-level access to exploit systems using these protocols. In fact, it’s possible “to get raw layer-2 packet injection” with simple layer-7 compromise. This requires specially crafted payloads that exploit bit errors using Ionic’s packet-in-packet technique.
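The packet-in-packet idea can be illustrated schematically. An attacker who controls only application-layer payload embeds a second, fully valid frame (with its own preamble and start-of-frame delimiter) inside the outer frame’s payload; if bit errors corrupt the outer frame’s sync header, a receiver scanning the air can lock onto the inner copy instead and decode the injected frame. A simplified sketch (real 802.15.4 frames also carry addressing fields and a frame check sequence):

```python
# Illustrative sketch of the packet-in-packet idea (frame layout simplified).
PREAMBLE = b"\x00\x00\x00\x00"   # 802.15.4 2.4 GHz preamble: four zero bytes
SFD = b"\xa7"                    # start-of-frame delimiter

def frame(payload: bytes) -> bytes:
    """Wrap a payload in a minimal sync header plus a length byte."""
    return PREAMBLE + SFD + bytes([len(payload)]) + payload

inner = frame(b"\x41\x88\x00INJECTED")            # frame the attacker wants on air
outer = frame(b"benign-looking data " + inner)    # what layer 7 actually sends

# A bit error that wipes out the outer PREAMBLE+SFD leaves the inner
# copy as the first valid sync header a listening radio will find.
```

The crafted payloads Speers described engineer the outer frame so that naturally occurring bit errors realize this outcome often enough to be practical.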
He concluded with a discussion of methods for exploiting low-level differences in radio chip and firmware behavior “to craft frames that can be seen by some radios, but not by others in the same vicinity adhering to the same standard.”
Mousejack vulnerabilities: a case study in using OSINT for vulnerability research.
Marc Newlin, a security researcher with Bastille Networks’ RFStorm threat research team, focuses on RF/IoT threats in enterprise environments. He presented a demonstration of how open-source intelligence (OSINT) can be used to reverse engineer wireless protocols. We quote from his abstract, which provides a good overview of his presentation.
“IoT devices frequently include obscure RF transceivers with little or no documentation, which can hinder the reverse engineering research process. Fortunately, regulatory bodies like the United States’ FCC contain a wealth of useful information. In order to certify wireless devices for sale in different markets, manufacturers must submit their products to test labs which evaluate the behavior of their RF emissions. The test reports often contain detailed physical layer operating characteristics, including RF channels, modulation, and frequency hopping behavior. By translating regulatory test reports into GNU Radio flow graphs, a researcher is able to focus their efforts on understanding packet formats and protocol behavior instead of grinding away at the physical layer.”
Newlin’s talk showed how he was able to use such information in his research into Mousejack vulnerabilities. OSINT enabled him to evaluate a large number of vulnerable devices much more rapidly and easily than would otherwise have been possible.
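As a small illustration of the approach: an FCC test report typically lists a device’s lowest and highest channel frequencies and its channel count, and from those three figures a researcher can derive the full channel plan to hand to a software-defined radio. A sketch with invented figures of the sort such a report might give for a 2.4 GHz frequency-hopping mouse dongle:

```python
def channel_plan_mhz(f_low, f_high, n_channels):
    """Derive evenly spaced channel center frequencies (MHz) from the
    lowest and highest channels an FCC test report lists."""
    spacing = (f_high - f_low) / (n_channels - 1)
    return [f_low + i * spacing for i in range(n_channels)]

# Hypothetical report figures: 78 channels spanning 2403-2480 MHz.
plan = channel_plan_mhz(2403.0, 2480.0, 78)
```

With the channel plan, modulation, and data rate read straight out of the report, the physical layer is largely solved before the radio is ever switched on, which is exactly the head start Newlin described.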
The nuts and bolts of hacking security cameras.
Wesley Wineberg, a Senior Security Research Engineer at Microsoft, has an extensive background in the security of “critical infrastructure” technologies. His presentation on security camera hacking took as its premise the observation that such cameras were Internet-of-things devices before people generally recognized that there was such a thing as an Internet-of-things.
Nowadays, security cameras are still called “closed circuit” cameras, but for the most part they aren’t closed circuit at all. Most of them are IP cameras. But businesses using them still treat them as if they were closed circuit, and not networked. We find these cameras linked to physical security networks—those controlling doors, for example—and other company networks as well. They’re often found connected to building control systems (notably HVAC systems) and even to point-of-sale systems.
Attackers after these devices can have many goals, Wineberg pointed out. They may want access to a video stream, they may wish to modify a video stream, they may seek persistent access to the security system, or they may be interested in pivoting from the camera to other networks.
Reviewing IP camera protocols, Wineberg noted that while the protocols themselves aren’t necessarily flawed, their implementation often is. He took the symposium through the vulnerability of one particular system, using Sony’s SN140, a widely used commercial camera, as his target for a proof-of-concept hack. This isn’t an easily hacked device (he found the firmware, in particular, interestingly resistant to unpacking), but it does have a lot of features, capabilities, and functionality. And, he offered in an instructive aside, “feature equals attack surface.”
Much of this attack surface is physical: an accessible compact flash card port, Ethernet, video/audio input/output, and so on. After demonstrating how he approached these surfaces, he closed by offering camera users the following advice: restrict physical access to the cameras themselves, keep the cameras off the Internet, and restrict device communications. And remember that most cameras, particularly commercial cameras, have backdoors.
Computer science is at an awkward age. How it might grow up.
Josh Jones, a senior computer scientist at Booz Allen Hamilton, has worked in ICS security research and software development for years. He thinks computer science, as a discipline, is at an “awkward age.” As evidence for this he notes the sorts of graphics that appear on the covers of the discipline’s textbooks—bug-eyed Strieberesque space aliens and the like. (He might also have adverted, though he didn’t, perhaps out of kindness, to the prevalence of t-shirts bearing various depictions of skulls, etc.) “What calculus textbook,” he asked rhetorically, to an audience that clearly got the joke, “has aliens on its cover?”
So, what are the consequences of being at this awkward age, and how can the community outgrow it? Jones thinks the prevalent build-and-test process could do with an overhaul, especially with respect to the Internet-of-things. The IoT, he argued, “is a lot like an ecosystem, and not just in a ‘markety’ way.” For the IoT to work, it’s got to be seamless, and it’s got to be trusted.
(As an aside, we cannot help noting that “ecosystem” is always used as a benign metaphor in our field, as if an ecosystem were a gently nurtured garden in which everything thrives harmoniously. But an ecosystem, even that represented by a garden, is also a place of carnage: plants compete, bugs eat—and are eaten, and so on. Or consider marine ecosystems: some of their denizens are big filter feeders, others are krill. Which one of us is volunteering to be krill? We’d all welcome some advice on metaphorical clarity here, perhaps from ecologists or conservation biologists themselves.)
But to return to the problem, Jones suggested that with the IoT we know neither what we’re testing nor how to test it. Allowing too many unknowns increases complexity and brings unpredictable results. Thus we achieve, in computer science, a kind of mysticism (not to mention pictures of aliens on the covers of our textbooks). What are we testing? The final code, as delivered, or some version of that code? We need ways of sectioning test units of code we’ll actually deliver. We need to be able to query portions of code. We need to be able to specify and deploy aspects of the build environment, and perform fast monolithic builds. Jones thinks he sees a way forward, through LLVM, using tools like containers, build and testing distribution, and SAT/SMT solvers. He illustrated how we could have repeatable builds, how we might identify programming errors, and how it would become possible to catch potential security vulnerabilities before deployment.
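The repeatable-build idea can be illustrated simply: capture every input that should determine a build’s output (sources, flags, toolchain) in a single fingerprint, so that two builds claiming identical inputs can be checked against each other byte for byte. A minimal sketch (the file names, flags, and toolchain string are invented):

```python
import hashlib

def build_fingerprint(sources: dict, flags: list, toolchain: str) -> str:
    """Hash everything that should determine a build's output. If two
    fingerprints match but the produced binaries differ, the build
    process is not repeatable."""
    h = hashlib.sha256()
    h.update(toolchain.encode())
    for flag in sorted(flags):            # flag order shouldn't matter
        h.update(flag.encode())
    for name in sorted(sources):          # deterministic file ordering
        h.update(name.encode())
        h.update(sources[name])
    return h.hexdigest()

a = build_fingerprint({"main.c": b"int main(){}"}, ["-O2", "-Wall"], "clang-3.8")
b = build_fingerprint({"main.c": b"int main(){}"}, ["-Wall", "-O2"], "clang-3.8")
# a == b: identical inputs yield one fingerprint regardless of flag order.
```

Real systems fingerprint far more (environment, timestamps, linker behavior), but the principle is the one Jones described: make the build a pure function of its declared inputs, and deviations become detectable.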