DPRK cyber ops. Poland warns of Russian cyber activity. Twitter’s data incident. A crypto trading exchange is rifled. Ransomware shuts down the Port of Lisbon. Small business opportunities.
Dave Bittner: Recent DPRK cyber operations. Twitter's data incident. 3Commas has been breached. Poland warns of increased Russian offensive cyber activity. The Port of Lisbon has been hit by ransomware. DHS announces SBIR topics. New additions to the Known Exploited Vulnerabilities Catalog. Ben Yelin on the legal conundrum of AI-generated code. Our guest is Tanya Janca from She Hacks Purple with insights on API security. And newsflash, LockBit says they have a conscience - right. From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, January 3, 2023.
Dave Bittner: Good day to you all, and Happy New Year. It is great to be back.
Recent DPRK cyber operations: spying and theft.
Dave Bittner: Researchers at Kaspersky warn that North Korea's BlueNoroff group is using several new methods to deliver malware. BlueNoroff has begun using .iso and .vhd files to deliver its malware, a technique that allows it to bypass Mark-of-the-Web flags. The threat actor also seems to be testing other file formats for malware delivery. It has set up multiple domains impersonating venture capital firms, most of them located in Japan, though BlueNoroff also impersonated Bank of America. Be sure you know with whom you're dealing.
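For context on why those container formats matter: Windows marks downloaded files by attaching an NTFS alternate data stream named Zone.Identifier, and SmartScreen and Office use it to trigger warnings. Files extracted from a mounted .iso or .vhd image carry no such stream, so the checks never fire. A minimal sketch of a parser for the stream's contents (the function and zone table are illustrative, not drawn from Kaspersky's report):

```python
# A downloaded file's Zone.Identifier alternate data stream looks like:
#   [ZoneTransfer]
#   ZoneId=3
# ZoneId 3 means "Internet". Files unpacked from a mounted .iso/.vhd
# image have no such stream, which is what BlueNoroff exploits.

URLZONE_NAMES = {0: "Local machine", 1: "Intranet", 2: "Trusted",
                 3: "Internet", 4: "Restricted"}

def parse_zone_identifier(stream_text):
    """Return the zone name recorded in a Zone.Identifier stream,
    or None if the stream is absent (e.g., a file from a disk image)."""
    if not stream_text:
        return None
    for line in stream_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "ZoneId":
            return URLZONE_NAMES.get(int(value), "Unknown")
    return None

# A file downloaded by a browser:
print(parse_zone_identifier("[ZoneTransfer]\nZoneId=3"))  # Internet
# A file extracted from a mounted disk image (no stream at all):
print(parse_zone_identifier(None))                        # None
```

On Windows the stream itself can be read as `filename:Zone.Identifier`; a payload unpacked from a mounted image simply doesn't have one to read.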
Twitter’s data incident.
Dave Bittner: At the end of December, it emerged that the data of millions of Twitter users had been stolen and were being held for ransom. The hacker who claimed responsibility, who goes by the name Ryushi, claims to be selling data of over 400 million Twitter users obtained in 2021, BleepingComputer reports. The data were accessible because of a since-patched API vulnerability. Spiceworks reports that the hacker demanded a $200,000 ransom from the social media outlet for the data to be deleted; if Twitter declined to buy, the data would be sold to other buyers willing to fork out $60,000 a copy. Bloomberg reports that Ireland's Data Protection Commission began a probe into Twitter on Friday, December 23.
Dave Bittner: Estonia-based cryptocurrency trading service 3Commas fell victim to a breach at the hands of an anonymous Twitter user who obtained 100,000 API keys belonging to 3Commas users. Decrypt reports that $22 million in crypto had been stolen through the compromised API keys. The company confirmed on Wednesday of last week that it was the source of the leak. It had previously insisted that the issue lay with phishing attacks that caused users to give up their data. Yuriy Sorokin, co-founder of 3Commas, pushed this idea until Wednesday, when he confirmed on Twitter that the hackers' data is accurate, stating - we are sorry that this has gotten so far and will continue to be transparent in our communications around the situation. CoinDesk reports that the anonymous Twitter user identifying themselves as the hacker published more than 10,000 of the API keys last Wednesday and says they will publish the full set at random intervals in the coming days.
Poland warns of increased Russian offensive cyber activity.
Dave Bittner: The government of Poland warned over the weekend that Russian cyberattacks against third-party countries that have supported Ukraine during Russia's war can be expected to increase. As one would expect, the statement draws particular attention to the Russian threat to Poland in cyberspace. The Russian target list is expansive, covering a range of sectors, and hacktivist auxiliaries continue to play a significant role in the Russian offensive. The motivation is retaliatory. Polish officials state that such incidents in cyberspace are retaliatory actions typical of Russia, a response to steps taken by other countries that are unfavorable and inconvenient for the Russian Federation. Hacker groups linked to the Kremlin have used ransomware, DDoS and phishing attacks, and the goal of these hostile actions coincides with the goals of a hybrid attack - destabilization, intimidation and the sowing of chaos.
Port of Lisbon hit by ransomware.
Dave Bittner: Portugal's Port of Lisbon sustained a cyberattack that took its website offline, Cybernews reports. The extent of the attack is unclear, though port officials stated that operational activity was not compromised. The LockBit gang has claimed responsibility and also claims to have stolen financial reports, cargo and crew information, customer data, mail correspondence and contracts. The gang is threatening to publish the stolen data if the ransom isn't paid by January 18.
DHS announces SBIR topics.
Dave Bittner: The U.S. Department of Homeland Security last week announced its latest round of solicitations under the Small Business Innovation Research program, the SBIR. Five of them are relevant to cybersecurity: accurate and real-time hardware-assisted detection of cyberattacks, machine-learning-based integration of alarm resolution sensors, mission-critical services server-to-server communication, voice communications and 3GPP standards, reduced-order modeling of critical infrastructure protect surfaces and, finally, theoretical classification methodologies to enable detection with predicted signatures.
Dave Bittner: If you're a U.S. business, particularly a cybersecurity startup that's engaged in some R&D, you might well look into the SBIR program and the related Small Business Technology Transfer program. They're Small Business Administration efforts that are used by many federal agencies, including the Department of Homeland Security, the Department of Defense and other departments and independent agencies. Many of them have a strong interest in cybersecurity, and some of their topics, like those DHS announced last week, address cybersecurity.
Dave Bittner: Think of it as angel funding. SBIR has three phases. Phase one is designed to establish the technical merit, feasibility and commercial potential of the proposed research and to determine that the small business is, in fact, able to perform. Phase one awards can range between $50,000 and $250,000 for six months in the case of SBIR, or one year for STTR awards. Phase two awards are designed to build on phase one. They generally amount to $750,000 for two years. The final award, phase three, is interesting in that it brings no direct additional funding. Rather, it involves transition of the R&D into products, processes or services that can be bought and used by the federal government. Some surprisingly large businesses got their start with SBIR funding. For an overview of the program, see sbir.gov.
New additions to the Known Exploited Vulnerabilities Catalog.
Dave Bittner: The U.S. Cybersecurity and Infrastructure Security Agency on Thursday added two new entries to its Known Exploited Vulnerabilities Catalog. Under Binding Operational Directive (BOD) 22-01, U.S. federal civilian executive agencies have until January 19, 2023, to check and fix their systems.
LockBit says they have a conscience.
Dave Bittner: And finally, the earlier mentioned LockBit operators claim they're not just big-time criminals - the kinds of gonifs who can mess up operations at a major port - but they're also selective, the crooks with a heart, and so they avoid hitting targets like hospitals. But what, you'll ask, about the ransomware attack against a major Toronto children's hospital? Well, they have an explanation and even an apology. BleepingComputer reports that the gang released, without charge, a decryptor for the ransomware used against SickKids - that is, the Toronto Hospital for Sick Children. The gang blamed an affiliate, stating, we formally apologize for the attack on SickKids and give back the decryptor for free. The partner who attacked this hospital violated our rules, is blocked and is no longer in our affiliate program. So OK, then. But before you gush with admiration at LockBit's social responsibility, consider - you'd think it wouldn't take a hermeneutical expert to be able to interpret an online name like SickKids as maybe something you'd want to put a no-fire area around.
Dave Bittner: Coming up after the break, Ben Yelin on the legal conundrum of AI-generated code. Our guest is Tanya Janca from She Hacks Purple with insights on API security. Stay with us.
Dave Bittner: Tanya Janca is director of developer relations and community at Bright Security, as well as the founder and CEO of We Hack Purple, an online learning community that revolves around teaching everyone to create secure software. I caught up with Tanya Janca for her insights on where we stand when it comes to API security.
Tanya Janca: A lot of software developers are making more APIs than ever before 'cause they've discovered it's a lot easier to maintain and make sure you have good uptime if you're doing, you know, a whole bunch of pieces rather than one gigantic monolithic application. But unfortunately, malicious actors seem to have really noticed this trend, and so they're focusing on attacking APIs more than ever before. And APIs used to be all sorts of things, right? Like, an API can be on your operating system, like on the host. It can be between computers over the internet. It could be just on a local LAN. There's all sorts of different ways that APIs work. But when people talk about them right now, most of the time what they're talking about is a web API or a web service - so an API that's available over the internet. And so there's lots of types of APIs, but that's the one, mostly, everyone's talking about.
Dave Bittner: And when it comes to vulnerabilities and the ways that the bad guys are coming at them, what are the types of things that you typically see?
Tanya Janca: Definitely, I am seeing a lot of brute-force attacks - basically, bots calling your API a zillion times, calling your API in every way they can think of, fuzzing your API - so figuring out how you're supposed to talk to it. And then it's like, how can I talk to it in a way that is not ideal for that poor, little API on the internet? So I'm seeing a lot of that - like, trying to overwhelm and break your way in. And then I'm also seeing a lot of all the same stuff that works on web apps, except for output encoding - or, I mean, cross-site scripting. So things that don't need a browser to do the attack - like cross-site scripting requires a browser. Every other thing - I'm seeing those attacks. So all sorts of injection attacks, like SQL injection, NoSQL injection, LDAP, etc. So, like, the things that we saw before that were problems are still all happening, plus a lot of bots and overwhelming of people.
Dave Bittner: Are there any common things that you see from the organizations that are being successful here in mitigating these sorts of things? Are there common elements that they do?
Tanya Janca: Definitely companies that, first of all, have an application security team - so that could be one or more people where their entire job is just dedicated to ensuring their organization is releasing more secure software. So that's one thing - like, having one or more people dedicated to that - like, that's their full-time job. The other side of that, which is what the AppSec people spend a ton of their time doing, is making sure that they're following - and by they, I mean the software developers, the DevOps team, the operations folks - following a secure system development lifecycle. So if you're doing waterfall or DevOps or agile - whatever methodology you want to follow to make software - just adding security touchpoints throughout the project and making sure that the thing actually happens.
Tanya Janca: So you don't just say we're going to do a secure code review - you actually ensure that it happens and the things that are found are remediated as per the project's requirements. And so companies that are doing one or both of those tend to have way better results because it's - how do I word this? - it's legitimate. So there's a policy that says you have to do these things. There's support from upper management. But if you just have a software developer who feels security is really important, but they have no authority to actually change anything - they don't have buy-in from the management levels - it's a lot harder to get anything done, Dave.
Dave Bittner: So what are your recommendations, then? I mean, for folks who want to do a better job with this - who want to wrap their arms around it - where are the good places to begin?
Tanya Janca: OK, so I feel that having some sort of secure system development lifecycle for APIs, web apps, IoT - whatever you're building - you know, if you need to gather requirements, have some security requirements. Even if it's just one security requirement to start, it's way better than zero. And then, when you're doing the design, there's a bunch of options you could do. You could do like a whiteboarding exercise, where you draw the architecture and discuss where there could be problems. You could do threat modeling. There's a lot of different things in the design phase that people often say there's no time for. I'm like, you spent six weeks designing it. You didn't have an hour to do a threat model? But it's about having the people on staff to do it. So if each phase has one security thing - just one - the thing you're going to publish will be a lot better. So I always start with that.
Tanya Janca: For APIs specifically, if possible, create some sort of standard or guideline for what the best practices are or the policy where you work. So for instance, if your API is going to be on the internet, I would love for it to be behind an API gateway so it can do authentication and authorization. And what I mean by that is, who are you, and should you even be here? Which sounds silly - but so if everything goes through this gateway, that means you can make sure you know who's connecting and they are who they say they are and that they should be even allowed to connect. And then turning on throttling and resource quotas, which is included in most API gateways, so that you don't have these huge cloud bills from bots just beating up, like, incessantly on your APIs. So that's one good starting point that I would put in a policy. So if it's public-facing, it's got to be behind this. These are the settings we need.
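The throttling and resource quotas Janca mentions are commonly implemented as a token bucket: each client gets a bucket that refills at a steady rate, and each request spends a token, so steady traffic passes while bursts get cut off. A minimal sketch, with illustrative numbers rather than any particular gateway's defaults:

```python
import time

class TokenBucket:
    """Illustrative per-client throttle: allow `rate` requests per second,
    with bursts up to `capacity`, as an API gateway quota might."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens = capacity
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer HTTP 429 Too Many Requests

# A burst of 25 calls against a 10-token bucket (clock frozen so no
# refill happens): only the first 10 get through.
bucket = TokenBucket(rate=5, capacity=10, now=lambda: 100.0)
results = [bucket.allow() for _ in range(25)]
print(results.count(True))  # 10
```

The refill-on-read design means the bucket needs no background timer; a real gateway keeps one bucket per API key, which is another reason everything should pass through the gateway where the client's identity is known.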
Tanya Janca: You know, if you need a license, the security team will set you up. You know, here's a little guideline we wrote about how to do it - and so if you can develop a policy or guideline or something so that the developers know what you want from the start. And so those are things I would put on there. I also tend to remind software developers about the security things you should do for web apps - they still almost all apply for APIs. So you don't have to do the output encoding, but every other thing we've got to do. So we still need you to do logging, monitoring and maybe even alerting on things that seem disconcerting. Oh, did someone try to log in 10 times in under a second? That feels like a bot, not like a person. And so going through those things - like, making sure you follow a secure coding guideline - anyway, I'll go on and on, Dave. I'm sorry. But I feel like if you can have some security steps in your SDLC and you can have guidance for the software developers about what you want to see, you will get way more of what you want in life.
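The "10 logins in under a second" signal Janca describes is a sliding-window rate check: keep recent event timestamps, drop the ones that have aged out, and alert when too many remain. A minimal sketch, with illustrative thresholds:

```python
from collections import deque

class BurstDetector:
    """Flag an account when more than `limit` attempts land within
    `window` seconds - the bot-not-person signal described above."""
    def __init__(self, limit=10, window=1.0):
        self.limit, self.window = limit, window
        self.times = deque()

    def record(self, timestamp):
        """Record a login attempt; return True if it trips the alert."""
        self.times.append(timestamp)
        # Drop attempts that have fallen out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.limit

detector = BurstDetector(limit=10, window=1.0)
# Eleven attempts in a tenth of a second: the 11th trips the alert.
alerts = [detector.record(i * 0.01) for i in range(11)]
print(alerts[-1])  # True
```

In production this per-account state would live in something shared like Redis, and the alert would feed the monitoring pipeline rather than block the request directly.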
Dave Bittner: That's Tanya Janca from Bright Security and We Hack Purple.
Dave Bittner: And joining me once again is Ben Yelin. He's from the University of Maryland Center for Health and Homeland Security and also my co-host over on the "Caveat" podcast. Welcome back, Ben.
Ben Yelin: Thank you for having me, Dave.
Dave Bittner: So, an article over on the IEEE Spectrum website - this is about a class-action suit that's being brought against GitHub Copilot and GitHub's parent company, Microsoft, over claims that these AI engines are basically pirating open-source software. What do you make of this, Ben?
Ben Yelin: So this is really fascinating. We have an issue here that I think is novel and extremely complicated. So Copilot, as probably most of our listeners would know, is an AI pair programmer for software developers and suggests code in real time. But the input is, at least as alleged here, copyrighted material. Somebody has actually developed that - the code that goes into the system that leads to Copilot spitting out suggested code. This is open-source software as well. So obviously, the vision of open source is that anybody can use it and access it. But there are individuals - and that's the nature of this lawsuit - who think that their own creative work in developing these lines of code is being used without attribution. And eventually, if somebody uses the output from Copilot to make a profit, that's going to be a violation of our intellectual property laws.
Ben Yelin: There's another side to this story, though, and I think that's best articulated by Kit Walsh, a staff attorney at the Electronic Frontier Foundation. And Kit argues that training Copilot on public repositories is fair use. Fair use allows for the analytical use of copyrighted work - so for academic purposes, for learning purposes. The question here is whether this counts as fair use under our intellectual property laws. What Kit is saying is that Copilot is ingesting code and creating associations in its own neural net about what tends to follow and appear in what contexts.
Dave Bittner: Right.
Ben Yelin: And that is sort of doing analytical - that's the equivalent of doing analytical work on somebody else's copyright-protected material.
Dave Bittner: Yeah.
Ben Yelin: Really, this could boil down to how much Copilot is reproducing from any given iota - any element of the training data that was used as input. And that's something that's somewhat metaphysical. We might not know exactly how much of the suggested code comes from a distinct piece of data that's somebody else's copyrighted work. So this is a really complicated issue. I'm not sure we're going to get a satisfying resolution for a long time, but I can understand why people who have poured their heart and mind into developing lines of code would be upset by it being used, potentially, to profit somebody else without attribution.
Dave Bittner: Yeah. It strikes me that, at the core of this, is whether or not an AI system can express creativity. And is it - if you're able to input things and it's able to come up with novel solutions based on inspiration from other people's work, to me, that's new work, as opposed to just cutting and pasting some lines of code. That seems pretty clear-cut to me.
Ben Yelin: Right.
Dave Bittner: If you find, you know, some code that you had put in your book about programming in whatever language, and the AI takes it and just pastes it in there and doesn't even change any of the variables, well, we've got an issue, here. But if the AI is inspired by the code you write - as you say, that's a lot fuzzier in my mind.
Ben Yelin: And can an AI even be inspired? Is that a thing?
Dave Bittner: Right.
Ben Yelin: Because, unlike us, you know - you used an example on "Caveat," where we talked about this as well, of going to an art museum, being inspired by Picasso or whomever and going home and coming up with your own painting inspired by his work, even though it's unattributed.
Dave Bittner: Right.
Ben Yelin: And that's a really interesting metaphor. But in that case, you're using your own creativity. You are using the contents of your own mind to turn the inspiration from somebody else into your own distinct creative work. And is that happening with artificial intelligence? It's a hard question to answer. Can a computer have creativity, or are they just digesting pieces of information and spitting them out algorithmically? It's something that I don't think is clearly answerable.
Dave Bittner: Well, I think we all need to go back and watch the "Star Trek: The Next Generation" episode, "Measure of a Man," where Lieutenant Commander Data is put on trial as to whether or not, as a computer, he has the rights of a human being. I think it's all pretty well laid out there.
Ben Yelin: Maybe you and I can turn that into, like...
Dave Bittner: (Laughter).
Ben Yelin: ...A one-act play, where we just do that scene, and we have attorneys on each side arguing the best arguments on behalf of their clients.
Dave Bittner: Yeah.
Ben Yelin: I sense that's a good creative work in our future.
Dave Bittner: Yeah. All right. Well, this one - more to come, for sure, as this develops. And I find it fascinating. Ben Yelin, thanks for joining us.
Ben Yelin: Thank you.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Liz Irvin, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Maria Varmazis, Ben Yelin, Nick Veliky, Gina Johnson, Milly Lardy, Bennett Moe, Catherine Murphy, Janene Daly, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, Simone Petrella, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.