At a glance.
- Google Play boots apps infected with data-harvesting code.
- Report: Fox News database left unprotected.
- NIST on the history of online privacy.
Apps found to be secretly harvesting user data ejected from Google Play.
Bleeping Computer reports that Google has pulled dozens of apps from the Google Play store after discovering they contained data-harvesting software. The malicious code, embedded in a third-party software development kit (SDK), was detected on millions of Android devices, and the affected applications, which include several Muslim prayer apps, a highway-speed-trap detection app, and a QR-code reading app, were installed more than 45 million times. The SDK collected clipboard content, GPS data, email addresses, phone numbers, and details of the user’s modem router, MAC address, and network SSID. The Wall Street Journal explains that the SDK, called Coelib, was developed by a Panamanian company, Measurement Systems S. de R.L., which is connected to a Virginia defense contractor that performs cyberintelligence, network-defense, and intelligence-intercept work for US national security. App developers say Measurement Systems paid them to incorporate Coelib into their apps so the company could quietly collect user data. Although Google removed the affected apps in October, users who downloaded them beforehand could still have the SDK running on their devices. Serge Egelman, co-founder of AppCensus, the mobile app security and privacy research company that released a report on the malicious code on Wednesday, says that third-party SDKs like these are often not properly analyzed or vetted. “This saga continues to underscore the importance of not accepting candy from strangers,” Egelman added.
Report: Fox News database left unprotected.
Researchers at Website Planet report that they have discovered an unsecured cloud database that appears to belong to the American cable network Fox News. The 12,976,279 records, totaling 58 GB, contained personal employee data such as emails, ID numbers, and usernames, as well as channel content information, including 65,000 names of celebrities, cast, and crew members. The database, which was neither password-protected nor encrypted, also included software and hardware details such as event logs, IP addresses, and device data, along with details indicating where data is stored, content delivery paths, and a virtual blueprint of the network’s backend operations. The researchers say there is no indication that the database contained only testing information, and they were able to validate that the names belonged to real individuals. Upon detection, Website Planet immediately notified Fox, and public access to the database was restricted soon after. The Fox News security team says that the database is a development environment and is therefore not connected to production. They added, “As part of our investigation, we are reviewing logs to determine any anonymous access to the database.”
James McQuiggan, security awareness advocate at KnowBe4, warns that there need to be rules, policies, and techniques that protect data from insiders, too:
“Two of the Open Web Applications Security Project (OWASP) recommendations focus on preventing unauthorized access to the data and applications. When organizations, contractors and third-party suppliers work on data that contains personally identifiable information, they must have policies, procedures and audits requiring password protection and data encryption. These steps ensure that any accidental exposure to the internet will reduce the risk of unauthorized access of data or having it stolen.
“Whenever organizations upload data to be accessible via the cloud, all data must be secured and restricted to authorized users to reduce the risk of a sensitive data leak.
“With proper and robust security education and training, developers can understand and implement adequate access and identity management controls, which support the organization's policies to protect all uploaded data.”
Willy Leichter, CMO of LogicHub, argues that developers are often overconfident about security:
“Unfortunately, we’ve seen this movie play out many times before. Developers are notorious for thinking that security rules don’t apply to them, or that their processes are somehow isolated from hacking. Using real or realistic data at scale is an important test for most systems before they go live. But this is where we see developers get careless, or simply disregard security best practices. The almost 13 million records exposed could have fit on a single USB stick, and the data was likely shared by multiple developers – who probably felt password protection was a hassle.
“We also don’t know whether the data was actually stolen but should assume it was. Research has shown that a new, unprotected server spun up on AWS will be scanned by hackers in less than 10 minutes. If a researcher found this database unprotected, we should assume that the army of hackers has already found and exploited it.
“While this kind of negligence is common, and probably accidental, it’s also inexcusable, and usually indicates poor security controls in the organization responsible for the data. But until we have serious penalties for this type of accidental breach, we’ll see this again, and again…”
Erfan Shadabi, cybersecurity expert with comforte AG, thinks breaches are often traceable to simple error, not malicious intent:
“A large number of incidents and breaches can be traced back not to aggressive attacks, but rather to simple technical or human error. In this incident, a configuration error exposed millions of internal records, including employees’ personally identifiable information (PII). Enterprises should take heed of this very common situation and invest in more effective data protection methods that are readily available in the marketplace, including data-centric technologies such as tokenization and format-preserving encryption. These measures guard the data itself instead of the environment around it by replacing sensitive information with representational and innocuous tokens. This data-centric protection travels with the data, so even if data is exposed due to technical or human error, it will be worthless, thereby averting the worst repercussions.”
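Shadabi's data-centric approach can be illustrated with a minimal sketch. The class and names below are hypothetical, and the in-memory dictionary stands in for the hardened, access-controlled token vault a real tokenization product would use; the point is only that a record exposed by misconfiguration carries tokens, not the underlying values.

```python
import secrets


class SimpleTokenizer:
    """Toy vault-style tokenizer: swaps each sensitive value for a
    random token and keeps the mapping in a protected store."""

    def __init__(self):
        # In a real system this vault would be encrypted and
        # access-controlled, never co-located with the data it protects.
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]


tok = SimpleTokenizer()

# A leaked record now reveals only opaque tokens.
record = {
    "email": tok.tokenize("jane.doe@example.com"),
    "id_number": tok.tokenize("EMP-48213"),
}
```

Without access to the vault, an attacker who obtains `record` learns nothing about the original email or ID, which is the property Shadabi describes as protection that "travels with the data."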
NIST looks back at the history of privacy.
The National Institute of Standards and Technology (NIST) is marking its 50th anniversary in cybersecurity, and in honor of this milestone, NIST’s April blog post looks at how the institute has dealt with data privacy over the last five decades. NIST notes that privacy came to the forefront in the 1960s as computers were increasingly used to store personal data. In 1973, the Department of Health, Education, and Welfare recommended establishing a federal “Code of Fair Information Practice” for all automated personal data systems, which resulted in the Privacy Act of 1974 and NIST’s accompanying Computer Security Guidelines for Implementing the Privacy Act of 1974. This in turn led NIST to focus on standards for cryptography and encryption, which were acknowledged as essential for ensuring data confidentiality, protection from unauthorized modification, and source authentication. In 2010, NIST issued its Guide to Protecting the Confidentiality of Personally Identifiable Information, and in 2014 it recognized privacy as an independent discipline. Looking to the future, the institute is focused on bolstering the privacy workforce: NIST’s Privacy Engineering Program recently launched the Privacy Workforce Public Working Group to define the tasks, knowledge, and skills most essential for privacy professionals.