Threats from within: lessons from the life sciences sector.
By Jason Cole, CyberWire staff writer.
Jul 18, 2023

Insider risk can destroy a project before it even makes it to market.


Code42 released its 2023 Annual Data Exposure Report for the Life Sciences sector today. The report highlights the persistent risk of insider threats, i.e., employees stealing information and giving it to competitors who can bring products to market faster. Code42 illustrates this with two recent lawsuits in which employees made off with trade secrets, in one case even starting their own company based on the research they had been conducting. “For companies in the Life Sciences, safeguarding sensitive data from unauthorized access is vital to maintain a competitive edge and ensure uninterrupted business operations. Failing to do so results in expensive litigation processes, reputational damage, and most importantly, lost business opportunities,” Code42 writes. 

Some sectors have more to lose to insider threats.

70% of respondents from the life sciences sector said data theft by insiders appears to be on the rise, but the report explains that life sciences is on the lower end of the sectors affected. According to the report, an average of 213 thefts occurred each month across all sectors. Business and professional services seems to be the hardest hit at 38 per month, with energy, oil, and gas coming in second at 28 incidents each month.

Regarding life sciences' relatively low number of events, Code42 writes, “While this is encouraging, 20 incidents per month represents nearly one incident per – a prevalent problem.”

And unfortunately absence of evidence isn’t necessarily evidence of absence. “It’s also important to remember that this number may not reflect the true extent of the problem, as respondents cite a lack of visibility as one of the biggest challenges when dealing with insider-driven data loss. This suggests that many more insider-driven data leaks fly under the radar. This is especially concerning for businesses in the Life Sciences sector, whose sensitive data directly affects core operations. Survey respondents cite research data as the most valuable data type (69%), followed by product roadmaps (60%) and source code (52%). These data types are key to allowing Life Sciences businesses to maintain a competitive edge, and losing any of them would certainly mean consequences for future business prospects.”

What common insider-risk management practices are in use now? 

CyberWire had the opportunity to pose several questions to Code42 regarding this report and the strategies a company could take to mitigate insider threats. When asked whether it's standard practice for a company to require explicit permission when an employee wants to remove sensitive information or code from a network, Nathan Hunstad, Deputy CISO at Code42, said, “When it comes to company-owned data, it's normal for a company to require explicit permission before removing data such as source code from the environment. That's not to say it's always prohibited. For example, by working collaboratively and evaluating whether a certain work product includes sensitive data, it's entirely possible to let people take samples of that work product. However, unless you know what employees want to take, you can't make that determination, and unfortunately not all employees are forthcoming.” 

Can a company be secure and flexible enough to keep up in the marketplace?

Fair enough: it can be easy to assume that every employee has the best intentions when requesting something. After all, at scale, it's a very small number of employees who commit acts like stealing trade secrets. In the US government, employees and service members are indoctrinated with the sense that security is the number-one priority and every worker's responsibility. If you notice that a file is missing or that a colleague is inserting a jump drive into a device, it's your responsibility to inform the proper security channels. Most of the time this works: given the nature of the secrets being kept, most employees understand the dangers of leaks and spies. The question is, can a company incorporate adequate security strategies while maintaining its flexibility?

Hunstad thought this entirely feasible, but cautioned that there isn’t a one-size-fits-all approach.

“A successful Insider Risk program requires response controls that automate resolution of everyday mistakes, block the unacceptable, and allow security teams to easily investigate what’s unusual. Traditional, restrictive security policies put a strain on security teams’ already scarce resources and slow down business operations. They also create rogue employees who find workaround solutions to get their work done.”

He saw some approaches as dead ends. “Content-based blocking as a default response to data risk doesn’t work. For example, it can accidentally block employees from completing their work, while also slowing down employees’ computers. Instead, solutions that block data exfiltration based on specific users (e.g. high risk users such as repeat offenders) and destinations, such as personal cloud storage accounts, are better for business.” 
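The contrast Hunstad draws can be sketched in a few lines of code. This is a hypothetical illustration of user- and destination-based blocking, not Code42's actual implementation; the user lists, destination names, and the `should_block` function are all assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative policy data: which destinations are sanctioned, and which
# users the security team has flagged as high risk (e.g., repeat offenders
# or departing employees). In practice these would come from a risk engine.
SANCTIONED_DESTINATIONS = {"corp-sharepoint", "corp-git"}
HIGH_RISK_USERS = {"departing_employee", "repeat_offender"}

@dataclass
class Transfer:
    user: str
    destination: str  # e.g., "personal-cloud" or "corp-sharepoint"

def should_block(t: Transfer) -> bool:
    """Block only when a high-risk user targets an unsanctioned destination,
    rather than inspecting file contents for every user."""
    return t.user in HIGH_RISK_USERS and t.destination not in SANCTIONED_DESTINATIONS

# A flagged user syncing to personal cloud storage is blocked; everyone
# else, and even flagged users moving data to sanctioned destinations,
# keeps working uninterrupted.
```

The point of the design is that ordinary collaboration is never blocked, which is what distinguishes this approach from the content-based blocking Hunstad calls a dead end.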

Instead, Hunstad said, “We recommend an approach that prevents data exfiltration by watching all data, all vectors and all users, and creating consequences for people who exfiltrate data. This approach assumes positive intent, lets users collaborate, and ensures security teams are focused on actual cases of risk, not on normal business. Furthermore, a good insider threat program is transparent about the data monitoring so that users know what is expected of them.”

What can a company do to secure its information internally?

What then can a company do to effectively mitigate risk? Hunstad suggested, “Companies should monitor data movement, institute clear security policies and implement employee training. First, set expectations internally. Clearly communicate security policies with your employees and contractors. By aligning on what is and what’s not acceptable when sharing data, you can hold users accountable.”
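Monitoring data movement can start as something quite simple: counting movement events per user over the exfiltration vectors the organization cares about. The vector names and threshold here are assumptions for illustration, not any vendor's schema.

```python
from collections import Counter

# Hypothetical set of exfiltration vectors worth watching; internal
# corporate shares are deliberately excluded so normal work isn't flagged.
MONITORED_VECTORS = {"usb", "personal_email", "cloud_sync", "airdrop"}

def movement_counts(events):
    """Tally data-movement events per user, counting only monitored vectors.

    `events` is an iterable of (user, vector) pairs, e.g. from file
    activity logs. Movement to unmonitored (internal) vectors is ignored.
    """
    counts = Counter()
    for user, vector in events:
        if vector in MONITORED_VECTORS:
            counts[user] += 1
    return counts

events = [("bob", "usb"), ("bob", "cloud_sync"), ("alice", "corp_share")]
movement_counts(events)  # bob: 2 monitored events; alice: none
```

A real deployment would feed counts like these into the kind of real-time feedback Hunstad describes next, rather than punishing users outright.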

Accountability doesn’t mean reflexive blame or punishment, it’s worth noting. “If a user’s behavior gets risky,” Hunstad said, “provide real time feedback to help improve their security habits. This helps hold employees accountable and reminds them to follow best practices, which ultimately changes behavior over time.” 

None of these practices will completely eliminate insider risk: 

“Even with training, insider threat data risks are inevitable. When they happen, the key is to minimize the damage by revoking or reducing access on a user level if necessary. Then you can investigate and determine the best course of action to remediate. 

“Blocking can be seen as a quick fix but can be ineffective if applied too broadly or indiscriminately. It’s more effective to block data movement in specific cases, like blocking data exfiltration from your riskiest users (e.g., departing employees or contractors). Preventing your riskiest users from sharing data to unsanctioned destinations is crucial. Blocking certain activities from those users allows the rest of your organization to work collaboratively without hindering productivity.”