
From COVID-19 treatment to academic studies, keeping research secure is more important than ever. The ResearchSOC at Indiana University intends to help.

Image: dropper with chemicals near a microscope (Source: Megaflop via Adobe Stock)

Industrial espionage has been around since long before companies needed to protect the identity of 13 herbs and spices or the cola recipe locked in an Atlanta pharmacist's safe. And scientists, being human after all, have sought to know what their "colleagues" are working on since Archimedes experimented with levers and fulcrums.

Today, however, the stakes and activity levels seem higher than ever, leading to serious interest in how to secure research and the instruments of experimentation.

Recent examples underscore why. In the early days of the coronavirus pandemic, for one, the US government accused China of spying on vaccine research. And incidents like the ransomware attack on the University of California at San Francisco, which was working to develop a COVID-19 treatment or vaccine, show that attackers are having a serious impact on some research institutions.

The question for cybersecurity professionals is how they can help keep their organizations' research capabilities from joining the list of victims.

"Right now, any research related to a cure for COVID-19 is the primary target for threat actors," says Hank Schless, senior manager, security solutions at Lookout.

The research for that critical topic shares qualities with research activities across the board, according to experts.

"When you're talking about research, and when you're crossing different types of entities, whether it's industry, academia, or others, you do have to have a baseline level of security," says Kiersten Todt, managing director of The Cyber Readiness Institute.

And those baselines can define research-oriented security in a number of ways.

"When it comes to research, there has to be best practices so that everyone knows where the baseline is and also where the ceiling is," Todt explains.

An Academic Approach
Defining the baseline and ceiling for research security is part of what the ResearchSOC at Indiana University intends to do. 

"The goal of the research is to provide scientific projects with the cybersecurity services that they really need in these modern times but are challenged to provide themselves," says Von Welch, principal investigator for the ResearchSOC project.

One of the reasons for the challenge is that most research projects are not staffed by cybersecurity experts -- they're made up of scientists. And the majority of research teams are small.

"Very few of these projects are on the scale of something like CERN [the European Organization for Nuclear Research], where they can have a dedicated computer security team," Welch says.

He explains that the ResearchSOC builds on the activity of OmniSOC, a collaboration of Big Ten schools. OmniSOC is a full production SOC that supports research projects across member universities, with activities ranging from scanning device logs to full security consulting on platform and architecture issues, Welch adds.

Support from a SOC that specializes in research is important, Welch says, because the needs and architectures of research projects can differ significantly from those supported by most enterprise SOCs.

"You see a lot of command-and-control or data infrastructure for controlling the test equipment," he says. "You see a lot of high-performance, high-end sort of unicorn infrastructure for processing data. And you tend to see a lot of collaboration. A lot of the projects we deal with are national, even global in scope in terms of their collaboration."

Beyond the extent of collaboration, other issues make securing research projects a challenge.

"In most situations, people equate data security to preventing unauthorized access. But one of the most insidious threats to data is its integrity," says Mounir Hahad, head of Juniper Threat Labs at Juniper Networks. "Therefore, having a DLP [data loss prevention] solution in place, as good as it may be, is not enough. You have to ensure no one with malicious intent is able to tamper with the data and make ever-so-slight modifications that the results are no longer trusted or lead to the wrong conclusion."

But even within the issue of data integrity, many roads lead back to the challenges posed by collaboration.

"The old stereotype of a scientist sitting alone in a room and making a discovery is just no longer true," Welch says. "It's massive collaboration, whether it's the Higgs boson [particle] or a gravity wave. "You know, these are now global collaborations with thousands of people."

And that makes the old defense model of a "walled garden" adequately protected by a well-engineered firewall no longer viable either.


A Research Supply Chain
Welch describes the modern scientific process as one in which one group of researchers focuses on gathering data and others focus on developing algorithms for analyzing the data. Still other groups might then take the data and algorithms and develop software to run on high-performance computing platforms, generating information that informs conclusions and papers.

"It's a very dynamic set of scenarios that sort of makes it tough to concretely lock down who should have access at any one time. You end up having a lot of distributed autonomy and a lot of trust relationships," he says.

The distributed nature of the research is reflected in formal papers that can now list hundreds of authors, Welch says. But enterprise security professionals must recognize that each relationship, each point at which data passes from one researcher or research entity to another, creates another attack surface that can be exploited.
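One way such a handoff can be hardened, as a rough sketch rather than a prescribed ResearchSOC practice, is for the producing group to sign a manifest of what it ships and for the receiving group to verify that signature before analysis begins. The example below assumes the third-party cryptography package; the file names and key setup are hypothetical.

# Minimal sketch: the producing group signs a manifest of the files it hands
# off; the receiving group verifies it before analysis. Assumes the
# `cryptography` package (pip install cryptography); file names are hypothetical.
import hashlib
import json
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_manifest(files: list[Path]) -> bytes:
    """Serialize a file-name -> SHA-256 map of everything being handed off."""
    entries = {f.name: hashlib.sha256(f.read_bytes()).hexdigest() for f in files}
    return json.dumps(entries, sort_keys=True).encode()

# Producing group: in practice the key pair is long-lived and the public key is
# shared with collaborators out of band; it is generated here only for the demo.
Path("raw_run_001.dat").write_bytes(b"example detector readings")
producer_key = Ed25519PrivateKey.generate()
manifest = build_manifest([Path("raw_run_001.dat")])
signature = producer_key.sign(manifest)

# Receiving group: verify the manifest before trusting or analyzing the files.
try:
    producer_key.public_key().verify(signature, manifest)
    print("Manifest verified; proceed with analysis.")
except InvalidSignature:
    print("Manifest does not verify; do not trust the handed-off data.")

On receipt, each file would also be re-hashed and compared against the verified manifest, so that every trust relationship in the chain is checked rather than assumed.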

Todt has questions about how best practices can be developed for research: "The question would be, can this sector come together to create secure platforms? Do companies need to do it by themselves? And is there a role for the United States government to work [collaboratively] to create and improve the security of those platforms?"

Work has begun in that direction, Welch says.

"There certainly have been a couple of framework developments in the EU largely built around the CERN community," he explains. "They have pulled together some some best practices here in the US. We also have some cybersecurity program guides that are actually working on building a framework.

But the very nature of science, in which different types of research vary widely from one another, means the frameworks have to differ, too, with specific protection mechanisms developed for each research project.

The final challenge is making the principal investigator (PI) of each project -- a person with enormous responsibility and autonomy in the research world -- understand the importance of security.

"Principal investigators, just like CEOs or other C-suite level folks, typically don't come from a cybersecurity background," Welch says. "They're astronomers or physicists or other domain scientists. And their No. 1 goal is getting their science mission done, just like a CEO is trying to run their bank or get their startup launched or whatnot."

The key, he says, is language.

"If you go in there and tell them how this particular cybersecurity issue is a risk to their science mission, and you're able to communicate that so they understand how it relates to their scientific mission, now you'll start getting somewhere," Welch says. "You have to be able to have that communication with your principal investigator as you would with any leadership team, and [you have to] translate between these cybersecurity concerns and the risks to the mission."

 

About the Author(s)

Curtis Franklin, Principal Analyst, Omdia

Curtis Franklin Jr. is Principal Analyst at Omdia, focusing on enterprise security management. Previously, he was senior editor of Dark Reading, editor of Light Reading's Security Now, and executive editor, technology, at InformationWeek, where he was also executive producer of InformationWeek's online radio and podcast episodes.

Curtis has been writing about technologies and products in computing and networking since the early 1980s. He has been on staff and contributed to technology-industry publications including BYTE, ComputerWorld, CEO, Enterprise Efficiency, ChannelWeb, Network Computing, InfoWorld, PCWorld, Dark Reading, and ITWorld.com on subjects ranging from mobile enterprise computing to enterprise security and wireless networking.

Curtis is the author of thousands of articles, the co-author of five books, and has been a frequent speaker at computer and networking industry conferences across North America and Europe. His most recent books, Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center, and Securing the Cloud: Security Strategies for the Ubiquitous Data Center, with co-author Brian Chee, are published by Taylor and Francis.

When he's not writing, Curtis is a painter, photographer, cook, and multi-instrumentalist musician. He is active in running, amateur radio (KG4GWA), the MakerFX maker space in Orlando, FL, and is a certified Florida Master Naturalist.
