Security Is a Second-Class Citizen in High-Performance Computing
Vendors and operators try to balance performance and security, but for now, performance wins out.
SUPERCOMPUTING 2022 — How do you keep the bad guys out of some of the world's fastest computers that store some of the most sensitive data?
That was a growing concern at last month's Supercomputing 2022 conference. Achieving the fastest system performance was a hot topic, as it is every year. But the pursuit of speed has come at the cost of securing some of these systems, which run critical workloads in science, weather modeling, economic forecasting, and national security.
Implementing security, whether in hardware or software, typically carries a performance penalty that slows overall system throughput and the output of computations. The push for more horsepower in supercomputing has made security an afterthought.
"For the most part, it's about high-performance computing. And sometimes some of these security mechanisms will reduce your performance because you are doing some checks and balances," says Jeff McVeigh, vice president and general manager of Super Compute Group at Intel.
"There's also a 'I want to make sure I'm getting the best possible performance, and if I can put in other mechanisms to control how this is being securely executed, I'll do that,'" McVeigh says.
Security Needs Incentivizing
Performance and data security are in constant tension between the vendors selling high-performance systems and the operators running the installations.
"Many vendors are reluctant to make these changes if the change negatively impacts the system performance," said Yang Guo, a computer scientist at the National Institutes for Standards and Technology (NIST), during a panel session at Supercomputing 2022.
The lack of enthusiasm for securing high-performance computing systems has prompted the US government to step in, with NIST creating a working group to address the issue. Guo is leading the NIST HPC Working Group, which focuses on developing guidelines, blueprints, and safeguards for system and data security.
The HPC Working Group was created in January 2016 based on then-President Barack Obama's Executive Order 13702, which launched the National Strategic Computing Initiative. The group's activity picked up after a spate of attacks on supercomputers in Europe, some of which were involved in COVID-19 research.
HPC Security Is Complicated
Security in high-performance computing is not as simple as installing antivirus software and scanning emails, Guo said.
High-performance computers are shared resources, with researchers booking time and connecting to systems to run calculations and simulations. Security requirements vary with HPC architecture: some installations may prioritize access control, others hardware such as storage, faster CPUs, or more memory for calculations. The top priorities are securing containers and sanitizing the compute nodes assigned to each project, Guo said.
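Guo did not prescribe an implementation, but the sanitization step is easy to make concrete. The following is a minimal, hypothetical sketch of a between-jobs cleanup that a scheduler could run as an epilog on each compute node; the scratch path, job naming, and single-user-per-node assumption are illustrative, not anything NIST specifies.

```python
import os
import shutil
import subprocess

SCRATCH_ROOT = "/scratch"  # hypothetical per-job scratch location


def sanitize_node(job_id: str, job_user: str) -> None:
    """Reset a compute node after a job: kill stray processes, wipe scratch."""
    # Kill any processes the departing user left running (assumes the
    # user has no other legitimate jobs on this node).
    subprocess.run(["pkill", "-u", job_user], check=False)

    # Remove the job's scratch directory so the next project cannot
    # read the previous project's intermediate data.
    job_scratch = os.path.join(SCRATCH_ROOT, job_id)
    if os.path.isdir(job_scratch):
        shutil.rmtree(job_scratch, ignore_errors=True)


if __name__ == "__main__":
    sanitize_node(job_id="123456", job_user="researcher1")
```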
Government agencies dealing in top-secret data take a Fort Knox-style approach to securing systems, cutting off regular network and wireless access. The "air-gapped" approach helps ensure that malware does not invade the system, and that only authorized users with clearance have access to such systems.
Universities also host supercomputers, which are accessible to students and academics conducting scientific research. Administrators of these systems in many cases have limited control over security, which is managed by system vendors who want bragging rights for building the world's fastest computers.
When you place management of the systems in the hands of vendors, they will prioritize guaranteeing certain performance capabilities, said Rickey Gregg, cybersecurity program manager at the US Department of Defense's High Performance Computing Modernization Program, during the panel.
"One of the things that I was educated on many years ago was that the more money we spend on security, the less money we have for performance. We are trying to make sure that we have this balance," Gregg said.
During a question-and-answer session following the panel, some system administrators expressed frustration at vendor contracts that prioritize performance and deprioritize security. The administrators said that implementing homegrown security technologies would amount to a breach of contract with the vendor, which left their systems exposed.
Some panelists said that contracts could be tweaked with language in which vendors hand over security to on-site staff after a certain period of time.
Different Approaches to Security
The SC show floor hosted government agencies, universities, and vendors talking about supercomputing. Conversations about security took place mostly behind closed doors, but the range of installations on display offered a bird's-eye view of the various approaches to securing systems.
At the booth of the University of Texas at Austin's Texas Advanced Computing Center (TACC), which hosts multiple supercomputers on the TOP500 list of the world's fastest systems, the focus was on performance and software. TACC supercomputers are scanned regularly, and the center has tools in place to prevent intrusions and two-factor authentication to verify legitimate users, representatives said.
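TACC representatives did not describe the mechanism, but most two-factor deployments rest on time-based one-time passwords (TOTP). Here is a minimal sketch of that protocol using the open source pyotp library; the secret provisioning is a placeholder, not TACC's actual setup.

```python
import pyotp

# Per-user secret, provisioned once (for example, via a QR code scanned
# into an authenticator app). This value is a placeholder.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app and the server derive the same six-digit code
# from the shared secret and the current 30-second time step.
print("current code:", totp.now())

# At login, the server verifies the submitted code, allowing one time
# step of clock drift between client and server.
submitted = input("code: ")
print("accepted" if totp.verify(submitted, valid_window=1) else "rejected")
```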
The Department of Defense has more of a "walled garden" approach, with users, workloads, and supercomputing resources segmented into a DMZ-style border area with heavy protections and monitoring of all communications.
The Massachusetts Institute of Technology (MIT) is taking a zero-trust approach to system security by getting rid of root access. Instead, it uses the sudo command to grant root privileges to HPC engineers, which leaves a trail of the activities engineers undertake on the system, said Albert Reuther, senior staff member in the MIT Lincoln Laboratory Supercomputing Center, during the panel discussion.
"What we're really after is that auditing of who is at the keyboard, who was that person," Reuther said.
Improving Security at the Vendor Level
The general approach to high-performance computing has not changed in decades, with a heavy reliance on giant on-site installations with interconnected racks. That is in sharp contrast to the commercial computing market, which is moving offsite and to the cloud. Participants at the show expressed concerns about data security once it leaves on-premises systems.
AWS is trying to modernize HPC by bringing it to the cloud, which can scale performance up on demand while maintaining a higher level of security. In November, the company introduced HPC7g, a set of cloud instances for high-performance computing on Elastic Compute Cloud (EC2). EC2 employs a dedicated controller called Nitro V5 that provides a confidential computing layer to protect data at rest, in use, and in transit.
"We use various hardware additions to typical platforms to manage things like security, access controls, network encapsulation, and encryption," said Lowell Wofford, AWS principal specialist solution architect for high performance computing, during the panel. He added that hardware techniques provide both the security and bare-metal performance in virtual machines.
Intel is building confidential computing features such as Software Guard Extensions (SGX), a protected enclave for program execution, into its fastest server chips. According to Intel's McVeigh, a lackadaisical approach by operators is prompting the chip maker to jump ahead in securing high-performance systems.
"I remember when security wasn't important in Windows. And then they realized 'If we make this exposed and every time anyone does anything, they're going to worry about their credit card information being stolen,'" McVeigh said. "So there is a lot of effort there. I think the same things need to apply [in HPC]."