TSA No-Fly List Snafu Highlights Risk of Keeping Sensitive Data in Dev Environments
A Swiss hacker poking around in an unprotected Jenkins development server belonging to CommuteAir accessed the names and birthdates of some 1.5 million people on a TSA no-fly list from 2019.
January 23, 2023
A recent incident in which a bored hacker found a copy of the TSA's no-fly list, containing some 1.5 million entries, sitting unprotected on an Internet-exposed server has once again highlighted the risky practice of using production data and sensitive information in development environments.
Swiss hacker "maia arson crimew" recently discovered the TSA list on a Jenkins open source automation server belonging to CommuteAir, an Ohio-based airline company that supports United Airlines operations on regional flights. In comments to Daily Dot — the first to report on the incident — she said she found the no-fly list while searching for Internet-exposed Jenkins servers using the Shodan search engine, and notified the company of the issue.
Openly Accessible
The list — housed in a text file named "NoFly.csv" — contained the names and dates of birth of more than 1.5 million individuals whom the US government has barred from flying because of security concerns. The TSA makes the list available to airlines around the world so they can screen passengers intending to fly from, to, or over the US.
Daily Dot quoted maia arson crimew — who describes herself as a "security researcher" — as saying she had also found credentials to some 40 Amazon S3 buckets belonging to CommuteAir on the same server. One of those credentials led her to a database containing sensitive information — such as passport numbers, phone numbers, and postal addresses — belonging to some 900 CommuteAir employees.
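Hard-coded cloud keys on a build server are exactly the kind of secondary find that turns a single exposed host into a broader breach. As a rough illustration, a responder (or an attacker) can check what a discovered credential reaches with a few lines of boto3; the key values below are placeholders, not anything recovered from this incident.

```python
# Minimal sketch: auditing what a set of AWS credentials can reach, the kind
# of check a responder might run after finding keys on a build server.
# The key values below are placeholders, not the leaked CommuteAir keys.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)

# Identify which principal the credentials belong to.
identity = session.client("sts").get_caller_identity()
print("Credential belongs to:", identity["Arn"])

# List the S3 buckets the credentials can see.
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("Accessible bucket:", bucket["Name"])
```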
Erik Kane, corporate communications manager at CommuteAir, described the leak in a media statement as the result of a misconfigured development server.
"The researcher accessed files including an outdated 2019 version of the federal no-fly list that included first and last name and date of birth," Kane said. "Additionally, through information found on the server, the researcher discovered access to a database containing personal identifiable information of CommuteAir employees."
An initial investigation has shown that no other customer data was exposed, Kane noted, adding that CommuteAir took the affected server offline immediately after the hacker informed the company of the issue.
"CommuteAir has reported the data exposure to the Cybersecurity and Infrastructure Security Agency, and also notified its employees," the statement read.
Using Sensitive Data for Test & Development
The incident is an example of the things that can go wrong when organizations permit the use of production data, or sensitive information, in development and testing environments — in this case, a Jenkins server.
Quality assurance teams and developers often copy raw production data into their environments when testing, developing, or staging apps, because doing so is less expensive, faster, and more representative than generating synthetic test data. However, security experts have long warned that the practice is fraught with risk, because development and test environments typically lack the security controls present in a live, production setting.
Common issues include over-permissions, lack of network segmentation, poor patch management, and a general lack of awareness of data-privacy requirements.
Concerns over the security, privacy, and compliance issues related to the practice have pushed many organizations to take additional precautions, such as masking, obfuscating, or encrypting sensitive and live production data before using it for testing or development. Many simply use dummy data as a stand-in for the real thing when testing or staging software. Even so, security experts say the use of raw production data and sensitive information in development and test settings remains rampant.
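Masking can be as simple as tokenizing identifier columns while production rows are copied into a test environment. The sketch below shows one such approach under assumed column names and a hypothetical salt; real pipelines typically rely on purpose-built masking tools and rotate salts on every refresh.

```python
# Minimal sketch of one masking approach: pseudonymize direct identifiers
# before production rows are copied into a test environment. The file names,
# column names, and salt handling are assumptions for illustration only.
import csv
import hashlib

SALT = b"rotate-me-per-refresh"  # keep the salt out of the test environment

def mask(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

SENSITIVE_COLUMNS = {"first_name", "last_name", "passport_number", "phone"}

with open("employees_prod.csv", newline="") as src, \
     open("employees_test.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow(
            {k: mask(v) if k in SENSITIVE_COLUMNS else v for k, v in row.items()}
        )
```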
"Sadly, testing with production data is very common," says John Bambenek, principal threat hunter at Netenrich. "Quite simply, creating credible and at-scale test data is often complicated especially when dealing with unique data sets." Testing teams often want their tests to represent the real world as much as possible, so it's a very natural temptation to use production data, he says.
Patrick Tiquet, vice president of security and architecture at Keeper Security, says that DevOps servers tend to be prime targets for attackers because they are usually less protected than live, production servers. He advocates that organizations avoid using production data in non-production environments, no matter how benign the data might appear.
"The use of production data in development systems increases the risk of disclosure of that information, because in many organizations, development systems may not be as protected as their production systems," Tiquet says.
Organizations that permit the practice need to recognize that many data-privacy regulations require covered entities to apply specific controls for protecting sensitive data, regardless of where it exists in the environment or how it is used. Using production data in a development environment could violate those requirements, Tiquet says.
"Exposing sensitive data can not only open an organization to litigation or government-related trouble depending on the data, but it can also lead to an erosion of customer trust," he warns. "While there are many steps organizations can take to protect their test environments, such as data masking and encryption, the most important will be including the security teams in the setup and continuous management of DevOps servers."