Meta Takes Offensive Posture With Privacy Red Team

Engineering manager Scott Tenaglia describes how Meta extended the security red team model to aggressively protect data privacy.

Jeffrey Schwartz, Contributing Writer

August 23, 2022

5 Min Read
Scott Tenaglia, a brown-haired man wearing glasses and a suit, no tie, stands at a podium with Black Hat 2022 branding
Source: Jeffrey Schwartz via Dark Reading

As several countries and US states prepare to enact new data privacy regulations next year, some companies have begun borrowing from an approach familiar to cybersecurity teams: that the best defense is a strong offense. While security red teams made up of friendly hackers who engage in persistent penetration testing are common, some organizations are now applying that same concept to data privacy.

Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp, is among those that formed a privacy red team. Scott Tenaglia, the engineering manager for Meta's privacy red team, described how and why Meta created it during a session at the Black Hat USA 2022 conference in Las Vegas.

In the session "Better Privacy Through Offense: How to Build a Privacy Red Team," Tenaglia emphasized that his presentation focused on the lessons learned from forming a privacy red team. But Meta, for years under scrutiny over how it handles user data, now emphasizes data protection. Earlier this year the company revamped its privacy policy, which it had previously referred to as its Data Policy.

"As privacy and data protection regulations have improved around the world in recent years, we've explored ideas in people-centered privacy design and have worked to make our data practices more transparent," said Meta's chief privacy officer Michel Protti in a recent blog post.

Companies are beginning to create privacy red teams modeled after security red teams, says Orson Lucas, a principal adviser for KPMG's cybersecurity services.

"We're having some early conversations about that," Lucas tells Dark Reading. "It's starting to come up as a point of conversation as a supplemental service, and it's still fairly early and exploratory, but it's something we very much see as an opportunity."

How Privacy Red Teams Diverge

Privacy red teams apply the same mindset as their security counterparts, but instead of trying to breach their own networks, they attempt to steal confidential user data. Tenaglia acknowledged some overlap between security and privacy red teams but said there are more differences than similarities. For instance, he described the impact of attacking an Internet-connected crock pot.

"The security vulnerability might have been pretty vanilla, but the privacy impact of that vulnerability was super high," Tenaglia said. With a privacy compromise, the attacker can access a victim's location, photos, and other sensitive personal information.

"That's really a privacy impact," he said.

Similarly, the focus of a security red team includes reducing the attack surface. The team engages in reconnaissance, such as scanning its own firewalls, and if it can't break into the network, that suggests there are no exploitable vulnerabilities, Tenaglia noted. If a member of the security red team does get through the firewall, however, that signals the need to find the source of the vulnerabilities.

Meanwhile, a privacy red team focuses more on large-scale access to personal user information and data, Tenaglia said. A common way of mitigating that risk is rate limiting, which restricts how much data can be accessed within a given time window.

"Anytime you find access to sensitive data or important data, things you didn't think you should get access to, the first thing you're going to do is see how much more of it you can get. You're going to scale up that access. You're going to scrape," Tenaglia said. "If those rate limits are working well, then you probably aren't able to do much there. Otherwise, you probably need to make some tweaks to the rate limits."
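The rate limiting Tenaglia describes can be illustrated with a minimal sketch. The following is a generic sliding-window limiter, a common way to cap how many records a caller can fetch per time window; all names here are hypothetical and this is not Meta's implementation:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window rate limiter: each caller may make at most
    max_requests requests per window_seconds. Hypothetical sketch,
    not any specific platform's implementation."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        # Maps each caller ID to a queue of its recent request timestamps.
        self.history = defaultdict(deque)

    def allow(self, caller_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[caller_id]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            # Caller is requesting data faster than the limit allows.
            return False
        q.append(now)
        return True
```

A privacy red team exercise of the kind described above would probe whether such limits actually hold at scale, for example by spreading requests across many accounts or IP addresses and checking whether the per-caller cap can be bypassed.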

The Separate Goals of Security and Privacy Adversaries

Tenaglia also described the differing motives of security threat actors and privacy adversaries. Security adversaries, such as advanced persistent threat (APT) groups, tend to have long timescales and deep resources, and they usually target specific companies to steal corporate assets. In contrast, privacy adversaries go after users' individual or personal information, he said.

Despite the differences, Tenaglia said he believes in recruiting security professionals to join privacy red teams. For example, someone with in-depth knowledge of Linux or Windows makes a good candidate because of their experience breaking apart the operating system.

"We don't really want your knowledge of Windows and Linux — because of course we're not attacking those things — [but] we do want your skill set and your know-how to manipulate those things to apply that skill set to our platforms, Facebook, Instagram, Messenger, etc.," Tenaglia said. "And we can teach and give you the knowledge to make you successful manipulating those things."

He added that a critical requirement for someone on a privacy red team is to have privacy instincts. Aside from the distinct focus of privacy red teams, Tenaglia noted that at Meta, they partner with the legal, risk, and security teams.

Tenaglia said that the Meta privacy red team's version of a penetration test is what it calls a product compromise test. Besides finding vulnerabilities that give access to user data, the testers also look to squeeze data out of APIs, which expose various forms of data or connections to data sources. The team also runs adversarial emulation operations, in which an intruder tries to gain access to user data by, for example, creating accounts.

The Subjectivity Issue With Privacy Findings

Tenaglia explained that findings of privacy vulnerabilities can be much more subjective than security flaws because three different factors influence the very notion of a finding.

First, where an organization operates in the world determines which laws and regulations apply, which can influence whether something counts as a finding. Second, the perception of a privacy violation is colored by the statements an organization has made about how it protects users' data.

And third, "even if the prior two are true — meaning your finding is in line with the company statements and it's not violating the law or regulation — you, as a user of that same platform, may say to yourself, 'For me as a user, I still don't think this meets the expectation of privacy,'" Tenaglia said. "And you might want to call that a finding."

Looking ahead, Tenaglia said he hopes the privacy red team can move from the subjective to the objective. For that to happen, the team needs to "understand these privacy weaknesses better, and to understand how to do privacy by design," he said. "I think we'll get there."

Read more about:

Black Hat News

About the Author

Jeffrey Schwartz

Contributing Writer

Jeffrey Schwartz is a journalist who has covered information security and all forms of business and enterprise IT, including client computing, data center and cloud infrastructure, and application development for more than 30 years. Jeff is a regular contributor to Channel Futures. Previously, he was editor-in-chief of Redmond magazine and contributed to its sister titles Redmond Channel Partner, Application Development Trends, and Virtualization Review. Earlier, he held editorial roles with CommunicationsWeek, InternetWeek, and VARBusiness. Jeff is based in the New York City suburb of Long Island.
