What a Security Engineer & Software Engineer Learned by Swapping Roles

A security engineer and infrastructure engineer with Salesforce share lessons learned from their professional role reversal, and advice for people on both teams.

Kelly Sheridan, Former Senior Editor, Dark Reading

August 5, 2020


Security engineering and software engineering teams have much to learn from each other. Two Salesforce employees discovered as much in a "professional role reversal" that showed them how both teams can work together more efficiently and collaborate better on building secure software.

As part of the swap, principal security engineer Craig Ingram was dropped into the Salesforce runtime team. Principal infrastructure engineer Camille Mackinnon joined the platform security assessment team. In a Black Hat briefing on Aug. 5, the two shared stories and lessons learned.

Planning and prioritization were two big takeaways from Ingram's period on the runtime team. Engineers spent much of their time weighing competing priorities and deciding what to work on: new features they had to develop, bug fixes to improve scalability and performance in their platform, and, of course, requests from security for bug fixes.

"As someone who thought I was pretty empathetic to the balance that engineers needed to have, between ongoing engineering work and interruptions and other projects from security, it was another thing entirely to actually live through it," he said in the talk. "We couldn't get everything done at once. We had to break things down into small, manageable pieces."

That is how engineering teams scale, Ingram explained: They break projects down into parts. Many use objectives and key results (OKRs) to determine what needs to get done and define what the results of a given project will be. It's a measurable way to determine whether a project was successful, as well as to identify which projects could be pushed back, he added.

"This is the time for ruthless prioritization," Ingram said, and it's time for security engineers to say no to things that don't have the highest impact or move them further toward their goals. Maybe implementing the latest tool you saw isn't as effective as implementing a supply chain security tool that could test for code changes in open source packages that engineers are using.

Test-driven development (TDD) was another practice Ingram saw as a way for security and software engineering teams to work together better. Software engineers verify their code is functional by writing tests: When the software passes a test, there's reasonable confidence the functionality works as intended.

Because security is part of software's functionality, he said, the security team can contribute back by writing security-focused software tests. A security flaw found in an assessment, for example, is a strong candidate for a failing test. An engineer will know the bug has been fixed when the test passes, and the test continues to live in the testing framework for future runs. If the flaw is ever reintroduced, the team will know because the software will fail the test again, he explained.
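As a rough illustration of that workflow, here is a minimal sketch of a security regression test in Python with pytest. The path-traversal flaw, the resolve_upload_path() helper, and the upload directory are hypothetical examples for this article, not code from the Salesforce talk; the point is that a finding from an assessment becomes a test that fails until the fix lands and then keeps guarding against regressions.

```python
# Minimal sketch of a security regression test in pytest. The path-traversal
# flaw, resolve_upload_path(), and UPLOAD_ROOT are hypothetical examples,
# not code from the talk.
from pathlib import Path

import pytest

UPLOAD_ROOT = Path("/srv/uploads")


def resolve_upload_path(filename: str) -> Path:
    """Fix for the (hypothetical) finding: keep resolved paths inside UPLOAD_ROOT."""
    candidate = (UPLOAD_ROOT / filename).resolve()
    if UPLOAD_ROOT.resolve() not in candidate.parents:
        raise ValueError(f"path escapes upload root: {filename}")
    return candidate


@pytest.mark.parametrize("malicious_name", [
    "../../etc/passwd",
    "reports/../../secrets.yaml",
])
def test_rejects_path_traversal(malicious_name):
    # Written while the bug was still present, so it failed at first;
    # it passes once the fix lands, and any regression fails the build.
    with pytest.raises(ValueError):
        resolve_upload_path(malicious_name)


def test_allows_normal_filenames():
    # The fix should not break legitimate uploads.
    assert resolve_upload_path("report.pdf") == (UPLOAD_ROOT / "report.pdf").resolve()
```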

Looking Back Before Moving Forward
Another helpful practice was the "retrospective," which the runtime team held every week to review what it had completed or to go over an incident, such as downtime or a security problem.

"This was a time to have a safe conversation about what's going well, what's not going well, what we want to keep doing, and what we want to improve," he explained. The idea is to create a timeline of all the things that happen and pinpoint what can be learned for next time.

This can also be applied to security teams, which could schedule a retrospective every couple of weeks. It's a good time to ask questions like "Are we having an impact?" "Are we falling behind?" and "Are we meeting the expectations of customers?" If these things keep coming up in a retrospective, he noted, it may be time to reprioritize projects or create a new objective.

Retrospectives are blameless, which Ingram said is the most important aspect of the practice. A root cause analysis of the problem doesn't mean figuring out who did something wrong ("it's not necessarily the fault of an individual if a production database fails") but rather looking at which controls were missing that allowed the issue to happen in the first place. He warned that teams can do more harm than good if they don't establish trust before scheduling a retrospective.

Mackinnon emphasized the importance of listening to engineers and asking for user feedback. Engineering teams rely on product managers and professional user researchers to do customer interviews and create products that suit their needs. She advised security to do the same.

"I encourage you to try and understand your engineering team's needs," she said. "Even if you have to say no in some cases, having an engineer actually understand that you looked at their need, that you understand why they need to get there faster and have tried to work around their problems and help them build something more secure … will go a long way to make sure your recommendations are likely to be followed." 

While simplifying things for engineers may create more work for security teams in the short term, she said, it does mean engineers are more likely to align with security practices and the organization's security posture can improve overall.

One way that infosec can become more involved is by "shifting left," or integrating security engineering into the earlier phases of the software development life cycle. This may mean showing up at scrum meetings and learning how software is built; a better understanding of the process can help improve security's role in software development.

"Start changes small," said Ingram. "Start changing how you do things, start using software engineering principles, start changing how your team does planning. Try retrospectives. Start collecting user feedback from customers. When you work and act like the teams you're partnered with, it's going to be a much better process for everyone."

Register now for this year's fully virtual Black Hat USA, scheduled to take place August 1–6, and get more information about the event on the Black Hat website.


About the Author

Kelly Sheridan

Former Senior Editor, Dark Reading

Kelly Sheridan was formerly a Staff Editor at Dark Reading, where she focused on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services. Sheridan earned her BA in English at Villanova University. You can follow her on Twitter @kellymsheridan.
