4 Rules for Web App and API Protection

We can't delay the adoption of new rules that respect the way modern applications are built.

Sean Leach, Chief Product Architect, Fastly

May 31, 2021

5 Min Read

Most Web app and API security tools were designed for a very different era. A time before developers and security practitioners worked together to ship secure software using integrated workflows. A time before applications were globally distributed and API based. A time before engineers expected to be able to enter a command and instantly make a global update.

But attackers are developers, too. Nimble as ever, they're using modern tools and workflows to build and advance new threats, and they aren't bogged down by the limitations of legacy solutions. That's why we can't delay the adoption of new rules for Web application and API security that respect the way modern applications are built.

Rule 1: Tools Must Fight Intent, Not Specific Threats
Security teams have long been focused on fighting specific threats. When evaluating new tools, they ask, "Can this protect me against X?" It's a style of evaluation that leads practitioners to tools that look for signatures or "indicators of compromise" of a particular threat. 

But signature-based tools can't differentiate between legitimate and malicious traffic, and they can't keep up with the unyielding growth in threats. The new rules of Web application and API security require a shift toward a more intelligent model: one that builds enough confidence into the security toolchain that practitioners can run the system in front of valuable traffic without fear that it will block legitimate requests or let malicious ones through.

This puts new demands on security technology. Practitioners must have tools that examine not just the signature of the traffic but its intent or behavior. This means taking into account factors like the speed of the request, time of day, and user login status.

Builders also need tools that can be run not just in monitoring mode but in blocking mode. Tools that only run in monitoring mode for fear of false positives reinforce a broken system: The damage is done by the time the team can respond. Teams need a foundation of tooling that can confidently block threats as they happen, not diagnose the problem after the breach. 
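
To make that concrete, here's a minimal sketch of what an intent- or behavior-based decision might look like, with an explicit switch between monitoring and blocking modes. The signals, weights, and thresholds below are invented for illustration; they're not any vendor's actual scoring logic.

```python
from dataclasses import dataclass

# Hypothetical per-request context; real systems derive these signals
# from edge logs, rate counters, and session stores.
@dataclass
class RequestContext:
    requests_per_minute: int      # request rate observed from this client
    hour_of_day: int              # 0-23
    is_authenticated: bool        # does the client hold a valid session?
    failed_logins_last_hour: int  # recent failed login attempts from this client

def intent_score(ctx: RequestContext) -> float:
    """Combine behavioral signals into a rough 0.0-1.0 risk score."""
    score = 0.0
    if ctx.requests_per_minute > 120:    # far faster than a human browses
        score += 0.4
    if 1 <= ctx.hour_of_day <= 4:        # off-hours burst
        score += 0.1
    if not ctx.is_authenticated:
        score += 0.2
    if ctx.failed_logins_last_hour > 5:  # credential-stuffing pattern
        score += 0.3
    return min(score, 1.0)

def decide(ctx: RequestContext, mode: str = "block", threshold: float = 0.7) -> str:
    """Return 'allow', 'log' (monitoring mode), or 'block' (blocking mode)."""
    if intent_score(ctx) < threshold:
        return "allow"
    return "block" if mode == "block" else "log"
```

The point isn't the particular weights; it's that the decision is driven by behavior across multiple signals, and that the same logic can be trusted to run in blocking mode rather than merely logging.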

Lastly, tooling needs to keep up with modern threats without placing a burden on the security and operations teams. With modern cloud and SaaS solutions, you get the full weight of a product security team staying ahead of threats and proactively delivering updates. There's no need to worry about patching or to obsess over the latest threats.

Rule 2: There Is No Security Without Usability
Legacy UIs can be slow and clunky and pose a multitude of risks: gaps in policy and enforcement across tooling, slow and uncoordinated response to urgent threats, and inconsistent — or worse, absent — visibility into the holistic security ecosystem.

A security solution should have a single, intuitive, easy-to-use interface that allows control and visibility of the entire solution. Observability should be all-encompassing and integrated to provide full visibility into the state of the system at a glance. And importantly, these solutions should be usable for security and non-security teams.

Next, modern tools must match modern application design. Too often, tool sets are simply packaged and sold together by a provider but are not actually capable of technical integration. Providers should offer automation and integration by default, which starts with full API control. All security solutions should have easy-to-use APIs that expose all of the functionality of the system. And they should offer real-time logs and stats that feed data into whatever security monitoring or observability system the team uses. Integrating all of your solutions makes it far easier to determine the true intent of the attacker.
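
As a rough sketch of what full API control and real-time log export can look like (the waf.example.com endpoints, field names, and token below are assumptions, not any real product's interface), a team might version and update policy through the same API the UI uses and stream decision logs straight into its own observability pipeline:

```python
import json
import requests

API = "https://waf.example.com/api/v1"         # hypothetical security-solution API
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# Pull the current policy so it can be reviewed and versioned alongside application code.
policy = requests.get(f"{API}/policies/production", headers=HEADERS, timeout=10).json()

# Push a reviewed change through the same API the UI uses.
policy["rate_limit_per_minute"] = 120
requests.put(f"{API}/policies/production", headers=HEADERS, json=policy, timeout=10)

# Stream real-time decision logs into whatever monitoring system the team runs.
with requests.get(f"{API}/logs/stream", headers=HEADERS, stream=True, timeout=60) as resp:
    for line in resp.iter_lines():
        if line:
            event = json.loads(line)
            # Forward to the team's SIEM or observability ingest endpoint here.
            print(event.get("action"), event.get("client_ip"), event.get("rule_id"))
```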

Rule 3: Real-time Attacks Require Real-time Reaction
Agile attackers are employing advanced DevOps workflows to quickly attempt, adjust, and deploy new methods, and that cycle can repeat hundreds of times during a single attack. How can you possibly protect your applications if you can't react with the same speed? Reaction time isn't limited by how quickly your brain works; it's limited by the speed of your security solutions.

If it takes minutes or hours to spot an attack, it's already too late. The more intelligent, intent-based approach to mitigation requires multiple streams of data to make a decision. Intent-based systems operate as self-learning and self-healing systems. They are constantly analyzing patterns and behaviors to predict new or evolving threats, so it's imperative that they not only see and interpret traffic in real time but that they also have the power to deploy new rules in response to changing threats.
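
As an illustration of that loop (again using invented endpoints and thresholds, and a deliberately simple signal of repeated 404s per client), the reaction can be wired to happen in seconds rather than through a change window:

```python
import time
from collections import Counter

import requests

API = "https://waf.example.com/api/v1"         # same hypothetical API as above
HEADERS = {"Authorization": "Bearer <token>"}

def recent_404s_by_ip() -> Counter:
    """Placeholder: in practice this would come from the real-time log stream."""
    events = requests.get(f"{API}/logs/recent", params={"status": 404},
                          headers=HEADERS, timeout=10).json()
    return Counter(e["client_ip"] for e in events)

while True:
    for ip, count in recent_404s_by_ip().items():
        if count > 200:  # looks like aggressive path scanning, not a user
            # Deploy a temporary blocking rule immediately; no ticket queue, no maintenance window.
            requests.post(
                f"{API}/rules",
                headers=HEADERS,
                json={"action": "block", "match": {"client_ip": ip}, "ttl_seconds": 3600},
                timeout=10,
            )
    time.sleep(5)  # react on the order of seconds, not hours
```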

Rule 4: Dev, Sec, or Ops, Everyone Must Think Like an Engineer
Security practitioners, operations professionals, and developers must all adopt an engineering mindset with a focus on shipping secure software. But when secure DevOps is more performance art than authentic integration, that's bad news. Bolting security operators and their preferred tooling onto the end of your deployment pipeline does not mean you're doing secure DevOps — and it won't make your software ship faster. True secure DevOps builds security verification and vulnerability scanning directly into the automated testing and deployment framework. It provides a path for security teams to show up as an integrated part of the development team — not a gate brought in at the last minute to submit a list of vulnerabilities and hope they get fixed before the system goes live.
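
One hedged sketch of what that integration can look like in a deployment pipeline, assuming a generic command-line scanner (the "scan-tool" name and its JSON output are placeholders for whatever scanner a team actually runs), is a gate that fails the build on unresolved high-severity findings:

```python
import json
import subprocess
import sys

# Hypothetical CI step: run the team's vulnerability scanner and gate the deploy
# on its findings instead of handing security a report after the release.
result = subprocess.run(
    ["scan-tool", "--format", "json", "."],  # placeholder for the real scanner command
    capture_output=True, text=True, check=False,
)

findings = json.loads(result.stdout or "[]")
high_severity = [f for f in findings if f.get("severity") in ("HIGH", "CRITICAL")]

for finding in high_severity:
    print(f"[security gate] {finding.get('id')}: {finding.get('title')} in {finding.get('package')}")

# A nonzero exit code fails the pipeline, and therefore the deploy.
sys.exit(1 if high_severity else 0)
```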

Better Security Is Integral to Building Better Software
It's been 15 years since Amazon launched AWS and kick-started our migration to the cloud. It's been a lot of fun. But the friction between shipping software quickly and securely remains a sticking point for reasons that we can actually solve today.

The path to reducing that friction must include security solutions that meet the needs of modern teams — ones that include security as an integral part of the cultural and technical aspects of building software. It's not enough to ship software quickly. We must ship high-quality, secure software that lives up to the rules outlined here. We're in this together.

About the Author

Sean Leach is the Chief Product Architect at Fastly, where he focuses on building and scaling products around large-scale, mission-critical infrastructure. He was previously VP, Technology, for Verisign, where he provided strategic direction along with product and technical architecture and was a primary company spokesperson. Sean was previously CTO of name.com, a top 15 domain registration and Web hosting company, as well as a Senior Director at Neustar. His current research focus is on DNS, DDoS, Web/network performance, Internet infrastructure, and combating the massive Internet security epidemic.


