AppSec in the World of 'Serverless'

The term 'application security' still applies to 'serverless' technology, but the line between application settings and infrastructure is blurring.

Boris Chen, Co-founder and VP Engineering, tCell, Inc.

June 21, 2018


"Serverless" computing is essentially an application deconstructed to its atomic unit: a function. Function-as-a-Service (FaaS) would actually be a better name, but the whole XaaS naming scheme is a bit, shall I say, PaaSé. (Oops, couldn't resist!) So, instead, we have "serverless" to drive home the idea that application developers don't need to think about servers any longer. They can focus their energies on creating countless glorious functions – and in the cloud, no less.

Conceptually, this continues the industry trend toward a starker separation of responsibilities in software delivery, and it extends the microservices trend to its next stage of decomposition: breaking monolithic applications down even further. Here are some key concepts to understand about serverless in the context of application security (AppSec) and infrastructure.

Code Still Matters
A serverless function is a piece of application code. As such, little changes when it comes to AppSec fundamentals – for example, defending against injection attacks. Building SQL queries or file names by concatenating untrusted strings is still bad. Not paying attention to encoding is bad. Serialization attacks still occur, and so on. Similarly, applications still use third-party libraries, which can carry known vulnerabilities and should be vetted. Serverless doesn't make those problems go away. (For an excellent talk, see "Serverless Security: What's Left To Protect," by Guy Podjarny.)
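
As a concrete reminder that the fundamentals carry over, here is a minimal sketch: two Lambda-style handlers against an in-memory SQLite table (the table, handler names, and API Gateway-style event shape are illustrative only), one open to injection via string concatenation and one using a parameterized query.

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

def handler_unsafe(event, context=None):
    # VULNERABLE: concatenating untrusted input into the query string lets
    # input like "x' OR '1'='1" rewrite the query itself.
    name = event["queryStringParameters"]["name"]
    rows = conn.execute(
        "SELECT id, email FROM users WHERE name = '" + name + "'"
    ).fetchall()
    return {"statusCode": 200, "body": str(rows)}

def handler_safe(event, context=None):
    # SAFER: a parameterized query keeps the input as data, not SQL.
    name = event["queryStringParameters"]["name"]
    rows = conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (name,)
    ).fetchall()
    return {"statusCode": 200, "body": str(rows)}
```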

On the other hand, because security practitioners have placed a great deal of attention on infrastructure settings and services, the line where application settings start and infrastructure ends is now blurry.

Infrastructure Shift
Because serverless extends what the infrastructure provides, it shifts the shared security model. Just as in the case of cloud computing, where the provider takes responsibility for the security "of the cloud" (hardware, network, compute, storage, etc.) while leaving the customer responsible solely for security "in the cloud" (operating system, authentication, data, etc.), serverless reduces the responsibility of the customer further.

Serverless infrastructure eliminates the need for operations teams to constantly apply OS patches. Further, the execution environment is an ephemeral container with a read-only file system and highly restrictive permissions. Controls like these greatly improve inherent security. But they also have their limitations: /tmp remains writable, and "ephemeral" doesn't strictly mean the instance is repaved between each invocation.
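
As a small illustration of the /tmp caveat, the hypothetical handler below keeps a counter in /tmp; when the same container is reused for warm invocations, the file survives between requests – convenient for caching, but also exactly the kind of leftover state (or attacker-planted payload) to be aware of.

```python
import os

COUNTER_FILE = "/tmp/invocation_count"  # /tmp is the only writable path

def handler(event, context=None):
    # On a cold start the file is absent; on warm re-invocations of the
    # same container it survives, so state leaks between requests.
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read())
    count += 1
    with open(COUNTER_FILE, "w") as f:
        f.write(str(count))
    return {"invocations_in_this_container": count}
```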

Most attacks against serverless applications succeed through a combination of the aforementioned limitations (which are still significant improvements over typical containerized instances), app-level exploits, and taking advantage of services in the cloud infrastructure, such as poorly configured AWS IAM. (The talk "Gone in 60 Milliseconds," by Rich Jones, outlines chaining examples.) It's highly instructive to understand the anatomy of such attacks. My main takeaway: The road to hell is paved with default settings.

Greater dependency on infrastructure also mutates some of the threats. In the case of DDoS attacks, the infrastructure can scale to meet the demands; hence, DDoS effectiveness is diminished. However, it's not the sky that’s the limit but your wallet. Major cloud providers simply do not put utilization caps in place for many reasons. One reason? They don't want to be held responsible for an involuntary shutdown of service based on a monetary threshold. The most you can do is set up billing alerts – and thus was born the "denial of wallet" attack.
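
Since billing alerts are the main guard rail available, here is a minimal sketch of setting one up programmatically, assuming boto3, an account with billing alerts enabled (the EstimatedCharges metric is published to CloudWatch in us-east-1 only), and an existing SNS topic for notifications; the topic ARN and threshold are placeholders.

```python
import boto3

# Billing metrics live in CloudWatch's us-east-1 region only, and only
# appear after billing alerts are enabled in the account settings.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def create_billing_alarm(threshold_usd, sns_topic_arn):
    # sns_topic_arn is a placeholder for whatever notification target you use.
    cloudwatch.put_metric_alarm(
        AlarmName=f"estimated-charges-over-{threshold_usd}-usd",
        AlarmDescription="Guard rail against 'denial of wallet'",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,            # billing metrics update a few times a day
        EvaluationPeriods=1,
        Threshold=threshold_usd,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],
    )
```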

The Threat of Serverless Sprawl
Fundamentally, the above concerns present few unique risks not shared by customers with apps running on plain EC2 instances. However, managing sprawl does present a novel challenge for serverless. The reason: Serverless functions are like tribbles. They start out small and cute, but then they proliferate, and you end up neck-deep in them. Suddenly, what was meant to be simple is simple no longer.

As the number of functions multiplies without an easy way to manage their access controls, the application's security posture suffers. For instance, the principle of least privilege is easy to uphold with a few functions, but as functions proliferate, often with ill-defined requirements, maintaining secure settings rapidly becomes harder.
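
For illustration, assuming a single hypothetical function that only reads one DynamoDB table, a least-privilege policy might be scoped like the first document below; the wildcard second document is the kind of setting that creeps in once dozens of functions each need "just one more permission." The ARN and actions are placeholders.

```python
import json

# Placeholder ARN for a hypothetical table read by exactly one function.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/orders"

# Scoped to the two read actions the function actually needs, on one table.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": TABLE_ARN,
    }],
}

# The sprawl failure mode: every function gets everything, "for now."
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"}],
}

print(json.dumps(least_privilege, indent=2))
```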

Fighting Fire with Fire
Serverless provides a way to scale, so why not use it to scale serverless security? When it comes to the "three R’s of security" (rotate, repave, repair), serverless functions provide an excellent mechanism to build security into deployment. For instance, AWS already provides a means to rotate keys using Lambda functions. Moreover, serverless functions are basically in continuously repaved containers, and practitioners have been writing lambdas to automatically fix security mistakes. In fact, there’s a lot of untapped potential in No. 10 on the OWASP Top Ten: Insufficient Logging and Monitoring. Lambda functions that operate on CloudTrail logs to identify threats and perform automatic remediation have intriguing potential.
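
Here is a minimal sketch (not any particular vendor's implementation) of that kind of remediation function, assuming an EventBridge/CloudWatch Events rule forwards CloudTrail "AuthorizeSecurityGroupIngress" events to it; the exact event shape used in the parsing is an assumption to verify against your own logs.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context=None):
    # Assumed shape: an EventBridge event whose "detail" carries the
    # CloudTrail record, including eventName and requestParameters.
    detail = event.get("detail", {})
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return
    params = detail.get("requestParameters", {})
    group_id = params.get("groupId")
    for perm in params.get("ipPermissions", {}).get("items", []):
        for ip_range in perm.get("ipRanges", {}).get("items", []):
            if ip_range.get("cidrIp") == "0.0.0.0/0":
                # Undo the world-open rule that was just added.
                revoke = {
                    "IpProtocol": perm.get("ipProtocol"),
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                }
                if "fromPort" in perm:
                    revoke["FromPort"] = perm["fromPort"]
                    revoke["ToPort"] = perm["toPort"]
                ec2.revoke_security_group_ingress(
                    GroupId=group_id, IpPermissions=[revoke]
                )
```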

Serverless is neither the be-all and end-all, nor does it render the lessons learned from AppSec irrelevant. It nonetheless provides an exciting opportunity to build more secure apps in the cloud (serverless or otherwise), with some pitfalls to beware of along the way.

The Future 
Vendors, tools, and processes will need to evolve to fit naturally into the structure of serverless application construction. Some solutions, such as host/container security tools, may become less relevant in some respects because of the shift in responsibility. But those that can manage security concerns at the function level (at both build and run time) and manage infrastructure at scale will enable serverless to fulfill its goal of providing a more secure means of delivering cloud applications.


About the Author

Boris Chen

Co-founder and VP Engineering, tCell, Inc.

Boris Chen is vice president of engineering and co-founder of tCell, an AppSec startup providing next-generation real-time attack detection and prevention for applications built for the cloud. He has over 20 years of industry experience building high-performance web infrastructure and data technology. Before co-founding tCell in 2014, Boris spent five years at Splunk as VP of engineering, from startup through IPO, where he helped drive Splunk to petabyte-scale deployments, verticals, and big data integrations. Prior to joining Splunk, Boris was director of engineering at LucidEra, an early "business intelligence as a service" innovator. At BEA Systems, where he was part of the original WebLogic acquisition, he led engineering teams in WLS, JRockit, and WLI. Boris started his career at Sybase. Boris holds a B.S. in EECS from the University of California, Berkeley.

