Managing Multi-Cloud Security ‘Whether You Want to or Not’

Yes, it is possible to orchestrate security across multiple clouds without creating performance hurdles. Here’s how.

Jeff Schilling, Chief Security Officer, Armor

November 3, 2016

In my experience, many conversations with customer security teams inevitably begin with: “I just found out that one of our business owners built infrastructure in the public cloud, and it is hosting a critical business process.” Or: “We can’t afford the tech refresh in my current datacenter, and I have been directed to manage a multi-year migration plan to the public cloud.” Hence the headline of this piece: “…whether you want to or not.”

Hybrid cloud security management is a popular industry trend that has spawned a plethora of service offerings, all claiming to provide the “single pane of glass” for visualizing security posture across datacenters. However, without a plan for orchestrating security across multiple clouds, the result is inherently a collection of disparate data that makes management difficult.

For those fortunate enough to still have a centralized datacenter managed in-house, the reality is that those days are most likely numbered, but it is not too late to embrace this new reality and plan accordingly. Before succumbing to the “want to or not” category, know that it is not as difficult as one might think to develop a roadmap for managing multi-cloud security on one’s own terms.

Data Classification
The first step in managing this divergent landscape is to classify datacenter environments into low, medium, and high risk. This allows for proactive management of the type of data sent to each cloud option. The breakdown should include the following (a brief sketch of such a policy appears after the list):

  • Low-risk environments (e.g. static marketing webpages) are great candidates for public cloud offerings with a few security controls to detect an infected server.

  • Medium-risk environments (e.g. dev environments, collaboration systems) require protection, but will likely benefit from the agility and cost advantages of public cloud options. This classification should have a managed security solution that pulls security telemetry into a Security Information and Event Management (SIEM) tool or a third-party security monitoring tool.

  • High-risk environments (e.g. payment card data, personal healthcare information) require auditable security controls that must be maintained over time. These workloads can be hosted in the public cloud; however, most organizations choose to keep them within internal IT infrastructure or host them in a secure private cloud. This is normally the last environment to move to the cloud and is often the bottleneck that keeps all environments from sitting in one location.
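
To make the classification actionable, it can be captured as a simple policy table that maps each risk tier to its allowed hosting options and minimum controls. The Python sketch below is a minimal illustration: the tier, hosting, and control labels are simplified stand-ins drawn from the examples above, not a prescribed standard.

    # Minimal sketch: map risk tiers to allowed hosting options and baseline controls.
    # Tier names, hosting options, and control labels are illustrative assumptions.
    RISK_POLICY = {
        "low": {
            "examples": ["static marketing webpages"],
            "allowed_hosting": ["public_cloud"],
            "minimum_controls": ["malware_detection"],
        },
        "medium": {
            "examples": ["dev environments", "collaboration systems"],
            "allowed_hosting": ["public_cloud", "private_cloud"],
            "minimum_controls": ["malware_detection", "siem_telemetry"],
        },
        "high": {
            "examples": ["payment card data", "personal healthcare information"],
            "allowed_hosting": ["private_cloud", "internal_datacenter"],
            "minimum_controls": ["malware_detection", "siem_telemetry", "auditable_controls"],
        },
    }

    def placement_allowed(risk_tier: str, hosting: str) -> bool:
        """Return True if a workload of this risk tier may run on the given hosting option."""
        return hosting in RISK_POLICY[risk_tier]["allowed_hosting"]

    # Under this sample policy, a dev environment (medium risk) may go to the public
    # cloud, while a payment-card workload may not.
    assert placement_allowed("medium", "public_cloud")
    assert not placement_allowed("high", "public_cloud")

A table like this gives business owners a clear answer before they build, rather than after.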

Setting the Security Framework
Once an approach for distributing workloads across the various cloud options is determined, the next step is to define the security framework and tools to leverage in each environment. It is advisable to organize security controls into high-level buckets in accordance with the compliance frameworks being used (e.g. NIST 800-53, ISO, PCI), then standardize the tools used to implement those controls.

For example, consider the tools for network inspection (layer 3/layer 4), application inspection (layer 7), network segmentation, configuration control, and endpoint detection/remediation. Whenever possible, the same tools should be used across multiple clouds. If this isn’t feasible, ensure the log output from those tools can be consumed and visualized by a correlation tool or SIEM.
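
As a rough sketch of both ideas, the Python below tags a few high-level control buckets with the NIST 800-53 families they most closely align to, then checks that each bucket has a tool assigned in every cloud and flags deviations from the standard choice. The bucket, tool, and cloud names are illustrative placeholders, and the family mapping is approximate rather than authoritative.

    # Illustrative sketch: control buckets tagged with approximate NIST 800-53 families,
    # a "standard" tool per bucket, and a coverage check across clouds.
    # All bucket, tool, and cloud names are placeholders, not recommendations.
    CONTROL_BUCKETS = {
        "network_inspection":     {"nist_families": ["SC", "SI"], "standard_tool": "example-ngfw"},
        "application_inspection": {"nist_families": ["SC", "SI"], "standard_tool": "example-waf"},
        "network_segmentation":   {"nist_families": ["SC", "AC"], "standard_tool": "example-microseg"},
        "configuration_control":  {"nist_families": ["CM"],       "standard_tool": "example-config-mgmt"},
        "endpoint_detection":     {"nist_families": ["SI", "IR"], "standard_tool": "example-edr"},
    }

    # What is actually deployed in each cloud (again, placeholder data).
    DEPLOYED_TOOLS = {
        "public_cloud_a": {"network_inspection": "example-ngfw",
                           "endpoint_detection": "example-edr"},
        "private_cloud":  {"network_inspection": "example-ngfw",
                           "application_inspection": "example-waf",
                           "network_segmentation": "example-microseg",
                           "configuration_control": "example-config-mgmt",
                           "endpoint_detection": "example-edr"},
    }

    for cloud, tools in DEPLOYED_TOOLS.items():
        for bucket, spec in CONTROL_BUCKETS.items():
            deployed = tools.get(bucket)
            if deployed is None:
                print(f"{cloud}: no tool covering {bucket} (families {spec['nist_families']})")
            elif deployed != spec["standard_tool"]:
                print(f"{cloud}: {bucket} uses {deployed}; standard is {spec['standard_tool']}")

Where a substitute tool is unavoidable, the same structure can also record which log format each substitute emits, which feeds directly into the logging step that follows.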

The final step in building a multi-cloud security platform is to create the logging infrastructure that allows all of the information to flow into the proverbial “single pane of glass.” The critical aspect here is to settle on a single logging standard (e.g. syslog, JSON), then convert where necessary to integrate with a visualization tool or correlation engine. This is where many security teams choose a third-party tool or management portal to offload this demanding architecture design task to an outside group.
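
As a rough illustration of the “single logging standard” step, the snippet below normalizes a syslog-style line into a JSON event tagged with its source cloud, ready for a correlation engine or SIEM to ingest. The regular expression and field names are simplifying assumptions, not a production-grade parser.

    import json
    import re

    # Minimal sketch: normalize a syslog-style line into a JSON event for a SIEM.
    # The regex and field names are simplifying assumptions, not a production parser.
    SYSLOG_PATTERN = re.compile(
        r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<app>[\w\-/]+):\s(?P<message>.*)$"
    )

    def syslog_to_json(line: str, source_cloud: str) -> str:
        """Convert one syslog-style line into a JSON event, tagged with its source cloud."""
        match = SYSLOG_PATTERN.match(line)
        if not match:
            # Keep unparsed lines rather than dropping telemetry on the floor.
            return json.dumps({"source_cloud": source_cloud, "raw": line})
        event = match.groupdict()
        event["source_cloud"] = source_cloud
        return json.dumps(event)

    print(syslog_to_json(
        "Nov  3 09:15:01 web01 sshd: Failed password for root from 203.0.113.5",
        "public_cloud_a",
    ))

In practice a log shipper or the SIEM’s own collectors would handle this conversion; the point is simply that every source ends up speaking the same format before it reaches the correlation engine.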

Once these steps are in place, building a sound roadmap to a secure multi-cloud environment becomes far more manageable. A solid plan will also sustain additional growth and ensure that the ROI the cloud offers is fully realized. The ultimate goal is seamless security without performance hurdles. Being proactive and thinking big picture is a huge first step.

About the Author

Jeff Schilling

Chief Security Officer, Armor

Jeff Schilling, a retired U.S. Army colonel, is Armor's chief security officer. He is responsible for the cyber and physical security programs for the corporate environment and customer-focused capabilities. His areas of responsibility include security operations; governance, risk, and compliance; cloud operations; client relations; and customer engineering support.

Prior to joining Armor, Schilling was the director of the global incident response practice for Dell SecureWorks, where his team supported over 300 customers with incident-response planning, capabilities development, digital forensics investigations, and active incident management. In his last military assignment, Schilling was the director of the U.S. Army's global security operations center under the U.S. Army Cyber Command.
