Cloudy with a Chance of Security Breach

Businesses must be aware of the security weaknesses of the public cloud and not assume that every angle is covered.

Ronan David, Vice President of Strategy at EfficientIP

April 12, 2019

5 Min Read

The forecast calls for clouds, and they show no signs of clearing up. Cloud platforms are everywhere. In fact, IDG's 2018 Cloud Computing Study showed 73% of all businesses have placed at least one application or part of their infrastructure in the cloud. The public cloud space is particularly hot at the moment as Google moves to aggressively compete with Microsoft and Amazon.

Digital transformation has been the primary catalyst for this explosion. Typically, cloud adoption starts when some workloads are moved to public providers — usually, development and test environments that are less critical. After a while, noncritical applications might migrate over, followed by storage — files and databases — and then, bigger deployments.

The public cloud is attractive because the underlying infrastructure and orchestration hosting the application components are fully managed and mostly hidden: network functions, Internet access, security, storage, servers, and compute virtualization. Everything is easily configurable through a simple user interface or an API. Security is enforced by using private networks isolated from the Internet… or is it?

Information on private networks hosted in a public cloud is not automatically safe. Private networks, even without access to the Internet, can still communicate with it via DNS. Most of the time, no specific configuration is required to get full DNS access from workloads pushed onto public cloud infrastructure. As a result, DNS tunneling, DNS file systems, and data exfiltration are possible on most public cloud providers by default. This is not a security flaw; it's a feature built in on purpose, mainly so that workloads can reach the provider's serverless services and ease digital transformation. It also opens any business up to a wide range of possible data leaks.
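
To see how open this default is, the short sketch below (an illustration, not part of the article, assuming a Python 3 workload and using reserved placeholder domains) simply checks whether external names still resolve from a host that has no route to the Internet. If they do, the DNS channel described above is available.

```python
# Minimal sketch: does a "private" workload still resolve arbitrary external names?
# Uses only the Python standard library; the test domains are reserved placeholders.
import socket

TEST_DOMAINS = ["example.com", "example.org"]

def external_dns_reachable(domains=TEST_DOMAINS) -> bool:
    """Return True if at least one external name resolves from this host."""
    for name in domains:
        try:
            socket.gethostbyname(name)  # goes through the resolver handed out by the cloud provider
            return True
        except socket.gaierror:
            continue
    return False

if __name__ == "__main__":
    if external_dns_reachable():
        print("External DNS resolution works: DNS tunneling and exfiltration are possible from here.")
    else:
        print("External DNS resolution appears blocked or filtered.")
```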

Four of the most likely scenarios are as follows:

1. Malicious code is inserted into the back end and performs DNS resolution to extract data. Typically, access is gained from outside the organization through standard methods such as SQL injection, heap overflow, a known vulnerability, or an unsecured API.

2. Malicious code is inserted into a widely used library so that it affects every user of that library (a supply chain attack), regardless of the language (Java, Python, Node.js, and so on).

3. People inside an enterprise who have access to a host modify, install, or develop an application that uses DNS to perform a malicious operation: contact a command-and-control server, push data out, or fetch malware.

4. A developer inserts code that doesn't require any change to the infrastructure and that uses DNS to extract production data, events, or account information. The code passes the quality gates of the continuous integration and testing pipeline and is automatically deployed to production (a simple gate that flags this kind of code is sketched after this list).
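
As a small illustration of the gate mentioned in the fourth scenario, the sketch below is an assumption rather than a prescribed tool: it scans a repository for direct DNS-resolution calls and fails the pipeline so a human reviews them before deployment. The patterns and the source path are placeholders that would need project-specific tuning.

```python
# Illustrative CI quality gate: flag source files that perform DNS resolution directly.
# Patterns and paths are placeholders; a real gate needs tuning to the codebase.
import re
import sys
from pathlib import Path

# Naive indicators of hand-rolled DNS resolution in Python code.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bimport\s+dns\b"),                    # dnspython
    re.compile(r"socket\.(gethostbyname|getaddrinfo)"), # low-level lookups
    re.compile(r"\bnslookup\b|\bdig\b"),                # shelling out to DNS tools
]

def scan(root: str = "src") -> list[str]:
    findings = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")
    return findings

if __name__ == "__main__":
    hits = scan()
    for hit in hits:
        print(hit)
    sys.exit(1 if hits else 0)  # non-zero exit fails the pipeline for review
```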

So, what can an organization do — especially when it is deploying multitiered applications on multiple cloud services?

First, deploy a private DNS service for any business information stored on a private network hosted in the public cloud, even temporarily. A private DNS service lets you filter what is accessible and what should not be. It also lets you regularly audit the cloud architecture, which is now mandatory in any public cloud environment; this requires identifying cloud-specific patterns that are new to most systems and network architects. Most workloads don't require full access to the Internet, but sometimes it's necessary, such as when a DevOps team updates the infrastructure, installation packages, or dependencies, because that simplifies the deployment phase. Businesses can approach this by designing an "immutable infrastructure" with prebuilt images, private networks, and controlled inbound and outbound communication. They should also run testing phases, especially since cloud providers' options may change without notice.
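
As a rough illustration of the filtering a private DNS service enables, the sketch below shows an allowlist-style policy decision. The domain suffixes and the shape of the policy are placeholders for this article, not a product API.

```python
# Conceptual sketch of a private-DNS filtering policy: only names on an explicit
# allowlist resolve; everything else is blocked (and should be logged for auditing).
# Domain suffixes are placeholders using reserved names.

ALLOWED_SUFFIXES = (
    ".internal.example",   # internal service names (placeholder)
    ".amazonaws.com",      # provider endpoints the workload legitimately needs (example)
)

def dns_policy(query_name: str) -> str:
    """Return 'ALLOW' or 'BLOCK' for a DNS query name."""
    name = query_name.rstrip(".").lower()
    if any(name.endswith(suffix) for suffix in ALLOWED_SUFFIXES):
        return "ALLOW"
    return "BLOCK"  # blocked queries should also be logged and reviewed

# Example: an internal name resolves, while a lookup that smuggles encoded data
# toward an unknown domain is blocked by default.
assert dns_policy("db01.internal.example.") == "ALLOW"
assert dns_policy("aGVsbG8.exfil.attacker-domain.test") == "BLOCK"
```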

Second, cloud providers offer private networking solutions for deploying internal resources and back-end services (e.g., databases, file storage, specific computation, back-office management). This addresses security and regulatory concerns such as data protection (e.g., GDPR) and data encryption, or simply keeps parts of an application from being exposed directly to the Internet. All good practice. An even better practice is to deploy computational back-end resources on subnets or networks that are not connected to the Internet and can be reached only by known sources. Filtering rules can then be enforced to restrict access and comply with security policies.
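
One way to audit this is sketched below for AWS; treat the provider choice, the region, and the use of boto3 as an illustration, since the article is provider-agnostic. The script lists subnets whose route tables point at an internet gateway, so back-end subnets that should be isolated stand out.

```python
# AWS-specific sketch (assumption: boto3 plus read-only EC2 credentials) that lists
# subnets with a route to an internet gateway. The region is a placeholder.
import boto3

def subnets_with_internet_route(region: str = "eu-west-1") -> set[str]:
    ec2 = boto3.client("ec2", region_name=region)
    exposed = set()
    for table in ec2.describe_route_tables()["RouteTables"]:
        has_igw = any(
            route.get("GatewayId", "").startswith("igw-")
            for route in table.get("Routes", [])
        )
        if not has_igw:
            continue
        for assoc in table.get("Associations", []):
            if "SubnetId" in assoc:
                exposed.add(assoc["SubnetId"])
    return exposed

if __name__ == "__main__":
    for subnet_id in sorted(subnets_with_internet_route()):
        print(f"{subnet_id} has a route to an internet gateway: review whether it should")
```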

Finally, ensure there is a flexible DDI (DNS-DHCP-IPAM) solution integrated into the cloud orchestrator to make the configuration easier. DDI will automatically push the appropriate records into the configuration once the service is enabled. This will not only bring considerable time savings to your organization, but it will also enforce policies that help secure public cloud deployments.
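
The sketch below illustrates the idea with a hypothetical DDI REST endpoint; the URL, payload, and token handling are invented for illustration, and a real DDI product exposes its own API. When the orchestrator deploys a service, the pipeline registers the record automatically instead of relying on manual updates.

```python
# Hypothetical sketch of orchestrator-driven record registration against a DDI API.
# Endpoint, payload, and authentication are placeholders, not a vendor's actual API.
import os
import requests

DDI_API = os.environ.get("DDI_API", "https://ddi.example.internal/api/v1")  # placeholder URL
TOKEN = os.environ.get("DDI_TOKEN", "")  # supplied by the orchestrator's secret store

def register_record(fqdn: str, ip_address: str) -> None:
    """Push an A record for a newly deployed service to the DDI solution."""
    response = requests.post(
        f"{DDI_API}/records",
        json={"type": "A", "name": fqdn, "value": ip_address, "ttl": 300},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

# Called by the deployment pipeline once the workload receives its private IP, e.g.:
# register_record("api.internal.example", "10.0.12.34")
```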

Moving applications to either a public or private cloud is inevitable. And as businesses continue to transform themselves, cloud usage will tag along on the journey. But businesses need to be aware that the public cloud isn't infallible when it comes to security, and must not assume that every angle is covered. Otherwise, the convenience of the cloud will turn into an inconvenience for their data.


About the Author

Ronan David

Vice President of Strategy at EfficientIP

Ronan David develops the strategic direction for EfficientIP, which delivers fully integrated network security and automated solutions for DDI (DNS-DHCP-IPAM). He oversees EfficientIP's customer and partner relationships, resulting in corporate growth and development within the security market. He forges strategic relationships with leading partners including Microsoft, Cisco, Tufin, and VMware. He is also a technical adviser for key clients in market sectors, including finance, insurance, telecommunications, retail, transportation, and the public sector, driving solutions to improve network security, performance, and availability. He has been with EfficientIP since 2004. He previously held positions at Orange and Visionwave.
