Thursday, August 16, 2012

Q&A: CloudPassage’s Andrew Hay Talks Cloud Security Challenges

Security in the public cloud remains a subject of ongoing discussion

Security in the public cloud remains a subject of ongoing discussion, and while the cloud computing market is growing at a rapid rate, many companies still have reservations about cloud adoption.

Earlier this week, Liam Eagle spoke with StillSecure CEO Rajat Bhargava in a Q&A interview about some of the specific security threats facing the public cloud, along with best practices for addressing them.

CloudPassage chief evangelist Andrew Hay offered his own insights on the subject, drawing on his experience with his company and its customers.

In an email Q&A with the WHIR, Hay talked about the challenges of moving from the private to the public cloud and how they can be mitigated, who is responsible for securing data in the cloud, and what the limitations are of best practices for cloud security.

WHIR: What are the challenges of moving from the private to the public cloud, and how can they be mitigated?

Andrew Hay: Private cloud provides a more data center-like architecture for the delivery and security of applications and servers. Customer platforms can be dedicated to only that customer without the fear of sharing resources or storage real estate with other customers. In public cloud, however, customers sacrifice isolation for increased cost savings and share a provider’s compute resources with an unknown number of other customers. Private cloud providers may also provide network security infrastructure, such as firewalls, intrusion systems and other network-based controls that customers can employ to help secure their infrastructure. You can still use dedicated devices in private cloud, but, in public cloud, virtual appliances must be used.

Traditional security strategies were created at a time when cloud infrastructures did not exist. Multi-tenant (and even some single-tenant) cloud hosting environments introduce many nuances, such as dynamic IP addressing of servers, cloud bursting, rapid deployment and equally rapid server decommissioning, which the vast majority of security tools cannot handle.

The technical nature of cloud-hosting environments makes them more difficult to secure. This may simply be due to the age of cloud technology – still an infant by technology standards. This means that we do not yet have industry-accepted practices or as many tools designed to handle the environment. A technique sometimes called “cloud bursting” can be used to increase available compute power extremely rapidly by cloning virtual servers, typically within seconds to minutes. That’s certainly not enough time for manual security configuration or review. While highly beneficial, high-speed scalability also means high-speed growth of vulnerabilities and attackable surface area. Using poorly secured images for cloud bursting or failing to automate security in the stack means a growing threat of server compromise and nasty compliance problems during audits.

Traditional firewall technologies present another challenge in cloud environments. Network address assignment is far more dynamic in clouds, especially in public clouds. There is rarely a guarantee that your server will spin up with the same IP address every time. Current host-based firewalls can usually handle changes of this nature, but what about firewall policies defined with specific source and destination IP addresses? How will you accurately keep track of cloud server assets or administer network access controls when IP addresses can change to an arbitrary address within a massive IP address space? Also, with hybrid cloud environments, the cloud instance can move to a completely different environment – even ending up on the other side of the firewall configured to protect it.
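One common way around the static-address problem Hay describes is to define firewall policy in terms of names or roles and resolve them to addresses only at the moment the policy is applied. The following is a minimal sketch of that idea in Python; the hostnames, port and use of iptables are hypothetical placeholders for illustration, not a description of CloudPassage’s product or any provider’s tooling.

    import socket
    import subprocess

    # Hypothetical policy: which peer hosts may reach which local ports.
    # The role name, hostnames and port are placeholders for illustration.
    POLICY = {
        "app-tier": {"hosts": ["app1.example.internal", "app2.example.internal"],
                     "port": 5432},
    }

    def apply_policy():
        for role, rule in POLICY.items():
            for host in rule["hosts"]:
                # Resolve the peer's *current* address at apply time, so the
                # rule survives re-launches that hand out new IP addresses.
                ip = socket.gethostbyname(host)
                subprocess.run(
                    ["iptables", "-A", "INPUT", "-s", ip, "-p", "tcp",
                     "--dport", str(rule["port"]), "-j", "ACCEPT"],
                    check=True,
                )

    if __name__ == "__main__":
        apply_policy()

Re-running a script like this at boot, or whenever peers are re-addressed, keeps the rule set in step with an environment where IP addresses are never guaranteed.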

So when moving from private to public cloud architectures, customers need to have a better understanding of how their data will be segregated, in addition to what technical security controls are available to them. The tools that you have relied upon for addressing on-premises security concerns might not be built to handle the nuances of cloud environments.

There are some things to keep in mind when looking to secure your cloud environments. Primarily, you need to ensure that your chosen tools can be built into your cloud instance images to bake security into the provisioning process. Vulnerabilities should also be addressed prior to bursting or cloning your cloud servers, and changes should be closely monitored to limit the expansion of your attackable surface area. Finally, your chosen tools should also be designed to handle the dynamic nature of cloud environments without disrupting operations or administrative access.
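To make “baking security into the provisioning process” concrete, here is a minimal first-boot hardening sketch in Python. It assumes a Debian/Ubuntu-style Linux image with ufw available, and assumes the script is wired into the image so it runs before any application traffic is accepted; the specific commands and the disabled service are illustrative, not prescribed by Hay.

    import subprocess

    def run(cmd):
        # Raise on failure so a partially hardened boot is loud, not silent.
        subprocess.run(cmd, check=True)

    def first_boot_hardening():
        # 1. Patch before anything else is exposed (apt-based image assumed).
        run(["apt-get", "update"])
        run(["apt-get", "-y", "upgrade"])

        # 2. Default-deny host firewall, then allow only what the role needs.
        run(["ufw", "default", "deny", "incoming"])
        run(["ufw", "allow", "22/tcp"])   # retain administrative access
        run(["ufw", "--force", "enable"])

        # 3. Turn off a listening service this role does not need (placeholder).
        run(["systemctl", "disable", "--now", "avahi-daemon"])

    if __name__ == "__main__":
        first_boot_hardening()

Because the script lives in the image itself, every clone spun up during a burst starts from the same hardened baseline instead of waiting for the manual review that bursting leaves no time for.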

WHIR: Who is responsible for securing data in the cloud: the provider, the tenant or both?

AH: The security of a cloud infrastructure is actually a joint responsibility. The provider is responsible for securing the physical infrastructure, including the facility, servers and inter-networking, but, depending on which cloud model is employed (IaaS, PaaS or SaaS), the provider’s responsibility differs. In an IaaS environment, the provider is generally responsible for security up to, and including, the hypervisor. In PaaS environments, however, the provider’s security responsibility often stops short of the applications being run by the customer. In a SaaS environment, the responsibility for security rests almost entirely on the shoulders of the provider, but is shared with regard to the presentation layer and generated data.

WHIR: What are some of the new exposures, threats, and risks threatening cloud servers?

AH: One of the most critical issues with cloud servers is that the majority of them initialize unpatched – or at least extremely out of date. When you sign up for an Infrastructure as a Service public cloud like Amazon Web Services or Rackspace, the provider makes a catalog of virtual machines available for you to use. Unfortunately, these VMs typically start up completely unpatched. For example, choose a Windows Server 2008 Service Pack 2 server from the catalog, and it will spin up missing about three years’ worth of patches. This creates two problems right out of the gate. First, you’ve just spun up a publicly accessible server that is vulnerable to every exploit created since the product was released. Second, you will need to patch the server prior to using it, which increases deployment time, as well as cost, since you will need to pay for the CPU and network utilization required to patch the server.
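Before trusting a freshly launched image, it is easy to measure the patch gap Hay describes. The sketch below does this for an apt-based Linux image by simulating an upgrade and counting what it would install; the commands are assumptions about the image (a Windows instance would consult Windows Update instead), not something prescribed in the interview.

    import subprocess

    def pending_upgrades():
        # Refresh package metadata, then simulate (-s) an upgrade so nothing
        # actually changes; count the "Inst ..." lines apt-get would act on.
        subprocess.run(["apt-get", "update"], check=True)
        result = subprocess.run(
            ["apt-get", "-s", "upgrade"],
            check=True, capture_output=True, text=True,
        )
        return sum(1 for line in result.stdout.splitlines()
                   if line.startswith("Inst "))

    if __name__ == "__main__":
        n = pending_upgrades()
        print(f"{n} packages pending - patch before exposing any services")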

Also, when you start up a VM in a public cloud, there is obviously no way to access the console. As a result, the administration ports (SSH for Linux, RDP for Windows) get exposed to Internet access. Combine this exposure with the lack of patches mentioned above, and you have a server just begging to be hacked. While some public cloud providers use public/private keys for authentication, this simply protects you from brute-force attacks against the system. So far this year, we have seen two denial of service attacks (CVE-2011-1968 and CVE-2012-0152) and two remote code execution attacks (CVE-2012-0002 and CVE-2012-0173) against Windows RDP. All four are exploitable without requiring the attacker to first authenticate with the system.
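Checking whether those administration ports really are reachable from the outside takes only a few lines. The sketch below, meant to be run from a machine outside the cloud network against a placeholder address, simply attempts TCP connections to the standard SSH and RDP ports:

    import socket

    ADMIN_PORTS = {22: "SSH", 3389: "RDP"}

    def exposed_admin_ports(host, timeout=3.0):
        # Return the admin ports on `host` that accept a TCP connection.
        reachable = []
        for port, name in ADMIN_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the port answered
                    reachable.append((port, name))
        return reachable

    if __name__ == "__main__":
        # 203.0.113.10 is a documentation address standing in for your server.
        for port, name in exposed_admin_ports("203.0.113.10"):
            print(f"{name} (port {port}) is reachable from the Internet side")

If either port answers, limiting which source addresses may reach it – or moving access behind a VPN or bastion host – removes the easiest path to the unauthenticated RDP flaws Hay lists.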

WHIR: What are some best practices for businesses that store their data in the cloud to prevent security breaches, without buying a security product?

AH: Some of the best practices for cloud have much in common with the best practices employed for securing servers in traditional data centers. Patch your servers and keep them patched, turn off any unneeded services with listening ports, restrict access to the remaining services with a host-based firewall, check for insecure configuration choices, watch for signs of intrusion, alert and respond to malicious or unwanted network traffic, and provide strong authentication for your server access.
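Several of those practices can be spot-checked without buying anything. As one hedged illustration, the Python sketch below uses the third-party psutil library (assumed to be installed) to list every TCP port a Linux server is listening on; anything in the output that the server’s role does not require is a candidate for shutdown or firewalling:

    import psutil  # third-party: pip install psutil

    def listening_tcp_ports():
        # Collect (address, port) pairs that currently have a TCP listener.
        return sorted(
            {(c.laddr.ip, c.laddr.port)
             for c in psutil.net_connections(kind="tcp")
             if c.status == psutil.CONN_LISTEN}
        )

    if __name__ == "__main__":
        for ip, port in listening_tcp_ports():
            print(f"listening on {ip}:{port}")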

WHIR: What are the limitations of these best practices, and at what point does a company need to invest in a security platform?

AH: All of the above best practices are well established and frequently implemented in traditional data centers. IT teams have historically relied on strong perimeter controls to prevent server weaknesses from being exploited. Relatively lax enforcement of security standards was tenable in these environments since servers were safe behind the corporate firewall. Without defined perimeters or security choke-points, elastic cloud environments are much more difficult to secure. Security mechanisms need to expand, contract and automatically update along with the cloud server environment that changes dynamically. Every single server has to be hardened before it can be exposed to the public. That’s why automation of security controls is key to successful cloud deployments.

Any company that wants to leverage the scale and the economies of the cloud needs to invest in a security platform that can support the dynamics of a public or hybrid cloud environment. The security platform needs to be portable, scalable and elastic, allowing customers to burst out new server clones, knowing that they are secure from the very moment they are spun up.

Talk Back: What are you doing to educate your customers about cloud security? Do you agree with Andrew Hay’s insights about the challenges of security in the public cloud? Let us know in the comments.
