Tech Insight: HTTPS Is Evil

New privacy and encryption offerings on social networks could help protect users, but they also create major headaches for IT and security managers

Dark Reading Staff, Dark Reading

March 18, 2011

4 Min Read

Last week, Twitter joined Facebook and other social networks in offering an always-on HTTPS option to help protect the privacy of users on its site. Many believe the author of Firesheep is to thank for pushing HTTPS support up the priority list for social networks.

With the new HTTPS setting, millions of people are now able to protect their private -- and not so private -- postings from prying eyes on airplanes, at coffee shops, or anywhere else they might browse their favorite social network sites. Facebook was cheered by the security community for finally taking this fundamental step in protecting the sessions and data of users.

Enterprise IT organizations, on the other hand, aren't so sure about the new security measures. Their first question: How do you monitor what's coming in and out of the corporation if all of that traffic is encrypted?

The perils of social networks have been researched and reported many times. The reality is that any transport method out of an organization -- whether physical media or a private message on a social network -- is a potential avenue for data leakage. When those avenues are encrypted, security staff lose the ability to monitor the sessions and understand what's going out the door. Essentially, the improved client-side security of these websites reduces the ability of those tasked with protecting corporate data to see and respond to what leaves the network.

So what is a lonely security team to do when it wants to protect corporate data but HTTPS sessions hinder its visibility? Time to implement some layers of visibility.

If your organization has a fancy commercial data leak prevention (DLP) product, then check its ability to provide either endpoint protection or integration with Web proxies. Some DLP products can hook all browser calls on the endpoint and inspect each transaction; any transaction that matches the DLP tool's rule set is acted on. Unfortunately, anything running on the endpoint can be tampered with by the end user, an intruder, or malware.

Integrating with Web proxies provides a less intrusive alternative and doesn't rely on the endpoint, but it might present problems with compatibility and coverage. If the endpoint is outside the organization, then it might not be covered by the proxy unless the organization forces all remote traffic through a hosted system.

If your organization doesn't have a commercial DLP solution or another method that can inspect HTTPS traffic, you still have options. Implementing an open-source proxy, such as Squid, can allow you to inspect HTTPS traffic -- even if you don't block any requests in the standard Web-filtering sense.

Squid is a popular open-source proxy. It can be used as a reverse proxy for incoming connections or in the more traditional configuration of proxying outbound connections. Implementation varies from environment to environment: Squid can be placed in-line to proxy all outbound traffic and hand content to other inspection tools via ICAP, or it can be deployed alongside a span port. However it is deployed, it can decrypt and inspect HTTPS traffic on the Squid server itself using sslbump.
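
If you do hand traffic off via ICAP, the relevant squid.conf directives look roughly like the sketch below; the service name, the local ICAP server, and the port are illustrative assumptions rather than part of a stock install:

# Illustrative: pass requests to a local ICAP service for inspection
# (assumes an ICAP server such as c-icap is already listening on 127.0.0.1:1344)
icap_enable on
icap_service dlp_check reqmod_precache bypass=0 icap://127.0.0.1:1344/request
adaptation_access dlp_check allow all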

Using sslbump within Squid is pretty straightforward, with only a few configuration options required. First, the http_port directive must be configured with the sslBump option, as in this sample configuration:

http_port 3128 sslBump cert=/usr/local/squid3/etc/CA-priv+pub.pem
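
The cert= parameter points to a PEM file containing both the CA certificate and its private key. If you don't already have one, a self-signed CA can be generated with OpenSSL roughly as follows -- the key size and lifetime are just example values -- and the resulting CA certificate will need to be trusted by your clients' browsers:

# Generate a CA key and self-signed certificate, then combine them into one PEM file
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout CA-priv.pem -out CA-pub.pem
cat CA-priv.pem CA-pub.pem > /usr/local/squid3/etc/CA-priv+pub.pem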

Next, because bumped requests must be sent directly to the origin servers rather than through a cache peer, we must enable the always_direct directive.

always_direct allow all

Since there are some requests we will not want to decrypt within Squid -- such as when users access their bank accounts, perform e-commerce transactions, or visit other sensitive sites, as well as sites that don't handle sslbump properly -- we can use the ssl_bump and acl directives to exempt those sites from the process.

acl broken_sites dstdomain .example.com
ssl_bump deny broken_sites
ssl_bump allow all

These directives define a list of sites called broken_sites -- the sites that can't handle sslbump properly and shouldn't be processed in this manner -- and tell Squid not to bump requests to them. Any request for a site not on the broken_sites list is allowed through ssl_bump and decrypted for inspection.
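
Putting those pieces together, the sslbump-related portion of squid.conf ends up looking something like this (the port, certificate path, and example domain are simply carried over from the snippets above):

# sslbump-related directives assembled from the snippets above
http_port 3128 sslBump cert=/usr/local/squid3/etc/CA-priv+pub.pem
always_direct allow all
acl broken_sites dstdomain .example.com
ssl_bump deny broken_sites
ssl_bump allow all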

You should now have a working Squid solution that can intercept HTTPS requests. From here, open-source plug-ins -- such as SquidGuard and ClamAV -- can be used to inspect the traffic.
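
As a rough sketch of how that wiring looks, SquidGuard is typically hooked in as a URL rewriter from squid.conf -- the binary and configuration paths below are common defaults and may differ on your system -- while ClamAV is usually reached through an ICAP service along the lines of the one sketched earlier:

# Hand each request to SquidGuard for URL filtering (paths are typical defaults)
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 5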

HTTPS is a big win for website users and their privacy, but it can be a big problem for security teams attempting to prevent sensitive information from leaving the enterprise via social networks. There are numerous ways to address this problem. Evaluate the infrastructure, determine the need, and choose the one that works best for your enterprise.


