My (Tentative) Wish List For A Better Secure Browser


Sara Peters, Senior Editor

October 14, 2008

6 Min Read

Web browsers are where the client machine rubber meets the Web server road. So it stands to reason that strong Web browser security is paramount -- far more effective than relying on thousands of Web application/plug-in developers to write more secure code. There are definitely some browser developers making strides in the right direction, but none of them are quite there yet. I'm still thinking through this, but if I were writing my wish list for a more secure Web browser today (and, well... I am), then here's what it would be:

1. It has to work. This is absolutely the most important piece of the puzzle. The trouble is, the most effective ways browsers have thus far come up with to improve security also take a serious toll on performance.

2. It has to be built like a platform, not like a singular application. Once upon a time, the Web was a series of static pages, and the Web browser was an application that let you find and view those static pages. Times have changed, however, and now the browser itself plays host to many rich, Web-based applications. Thus, browser development should be treated more like operating system development. Some browsers -- Google Chrome, principally -- are beginning to make strides in this direction. (As my fellow CSIers, Kristen Romonovich and Robert Richardson, said from the get-go, Chrome is more a Windows competitor than it is an Internet Explorer competitor.)

3. It needs a modular -- not monolithic -- architecture. In a modular architecture, the browser is divided into at least two components -- generally speaking, one that interacts with the client machine, and one that interacts with the Web and operates from within a sandbox. The main benefit is that this is a great defense against drive-by malware downloads. If attackers compromise the Web-facing component of the browser, they don't automatically gain full access to the client machine with the user's privileges; they gain only whatever access and privileges the Web-facing component needs. Internet Explorer 8 (beta) and Google Chrome (beta) use modular architectures. The OP browser, still in development by researchers at the University of Illinois, uses an even more granular architecture that splits the browser into five components.

Yet all of today's major browsers still ship with monolithic architectures. (Monolithic architectures are kind of like real-estate brokers who represent both the buyer and the seller -- you just can't quite trust them.)
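To make the modular idea concrete, here's a minimal, purely illustrative sketch (in Python, with made-up request names) of the split: a trusted "kernel" process owns all access to the local machine, and the untrusted Web-facing process can only ask for a narrow set of services over a pipe. This is not how Chromium, IE 8, or the OP browser is actually implemented -- just the shape of the idea.

```python
# Toy model of a modular browser split: a trusted "kernel" process owns all
# access to the local machine, and an untrusted "renderer" process handles
# Web content and can only ask the kernel for narrowly defined services.
# Illustrative sketch only; request names and policy are hypothetical.
import multiprocessing as mp

ALLOWED_REQUESTS = {"save_download"}  # the kernel's entire "API surface"

def renderer(conn):
    """Pretend Web-facing component: it has no direct file access and can
    only send requests over the pipe."""
    conn.send(("save_download", "report.pdf"))   # legitimate request
    conn.send(("read_file", "/etc/passwd"))      # what a compromised renderer might try
    conn.close()

def kernel():
    """Trusted component: mediates every renderer request against a policy."""
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=renderer, args=(child_conn,))
    proc.start()
    child_conn.close()  # only the child keeps this end open now
    while True:
        try:
            request, arg = parent_conn.recv()
        except EOFError:
            break  # renderer exited and closed its end of the pipe
        if request in ALLOWED_REQUESTS:
            print(f"kernel: allowing {request}({arg!r})")
        else:
            print(f"kernel: DENIED {request}({arg!r}) from renderer")
    proc.join()

if __name__ == "__main__":
    kernel()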

4. It has to support some sort of process isolation. In essence, isolating processes means that when one site/object/plug-in crashes, it doesn't crash the entire browser.
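As a rough illustration of what process isolation buys you -- again a toy sketch, not any real browser's code -- each "tab" below renders in its own process, so one tab's crash leaves the rest of the browser standing:

```python
# Sketch of per-tab process isolation: each "tab" renders in its own process,
# so a crash in one tab cannot take down the whole browser. Illustrative only;
# the URLs are made up.
import multiprocessing as mp
import os

def render_tab(url):
    if "crashy" in url:
        # Simulate a renderer crash (e.g., a plug-in or parser bug).
        os._exit(1)
    print(f"[pid {os.getpid()}] rendered {url}")

if __name__ == "__main__":
    tabs = ["https://example.com", "https://crashy.example", "https://news.example"]
    procs = [mp.Process(target=render_tab, args=(u,)) for u in tabs]
    for p in procs:
        p.start()
    for p, url in zip(procs, tabs):
        p.join()
        status = "crashed" if p.exitcode != 0 else "ok"
        print(f"browser: tab {url} -> {status}; other tabs unaffected")
```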

5. Its security policies cannot rely heavily on the user. Average users should not be expected to understand the intricacies of privacy and security settings. They shouldn't be expected to dig into their Internet options, flip JavaScript on and off and on and off again, disable plug-ins, delete nefarious cookies, or anything else.

6a. It's got to figure out how to securely handle plug-ins. 6b. It's got to figure out how to securely handle JavaScript.

The trouble with plug-ins is that they tend to run as a single instance, so process isolation doesn't really work for them; they're given unchecked access to all the browser's innards; and they tend to assume -- or outright require -- the user's full privileges. In order to let plug-ins run properly, Chromium (the modular, open-source browser architecture used by Google Chrome) runs them outside the sandbox and with the user's full privileges -- so the browser can't do anything to save the user's machine from malicious downloads through an exploited plug-in.

The OP browser has some very innovative ways of handling plug-ins. Rather than using the Same Origin Policy -- which prohibits scripts and objects from one domain from accessing or loading content from another domain -- the browser applies a "provider domain policy" to plug-ins, in which the browser can label a Web site and the plug-in content embedded in that site with separate origins. The plug-in content's origin is the domain hosting it, which is not necessarily the domain of the page you're viewing. (So if you were here on InformationWeek.com and I'd embedded an Adobe Flash media file from YouTube, the OP browser could recognize the page's origin as InformationWeek.com and the Flash file's origin as YouTube.com.) The benefit is that you can add a site to your "trusted" list -- thereby allowing plug-ins and any plug-in content that originates from that trusted site -- without also allowing plug-in content that runs on the trusted site but originates from untrusted sites. This greatly mitigates the risks of cross-domain plug-in content. However, a) there are some cases where this policy will prevent plug-ins from operating properly, and b) as Robert Hansen, CEO of SecTheory, pointed out to me, the primary vector for cross-domain content attacks (XSS, CSRF) is JavaScript, not plug-ins.
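Here's a much-simplified sketch of that origin-labeling idea (the trusted list, URLs, and helper names are hypothetical, and the real OP policy is considerably richer): plug-in content is checked against the origin that serves it, not the origin of the page embedding it.

```python
# Simplified sketch of the "provider domain" idea: plug-in content is labeled
# with the origin that serves it, not the origin of the page embedding it, and
# trust decisions are made against that label. The trusted list and function
# names are hypothetical, not the OP browser's actual implementation.
from urllib.parse import urlparse

TRUSTED_ORIGINS = {"informationweek.com"}   # user's "trusted" list (example value)

def origin_of(url: str) -> str:
    """Reduce a URL to its host, standing in for an origin here."""
    return urlparse(url).hostname or ""

def allow_plugin_content(page_url: str, content_url: str) -> bool:
    """Allow plug-in content only if *its own* provider origin is trusted,
    regardless of which page embeds it."""
    return origin_of(content_url) in TRUSTED_ORIGINS

# A trusted page embedding a Flash file hosted elsewhere: the file's own
# origin (youtube.com) is what gets checked, so it is not automatically trusted.
print(allow_plugin_content("https://informationweek.com/article",
                           "https://youtube.com/video.swf"))        # False
print(allow_plugin_content("https://informationweek.com/article",
                           "https://informationweek.com/clip.swf")) # True
```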

Yet, browsers (the OP browser included) continue to apply the same origin policy to JavaScript, and there are many JavaScript-based attacks -- JavaScript hijacking, for example -- that sidestep the same origin policy.
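To see why script inclusion is the soft spot, consider this toy model of the two rules: cross-origin XMLHttpRequest reads are blocked, but a cross-origin <script src> is fetched and executed -- which is exactly what JavaScript hijacking exploits. The functions below are illustrative stand-ins, not any browser's actual logic.

```python
# Toy model of why <script src> inclusion sidesteps the same origin policy
# while XMLHttpRequest does not: the browser fetches and *executes*
# cross-origin scripts, so a JSON feed exposed at a script URL can be read by
# an attacker's page (the core of "JavaScript hijacking"). Purely illustrative.
from urllib.parse import urlparse

def same_origin(a: str, b: str) -> bool:
    pa, pb = urlparse(a), urlparse(b)
    return (pa.scheme, pa.hostname, pa.port) == (pb.scheme, pb.hostname, pb.port)

def allow_xhr_read(page_url: str, target_url: str) -> bool:
    # Classic same-origin rule: a page may only read XHR responses from its own origin.
    return same_origin(page_url, target_url)

def allow_script_include(page_url: str, target_url: str) -> bool:
    # <script src="..."> is exempt: content from any origin may be included and run.
    return True

attacker_page = "https://evil.example/trap.html"
json_feed     = "https://bank.example/account.json"

print(allow_xhr_read(attacker_page, json_feed))        # False: SOP blocks the read
print(allow_script_include(attacker_page, json_feed))  # True: the feed runs as script
                                                       # in the attacker's page
```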

The trouble is, none of the browser companies have really figured out yet how to securely handle JavaScript in a way that doesn't disrupt one's browsing experience and/or require security-savvy action from users. The NoScript plug-in for Firefox is a good tool, but a) it's not a standard Firefox feature, and b) it's a bit advanced for the average user. Other browsers allow you to simply disable JavaScript, but doing so means the user won't be able to enjoy some of the fun, quintessentially Web 2.0 things the Internet now has to offer. Further, JavaScript is automatically enabled on any sites on the user's "trusted" list, so malicious JavaScript on a legitimate site continues to be a problem.
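And here is a rough sketch of the per-site allowlist approach and its blind spot (the site list and function names are hypothetical, not NoScript's internals): because the decision is keyed only to the top-level site, a malicious script injected into a trusted page runs just like a legitimate one.

```python
# Sketch of a NoScript-style per-site script policy and its blind spot: script
# execution is decided by which top-level site the user trusts, so a malicious
# script injected into a trusted page (e.g., via XSS) still runs. The allowlist
# and function names are hypothetical.
from urllib.parse import urlparse

SCRIPT_ALLOWED_SITES = {"mybank.example"}   # sites the user has marked "trusted"

def scripts_enabled_for(page_url: str) -> bool:
    return urlparse(page_url).hostname in SCRIPT_ALLOWED_SITES

page = "https://mybank.example/statement"
print(scripts_enabled_for(page))   # True: legitimate scripts run ...
# ... and so would a script an attacker managed to inject into this same page,
# because the policy never distinguishes "good" scripts from "bad" ones.
```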

Web browsers' inability to elegantly handle JavaScript-related threats is a big problem, because it means that we all must rely upon the individual Web site developers to keep their sites free of cross-site scripting flaws and cross-site request forgery vulnerabilities.

Part of the trouble may be that currently available rendering engines, used for parsing HTML and executing JavaScript, are error-prone and written in generally insecure languages. (So if you're a young researcher, maybe "Creating a more secure HTML rendering engine" would make a good thesis project. Pretty please?)

I'm still thinking some of this through, so do let me know if you disagree, see errors in my judgment, or think something else should be on this list.

Also: Should one browser be expected to do everything? How likely are you (and your users) to use one browser for everyday activities and another browser for more delicate activities?

We'll be devoting the next issue of the Alert -- CSI's members-only publication -- to browsers and other elements of client-side Web security issues. We'll also be discussing some of them during the CSI 2008 conference next month. On Tuesday, Nov. 18, Gunter Ollmann of IBM-ISS will present a full 60-minute session on "Man-in-the-Browser Attacks," and, also on Tuesday, browser security will be discussed during the Web 2.0 Security Summit, moderated by Jeremiah Grossman (CTO, WhiteHat Security) and Tara Kissoon (Director of Information Security Services at Visa Inc.).

About the Author

Sara Peters

Senior Editor

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad of other topics. She authored the 2009 CSI Computer Crime and Security Survey and founded the CSI Working Group on Web Security Research Law -- a collaborative project that investigated the dichotomy between laws regulating software vulnerability disclosure and those regulating Web vulnerability disclosure.

