Marking HTTP Sites as Insecure: The Emperor’s New Clothes Indeed!
December 17, 2014
- Users don’t have a way of readily knowing whether a site should be protected using SSL/TLS, and Google engineers are proposing yet another indicator.
- A better use of their time would be working with existing standards efforts – or starting a new one – that let site owners indicate when a site should be protected.
Google is using its size in the web arena to effect changes in how users view the relative “security” of websites. I put security in scare quotes because that word has a dubious meaning at best and more likely doesn’t mean what the company intends. The short story is that Google wants a way to indicate to end users that a page which is not properly protected using TLS – the current, improved version of SSL – is not secure.
However, there are several problems with this approach. First, what does Google mean by “secure,” and what do end users interpret as “secure”? Back in the 90s, the browser makers, online retailers, and security vendors oversold the benefit of SSL, and the general public, not knowing any better (nor should they, really), believed the golden lock meant the site was secure and therefore trusted. Of course, the use of SSL guaranteed nothing of the sort, and trying to correct that perception was as effective as spitting into the wind.
Then the industry – courageously ignoring recent history – had the bright idea of demonstrating that some sites have proven they are who they say they are with extended validation certificates promoted by the CA/Browser Forum. EV certificates were supposed to indicate something about trustworthiness in the browser by turning the address bar green, grey, or red depending on the certificate, but really didn’t. The stated intent was good, but the outcome was (and is) a mess.
Now, engineers at Google, also ignoring the same recent history as the CA/Browser Forum, want to do more of the same. You see, the fact is that there are times when having data sent over a network protected by TLS is useful, such as when you are authenticating to a web site or transmitting your credit card info to an online store, but the presence of TLS or the lack thereof only affects the protection of data in transit. There are times when security is not important, such as when browsing the web or reading news. What really matters is when a site is supposed to use TLS (as required by a site owner), but in fact, isn’t or is only using TLS for some portions of a site. When that happens, the browser should treat the use of TLS as a failure and not allow the page to be displayed. Fat chance of that happening.
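To make the “only using TLS for some portions of a site” failure concrete, here is a minimal sketch of the kind of mixed-content check a browser performs: given an HTTPS page and the URLs of its subresources, it flags any that would be fetched over plain HTTP. The URLs and function name are hypothetical, for illustration only.

```python
from urllib.parse import urljoin, urlparse

def insecure_subresources(page_url, resource_urls):
    """Return subresource URLs that would be fetched over plain HTTP
    from an HTTPS page -- the partial-TLS failure described above."""
    if urlparse(page_url).scheme != "https":
        return []  # the page itself isn't TLS-protected; nothing to compare
    flagged = []
    for ref in resource_urls:
        absolute = urljoin(page_url, ref)  # resolve relative references
        if urlparse(absolute).scheme == "http":
            flagged.append(absolute)
    return flagged

if __name__ == "__main__":
    print(insecure_subresources(
        "https://shop.example/cart",
        ["//cdn.example/app.js",          # scheme-relative: inherits https
         "http://ads.example/pixel.gif",  # explicit http: flagged
         "/style.css"]))                  # relative: inherits https
    # -> ['http://ads.example/pixel.gif']
```

A browser that treated any non-empty result as a hard failure, rather than quietly downgrading the lock icon, would match the behavior argued for above.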
Of course, indicating that a particular web site should be using TLS and which certificate should be used doesn’t appear to be too difficult to figure out, and work has already begun in the IETF and is described in RFC 6394, “Use Cases and Requirements for DNS-Based Authentication of Named Entities (DANE),” as well as in active protocol development work. I am sure there are other mechanisms that could be developed that would provide a useful improvement to web and network security which would involve less analysis on the part of end users and more automation from the systems we use.
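The protocol work that grew out of those DANE requirements (RFC 6698) lets a site owner publish a TLSA record in DNS stating that a service must present a particular certificate or key. As a rough sketch of the mechanism: the record lives at an owner name derived from the port, protocol, and host, and its header fields say what to match and how. The helper names below are mine, but the field values come from RFC 6698.

```python
# DANE/TLSA sketch per RFC 6698: a record at _<port>._<proto>.<host>
# carries (usage, selector, matching_type, certificate_association_data).

def tlsa_query_name(host, port=443, proto="tcp"):
    """Build the DNS owner name where the TLSA record for a service lives."""
    return f"_{port}._{proto}.{host}"

# Registered values for the three TLSA header fields (RFC 6698, section 2.1):
USAGES = {
    0: "PKIX-TA: CA constraint",
    1: "PKIX-EE: service certificate constraint",
    2: "DANE-TA: trust anchor assertion",
    3: "DANE-EE: domain-issued certificate",
}
SELECTORS = {0: "full certificate", 1: "SubjectPublicKeyInfo"}
MATCHING = {0: "exact match", 1: "SHA-256 hash", 2: "SHA-512 hash"}

def describe_tlsa(usage, selector, matching_type):
    """Render a TLSA record's header fields in human-readable form."""
    return (f"{USAGES[usage]}; match the {SELECTORS[selector]} "
            f"using {MATCHING[matching_type]}")

if __name__ == "__main__":
    print(tlsa_query_name("www.example.com"))
    # -> _443._tcp.www.example.com
    print(describe_tlsa(3, 1, 1))
    # -> DANE-EE: domain-issued certificate; match the SubjectPublicKeyInfo using SHA-256 hash
```

The key point for this discussion: the presence of such a record is an owner-published signal that the site should be using TLS, exactly the kind of automated indicator argued for above – no end-user analysis of lock icons required.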