SSL/TLS/HTTPS: Keeping the public uninformed

Credit: Wikinews

Perhaps the most important thing to understand about the SSL/TLS/HTTPS system that secures websites is that you are not supposed to understand it.

I say this as a follow-up to my last blog, which argued that web browsers share much of the blame for Man-In-The-Middle attacks such as Superfish, because they hide the name of the Certificate Authority (CA) vouching for the identity of secure websites. Yes, the CA name is available with a couple of clicks, but if it weren't hidden in the weeds, Lenovo customers might have questioned why Superfish was the only Certificate Authority on their PCs.

My proposed solution - prominently identifying the Certificate Authorities vouching for a secure website - was a fantasy. If SSL/TLS/HTTPS were really designed for security, this would have been done long ago. But secure websites are security theater. They seem to be secure, and techies say they are secure (at least in public), but the system is flawed. It took so long to expose Superfish because the system is rigged against normal (non-techie) folks.

Jonathan Zdziarski recently made another simple suggestion that, like mine, will never see the light of day.

He points out that HTTPS interception, such as Superfish, can be detected if the web browser notices that the last X (insert your favorite arbitrary number here) "secure" websites were all vouched for by the same Certificate Authority.
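The idea is simple enough that it fits in a few lines. Here is a minimal sketch of it in Python - the class and names are mine, purely illustrative, not anything Zdziarski specified:

```python
from collections import deque

# Sketch of Zdziarski's suggestion: track the CA that vouched for each of
# the last N "secure" sites visited, and flag when one CA issued them all.
N = 10  # the arbitrary window size mentioned above

class InterceptionDetector:
    def __init__(self, window=N):
        self.issuers = deque(maxlen=window)

    def record(self, issuer):
        """Record the CA that vouched for the latest HTTPS site."""
        self.issuers.append(issuer)

    def suspicious(self):
        """True when the window is full and a single CA issued everything -
        the signature of an HTTPS-intercepting proxy like Superfish."""
        return (len(self.issuers) == self.issuers.maxlen
                and len(set(self.issuers)) == 1)
```

A browser running this check would have flagged a Superfish-infected PC within the first ten secure sites visited.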

At times, this can be a legitimate thing. For example, some anti-virus programs on Windows offer this as a feature - they decrypt SSL/TLS web pages to scan for malware. But, at some point, browsers should issue a warning about this and let the end user confirm that the HTTPS interception is being done on purpose.

This would be a simple thing to add to a web browser, but don't hold your breath waiting.

Along the same line, it would be easy for web browsers to periodically validate the list of trusted root Certificate Authorities. If the browser finds any that it did not put there, the end user should be told and asked if they want to keep trusting that CA.
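The check amounts to a set difference between the root store the browser shipped with and the root store it sees now. A rough sketch, assuming the browser kept a snapshot of its original root list (the function name is my own invention):

```python
# Sketch: diff the current trusted root store against a snapshot the
# browser took at install time, surfacing anything added since.
def unexpected_roots(baseline, current):
    """Return the root CAs present now that the browser did not ship with."""
    return sorted(set(current) - set(baseline))
```

Anything this returns - a corporate proxy's root, an anti-virus product's root, or Superfish - is exactly what the user should be asked about.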

You'll see this feature added when pigs fly.

For whatever reason(s), the tech industry prefers an uninformed public.

Maybe it's arrogance - they don't think a plumber could possibly understand the concept of Certificate Authorities. Maybe it's just too much work to explain things in a way that non-nerds can understand. Maybe they don't want to expose things because then everyone will see that the emperor has no clothes. Maybe spy agencies are leaning on them to keep things as they are.

Recently Virgin Media let one of their digital certificates expire. It's a trivial thing that happens every now and then. An article about this at The Register shows a screen shot of the error displayed by Firefox.

Nowhere does Firefox actually say that the certificate expired, let alone the date it expired or what it thinks the current date is. That would have been helpful. Instead, the error messages might as well be in Klingon.

Firefox warns that the connection is untrusted and not secure, that the identity of the website cannot be verified, that someone may be trying to impersonate the website and that the security certificate is invalid. Then, my favorite warning: that no issuer chain was provided. Issuer chain? Really?

It's hard to believe that whoever wrote these messages actually intended for anyone to understand the situation.

The problems with SSL/TLS and Certificate Authorities have been known for decades.

Back in 2011, Dan Goodin, in an article called "How is SSL hopelessly broken? Let us count the ways" said "... the repeated failures suggest that the system in its current state is hopelessly broken." The article also quotes Moxie Marlinspike saying that the system offers "... just an illusion of security". An article in The Economist that year started off "The digital-certificate system that is meant to block eavesdroppers nosing in on secure internet transmissions seems to be in tatters." 

Over the years, there have been many proposed improvements to the system, far more comprehensive than the small steps suggested by Zdziarski and myself. Yet none have gained any traction.


Back in April 2013 Steve Gibson released an online certificate fingerprint tester that detects Man-in-The-Middle attacks. You enter the name of a secure website and he reports the fingerprint of the certificate his server received. You then compare it to the one in your browser. If all is well, they will be the same. I blogged about the service at the time.

While not hard to use, it does require some technical background, you have to remember to use it, and there is no easy-to-remember domain name.
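The comparison Gibson's service automates can be done locally with a few lines of Python. This is my own sketch, not his implementation; the first function is the testable core, the second fetches the certificate your connection actually sees:

```python
import hashlib
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate,
    colon-separated in the style browsers display."""
    digest = hashlib.sha256(der_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()

def site_fingerprint(host, port=443):
    """Fingerprint the certificate served to *this* connection.
    If a Man-in-the-Middle is active, this differs from the
    fingerprint an outside observer (like Gibson's server) reports."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem))
```

If `site_fingerprint("www.example.com")` matches what a third party sees from elsewhere on the Internet, odds are no one is sitting in the middle of your connection.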


Google's Certificate Transparency project was in the news recently. First, here is how they introduce it:

Google's Certificate Transparency project fixes several structural flaws in the SSL certificate system ... If left unchecked, these flaws can facilitate a wide range of security attacks, such as website spoofing, server impersonation, and man-in-the-middle attacks ... Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates.

The system requires buy-in from both web browsers and Certificate Authorities. Chrome will, of course, support it and Mozilla, last year, pledged to also support it in a future version of Firefox. It seems unlikely, to me, that Apple and Microsoft will co-operate on a system designed by Google.

On the CA side, the system requires them to submit certificates that they issue to public logs. The heart of the scheme is that anyone can inspect the logs for problems or abuse. So far, only two Certificate Authorities (DigiCert and GlobalSign) have committed to the system.

Last month, rogue certificates were issued by an Egyptian company called MCS Holdings, operating under the authority of China Internet Network Information Center (CNNIC). Paul Ducklin of Sophos wrote a great article on the topic.

In response, Google will remove CNNIC as a trusted root Certificate Authority in Chrome, which is a big deal that could put them out of business. But Google offered to re-instate CNNIC if they implement Certificate Transparency.


Back in 2010 the Electronic Frontier Foundation (EFF) rolled out their SSL Observatory project which they describe as:

The EFF SSL Observatory is a project to investigate the certificates used to secure all of the sites encrypted with HTTPS on the Web. We have downloaded datasets of all of the publicly-visible SSL certificates on the IPv4 Internet, in order to search for vulnerabilities, document the practices of Certificate Authorities, and aid researchers ... We are particularly concerned about the role and practices of Certificate Authorities (CAs), which are the organizations that can sign cryptographic certificates trusted by browsers. These certificates can contain statements like, "this public key belongs to" ... Browsers trust a very large number of these CAs, and unfortunately, the security of HTTPS is only as strong as the practices of the least trustworthy/competent CA.

As far as I know, nothing much has come from this.


Jonathan Zdziarski has another suggestion, borrowing a concept from the Sender Policy Framework (SPF) of email. He calls it Certificate Validation Framework (CVF) and describes it below. 

SPF is specifically designed for email, but the concept is pretty solid: using a secondary service (DNS) as a lightweight means to read information directly from the host you're interested in connecting to (via SSL here) ... If web browsers were to look for certificate validation data within the TXT record of the destination’s DNS records, they could obtain a hash of the certificate that the server is using; the website you’re visiting would effectively be advertising a hash of its own certificate so that any other certificate in the world, regardless of who’s signed it, would fail ... There would be a very publicly visible trace of a government or other attacker attempting to MiTM DNS, whereas today MiTMing certs is a very low visibility task.
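To make the idea concrete, here is a sketch of the verification step. CVF is only a suggestion, so the TXT record format below ("cvf1 sha256=...") is entirely my guess, as is the function name - the point is just that the check is a one-line hash comparison once the DNS lookup is done:

```python
import hashlib

# Hypothetical sketch of Zdziarski's CVF idea: the site advertises a hash
# of its own certificate in a DNS TXT record; the browser compares that
# hash against the certificate it was actually served.
def cvf_matches(txt_record_value, der_cert):
    """True when the advertised hash matches the certificate served.
    Any other certificate - no matter who signed it - fails."""
    advertised = txt_record_value.removeprefix("cvf1 sha256=").strip()
    served = hashlib.sha256(der_cert).hexdigest()
    return advertised == served
```

The actual DNS lookup (a TXT query against the destination domain) is the easy part; the hard part, as always, is getting browsers to do it.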

Seems like a reasonable idea to me. 


To the best of my knowledge, certificate pinning has had the biggest real world impact of the suggestions listed here. 

It is a type of whitelisting that was added to the Chrome browser back in 2011.

Normally, any Certificate Authority can issue certificates for any domain. Pinning restricts the Certificate Authorities that are allowed to issue certificates for a domain. For example, when this was first rolled out, Chrome would only accept Gmail certificates issued by four CAs: Verisign, Google Internet Authority, Equifax and GeoTrust. Certificates from any of the other 600-odd CAs were not only rejected by the browser, but Chrome went so far as to phone home to report them.
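The core of the check is tiny. A sketch, using the four Gmail CAs mentioned above (the data structure and function name are mine, not Chrome's):

```python
# Sketch of a pinning check. Hosts with an entry only accept certificates
# from their pinned CAs; hosts without one accept any trusted CA, as today.
PINS = {
    "mail.google.com": {"Verisign", "Google Internet Authority",
                        "Equifax", "GeoTrust"},
}

def pin_check(host, issuer):
    """True if this CA is allowed to vouch for this host.
    A False here is what exposed DigiNotar and MCS Holdings."""
    allowed = PINS.get(host)
    return allowed is None or issuer in allowed
```

The scaling problem is visible right in the sketch: someone has to build and ship that `PINS` table for every protected site, which is why it started with a handful of high-value Google domains.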

It was certificate pinning that exposed the fraudulent certificates being issued by MCS Holdings last month. In 2011, it exposed fraudulent certificates in Iran that were issued by DigiNotar. As a result, DigiNotar went out of business.

Mozilla adopted certificate pinning in Firefox version 32. Its use was expanded in versions 33 and 34. 

One problem with certificate pinning is that it doesn't scale well. HPKP (below) aims to address that.


HTTP Public Key Pinning (HPKP) is a proposed extension to the HTTP protocol. A new HTTP header would allow websites to inform browsers of the Certificate Authorities they have contracted with. This list of trusted CAs will eventually time out. The hope is that "By effectively reducing the number of trusted authorities who can authenticate the domain during the lifetime of the pin, pinning may reduce the incidence of man-in-the-middle attacks due to compromised Certification Authorities."
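For the curious, an HPKP response header looks roughly like this (the base64 strings are placeholders for SHA-256 hashes of the site's public keys, one primary and one backup):

```
Public-Key-Pins: pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs=";
                 pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE=";
                 max-age=5184000; includeSubDomains
```

A browser that has seen this header remembers the pins for `max-age` seconds (60 days here) and rejects any certificate chain that doesn't involve one of the pinned keys.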


Trust Assertions for Certificate Keys (TACK) is a proposed extension to TLS from Moxie Marlinspike and Trevor Perrin. They describe it:

TACK allows clients to pin to a server-chosen ... "TACK signing key" ... which ... enables pinning without limiting a site's flexibility to deploy different certificates and TLS keys on different servers or at different times. Since pins are based on TSKs instead of CA keys, trust in CAs is not required ... If requested, a compliant server will send a TLS Extension containing its "tack". Inside the tack is a TSK public key and signature. Once a client has seen the same (hostname, TSK) pair multiple times, the client will "activate" a pin between the hostname and TSK for a period equal to the length of time the pair has been observed for ... TACK pins are easily shared between clients. For example, a TACK client may scan the internet to discover TACK pins, then publish these pins through some 3rd-party trust infrastructure for other clients to rely upon.

Convergence and Perspectives are other proposals. The Firefox extension Certificate Patrol was designed to inform the user when a new certificate was being used. It seems to have been abandoned. 

Perhaps the most important thing to take from this list of proposed improvements is how bad the current system is.

If history is any judge, these proposals will fail to catch on. After all, the SSL/TLS authentication scheme involving Certificate Authorities has been around since 1994.

Back in 2011, security expert Bruce Schneier thought the underlying problem had to do with incentives rather than technology. Those who have stakes in the existing system (Certificate Authorities, browser-makers and governments) have no interest in fixing it. He said this two years before the Edward Snowden revelations. Spy agencies, too, have to like things the way they are.

In my opinion, if the tech industry really wanted the system fixed, it would have been improved long ago. 


April 17, 2015: Expanded the description of Certificate Transparency. 
