Hardening SSL/TLS: Common Certificate Issues

4 July 2023 - Articles

Introduction

I recently wrote a quick start guide to hardening SSL/TLS configurations, to help people better understand the different aspects of securing their transport layer security configuration. However, in that article I skipped over a big topic: SSL Certificates.

In this article, we’ll focus on the certificates themselves and the impact of common certificate issues. It’s also worth noting that whilst they’re commonly called “SSL Certificates”, we learned in the last article that of course all versions of SSL should be disabled, since we know SSL was deprecated in 2015 and is “comprehensively broken”.

Therefore, if you prefer the term “TLS Certificate”, or even the technically more accurate “X.509 Certificate”, then that’s great – but you’ll rarely see the latter term used outside of technical documentation; it seems “SSL Certificate” has stuck as the common term.

When an application, such as a web browser, connects to a remote system, for example a web server, it verifies the certificate presented by the remote server to ensure that the system it requested is in fact the one it is connecting to – and that an interception attack is not taking place.

In order for a web server to prove that it is the legitimate web server for a domain, it presents a signed certificate. The certificate is signed by a trusted certificate authority (CA). Your operating system (or the web browser itself) comes bundled with information about the trusted certificate authorities that sign these certificates.

This leads us into the huge topic of Public Key Infrastructure. In short, your web browser comes preconfigured with the details of a large number of trusted certificate authorities (CAs), and your organisation may even have its own CA. The certificate from the remote system is verified to ensure that it is signed by a trusted certificate authority. The idea is that only the correct server could hold a certificate for that domain signed by the trusted third party (the certificate authority), as the CA would not have signed it without first validating ownership of the server (we hope). I’ll save the in-depth details of Public Key Infrastructure for another post; here we’ll focus just on the certificates themselves and common certificate issues.
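
To make that verification step a little more concrete, here’s a minimal sketch in Python using only the standard library (the hostname is just a placeholder, and a real client such as a web browser does all of this automatically):

    import socket
    import ssl

    hostname = "example.com"  # hypothetical target

    # create_default_context() loads the platform's trusted CA store and enables
    # both chain verification and hostname checking by default.
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket() performs the TLS handshake and raises
        # ssl.SSLCertVerificationError if the presented certificate is not signed
        # by a trusted CA, has expired, or does not match the requested hostname.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Issuer:", cert["issuer"])
            print("Valid until:", cert["notAfter"])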

Now I just said that a CA would not have signed a certificate without validating the server ownership first, and generally that’s true, but of course there’s a lot resting on this being correct. If an attacker can illegitimately obtain a signed certificate, they will be able to impersonate a web server, perform an interception attack, and potentially eavesdrop on or modify communications in transit.

For small and medium sized organisations, some of the risks of bad certificates will be outside your threat model, but I’ll cover them for completeness – and of course they are potential risks to some large organisations, nation states, and even targeted individuals such as political dissidents.

Rogue and Stolen Certificates

I said previously that your web browser will trust the certificate supplied by the web server if it is signed by a trusted certificate authority. Therefore, the CAs are critical in ensuring that SSL/TLS connections are protected. There are two potential risks here. The first is that a CA itself goes rogue, for example inappropriately issuing certificates to entities other than the owners of the target system. This could be a certificate authority issuing signed certificates to a government, allowing that government to monitor communications as part of a mass surveillance system or to specifically target political dissidents.

The second, but related, issue is that a certificate authority could be compromised. If attackers compromise the systems that allow certificates to be signed, they could have the CA sign illegitimate certificates, leading web browsers to trust them.

This type of attack has happened previously, for example with Operation Black Tulip. In this case a certificate authority (DigiNotar) was compromised, allowing threat actors to issue illegitimate certificates for approximately two months, with more than 500 certificates being produced. As a result, DigiNotar was removed as a trusted CA from all major web browsers and operating systems, and ultimately went bankrupt.

These kinds of attacks do happen, especially at the far end of the attack spectrum where nation states play. However, the far more likely scenario for most organisations – especially small and medium enterprises – is that the signed certificate issued to them is insufficiently protected, allowing attackers to compromise it.

For example, Nvidia had at least two code signing certificates stolen in 2022, which led to malicious software being distributed that was signed with Nvidia’s certificates. They aren’t the only ones though: hackers have previously compromised Adobe’s code signing infrastructure to sign malicious software.

You could also imagine that if, during a Penetration Test, I compromise a web server, backup server, or file server, I might incidentally come across an organisation’s signed certificates, which I could then utilise to monitor communications between their web server and end users. It’s pretty common for me, during engagements, to find certificates that are insufficiently protected. In short, digital certificates must be protected in storage and you should have a plan in place for what to do if a certificate is compromised.

There’s also the possibility that a certificate is mis-issued, which you might expect should never happen but certainly has. In 2015 it was found that Symantec’s CA had mis-issued 2647 certificates.

Expired Certificates and Certificate Lifespan

If a certificate is compromised, attackers will be able to utilise it until either the certificate expires or its revocation is fully distributed. The shorter the certificate’s validity period, the less time a threat actor can leverage it.

Due in part to this, we’ve seen moves from organisations such as the Certificate Authority/Browser Forum (CA/B Forum) to reduce the maximum lifespan of certificates. For example, in June 2020 Chrome announced that, beginning with Chrome 85, any certificate issued on or after 1 September 2020 with a validity period longer than 398 days (about 13 months) would be considered invalid (note: this change did not apply to locally-operated CAs, only the default browser-trusted CAs, which are commonly called “publicly trusted CAs”). This forced the validity period of certificates to be shortened, and Chrome specifically highlighted reducing the impact of compromised keys as one of the reasons for the change.

Shorter lifecycles also encourage the use of automation, which in turn allows for even shorter lifecycles. Let’s Encrypt, for example, have a certificate lifetime of only 90 days, but recommend that users renew every 60 days, and even note that they’ll consider reducing this maximum lifetime in the future.

There is also the risk presented by certificates that have simply expired. Whilst it’s rare to see this on public websites, I see it very commonly on penetration tests, on infrastructure management pages such as network switch management interfaces and CCTV camera interfaces, that type of thing.
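
If you want to keep an eye on this yourself, the rough sketch below checks expiry over the network; it’s written in Python, assumes the third-party “cryptography” package is installed, and the hostnames are placeholders. Verification is deliberately disabled here so that already-expired or self-signed certificates can still be inspected:

    import socket
    import ssl
    from datetime import datetime

    from cryptography import x509

    # Hypothetical internal management interfaces to check.
    HOSTS = [("switch01.internal", 443), ("cctv01.internal", 443)]

    def fetch_certificate(host, port):
        # Fetch the server's leaf certificate without verifying it, so that
        # expired or self-signed certificates can still be inspected.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der)

    for host, port in HOSTS:
        cert = fetch_certificate(host, port)
        remaining = cert.not_valid_after - datetime.utcnow()
        lifetime = cert.not_valid_after - cert.not_valid_before
        print(f"{host}: {remaining.days} days until expiry "
              f"(total validity {lifetime.days} days)")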

One of the big problems with letting certificates expire is that it teaches users (both non-technical and administrators) bad habits. If a certificate has expired and the standard procedure is simply to click through the security warning and access the page, it makes the job of a threat actor pulling off an interception attack much easier. They could potentially present an invalid certificate for the connection and if the user habitually clicks through the security warning, they might not spot the attack taking place.

Self-Signed Certificates

Of course, not all certificates are signed by publicly trusted CAs; it’s possible for an organisation to deploy their own public key infrastructure with their own CA and install that CA’s certificates on their own endpoints.

Alternatively, it’s also possible for certificates to be “self-signed”, that is to say, not signed by a certificate authority at all. These certificates cannot be trusted, as there is no third-party trust to verify that the certificate is legitimate. Generally speaking, a threat actor could simply generate their own self-signed certificate and use it to perform an interception attack.

It is true that a self-signed certificate could be manually verified once and then trusted, with a warning issued if a different certificate is ever presented – however, in most instances where I’ve seen them deployed, they’re poorly validated and likely to lead to the bad habits described above for expired certificates. I’ve seen other people comment that self-signed certificates are acceptable in testing environments, however I’d still recommend against them; instead, reconfigure your environment to allow signed certificates to be used, either from a publicly trusted CA or from a local CA.

Whilst it is possible for a self-signed certificate to be properly manually validated, I’ve seen far too much documentation that just says “ignore the certificate warnings” – and, in my opinion, “oh, it’s just a test system” isn’t a good enough excuse for building such bad habits.
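
Where a local CA is in place, configuring tools and scripts to trust it is usually a one-line change rather than an excuse to disable verification. As a minimal Python sketch (the CA bundle path and hostname are hypothetical):

    import socket
    import ssl

    # Start from the default trust store, then add the internal CA bundle
    # ("internal-ca.pem" is a placeholder path).
    context = ssl.create_default_context()
    context.load_verify_locations(cafile="internal-ca.pem")

    with socket.create_connection(("test-system.internal", 443)) as sock:
        # Verification and hostname checking remain enabled; certificates issued
        # by the internal CA are now simply trusted like any other.
        with context.wrap_socket(sock, server_hostname="test-system.internal") as tls:
            print("Negotiated:", tls.version())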

With CA-signed certificates it’s of course important to validate that every certificate in the chain is valid (for example, that it hasn’t expired) and that the leaf certificate is in fact valid for the target system, meaning that the target domain is included either within the Common Name or the Subject Alternative Names of the certificate. There are also some certificate specifics to validate, such as ensuring the algorithms in use are appropriately secure.

Hashing Algorithm

A common weakness on certificates is that they’ve been signed with a weak hashing algorithm. Thankfully, these days we don’t typically come across MD5, but we do occasionally spot a certificate signed with SHA-1. Both of these algorithms are considered not secure enough for signing certificates. It’s feasible that an attacker could perform a collision attack against the certificate (where they create a new certificate specifically crafted to have the same hash as the valid certificate), allowing for interception attacks. This has previously been demonstrated for MD5, and SHA-1 has likewise been “sunsetted” in favour of more secure algorithms like SHA-256.

It should be noted that trusted root CA certificates may be signed with SHA-1, as these certificates are trusted by their identity rather than their signature, so you may occasionally come across root certificates signed with SHA-1 for a little while longer (see: https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html).
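
If you want to check what a given certificate was actually signed with, the short sketch below will do it; it assumes the third-party “cryptography” package and a certificate saved to disk as “server.pem” (a placeholder path):

    from cryptography import x509

    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # signature_hash_algorithm is None for schemes that don't use a separate
    # hash (such as Ed25519); otherwise it exposes a name like "sha256" or "sha1".
    algorithm = cert.signature_hash_algorithm
    name = algorithm.name if algorithm is not None else "none"
    if name in ("md2", "md4", "md5", "sha1"):
        print(f"Weak signature hash: {name}")
    else:
        print(f"Signature hash: {name}")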

Subject Alternative Names/Common Name Mismatch

There’s of course also the possibility that a certificate cannot be verified because the domain name of the target system is not included in the list of names the certificate is valid for. There are two fields here: the “Common Name”, which originally was the only name a certificate would be valid for (although this could be a “wildcard”, allowing a certificate to be valid for any subdomain of a given domain), and the Subject Alternative Names extension, which allows a single certificate to be valid for multiple unrelated domains (or multiple wildcards).

Whilst it’s uncommon to see a system configured with a certificate for the wrong hostname, it is possible, and using a system configured in this way would again lead to a security warning; teaching users to click through these warnings builds bad habits.

If a certificate is required to protect more than one domain, simply add the required domain names to the Subject Alternative Names during certificate issuance.
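
To see which names a deployed certificate actually covers, the Subject Alternative Names can be pulled straight out of the handshake; the following small Python sketch uses only the standard library, and the hostname is a placeholder:

    import socket
    import ssl

    hostname = "example.com"  # hypothetical

    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    # subjectAltName is a tuple of (type, value) pairs, e.g. ("DNS", "example.com").
    for name_type, value in cert.get("subjectAltName", ()):
        print(name_type, value)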

Short RSA Keys

Finally, it’s important to ensure that the encryption keys provided by the server are appropriately secure, so using short RSA keys (e.g. less than 2048 bits) would leave a certificate less secure than is ideal. The shorter the key, the easier it would be to factor the modulus and effectively break the protection offered. That said, the CA/B Forum now requires that certificates issued after 1 January 2014 use at least 2048-bit keys.
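
Checking the key length is straightforward; the sketch below flags short RSA keys in a saved certificate, again assuming the third-party “cryptography” package and a placeholder “server.pem” path:

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    public_key = cert.public_key()
    if isinstance(public_key, rsa.RSAPublicKey):
        if public_key.key_size < 2048:
            print(f"Weak RSA key: {public_key.key_size} bits")
        else:
            print(f"RSA key size: {public_key.key_size} bits")
    else:
        # Non-RSA keys (for example ECDSA) have different size guidance.
        print("Not an RSA public key")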

Summary

TLDR:

Teaching users to click through security warnings to ignore certificate issues such as self-signed certificates or expired certificates drives bad security habits and will make it easier for threat actors to perform interception attacks.

Certificates should:

  • Not have a validity period longer than 398 days (ideally use automation to drive this down below 90 days).
  • Be signed with a secure hashing algorithm (so not MD2, MD4, MD5 or SHA-1) such as SHA-256.
  • Not have a short RSA key if RSA is used. The minimum key length for RSA should be 2048 bits.
  • Be valid for the target system, meaning it should not be issued with a notBefore date in the future, must include the target domain name in the common name or subject alternative names, and it must not be expired.
  • Not be self-signed, but should be signed by a trusted CA (either local or publicly trusted).