Recommended Defenses:

  • Use HTTPS (and only HTTPS)
  • Set the "Secure" cookie attribute on session cookies
  • Set the "Strict-Transport-Security" response header


The Original Session Defense – Network Encryption

The original cookie specification has a section (RFC 2109, Section 8.1) calling out three anticipated security risks.  Two involve forms of cross-domain cookie sharing and ended up being well defended by browser implementations from the beginning.  The third, plain-text transmission of the cookie, turned out to be quite tricky.  Even though the specification's designers knew from the start that someone might observe the cookie value as it crossed the network, special cases were discovered over time that needed extra defenses to block.

The term “sniffing” comes from the name of the first commercial network protocol analyzer released by Network General Corporation in 1986 (image generated by Craiyon)

The solution to network sniffing, as the act of reading network traffic in transit has come to be known, is not included in the specification, but was already well underway.  We can assume that Mr. Montulli (see Part 1 for more of his story) discussed the risk with others inside and outside his company, and whether the idea was his or someone else's, at the same time cookies were being rolled out, Netscape was developing a protocol for encrypting network traffic that they named “Secure Sockets Layer,” or SSL.  Perhaps you've heard of it.  Version 1 had…issues, and was never released, but SSL 2.0 shipped in February 1995, just three months after Netscape 1.0 and a full year before the HTTP 1.0 specification was official.  Version 2.0 still had plenty of problems, but getting cryptography right is the playground of academics and nation states, and has literally been an arms race, at least as defined by the U.S. government.  We're not going to tackle the nuances of key exchange and cipher suites here, nor the subtle ways mistakes can be made and exploited by throwing massive amounts of computing power at them.  For the purposes of this discussion, let's assume we have encryption that works, because most of the time it does.  Unfortunately, even a properly encrypted connection is not, by itself, enough to protect session cookies from being read on the wire.

Fool me once

Even if the cookie is “usually” sent over an encrypted connection, if it is “ever” sent unencrypted, the attacker has a chance to read it, and reading it once is enough.  The original cookie specification gets some more credit here.  Not only did it call out the need for encryption, but Section 4.2.2 even defined an attribute that would instruct the browser to “use only (unspecified) secure means” to send the cookie.  The word “unspecified” was likely needed because SSL had not been released yet.  The attack that Mr. Montulli (probably) foresaw works like this.  An attacker who can read network traffic can often (though not always) change it as well.  So let's say they take a response from some random site that the user loaded without encryption and add an image tag, or something similar, with a URL pointing to the targeted website.  This injected URL will specifically use HTTP instead of HTTPS.  The victim's browser will load that image automatically and include all unexpired cookies it knows about in the request, because that's how cookies are supposed to work.  Since the injected request is unencrypted, the attacker can read the session cookie as it is sent to the targeted site, even if the response is an immediate redirect to an HTTPS version of the image or a 404 error indicating the image never existed.  However, if the “Secure” attribute is set on the cookie, the browser knows to leave it out of unencrypted requests, and it obviously can't be read if it isn't there.  Despite this attribute being available from the very beginning of the web, sites still frequently forget to set it on sensitive cookies.
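The browser-side behavior can be sketched in a few lines.  This is illustrative pseudologic, not any real browser's code; the cookie-jar structure and function name are invented for the example:

```python
# Sketch (not any browser's actual code) of how the Secure attribute keeps a
# cookie out of unencrypted requests. The jar structure here is illustrative.

def cookies_for_request(jar, scheme):
    """Return the cookies a browser would attach to a request over `scheme`."""
    return {
        name: c["value"]
        for name, c in jar.items()
        if scheme == "https" or not c["secure"]
    }

jar = {
    "session": {"value": "s3cr3t", "secure": True},   # Set-Cookie: session=s3cr3t; Secure
    "theme":   {"value": "dark",   "secure": False},  # Set-Cookie: theme=dark
}

# An attacker-injected <img src="http://target.example/pixel.png"> forces a
# plain-HTTP request: the Secure cookie is withheld, so there is nothing to sniff.
print(cookies_for_request(jar, "http"))   # only "theme"
print(cookies_for_request(jar, "https"))  # both cookies
```

The whole defense is that one `or not c["secure"]` condition: the sensitive value simply never leaves the browser on an unencrypted channel.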

A more intrusive form of this attack was described by Moxie Marlinspike in a Black Hat presentation in 2009.  He observed that people are “sometimes” lazy, and “sometimes” don't type the protocol part of the URL when going to a website.  Let's be honest, even typing “www” is oftentimes too much effort.  For years, browsers would default to HTTP in these cases.  Assuming an attacker has the network position to not only read but also modify traffic, Moxie proposed, and implemented, a proxy tool that would sit between the victim and the server and silently convert every HTTPS link to HTTP.  The proxy could also remove the “Secure” attribute when a cookie was set.  Even if the server was configured to redirect all HTTP requests to HTTPS, the proxy could make the encrypted connection itself and relay the modified content over HTTP, with the client none the wiser (probably).  They would continue browsing the site and the proxy would continue replacing secure links with insecure links, maintaining the ability to view all of the traffic sent between client and server, including sensitive site content and, of course, the session cookie.
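The core rewriting step is almost embarrassingly simple.  A toy sketch of what such a proxy does to the traffic it relays (Moxie's actual tool did far more bookkeeping, tracking which links it had stripped and handling redirects):

```python
# Toy sketch of the rewriting an SSL-stripping proxy performs on relayed
# traffic. Deliberately simplified; the real attack tool is more careful.

def downgrade_html(html):
    """Replace every secure link with its plain-HTTP twin."""
    return html.replace("https://", "http://")

def downgrade_set_cookie(header):
    """Drop the Secure attribute so the cookie flows over the stripped links."""
    parts = [p for p in header.split(";") if p.strip().lower() != "secure"]
    return ";".join(parts)

print(downgrade_html('<a href="https://bank.example/login">Log in</a>'))
print(downgrade_set_cookie("session=s3cr3t; Path=/; Secure; HttpOnly"))
```

Note that the second function is what defeats the “Secure” attribute defense from the previous section: the attribute protects the cookie only if the browser actually received it.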

A machine-in-the-middle attack diagram.

This level of attack was beyond the foresight of the cookie specification team, but once it was pointed out, a new header was introduced, called HTTP Strict Transport Security (2012).  Simply put, once a browser receives this header from a domain, it will never make a request to that domain over HTTP again.  Well, almost never.  The header does have an expiration time, but each time the header is received, that time-out can be, and usually is, extended.  Moxie's attack could still work if the user's very first visit can be intercepted, but that is rare.  Still, to address this possible gap, there is a public list of sites (https://hstspreload.org/) that should only ever be loaded over HTTPS.  It is maintained by Google, but all major browsers, including Chrome, Firefox, Edge, Opera, and Safari, will check this list before loading the URL you type in.
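The browser-side bookkeeping amounts to a small cache of per-host policies.  A minimal sketch, assuming a simplified header parser and an invented `hsts_cache` structure (real browsers persist this across restarts and handle `preload`):

```python
# Sketch of browser-side HSTS bookkeeping (illustrative, not a real
# browser's implementation). The cache structure here is invented.
import time

def parse_hsts(header):
    """Parse a Strict-Transport-Security value into a policy dict."""
    policy = {"include_subdomains": False, "expires": None}
    for part in header.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            policy["expires"] = time.time() + int(part.split("=", 1)[1])
        elif part == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

hsts_cache = {}

def upgrade_scheme(host, scheme):
    """Rewrite http -> https while a fresh HSTS policy is on file for the host."""
    policy = hsts_cache.get(host)
    if scheme == "http" and policy and policy["expires"] > time.time():
        return "https"
    return scheme

# Server sends: Strict-Transport-Security: max-age=31536000; includeSubDomains
hsts_cache["example.com"] = parse_hsts("max-age=31536000; includeSubDomains")
print(upgrade_scheme("example.com", "http"))    # "https"
print(upgrade_scheme("other.example", "http"))  # "http" (no policy recorded yet)
```

The last line is exactly the first-visit gap described above: a host the browser has never seen gets no automatic upgrade, which is what the preload list exists to close.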

As time has gone on, the need to run an HTTP server has nearly disappeared.  SSL/TLS certificates can be obtained for free, and when a user enters a domain without a protocol, the default is now HTTPS.  Your users will probably never notice if you shut down the HTTP version of your website completely, and the security community increasingly recommends this configuration.  If your website never sends or receives unencrypted traffic, an attacker has no chance of observing it without breaking the encryption itself (which is "hard"™).

Imposter Sites

Sometimes an attacker cannot take control of all of a victim's traffic, but can manage to provide a fake IP address for a legitimate website.  This fake address can host a clone of the real site, or act as a proxy similar to the previous attacks, but in either case the attacker is in a position to easily obtain the victim's username and password, and of course their session token as well.  The SSL/TLS protocol contains a defense against this in the form of Certificate Authorities (CAs) and site certificates.  Certificate Authorities are a group of about 150 entities that are considered trusted.  Your browser or operating system typically ships with a list of these entities and the public key each one uses.  CAs have a process that allows the legitimate owner of a domain to prove their ownership; the CA then issues an SSL/TLS certificate, signed with the CA's private key, that can be shared with users to demonstrate that the server they connected to matches the domain they typed.  Browsers check the signature on the certificate and, if it does not trace back to a trusted CA, display an error.  Browsers will usually let you continue to the site anyway, but developers should not rely on that behavior.  If your users are trained to ignore certificate errors, then an attacker can use machine-in-the-middle techniques to redirect users' traffic to a malicious server and present their own fake certificate.  The error will look essentially the same as always, and if the user continues to the site, the attacker can read and modify all of their traffic as if there were no encryption.
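The same two checks, trusted chain and matching hostname, apply to any client you write, not just browsers.  Python's `ssl` module enforces both by default; this short sketch just surfaces the relevant settings:

```python
# Python's ssl module applies the CA checks described above by default;
# this only inspects the settings that enforce them.
import ssl

ctx = ssl.create_default_context()  # loads the platform's trusted CA list

# Two checks stand between a user and an imposter site:
# 1) the certificate must chain back to a trusted CA...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
# 2) ...and its name must match the host that was requested.
print(ctx.check_hostname)                    # True

# Disabling either check (as some "quick fix" snippets online suggest)
# reproduces the click-through-the-error behavior an attacker hopes for:
# ctx.check_hostname = False
# ctx.verify_mode = ssl.CERT_NONE   # don't do this
```

Code that flips those two settings off is the programmatic equivalent of a user trained to click past certificate warnings.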

For more information about attacks that put a user's session at risk, and how to defend against them, consider the rest of the articles in this series.

Part 1 - Overview
Part 2 - Network Sniffing (this article)
Part 3 - Token Exposure
Part 4 - JavaScript Injection (XSS)
Part 5 - Blind Session Abuse
Part 6 - Post-compromise Use
Bonus 1 - JWTs: Only Slightly Worse
Bonus 2 - Device Bound Session Tokens

Errata

See a mistake?  Disagree with something?  Let me know.