Cybersecurity

Offensive Security Testing Using Cloud Tools

Rick Osgood

Senior Security Consultant, Labs

When performing offensive security testing, assessors sometimes run into issues where their source IP address gets blacklisted. For example, we might be performing a web application test and, due to the many suspicious queries being performed, our IP address is suddenly blocked. While on the surface this may seem like an effective security control, it’s actually quite easy to change a source IP address. Methods have existed for a long time, including using a proxy server or routing traffic through a VPN tunnel. The problem with these methods is that they take effort to set up and those new source IP addresses can just as easily be blocked, leaving assessors in the dark once again. When doing this kind of work every day, it would be nice to have an efficient method for changing a source IP address for this kind of testing without risking getting blacklisted.

There are now so many options for quickly and easily spinning up new resources in the cloud that I thought this problem would be easy to solve. With cloud services such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud so readily available and affordable, there had to be a way to put them to good use, so I did some research. It quickly became clear that I wasn't the first person to think of this; I found a few other projects that attempt to automate offensive operations in the cloud, and some of them aim to solve this exact source IP address problem.

Prior research

One example is Proxycannon-ng. While I didn't use this tool, the idea behind it is to spin up a static VPN server that you connect to from your workstation. Exit nodes are then programmatically created to provide load balancing and multiple source IP addresses for offensive operations. This is a cool project, but for my purposes it felt like overkill. I was also interested in solving this problem in a serverless manner, so creating a bunch of full virtual machines was more effort than I wanted to expend for the simple task of proxying HTTP requests.

Another existing example is the IPRotate Burp Suite extension. This extension allows you to spin up an AWS API gateway that acts as a proxy for all requests, and it automates the configuration as much as possible, making the whole thing transparent to you once the gateway is set up. A single API gateway in a region will automatically give you a random IP address with each request, making it very useful for this type of testing. I tested this extension myself and found that when I spun up a single API gateway and made 100 requests in a row, I ended up with 84 unique IP addresses. I'm sure this varies depending on many factors, but that was my experience with a single test. You can also spin up more gateways in other regions to increase the IP pool. The main downside I found to this method is that you must specify a single target website for the API gateway. If you need to target more than one website at a time, you would need multiple API gateways, and the extension would have to be smart enough to route your requests to the proper gateway; currently, the extension only supports one URL at a time. However, I found that it is very fast and introduces minimal delay to each request. Another benefit is that AWS API Gateway allows for up to one million requests per month on the free tier. That's a lot of password spraying!

One other solution I found is awslambdaproxy. This tool runs on a publicly accessible server (like an EC2 instance). The tool spins up Lambda functions that open reverse SSH tunnels back to the server. It can open multiple tunnels at a time and will proxy requests through the tunnels in a round-robin fashion. Lambda functions can only run for a maximum of 15 minutes, so the tool handles the creation of new tunnels over time to ensure there is always a live tunnel. I have not tested this tool, but it sounded a bit clunky. Although it uses Lambda for some serverless functionality, it still relies on a full-blown server to accept the requests and handle all the Lambda function shuffling. Having Lambda functions run constantly for 15 minutes at a time also felt counter to the whole idea of Lambda: generally, a function should perform a task and then exit, not act as a makeshift server for as long as possible. And as you'll see below, I found that Lambda functions keep their IP addresses for a period of time, while I was hoping for a solution that would change IP addresses with each request if possible. On the plus side, this solution allows for SOCKS proxying, which means you can do a whole lot more than simple HTTP requests.

A new idea

After looking at what was already out there, I decided to try to build my own solution. Not because I couldn’t work with these existing solutions, but because it’s always good to have more options and it’s fun to build things. My goals for the project were to build a tool that:

  • Proxies HTTP/S requests through a cloud service
  • Allows targeting of multiple websites at a time
  • Changes IP addresses with each request
  • Uses serverless concepts to reduce complexity, overhead, and cost
  • Does not require Burp Suite but can be used in conjunction with Burp Suite if desired

I have some experience using AWS Lambda, so I decided to use that as a base. Lambda is AWS's serverless compute service. It lets you write code in several runtimes, including Python, NodeJS, and .NET, and execute that code in the cloud without maintaining your own virtual machine or container. You write the code as a function and then trigger the function with an event. The event contains input data for the function; the function performs its task and returns data in whatever way it is programmed to. The previously mentioned awslambdaproxy tool uses this service to open a reverse SSH tunnel and proxy traffic through the function's virtual machine. I thought that maybe I could use this service to make a simple HTTP proxy by converting an HTTP request into a Lambda trigger event and then returning the HTTP server's response.
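To illustrate the event-driven model, here is a minimal Python handler of the kind Lambda expects. The handler name follows Lambda's convention, but the event fields and return value are purely illustrative:

```python
# Minimal sketch of the Lambda programming model: the runtime calls the
# handler with the trigger event (a dict of input data), and whatever the
# handler returns is passed back to the invoker.
def lambda_handler(event, context):
    name = event.get("name", "world")        # input data arrives in the event
    return {"message": f"Hello, {name}!"}    # returned data goes back to the caller
```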

Lambscan

I first wanted to prove the concept and test that my source IP address would change each time the Lambda function was invoked. I decided to write a port scanning utility that I could run from my local machine but would “proxy” requests through Lambda. This way I could scan a resource I own and see if the IP address changed with each invocation. I ended up calling this utility Lambscan. I first wrote a basic Python-based Lambda function to perform the actual port scan. I call this the Lambscan “worker.”


Lambscan worker source code

The function accepts an IP address, port number, and protocol (although the protocol is ignored for now and assumed to be TCP). The function then opens a TCP socket to the specified host on the specified port. If the connection is successful, the function returns True. If not, it returns False. Pretty simple, really.
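The original worker source is shown in the screenshot above. As a rough approximation based on that description (the event field names and timeout value here are my own assumptions), the worker looks something like this:

```python
import socket

# Rough approximation of the Lambscan worker described above: connect to the
# requested host/port over TCP and report whether the connection succeeded.
def lambda_handler(event, context):
    host = event["host"]          # IP address to scan
    port = int(event["port"])     # port number to test
    # event.get("proto") is accepted but ignored for now; TCP is assumed
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((host, port))
        return True               # connection succeeded: port is open
    except OSError:
        return False              # connection failed: closed or filtered
    finally:
        s.close()
```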

The next step was to write the main script to run on my actual workstation. This script would need to accept an IP address or list of IPs, along with a list of ports. It would have to parse that information and then invoke the Lambda function once for each port that required testing. I wrote a basic script that allowed me to test a comma-separated list of ports against a single IP address. I executed this script against a server I control and ran tcpdump on the server to inspect the traffic. Unfortunately, what I saw was not what I was hoping for.
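Conceptually, the client side is just a loop that invokes the worker once per port using boto3. A simplified sketch, assuming a worker named lambscan-worker and the same event fields as above (both hypothetical):

```python
import json
import boto3

# Simplified sketch of the client-side loop: one Lambda invocation per port.
lam = boto3.client("lambda", region_name="us-east-1")

def scan(host, ports):
    results = {}
    for port in ports:
        resp = lam.invoke(
            FunctionName="lambscan-worker",   # hypothetical function name
            Payload=json.dumps({"host": host, "port": port, "proto": "tcp"}),
        )
        results[port] = json.loads(resp["Payload"].read())  # True/False from the worker
    return results

print(scan("203.0.113.10", [8000, 8001]))
```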


Tcpdump output

The Lambda functions successfully checked each of the specified ports (8000 - 8001), but the source IP was always the same. It wasn’t my own IP address, so that was good. But the AWS IP address stayed the same with each function invocation. This was not what I expected. I assumed that since the function could be called at any given time, each invocation would “spin up” a new backend resource, resulting in a new source IP address. Obviously, that assumption was incorrect.

After some research, I discovered that AWS uses something called Firecracker as a backend to manage Lambda resources. Firecracker is an open-source virtualization technology that uses lightweight “microVMs.” Amazon built this technology specifically to meet the needs of its serverless offerings. When you want to execute code on demand, you don’t have time to wait for an entire virtual machine to spin up before your code runs. On the other hand, a simple container is nice and quick but is less secure in a multi-tenant environment. Firecracker aims for somewhere in the middle by allowing extremely minimal virtual machines that can be spun up in seconds.

When you first create a Lambda function that’s never been run, it’s considered to be in a “cold” state. The first time you execute it, a microVM resource is spun up and the code executes. This first invocation can take several seconds longer than normal as a result. After this first invocation, the function is in a “warm” state. A function can stay in this warm state for around 30 to 45 minutes if you haven’t invoked it again. After that, the microVM is destroyed and your function goes cold. If you continue executing the function over and over, it should stay warm. This means that the backend infrastructure only changes when a function moves from a cold to a warm state. That’s why my port scanning tool seemed to have a static IP address between ports.

However, this problem was surmountable. With a bit more research I found that the default Lambda limit is 1,000 concurrent executions per account. I could theoretically create up to 1,000 identical copies of my Lambscan worker function; each copy would get its own microVM and therefore have a chance of obtaining a unique IP address. The worker function didn’t need any changes, but the main script would need a way to create some number of worker functions and then be smart enough to cycle through them. It would also have to clean up after itself to avoid leaving potentially hundreds of stray Lambda functions behind.
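A rough sketch of that worker-pool idea with boto3, assuming the worker code lives in a local worker.py and using a hypothetical naming scheme (this is an illustration, not Lambscan's actual code):

```python
import io
import json
import zipfile
import itertools
import boto3

# Create N identical copies of the worker function, then round-robin
# invocations across them; delete them all when the scan is finished.
lam = boto3.client("lambda", region_name="us-east-1")

def make_zip(source_path="worker.py"):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.write(source_path, arcname="worker.py")
    return buf.getvalue()

def create_workers(count, role_arn):
    code = make_zip()
    names = []
    for i in range(count):
        name = f"lambscan-worker-{i}"          # hypothetical naming scheme
        lam.create_function(
            FunctionName=name,
            Runtime="python3.9",
            Role=role_arn,
            Handler="worker.lambda_handler",
            Code={"ZipFile": code},
        )
        names.append(name)
    return itertools.cycle(names)              # round-robin iterator over workers

def cleanup(count):
    for i in range(count):
        lam.delete_function(FunctionName=f"lambscan-worker-{i}")
```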

I spent some more time coding and ended up with a functional cloud-based port scanning tool. You can tell Lambscan how many workers to spin up and specify a number of threads for parallel scanning. Every time the Lambda worker function is invoked, Lambscan cycles to the next worker, so the IP rotates with each port scanned. In the example below I called Lambscan with only five workers but scanned 10 ports, meaning each worker had to scan two ports.


Lambscan example output


Tcpdump output with recycled IPs

You can see in the tcpdump that the IP address changed five times and then cycled back around, repeating those same five IP addresses. If I call it again with 10 workers, I should get 10 different IPs.

Tcpdump output with 10 unique IPs

You might notice that none of these IPs are the same as the five IP addresses from the first screenshot. That’s because the Lambda functions were automatically destroyed after the first execution. The second execution spun up 10 new workers and they all ran from a cold state. Therefore, they all obtained new backend resources and new IP addresses.

With this proof of concept working, I moved on to the next step: finding a way to proxy HTTP/S traffic through Lambda.

Lambproxy

After a lot of trial and error, I developed a tool that does the job. I call it Lambproxy, and it works as an add-on module for mitmproxy. Mitmproxy is an open-source interactive HTTPS proxy and a great tool for intercepting and manipulating HTTP/S traffic. The developers implemented an easy method for building custom add-ons, and after trying a few other options, this ended up being the simplest and quickest way to build Lambproxy.

In the screenshot below, you can see an example of hitting api.ipify.org (a service that returns your public IP address) 25 times in a row. The first example shows the results when proxying through Lambproxy. The second example shows the results without the proxy.


Using curl to test Lambproxy

I decided to take a similar approach with Lambproxy as I did with Lambscan. The general idea was to intercept an HTTP/S request, serialize the data, then invoke a Lambda worker function using the request data as part of the event. The worker could then make the HTTP request to the end server, serialize the response, and return it to the main Lambproxy script. Lambproxy would then forward that response back to the browser or tool of choice.


Flow diagram
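In mitmproxy add-on terms, that flow looks roughly like the sketch below. The worker name, event fields, and header handling are my own placeholders, not Lambproxy's actual implementation:

```python
import json
import boto3
from mitmproxy import http

# Simplified sketch of the Lambproxy flow: the "request" hook serializes the
# intercepted request, hands it to a Lambda worker, and answers the client
# with whatever the worker returns.
lam = boto3.client("lambda", region_name="us-east-1")

class LambdaForwarder:
    def request(self, flow: http.HTTPFlow) -> None:
        event = {
            "method": flow.request.method,
            "url": flow.request.pretty_url,
            "headers": dict(flow.request.headers),
            "body": flow.request.get_text(),
        }
        resp = lam.invoke(
            FunctionName="lambproxy-worker-0",   # hypothetical worker name
            Payload=json.dumps(event),
        )
        result = json.loads(resp["Payload"].read())
        # Short-circuit mitmproxy: build the client-facing response from the
        # worker's serialized response instead of contacting the server directly.
        flow.response = http.Response.make(
            result["status"],
            result["body"].encode(),
            result.get("headers", {}),
        )

addons = [LambdaForwarder()]
```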

Lambproxy allows for six options at launch:

  • roleArn
  • scope
  • maxWorkers
  • maxInvocations
  • regions
  • trigger

When a Lambda function is created, it must be assigned an IAM role that specifies which other AWS resources the function can access. To run Lambproxy, you must first create a basic role for the worker functions to assume. The README file includes a policy template that gives worker functions minimal permissions; it simply lets them log events to CloudWatch. The tool could be updated to create the role automatically, but then your user would need IAM access. I wanted to design the tool to require the fewest permissions possible to get the job done.
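The actual policy template lives in the README. Purely as an illustration of the kind of role involved (the role name and policy contents here are assumptions, not the README template), creating it with boto3 might look like this:

```python
import json
import boto3

# Illustrative only: create a role Lambda can assume and grant it just the
# CloudWatch Logs permissions a basic worker needs.
iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="lambproxy-worker-role",    # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.put_role_policy(
    RoleName="lambproxy-worker-role",
    PolicyName="cloudwatch-logs-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "*",
        }],
    }),
)

print(role["Role"]["Arn"])   # pass this ARN to Lambproxy's roleArn option
```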

Scope can be optionally defined. If you don’t define a scope, then all requests will be forwarded through Lambda. With a scope defined, only matching requests will be proxied through Lambda. Other requests will pass through mitmproxy without being modified by Lambproxy. Multiple domains or URLs can be specified in a comma-separated list. This met my requirement for being able to target multiple websites at a time.
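Inside the add-on, a check like the one below is enough to decide whether a request should go through Lambda. This is only an illustration assuming simple substring matching; Lambproxy's actual matching rules may differ:

```python
# Illustrative scope check: proxy a request through Lambda only if its URL
# matches one of the configured scope entries.
SCOPE = ["example.com", "https://api.example.net/v1"]   # hypothetical scope list

def in_scope(url: str) -> bool:
    return any(entry in url for entry in SCOPE)

print(in_scope("https://example.com/login"))     # True  -> send through Lambda
print(in_scope("https://updates.vendor.io/"))    # False -> pass through untouched
```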

The maxWorkers parameter allows you to specify how many Lambda workers to spin up at launch. The more workers you have, the more source IP addresses you’ll end up with, though mitmproxy may take longer to start up and clean up after itself when you add a lot of workers. It’s also possible for multiple workers to obtain the same source IP address by chance, since there is no way to control what the source IP address is.

The maxInvocations parameter is great if you are monitoring your AWS spending. The AWS Lambda free tier currently allows up to one million invocations per month at no cost, though that also depends on how long your functions run. Lambproxy keeps track of how many invocations it has processed since it was last started. If it hits your maxInvocations number, Lambproxy will stop proxying traffic through Lambda to save you money, and you’ll get a notice in the mitmproxy console.

The regions parameter allows a comma-separated list of regions to be specified for your workers. Lambproxy will rotate through each specified region as workers are created. For example, if you create five workers and specify five regions, you’ll end up with one worker per region. If you specify 10 workers and five regions, that’s two workers per region.

The last option is the “trigger.” Since each Lambda worker ends up with a somewhat “static” IP address after moving to a warm state, it’s possible that a worker might end up blacklisted by the service you are testing. A trigger is a string in the web server response that indicates you have been blacklisted. For example, if you are performing a password spray against a target, and after 100 attempts the server starts responding with a blacklist notice, you can add that notice as the trigger parameter. Lambproxy will detect the trigger and rebuild that Lambda worker, thereby obtaining a new IP address. Lambproxy will then automatically resend your failed request through the new worker and relay the response back to you. This happens transparently in the background. It also means you could theoretically use just a single Lambda worker and let it refresh itself automatically every time it gets blacklisted, although that will cause additional overhead and slow your requests down.
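A sketch of that retry logic, assuming a hypothetical rebuild_worker callable that deletes and recreates the function (again, an illustration rather than Lambproxy's actual code):

```python
import json
import boto3

# Illustration of the trigger behavior: if the blacklist string appears in a
# worker's response, rebuild that worker (so its next invocation starts cold
# with a fresh IP) and transparently replay the failed request.
lam = boto3.client("lambda", region_name="us-east-1")

def invoke(worker_name, event):
    resp = lam.invoke(FunctionName=worker_name, Payload=json.dumps(event))
    return json.loads(resp["Payload"].read())

def proxy_request(worker_name, event, trigger, rebuild_worker):
    result = invoke(worker_name, event)
    if trigger and trigger in result.get("body", ""):
        rebuild_worker(worker_name)            # hypothetical: delete and recreate the function
        result = invoke(worker_name, event)    # retry through the rebuilt worker
    return result
```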

Results

I found that Lambproxy gets the job done, but speed is a noticeable downside. Invoking a Lambda function can take one to three seconds when it’s in a warm state, and longer when it’s cold, which noticeably delays HTTP requests. While you can proxy your web browser through Lambproxy and browse the web this way, there is a definite lag. The tool is better suited to specific operations like password spraying, API scraping, directory brute forcing, and other attacks where you are likely to be IP blacklisted.

I ran another test using Lambproxy against my own web server. I built 100 Lambda workers and sent 100 requests through. The result was 96 unique IPs, which was better than the 84 IPs I previously got using the IPRotate Burp extension. However, the 100 requests took almost 2.5 minutes to complete. By comparison, IPRotate performed 100 requests in 17 seconds. IPRotate also did this using a single API gateway, versus the 100 functions required to get the same results from Lambda. It became clear that overall, using API Gateway as a proxy is a better solution than using Lambda, but it’s always nice to have options. You never know when cloud services may change and sometimes there may be a very specific requirement that one tool just can’t meet.

It was a lot of fun building out these tools and learning along the way, thanks to Coalfire’s research and development program. Although Lambproxy may not have performed as well as I had hoped, this is part of the R&D process. Not everything developed is going to result in an amazing success; sometimes the real result is the knowledge gained along the way. I’ve learned more about Lambda and how it works with Firecracker on the backend, and I’ve confirmed that Lambda functions can be used to mask a source IP address, though the unavoidable delay means Lambda may be more suitable for some applications than others. An HTTP proxy might be too much to ask, but the Lambscan port scanner works quite well, so it may be worth spending some more time polishing it up and adding features. I’m sure there are plenty more uses for serverless cloud services in the offensive security world waiting to be discovered.