PokeDnD

Pokémon combat running on top of the D&D 5e ruleset

Sep 2025 - Present

PokeDnD landing page

Overview

PokeDnD is a web app for running a Pokémon-themed D&D 5e campaign at the table. It has a full Pokémon battle engine (type chart, abilities, items, weather, status conditions, mega and Z moves, reactions), shared character sheets, a 3D dice log, a synced music player, and an Owlbear Rodeo extension for the map. The DM and the players each get their own filtered view of every battle in real time.

The problem

The Pokémon battle system on its own has maybe 30 years of rules built up over time: type charts, abilities, items, weather, terrain, status conditions, multi-hit moves, reactions, mega and Z moves. On top of that, we wanted shared character sheets that the DM and the players could both see, hidden information per viewer, dice that everyone could see roll, a synced music player, and a map. Coordinating that much state between multiple people in real time means building a real web app with a database and live updates.

My role

The project was started by Hunter Gallo in May 2025 as a Python Discord bot. In September 2025 I started rewriting the whole thing from scratch as a Next.js web app; the first commit of the new codebase is dated September 5, 2025. I've done a lot of the work on the schema, the battle engine, the real-time layer, the auth integration, and the Owlbear Rodeo extension. Hunter focuses on the mechanics, reviews rules from the DM side, and essentially acts as the product owner.

Architecture

The whole app is one Next.js process, talking to Postgres through Prisma, using Ory for auth, and integrating a handful of external services along the way (Spotify, YouTube, Owlbear Rodeo). Nothing especially complex on the surface, but most of the interesting work is in how the realtime layer and the battle service fit together.

Diagram: Core architecture. Full lines are direct calls (HTTP, function, or SQL). Broken lines into the hubs are in-memory publishes. Thick lines are SSE pushed back to clients.

Say a player attacks. The request comes in as an HTTP POST to one of the /api/battle/* routes, which calls into the battle service. The battle service writes the result to the database, then calls battleHub.broadcastFiltered() for filtered events (or plain broadcast() for shared ones), which pushes a per-viewer payload over SSE to every client connected to that battle. The hub lives in memory on a single Node process, so the whole app runs as one instance on Railway. Going serverless would mean rebuilding the realtime layer on top of Redis, which is way more than I need for a few players in one campaign.
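
In code, the shape of that round trip is small. Here's a sketch of the pattern with assumed module paths and a hypothetical filterForViewer helper; it illustrates the flow above, not the actual route:

    // app/api/battle/attack/route.ts -- illustrative shape, not the real file
    import { NextResponse } from "next/server";
    import { battleHub } from "@/lib/realtime/battleHub"; // assumed paths
    import { resolveAttack } from "@/lib/battle/service";

    export async function POST(req: Request) {
      const cmd = await req.json();

      // The service resolves the attack and persists the result first...
      const result = await resolveAttack(cmd);

      // ...then one in-memory publish; the hub fans it out per viewer over SSE
      battleHub.broadcastFiltered(cmd.battleId, result, (sub, event) =>
        sub.role === "DM" ? event : filterForViewer(event, sub) // hypothetical filter
      );

      // The HTTP response just acknowledges the command; state arrives via SSE
      return NextResponse.json({ ok: true });
    }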

Reading the diagram

  • Full lines are direct calls. HTTP coming in from the client, function calls inside the server, SQL going out to Postgres.
  • Thick lines from clients are long-lived SSE connections. The same clients that send commands over HTTP also receive pushed updates on these streams.
  • Broken lines into the hubs are publishes, not calls. The battle service tells the hub something happened and moves on. The hub handles getting it to subscribers.
  • External services are placed next to whoever calls them. Ory next to Auth. Spotify next to the API (the server refreshes OAuth tokens on a 10-minute buffer). YouTube next to the music service (the player itself runs in the client, the server just keeps everyone on the same track).

How filtered broadcasts work

One server event becomes a different payload per viewer. broadcastFiltered takes a build function that runs once per subscriber. It gets the subscriber's context (role, channel, viewer ids) and the raw event, and returns whatever that viewer is supposed to see. The DM gets the full battle state. Players get a filtered version with their own party plus whatever's currently visible to them on the enemy side.

Diagram: One event in, two payloads out. The build runs once per subscriber so hidden state never leaves the server.
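
A minimal sketch of that method, under assumed Subscriber and hub shapes (the real code carries more context, but the per-subscriber build is the core of it):

    type Subscriber = {
      role: "DM" | "PLAYER";
      viewerTrainerId?: string;
      send: (payload: unknown) => void; // writes one SSE data: frame
    };

    class BattleHub {
      private channels = new Map<string, Set<Subscriber>>();

      broadcastFiltered<E>(
        channelId: string,
        event: E,
        build: (sub: Subscriber, event: E) => unknown
      ): void {
        for (const sub of this.channels.get(channelId) ?? []) {
          // build runs once per subscriber, so each viewer only ever
          // receives what that viewer is allowed to see
          sub.send(build(sub, event));
        }
      }
    }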

External integrations

  • Ory Cloud: identity, OIDC, admin SDK. API routes call /sessions/whoami via getSessionUser().
  • Spotify Web API: per-user OAuth tokens on the User row. API routes refresh them on a 10-minute buffer.
  • YouTube IFrame API: client-side player, synced to other clients via musicHub.
  • Owlbear Rodeo extension: built into /public/owlbear/. Calls back into /api/owlbear/* and a campaign SSE channel.

Key systems

Real-time SSE hubs

Five hubs (battleHub, musicHub, rollHub, tradeHub, catchHub) kept on globalThis as a Map<channelId, Set<Subscriber>>. The battle hub's broadcastFiltered runs a build function for each subscriber, so the DM and the players can get different payloads from the same event. That's how hidden HP and unscouted type info never leak to non-DM clients.
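
Pinning a hub on globalThis and wiring the SSE endpoint looks roughly like this, reusing the Subscriber and BattleHub shapes from the sketch above; the subscribe/unsubscribe API is assumed:

    // Keep the hub on globalThis so dev-mode hot reloads and every route
    // module share one instance (names illustrative)
    const g = globalThis as typeof globalThis & { battleHub?: BattleHub };
    export const battleHub = (g.battleHub ??= new BattleHub());

    // app/api/battle/[id]/stream/route.ts -- the long-lived SSE endpoint
    export async function GET(req: Request, { params }: { params: { id: string } }) {
      const enc = new TextEncoder();
      const stream = new ReadableStream({
        start(controller) {
          const sub: Subscriber = {
            role: "PLAYER", // resolved from the session in reality
            send: (payload) =>
              controller.enqueue(enc.encode(`data: ${JSON.stringify(payload)}\n\n`)),
          };
          battleHub.subscribe(params.id, sub);
          // Drop the subscriber when the client disconnects
          req.signal.addEventListener("abort", () =>
            battleHub.unsubscribe(params.id, sub)
          );
        },
      });
      return new Response(stream, {
        headers: {
          "Content-Type": "text/event-stream",
          "Cache-Control": "no-cache",
        },
      });
    }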

Battle rules engine

Around 3,800 lines under lib/rules/. Covers type effectiveness, multi-hit moves, weather and terrain, immunities, status conditions, items, mega and Z moves, and reactions. Ability, accuracy, power, stat, DOT, multi-target, and reaction hooks are seeded from JSON files into a RuleHook table and read at runtime, so adding a new mechanic doesn't need a deploy.
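
To make the rules-as-data idea concrete, a hook row and one pipeline stage might look roughly like this; the shape and field names are assumptions, not the real schema:

    // Illustrative RuleHook shape -- real field names may differ
    type RuleHook = {
      kind: "ability" | "accuracy" | "power" | "stat" | "dot" | "multiTarget" | "reaction";
      key: string;                                   // e.g. "hustle"
      when: { weather?: string; moveType?: string }; // match conditions
      effect: { powerMult?: number };                // declarative payload
    };

    // At each pipeline stage the engine applies whichever hooks match
    function applyPowerHooks(
      base: number,
      hooks: RuleHook[],
      ctx: { weather?: string; moveType: string }
    ): number {
      return hooks
        .filter((h) => h.kind === "power")
        .filter((h) => !h.when.weather || h.when.weather === ctx.weather)
        .filter((h) => !h.when.moveType || h.when.moveType === ctx.moveType)
        .reduce((power, h) => power * (h.effect.powerMult ?? 1), base);
    }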

Auth and role-based access

The Ory session cookie gets validated by getSessionUser(), which calls /sessions/whoami with a 60-second positive cache and a 10-second negative cache for hard 401s and 403s. Routes protect admin actions with hasAdmin(session), and per-channel DM and player roles are resolved on every request so the SSE hub can build the right payload for each subscriber.
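
Here's a sketch of that cache, keyed by session cookie. The whoami call and its identity.id field follow Ory's API; the cache shapes and user mapping are illustrative:

    type SessionUser = { id: string; email?: string };
    type CacheEntry = { user: SessionUser | null; expires: number };
    const sessionCache = new Map<string, CacheEntry>();

    export async function getSessionUser(cookie: string): Promise<SessionUser | null> {
      const hit = sessionCache.get(cookie);
      if (hit && hit.expires > Date.now()) return hit.user;

      const res = await fetch(`${process.env.ORY_SDK_URL}/sessions/whoami`, {
        headers: { cookie },
      });
      if (res.status === 401 || res.status === 403) {
        // Negative cache: remember a hard reject for 10 seconds
        sessionCache.set(cookie, { user: null, expires: Date.now() + 10_000 });
        return null;
      }
      const session = await res.json();
      const user: SessionUser = { id: session.identity.id };
      // Positive cache: trust a good session for 60 seconds
      sessionCache.set(cookie, { user, expires: Date.now() + 60_000 });
      return user;
    }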

Music sync (Spotify / YouTube)

One MusicState row per campaign holds the track URL, source, playing, looping, startedAt, and pausedAt. The server computes the elapsed playback time from those timestamps so every client lines up at the same point in the song, and a full state push goes out every 60 seconds to catch any clients that have fallen out of sync. Spotify uses per-user OAuth with a 10-minute refresh buffer. YouTube goes through the IFrame API.
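
The position math is the interesting bit, since pausing has to freeze the clock. A sketch, with durationMs as an assumed extra field needed only for looping:

    type MusicState = {
      playing: boolean;
      looping: boolean;
      startedAt: Date;       // when playback (re)started
      pausedAt: Date | null; // set while paused
      durationMs?: number;   // assumed field, needed only for looping
    };

    function playbackPositionMs(s: MusicState, now = Date.now()): number {
      // While paused, the clock stops at pausedAt; while playing, it runs to now
      const anchor = s.playing ? now : (s.pausedAt?.getTime() ?? now);
      let elapsed = anchor - s.startedAt.getTime();
      if (s.looping && s.durationMs) elapsed %= s.durationMs; // wrap on loop
      return Math.max(0, elapsed);
    }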

Discord identity sync

Done through Ory's OIDC provider, not a bot or webhook. On first login, syncDiscordInfoFromOry() reads the identity's OIDC credentials from the Ory admin API, pulls the Discord id, username, and avatar out of the id_token, and writes them to the local User row. Fire and forget. The login flow doesn't wait on it.
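
A hedged sketch of what that sync can look like against the Ory admin API. The include_credential query parameter is a real Ory feature; the response traversal, the decodeIdTokenClaims helper, and the User fields are all assumptions:

    async function syncDiscordInfoFromOry(identityId: string): Promise<void> {
      const res = await fetch(
        `${process.env.ORY_ADMIN_URL}/admin/identities/${identityId}?include_credential=oidc`,
        { headers: { Authorization: `Bearer ${process.env.ORY_API_KEY}` } }
      );
      if (!res.ok) return; // best effort: never block login on this

      const identity = await res.json();
      const discord = (identity.credentials?.oidc?.config?.providers ?? [])
        .find((p: { provider: string }) => p.provider === "discord");
      if (!discord) return;

      // decodeIdTokenClaims is a hypothetical helper that decodes the JWT payload
      const { id, username, avatar } = decodeIdTokenClaims(discord.initial_id_token);
      await prisma.user.update({
        where: { oryIdentityId: identityId }, // assumed column name
        data: { discordId: id, discordUsername: username, discordAvatar: avatar },
      });
    }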

Owlbear Rodeo VTT extension

A compiled extension lives in /public/owlbear/. About two dozen /api/owlbear/* routes back it: manifest serving, auth, sprite proxying, scene and position sync, damage application, pointer movement, reveal controls, presets, command channels, and a campaign-wide SSE live channel. The DM drops trainer and Pokémon tokens onto a map and HP and status updates flow both directions.

Reactions: pausing an attack midway

Normally an attack resolves in one server call: hit check, damage roll, status effects, an HP write, and a broadcast. Pokémon reactions, however, break that clean flow. Moves like Protect, Counter, and Mirror Coat (plus the custom reaction moves I store in the DB) fire after the hit lands but before damage actually goes through, and they belong to the defender, not the attacker. So the attacker is essentially done acting at that point, but the defender now has a window to burn one of these reactions, and what they pick changes what the attack actually did. Mirror Coat sends the special damage back at the attacker, Counter does the same for physical, and Protect cancels the attack entirely and burns PP on the defender. If the defender doesn't have anything valid to react with, or just passes, the attack lands like normal.

The hard part is that the defender's response shows up in a completely different HTTP request, from a completely different client. So you can't just branch inside resolveAttack and wait, because resolveAttack is already returning to the attacker. You also can't hold the attacker's request open until the defender clicks, since the defender could easily be AFK for a while. And you can't recompute the attack from scratch when the reaction finally comes in either, because the rolled hit, the damage context, and a lot of used-modifier state came out of random rolls that already happened.

So, I serialize the whole thing. When the attack pipeline reaches the post-hit point and sees the defender has a valid reaction available, it writes the entire attack context into JSON: the original request, the ids of any modifiers that would have been used, attacker and defender passives, the spread multiplier, the rolled hit, and the multi-target state. That JSON goes into a PendingReaction row with an expiresAt, and a reaction:prompt SSE event goes out on the battle channel, tagged with the defender's trainer id. Then the attacker's request returns PENDING_REACTION, so their client puts up a "waiting for defender" state and we wait.
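
The pause point, sketched. Variable and model names are illustrative, and the timeout value is a guess:

    const ctx = {
      request: cmd,                                // original attack request
      modifierIds: usedModifiers.map((m) => m.id), // ids, never full rows
      attackerPassives,
      defenderPassives,
      spreadMult,
      rolledHit,
      multiTargetState,
    };

    const pending = await prisma.pendingReaction.create({
      data: {
        battleId: cmd.battleId,
        defenderTrainerId: defender.trainerId,
        status: "PENDING",
        context: JSON.parse(JSON.stringify(ctx)), // force a clean JSON round-trip
        expiresAt: new Date(Date.now() + 60_000), // illustrative timeout
      },
    });

    // Same payload to everyone; clients compare defenderTrainerId to decide
    // between the reaction picker and the waiting overlay
    battleHub.broadcast(cmd.battleId, {
      type: "reaction:prompt",
      pendingReactionId: pending.id,
      defenderTrainerId: pending.defenderTrainerId,
    });

    return { status: "PENDING_REACTION", pendingReactionId: pending.id };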

Then, when the defender picks (or passes), they POST to /api/battle/reaction. That route loads the row, checks the status, rehydrates the context, re-loads the modifiers by id (still in place, because the attack got deferred), applies the reaction effect, and runs the rest of damage resolution as if it were one atomic call. If the row expires before they respond, the next request that hits it just treats it as a pass.
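
A sketch of that route, assuming the models above and a hypothetical resumeAttack entry point back into the damage pipeline; the updateMany guard is what makes replays safe:

    export async function POST(req: Request) {
      const { pendingReactionId, choice } = await req.json();

      // Atomic claim: only one request ever flips the row out of PENDING,
      // so double-clicks and SSE replays hit the early exit below
      const claimed = await prisma.pendingReaction.updateMany({
        where: { id: pendingReactionId, status: "PENDING" },
        data: { status: choice === "pass" ? "PASSED" : "CHOSEN" },
      });
      if (claimed.count === 0) {
        return NextResponse.json({ error: "already resolved" }, { status: 409 });
      }

      const row = await prisma.pendingReaction.findUniqueOrThrow({
        where: { id: pendingReactionId },
      });
      // An expired prompt resolves as a pass, per the timeout rule above
      const effectiveChoice = row.expiresAt < new Date() ? "pass" : choice;

      const ctx = row.context as SerializedAttackContext; // assumed type
      const modifiers = await prisma.modifier.findMany({  // re-load rows by id
        where: { id: { in: ctx.modifierIds } },
      });

      // resumeAttack is a hypothetical entry point back into the damage
      // pipeline, running as if the original call had never paused
      const result = await resumeAttack(ctx, modifiers, effectiveChoice);
      return NextResponse.json(result);
    }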

Diagram: The attack pauses across two HTTP requests from two different clients. The PendingReaction row is the only thing that lives through the wait.

The hard parts:

  • Getting the context to actually serialize. It has to round-trip through JSON without losing anything. Prisma Decimals and dates were the easy ones to forget about. I store modifier ids instead of the rows themselves, since the rows can change between the prompt going out and the response coming back.
  • Surviving double-clicks and replays. A slow defender double-clicks. SSE reconnects and replays the prompt. The row's status field (PENDING / CHOSEN / PASSED / TIMED_OUT) is the only thing I trust, and anything other than PENDING exits early.
  • Knowing who the prompt is for. The prompt event broadcasts to every subscriber in the channel with the same payload. Each client checks viewerTrainerId === prompt.defenderTrainerId (or admin) to decide whether to put up a reaction picker or a "waiting for opponent" overlay.
  • Saving the whole thing all at once. When damage finally lands, the BattleEvent, the HP writes, the lastAttackResult snapshot, and the catch-encounter HP-change hooks all have to commit in one transaction (sketched after this list). Otherwise a client that refreshes mid-flight catches a half-resolved attack.
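
That last bullet maps naturally onto a Prisma interactive transaction; a sketch under assumed model and hook names:

    await prisma.$transaction(async (tx) => {
      await tx.battleEvent.create({ data: eventData }); // the log entry
      await tx.battleParticipant.update({               // the HP write
        where: { id: defender.id },
        data: { currentHp: newHp },
      });
      await tx.battle.update({                          // snapshot for Counter / Mirror Coat
        where: { id: battleId },
        data: { lastAttackResult: resultSnapshot },
      });
      // Hypothetical hook: catch encounters track HP changes too
      await applyCatchEncounterHpHooks(tx, defender.id, newHp);
    });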

This one cuts across everything: the schema, the rules engine, the SSE layer, the client. The edge cases (timeouts, replays, races) ate way more time than I expected.

Lessons learned

  1. Pick the deployment shape sooner than I did. The SSE hubs live on globalThis, which means the whole app has to run as one Node process. I went with Railway, but I didn't actually lock that in until about six months in. If I had planned for serverless or anything fanned out from the start, the entire SSE design would have been completely different. Picking the host late worked out, but it could have easily been a painful retrofit.
  2. Rules as data, editable without a deploy, was the real win. The hooks (ability, accuracy, power, stat, DOT, multi-target, reaction) loaded from JSON files from the start, but the JSON only seeded at deploy time. So when Hunter wanted to tweak an ability mid-campaign, it meant me pushing a deploy. Adding the RuleHook DB table and an admin page on top of it meant DMs could change rules in-app without me in the loop. That alone took the iteration time from days to seconds.
  3. Schema first for the core models, but only the core. The first commit modeled Campaign, Trainer, PokemonInstance, and Species, and those have basically held up. The battle models (Battle, BattleParticipant, BattleEvent, BattleSnapshot) came in later, in pieces, and that growth shows up in the code. Some of the things I never modeled out (moveStates, featureUses, passives) are still ad-hoc JSON. They work, but I've given up any schema-level safety on those parts. A lot of that is because it's extremely difficult to predict what they'll look like, especially while the site is in active development and new mechanics keep being added. For now, the flexibility is worth the tradeoff.
  4. The battle UI never settles down. The battle page has been reworked more than anything else in the app. I've done multiple full UI rewrites and dice reworks. I used to think this meant I was failing to design it right up front, but I think the real lesson is the opposite: multi-actor turn state is genuinely hard to design before you've seen real players use it. Iterating on it with the group at the table was way faster than trying to plan the perfect version on the first pass.
Screenshots

Landing page with sign-in and a quick tour of what's inside.
Login with email/password, one-time code, or Google / Discord OAuth.
Account creation with the same OAuth providers.
Home hub with your current campaign, party vitals, and join/create flows.
Trainer sheet: classes, ability scores, skills, and party at a glance.
Trainer sheet, continued: class features, active effects, and graveyard.
Per-Pokémon sheet with ability, IVs, moveset slots, and evolution.
Real-time battle with 3D dice, damage resolution, and field conditions.
Attack roller: pick a move, see target HP, and read the current field state.
Attack result modal with the full hit/damage roll breakdown.
Ability check modal: every roll shows performer, dice, modifier, and total.
Catch encounters with ball selection, capture DC, and modifier breakdown.
Catch encounter with full ball inventory and past-encounter history.
Campaign trainer overview: players and NPCs with level and HP at a glance.
Two-sided trade UI; both trainers confirm before the swap commits.
Campaign Pokédex with discoveries, type filters, and regional forms.
Pokédex entry with base stats and full level-up learnset.
Rule Hooks: searchable catalog of ability/item/field effects driving combat math.
Admin dashboard with campaign-scoped tooling and data management.
User preferences, dice-rolling modes, and Spotify integration.

Stack

Next.js, Prisma, PostgreSQL, Ory, Server-Sent Events, Spotify Web API, YouTube IFrame API, Owlbear Rodeo, Railway.
