Aligning panic recovery with the deployment failure flow

This one was a small cleanup, but it made the behavior feel more consistent…

I adjusted the panic recovery path inside RunNginx() so it follows the same failure policy as the normal deploy flow. Before this change, the recover block already handled panics, but its cleanup path still felt a bit separate from the rest of the deployment lifecycle.

So I brought it in line. When a panic happens, the job now appends a panic: ... log line, updates the deployment status to failed, appends a final deployment failed log, and then publishes the done event with a failed status. That way, even the panic path leaves a clear trail in both the database and the SSE flow.

I also renamed the context used in that recover block to cleanupCtx, which feels more honest: that context exists for recovery-time cleanup work, not for the main deploy work itself.
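
To make the shape of that change concrete, here is a minimal sketch of what the recover block could look like. The helper names (appendLog, setDeploymentStatus), the done event type, and the timeout are hypothetical stand-ins for illustration, not the actual code from this project:

```go
package deploy

import (
	"context"
	"fmt"
	"time"
)

type doneEvent struct {
	DeploymentID string
	Status       string
}

type broker interface {
	Publish(ev doneEvent)
}

type deployJob struct {
	id     string
	broker broker
}

// Hypothetical helpers standing in for the real DB and log writers.
func (j *deployJob) appendLog(ctx context.Context, line string) { fmt.Println("log:", line) }

func (j *deployJob) setDeploymentStatus(ctx context.Context, status string) {
	fmt.Println("status:", status)
}

func (j *deployJob) RunNginx(ctx context.Context) (err error) {
	defer func() {
		if r := recover(); r != nil {
			// Recovery-time cleanup gets its own context, separate from the
			// (possibly cancelled) deploy context; hence the name cleanupCtx.
			cleanupCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			defer cancel()

			// Mirror the normal failure policy: panic log, failed status,
			// final "deployment failed" log, then the done event.
			j.appendLog(cleanupCtx, fmt.Sprintf("panic: %v", r))
			j.setDeploymentStatus(cleanupCtx, "failed")
			j.appendLog(cleanupCtx, "deployment failed")
			j.broker.Publish(doneEvent{DeploymentID: j.id, Status: "failed"})

			err = fmt.Errorf("deploy panicked: %v", r)
		}
	}()

	// ... the actual nginx deploy work would go here ...
	return nil
}
```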

The other small change in this commit was inside the broker. I simplified the sync.RWMutex into a regular sync.Mutex. The RWMutex made sense when the publish path only read subscriber data, but once the broker started cleaning up slow subscribers during publish, that path was no longer read-only, so a plain Mutex is both simpler and more accurate.
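
For context, here is roughly the shape that broker could take after the change. The subscriber map, the channel buffer size, and the drop-slow-subscribers policy are assumptions for illustration; the point is only that Publish now mutates the subscriber set, which is why a plain Mutex fits:

```go
package broker

import "sync"

type Event struct {
	DeploymentID string
	Status       string
	Line         string
}

type Broker struct {
	mu   sync.Mutex // was sync.RWMutex; Publish now mutates, so a plain Mutex
	subs map[chan Event]struct{}
}

func New() *Broker {
	return &Broker{subs: make(map[chan Event]struct{})}
}

func (b *Broker) Subscribe() chan Event {
	ch := make(chan Event, 16)
	b.mu.Lock()
	b.subs[ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

// Publish sends the event to every subscriber. Subscribers whose buffers are
// full get removed, which is why this path is no longer read-only.
func (b *Broker) Publish(ev Event) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ch := range b.subs {
		select {
		case ch <- ev:
		default:
			// Slow subscriber: drop it and close its channel.
			delete(b.subs, ch)
			close(ch)
		}
	}
}
```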

Nothing flashy changed on the surface here… but I like this kind of cleanup because it removes little inconsistencies. The more I work on this real-time flow, the more I notice that panic handling, final status updates, and subscriber cleanup all need to follow the same mental model; otherwise, small edge cases start behaving differently from the normal path.

© 2026 Wahyu Syahputra. All rights reserved.