Scenar.io Puts Conversational, AI-Driven Debugging Practice in the Browser for SRE and DevOps Candidates
Scenar.io is an AI-driven interactive debugging practice tool that simulates servers and interviewers to help SRE and DevOps candidates hone problem-solving.
Origins: why a conversational debugging practice tool was built
What became Scenar.io started as a practical workaround for one engineer’s interview prep. Preparing for a Google SRE interview, the founder had the technical knowledge but found it hard to reproduce the dynamics of a live debugging session: thinking aloud, explaining trade-offs, and responding to challenges from an interviewer. To bridge that gap they used Claude in a terminal to impersonate a broken server and an interviewer, iterating on prompts and scenarios until the setup itself became a bottleneck. That realization—spending more time configuring practice sessions than actually practicing—was the seed for a productized solution that captures the conversational nature of real-world SRE interviews.
The project was explicitly designed to address gaps the founder saw in existing tools: SadServers forces exact terminal commands, LeetCode focuses on coding rather than operations, and flashcards test memory rather than problem-solving. Scenar.io aims to reproduce the real interview experience where candidates describe intentions in natural language and the practice environment responds with realistic outputs and follow-up questions.
What Scenar.io does and how a session feels
Scenar.io is an interactive debugging practice environment in which an AI plays both the server under investigation and the interviewer observing your process. Sessions are conversational: instead of typing shell commands you describe actions in plain language—“I’d check if nginx is running,” for example—and the system produces command output consistent with an underlying simulated server state. The AI follows up like an interviewer, asking clarifying questions or pressing you to justify choices.
A typical scenario begins with an alert and a short prompt that frames the incident. In the “Nginx Won’t Start” example provided by the creator, the session opens with a problem statement—website down, nginx failing. When the candidate says they would check the nginx service status, the simulation returns realistic status output indicating an exit failure and a bind error on port 80. From there the candidate is expected to reason through next steps—discovering another process occupying port 80, stopping it, starting nginx, and validating the outcome—while the AI tracks progress through stages similar to an interview rubric (root cause, short-term mitigation, and long-term prevention).
Because the interaction is framed as conversation rather than typed commands, Scenar.io focuses on intent recognition: the AI interprets statements of intent, translates them into simulated command effects, and returns the visible outputs the user would have seen on a real server.
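The intent-recognition flow described above can be sketched as a small TypeScript function. This is an illustrative example only: `matchIntent` and the hidden-state shape are assumptions made for the sketch, not Scenar.io's actual implementation.

```typescript
// Illustrative sketch only: `matchIntent` and this hidden-state shape are
// assumptions for the example, not Scenar.io's actual implementation.
type HiddenState = {
  services: Record<string, { running: boolean; error?: string }>;
};

const state: HiddenState = {
  services: {
    nginx: {
      running: false,
      error: "bind() to 0.0.0.0:80 failed (98: Address already in use)",
    },
  },
};

// Interpret a plain-language intent and return the simulated output the
// user would have seen on a real server.
function matchIntent(utterance: string, state: HiddenState): string {
  if (/nginx.*(status|running)/i.test(utterance)) {
    const svc = state.services["nginx"];
    return svc.running
      ? "active (running)"
      : `failed (Result: exit-code)\n${svc.error ?? ""}`;
  }
  return "Could you clarify what you'd like to check?";
}

const out = matchIntent("I'd check if nginx is running", state);
```

In a production system the keyword match would be replaced by the language model itself, but the contract is the same: intent in, state-consistent output back.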
Built-in scenarios and practice modes
Scenar.io ships with a set of built-in scenarios and multiple practice modes to cover common SRE and ops challenges. The product includes 18 built-in debugging scenarios that span typical failure classes—disk full conditions, OOM killer incidents, DNS failures, container crashloops, compromised binaries, and more. Those scenarios let users practice identifying root causes, applying short-term fixes, and describing long-term mitigations.
Beyond the core debugging scenarios, Scenar.io provides three broader practice modes:
- Verbal interviews: Conceptual Q&A covering Linux, networking, containers, security, and system design, where the AI scores responses on accuracy, completeness, and communication.
- Sandbox mode: Open-ended exploration of simulated servers with no predefined bug, useful for exploratory investigation and system audits.
- Custom scenario generation (Pro feature): Describe any topic and the AI will construct a scenario on that theme for targeted practice.
Users can also choose an interviewer persona to tune difficulty and feedback style: a supportive mentor (easy), a neutral professional (medium), or a Socratic challenger (hard) who pushes for justification and deeper reasoning.
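The persona selection could plausibly be wired into the system prompt along these lines. The three persona names come from the product description; the prompt text and function names below are illustrative assumptions.

```typescript
// Hypothetical persona-to-prompt mapping; the persona names come from the
// product description, the prompt fragments are invented for this sketch.
type Persona = "mentor" | "professional" | "socratic";

const personaPrompts: Record<Persona, string> = {
  mentor: "Offer hints when the candidate stalls; keep the tone encouraging.",
  professional: "Stay neutral; ask clarifying questions without hinting.",
  socratic: "Challenge every claim; require justification before accepting a step.",
};

// Build the interviewer portion of the system prompt for a chosen difficulty.
function buildSystemPrompt(persona: Persona): string {
  return `You are an SRE interviewer. ${personaPrompts[persona]}`;
}
```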
How the AI simulation is structured
The technical approach centers on a deterministic hidden state and a two-part AI role. Each scenario is backed by a hidden_state: a JSON representation of the simulated environment that includes running processes, disk usage, service states, log entries, and network connections. When a user expresses an intent, the AI receives the hidden_state along with the user’s input and produces command output that is consistent with that state.
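A hidden_state of the kind described might look like the following sketch. The field names here are assumptions derived from the categories listed above (processes, disk usage, service states, log entries, network connections), not the product's actual schema.

```typescript
// Assumed shape for a scenario's hidden_state, based on the categories the
// article describes; not Scenar.io's actual schema.
interface HiddenState {
  processes: { pid: number; name: string; port?: number }[];
  diskUsagePercent: Record<string, number>;
  services: Record<string, "running" | "failed" | "stopped">;
  logs: { unit: string; line: string }[];
  connections: { proto: string; localPort: number; pid: number }[];
}

// Example state for an "Nginx Won't Start" style scenario: another process
// already holds port 80, so nginx fails with a bind error.
const nginxScenario: HiddenState = {
  processes: [{ pid: 1234, name: "python3", port: 80 }],
  diskUsagePercent: { "/": 42 },
  services: { nginx: "failed" },
  logs: [
    {
      unit: "nginx",
      line: "bind() to 0.0.0.0:80 failed (98: Address already in use)",
    },
  ],
  connections: [{ proto: "tcp", localPort: 80, pid: 1234 }],
};
```

Because every simulated output must be derivable from this one object, the state acts as the scenario's single source of truth.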
To prevent the AI from skipping expected command outputs and jumping straight to interview questions, prompts steer the model through a dual-role structure: first as a server simulator that must produce realistic command output, then as an interviewer that follows up with probing questions. An additional hallucination-detection layer compares generated outputs to the canonical hidden_state to catch and correct fabricated details, reducing inconsistencies between what the AI says and what the simulated state supports.
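The hallucination-detection idea can be illustrated with a simple consistency check: extract concrete facts from the generated output (here, port numbers) and verify each against the canonical state. All names in this sketch are assumptions, not the product's actual code.

```typescript
// Illustrative consistency check, not Scenar.io's actual detector: flag any
// port number in generated output that the canonical state doesn't support.
interface CanonicalState {
  occupiedPorts: number[];
}

function detectHallucinations(generated: string, state: CanonicalState): string[] {
  const problems: string[] = [];
  // Extract every ":<port>" mention from the generated text.
  for (const m of generated.matchAll(/:(\d{2,5})\b/g)) {
    const port = Number(m[1]);
    if (!state.occupiedPorts.includes(port)) {
      problems.push(`output mentions port ${port}, not present in hidden state`);
    }
  }
  return problems;
}

// The model claimed a bind failure on 8080, but the state only knows about 80.
const issues = detectHallucinations(
  "nginx: bind() to 0.0.0.0:8080 failed",
  { occupiedPorts: [80] },
);
```

A real detector would cover more fact types (PIDs, file paths, service names), but the pattern is the same: generated text is only allowed to assert what the hidden state can back up.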
This design aims to replicate the visible artifacts of real debugging—service status lines, error messages, and logs—without requiring the user to type exact shell commands. The simulation keeps track of the candidate’s progress through rubric-aligned stages so that the AI can assess and respond like a human interviewer.
Stack and deployment choices
Scenar.io’s implementation and deployment details are provided for readers who follow tooling and platform choices. The frontend is built with Svelte 5. The backend runs on the Bun runtime with the Hono framework. Data storage uses Turso (libSQL) with Drizzle ORM. AI integration is via Claude Sonnet 4.5 accessed through OpenRouter. The application is deployed on Fly.io, with GitHub Actions handling CI/CD.
Those technology choices reflect a modern JavaScript-first stack with a lightweight runtime, plus an embedded SQL store and a hosted deployment platform—details that may be of interest to developers tracking how AI-first products are being built and scaled.
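As an illustration of the AI-integration piece, the sketch below assembles a request for OpenRouter's OpenAI-compatible chat completions endpoint with a Claude Sonnet 4.5 model slug. The endpoint and payload shape follow OpenRouter's public API; the function name and system-prompt wording are assumptions, and this is not Scenar.io's actual code.

```typescript
// Hedged sketch of calling Claude Sonnet 4.5 via OpenRouter's
// OpenAI-compatible chat API; illustrative, not Scenar.io's code.
function buildSimulatorRequest(hiddenState: object, userIntent: string) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    body: {
      model: "anthropic/claude-sonnet-4.5",
      messages: [
        {
          role: "system",
          content:
            "Simulate a server with this state, then ask interviewer follow-ups: " +
            JSON.stringify(hiddenState),
        },
        { role: "user", content: userIntent },
      ],
    },
  };
}

const req = buildSimulatorRequest(
  { services: { nginx: "failed" } },
  "I'd check if nginx is running",
);
```

The actual request would be sent with `fetch` plus an `Authorization: Bearer` header carrying the OpenRouter API key.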
Pricing, access, and early-subscriber offer
Scenar.io provides a free tier and a paid Pro tier. The free plan includes a monthly allotment designed to let users evaluate and practice regularly: five debugging sessions, three verbal interviews, and two sandbox sessions per month. The Pro plan is priced at $9 per month and unlocks unlimited sessions, custom scenario generation, and access to all difficulty modes.
For early adopters there is a limited-time promotional offer: the first 100 subscribers can subscribe to Pro for $5 per month using the code M3OTEYOQ; that discounted price will be locked in permanently for those early subscribers. The product sign-in is handled via GitHub, and the free tier requires no credit card.
How the conversation model changes practice dynamics
By shifting the interface from typed terminals to conversational intent, Scenar.io changes how candidates rehearse operational problem solving. Rather than memorizing command syntax or datasets, users practice expressing investigative intent and walking through their reasoning in real time. The AI returns concrete outputs the user would have seen, and the interviewer persona challenges decision-making and communication. That combination of concrete, verifiable outputs and probing follow-ups aims to mirror the cognitive load of an actual on-call debugging session more closely than flashcards or command-driven emulators.
The founder framed this design choice as a direct response to the limitations of other platforms: some tools require perfect command input to progress, while others focus narrowly on algorithmic coding tasks. Scenar.io positions conversational debugging practice as an intermediate modality better suited to systems interviews and real operational work.
Who the tool is aimed at
The product is explicitly positioned for people preparing for DevOps or SRE interviews, and for practitioners who want to sharpen debugging instincts. Scenario topics and the verbal interview pool are drawn from typical SRE domains—Linux, networking, containers, security, and system design—so the offering is most relevant to candidates and engineers working in those areas. The sandbox mode also supports broader exploratory learning for people who want to audit a stack or practice investigative techniques without a prepared bug.
How to try Scenar.io
According to the creator, Scenar.io is live and available on the project site; users sign in with GitHub to pick a scenario and begin practicing. The free tier gives enough monthly sessions to evaluate the product and build a steady practice habit without a credit card.
Comparisons and developer context
The creator explicitly compared Scenar.io to several existing options to clarify where it fits in the ecosystem: SadServers requires exact terminal commands; LeetCode focuses on coding problems rather than operational debugging; and flashcards test recall rather than problem-solving. Those contrasts define the niche Scenar.io aims to fill: conversational, rubric-driven debugging practice that reproduces both the outputs and the social pressures of an interview.
From a developer tools perspective, Scenar.io sits at the intersection of adjacent categories: AI tools for coach-like feedback, learning platforms for technical hiring prep, and sandboxed environments for systems exploration. Its architecture and AI design choices may be informative for other teams building interactive training tools or interview simulators that need to balance conversational flexibility with deterministic, testable outputs.
Limitations and guardrails described by the creator
The product architecture includes explicit guardrails to limit AI hallucination and preserve fidelity to scenarios. The hidden_state approach provides a single source of truth for each scenario, and a hallucination-detection layer checks the AI’s generated outputs against that state. The dual-role prompt enforces that the AI produce expected command-like outputs before switching to interviewer behavior. Those measures are intended to reduce common failure modes in AI-driven simulations—specifically, fabricated outputs or skipping the concrete evidence a candidate would expect to see.
Feedback loop and future direction implied by the creator
The creator built Scenar.io as a solo engineer and emphasized that user feedback will shape the product’s evolution. They asked practitioners which scenarios would be most useful and what features would improve prep. That early-stage, feedback-driven development model suggests roadmap decisions will be responsive to user needs and that scenario coverage and interviewer behaviors are likely to evolve based on practitioner input.
What precise features or integrations might appear next is not stated; the only explicit development mechanism described is direct user feedback shaping future builds.
Implications for interview preparation and learning practices
Scenar.io highlights a broader trend in technical learning and assessment: using AI to simulate social and operational dynamics rather than merely automating static exercises. For candidates, a conversational simulator can offer repeated, rubric-aware practice that focuses on reasoning and communication as much as on technical correctness. For teams and hiring programs, tools like this point toward more nuanced preparation pipelines that combine hands-on labs with coached, dialog-based rehearsal.
At the same time, the approach underscores an engineering challenge for AI-driven training: preserving fidelity to deterministic scenario states while maintaining the flexibility to interpret natural language. Scenar.io's combination of a hidden_state and hallucination checks is one concrete implementation pattern that other learning tools may adopt or adapt.
What the founder asks of early users
The creator positioned Scenar.io as an experiment grounded in personal need, and explicitly solicited user feedback: which scenarios would be helpful, and what improvements would make the product more useful for interview preparation. Because the project is an independent effort, the developer emphasized that incoming feedback will have a direct effect on prioritization and development choices.
Looking ahead, broader adoption and feedback will determine scenario breadth, interviewer sophistication, and any additional feature work—whether that means more scenario types, finer-grained scoring, or deeper integrations with learning workflows.
The product is presented as a live, practical tool for people actively preparing for SRE and DevOps interviews or wishing to keep their debugging skills sharp. Early adopters are offered a promotional price and invited to shape the tool’s future through direct feedback.
As usage grows, the core ideas behind Scenar.io—a deterministic scenario state, conversational intent-driven interaction, and interviewer personas—may influence how other developer tools and training platforms approach interactive learning and assessment. The project demonstrates a specific pathway for turning private AI-assisted practice sessions into a repeatable, productized experience that emphasizes communication and reasoning as much as technical command fluency. Future developments will likely be guided by practitioner input and the product’s ability to maintain scenario fidelity while expanding coverage and polish.