Exception-path QA tooling / developer product / edge-case generator

Find the bugs hiding
inside reasonable-looking user behavior.

FakeDoor QA generates believable but deliberately dangerous entry points, weird state transitions, and awkward edge-case scripts so product teams can catch the bugs that never show up in happy-path testing — but always show up after launch.

What the first version should do

Read the page structure. Invent dangerous flows. Output reproducible breakpoints.

The product does not need full browser automation on day one. The first strong version needs only three things: understand a product surface, generate failure-oriented scripts that read like real user behavior, and rank the bug paths worth fixing first.

~/fakedoor/report
$ fakedoor scan https://product.example/onboarding

 Parsed 19 interactive surfaces
 Generated 42 deceptive-but-valid user scripts
 Prioritized 7 likely breakpoints

# high-risk path
1. Open invite link in expired session
2. Switch account mid-form
3. Retry submission after network bounce
4. Hit browser back twice
5. Re-open stale confirm modal

Result: duplicate workspace created
Cause: optimistic state not invalidated
Priority: P1 / reproducible / user-visible

Problem

Most teams test what they designed. Real users test what they should never have done.

Teams are usually decent at validating the main flow. They are much worse at checking what happens when someone double-submits, loses auth, opens stale links, changes permissions, contaminates forms, toggles accounts, or comes back through browser history at the worst possible time. That is where embarrassing production bugs live.

Happy-path QA is necessary. It is just not enough.

FakeDoor QA is built for the corners where products fail quietly first, then loudly later.

Engine

Generate edge-case scripts that feel realistic enough to hurt.

The core insight is not random fuzzing. It is structured misbehavior. FakeDoor QA creates entry points and abnormal flow chains that still look plausible from a human perspective, which makes the failures far more actionable for product and engineering teams.
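One way to sketch "structured misbehavior" is to mutate a recorded happy-path flow with human-plausible operators instead of random input. Every step and function name below is hypothetical, not part of any real API:

```python
# A happy-path flow as an ordered list of user steps (hypothetical names).
HAPPY_PATH = ["open_invite_link", "fill_form", "submit", "see_confirmation"]

# Mutation operators: each takes a flow and yields abnormal-but-believable
# variants that a real user could plausibly produce.

def double_submit(flow):
    # Users double-click; repeat any submit-like step in place.
    for i, step in enumerate(flow):
        if "submit" in step:
            yield flow[:i + 1] + [step] + flow[i + 1:]

def stale_reentry(flow):
    # Users hit back and replay an earlier step out of order.
    for i in range(1, len(flow)):
        for j in range(i):
            yield flow[:i + 1] + ["browser_back", flow[j]] + flow[i + 1:]

def session_loss(flow):
    # Auth expires mid-flow; inject the expiry before each later step.
    for i in range(1, len(flow)):
        yield flow[:i] + ["session_expired"] + flow[i:]

def generate_scripts(flow, operators):
    # Deduplicate variants so each abnormal script appears once.
    seen = set()
    for op in operators:
        for variant in op(flow):
            key = tuple(variant)
            if key not in seen:
                seen.add(key)
                yield variant

scripts = list(generate_scripts(
    HAPPY_PATH, [double_submit, stale_reentry, session_loss]))
```

Each variant stays readable as a story about a user, which is what separates this from fuzzing noise.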

Three-part launch slice

Read product structure → generate abnormal scripts → output reproducible bug paths with priority. Playwright or Cypress execution can come later.
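The ranking stage of that slice could start as a few heuristic signals over each generated script. The weights and step names below are illustrative assumptions, not the product's actual scoring model:

```python
def score(script):
    # Illustrative heuristics: out-of-order navigation and state-carrying
    # steps tend to expose invalidation bugs.
    risky = {"browser_back": 3, "session_expired": 3, "submit": 2}
    base = sum(risky.get(step, 0) for step in script)
    # Shorter repros are easier to confirm, so normalize by length.
    return base / len(script)

def prioritize(scripts, top_n=7):
    return sorted(scripts, key=score, reverse=True)[:top_n]

# Example: the back-button replay outranks the plain happy path.
candidates = [
    ["open_invite_link", "fill_form", "submit", "see_confirmation"],
    ["open_invite_link", "fill_form", "submit", "submit", "see_confirmation"],
    ["open_invite_link", "fill_form", "submit", "browser_back",
     "fill_form", "see_confirmation"],
]
ranked = prioritize(candidates)
```

A real scorer would fold in observed signals (console errors, duplicate writes), but a transparent heuristic is enough to ship the first slice.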

Output

Not just “something broke.” A ranked path to reproduce, explain, and fix it.

The value of the product is operational clarity. Instead of throwing raw noise at a team, it should surface the exact sequence, state assumptions, likely cause, and user-visible impact, so the fix conversation can start immediately.

Bug reports that engineering will actually use

Every finding should come with a path, a trigger condition, a likely reason, and a confidence level — not just a screenshot and a shrug.
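A finding shaped for engineering use might be as small as the structure below. The field names and enum values are assumptions that mirror the demo report earlier on the page:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    path: list[str]      # ordered, reproducible steps
    trigger: str         # condition that flips the path from benign to broken
    likely_cause: str    # the engine's best hypothesis, not a verdict
    impact: str          # what the user actually sees go wrong
    priority: str        # "P1" / "P2" / "P3"
    confidence: float    # 0.0-1.0: how sure the engine is about the cause

    def summary(self) -> str:
        return (f"{self.priority} ({self.confidence:.0%} confident): "
                f"{self.impact}; likely cause: {self.likely_cause}")

# The demo report above, expressed as a structured finding.
finding = Finding(
    path=["open_invite_link_expired", "switch_account_mid_form",
          "retry_after_network_bounce", "browser_back_x2",
          "reopen_stale_modal"],
    trigger="stale confirm modal re-submitted after account switch",
    likely_cause="optimistic state not invalidated",
    impact="duplicate workspace created",
    priority="P1",
    confidence=0.8,
)
```

Carrying an explicit confidence field keeps the tool honest: a guessed cause reads differently from a confirmed one.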

Why this can convert

It speaks directly to teams that keep shipping the awkward bug triggered by something they "didn't think users would do."

3 core jobs
Parse product structure, invent dangerous scripts, rank reproducible failures

1 pain point
Teams miss weird but valid user behavior until it becomes public

P1 focus
Prioritize the breakpoints most likely to hurt users, launches, or trust
Developer tools do not need more dashboards. They need sharper failure discovery.

Ship fewer embarrassing bugs by testing the doors real users should never have opened.

FakeDoor QA is a cleaner pitch than generic AI testing: it is specifically about generating believable exception flows and edge-case scripts that expose the blind spots of small product teams before launch day does it for them.
