
The No-Mock Policy: How to Prompt AI to Kill Fake Data in Your Codebase

By Stuffnthings AI Team · 2026-04-12 · 7 min read


Every AI-built app starts the same way: it looks finished. Buttons click. Pages load. Profiles populate. Then you open the source and find the truth — hardcoded arrays, setTimeout calls pretending to be API latency, "Coming Soon" badges glued over empty route handlers, and a mockUser object that's been quietly running your entire authentication flow.

This is the smoke-and-mirrors stage of AI-assisted development. It's not the AI's fault — it's a prompting problem. If you tell an AI to "fix it," you'll get cosmetic patches: one hardcoded string swapped for another, a new placeholder dressed up as a feature. What you actually need is to stop giving the AI descriptive instructions and start giving it architectural constraints.

This is the prompting framework I use to walk a codebase from prototype to production. I call it the No-Mock Policy.


Step 1: Audit and Purge

Before the AI builds anything new, it needs to surface the technical debt that's already there. You can't replace fake functionality you haven't catalogued. The first prompt in the workflow is a scanner — it doesn't fix anything, it just produces the hit list.

Act as a Senior Full-Stack Auditor. Scan the attached files for the following patterns:

  • Hardcoded JSON/Arrays used as data sources
  • UI components with "Coming Soon" labels or disabled buttons without logic
  • Frontend services using setTimeout to mimic API latency
  • Empty functions or "TODO" comments in API route handlers

Task: Create a markdown table listing every instance of fake functionality found, the file path, and the specific data/feature that is currently missing a backend connection.

The output of this prompt is gold. You now have a concrete map of every lie your prototype is telling. No more "I think the dashboard is mostly done" — you have a table that says exactly which features are real and which are theater.
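You can also pre-screen for these patterns yourself before handing anything to the AI. Here's a minimal sketch of the same audit in TypeScript; the pattern list and `Finding` shape are illustrative, not a complete linter:

```typescript
// Scan source text for the same "fake functionality" patterns the audit
// prompt targets. Patterns here are a starting point, not exhaustive.
interface Finding {
  pattern: string;
  line: number;
}

const MOCK_PATTERNS: Array<[string, RegExp]> = [
  ["hardcoded array data source", /const\s+\w*(data|users|items)\w*\s*=\s*\[/i],
  ["setTimeout faking latency", /setTimeout\s*\(/],
  ["Coming Soon placeholder", /coming soon/i],
  ["TODO in handler", /\/\/\s*TODO/],
];

function auditSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [pattern, re] of MOCK_PATTERNS) {
      if (re.test(text)) findings.push({ pattern, line: i + 1 });
    }
  });
  return findings;
}
```

A quick grep-style pass like this won't catch everything the AI auditor will, but it gives you an independent baseline to check the AI's table against.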


Step 2: Set the Real-Data-Only Constraint

Once you have the audit, the next move is to give the AI your Single Source of Truth: a database schema, an API spec, or both. Then you lock the AI inside a ruleset that prevents it from inventing new fake data while it removes the old fake data.

The four rules I enforce:

Zero-Mock Policy. Never use const data = [...]. If a data source is missing, halt and ask for the schema or endpoint. This is the rule that does the heavy lifting — it forces the AI to admit when it's missing information instead of fabricating it.

Type-Safety First. Define TypeScript interfaces for all API responses before writing component logic. This prevents shape-mismatch bugs from sneaking in when the real data finally arrives.
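As a sketch of what this rule produces in practice (the `UserProfile` shape and the `/users/:id` endpoint are hypothetical; substitute your real schema):

```typescript
// The response shape is defined first, so the component can only consume
// fields that actually exist in the contract.
interface UserProfile {
  id: string;
  displayName: string;
  email: string;
  createdAt: string; // ISO 8601 timestamp from the API
}

// The fetch helper is typed against the interface, so a shape mismatch
// becomes a compile-time error instead of a runtime surprise.
async function getUser(id: string): Promise<UserProfile> {
  const res = await fetch(`${process.env.API_BASE_URL}/users/${id}`);
  if (!res.ok) throw new Error(`User fetch failed: ${res.status}`);
  return (await res.json()) as UserProfile;
}
```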

Error Boundaries. Every real connection must include a loading state and an error state. No silent failures. No spinners that spin forever. If the API can fail, the UI has to acknowledge it.
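One way to make these states unskippable is to model every request as a discriminated union, so the compiler forces the UI to handle each branch. A sketch, with illustrative state names:

```typescript
// Every request is in exactly one of three states; there is no branch
// where the UI can touch `data` without having handled loading and error.
type RequestState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "success"; data: T };

function render(state: RequestState<{ name: string }>): string {
  switch (state.status) {
    case "loading":
      return "Loading…";
    case "error":
      return `Something went wrong: ${state.message}`;
    case "success":
      return `Hello, ${state.data.name}`;
  }
}
```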

Environment Integration. All URLs use environment variables like process.env.API_BASE_URL, never http://localhost:3000. This is the difference between code that works on your laptop and code that works in production.
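This rule can be enforced in code as well as in the prompt. A sketch of a fail-fast config reader (the variable name matches the article; the helper itself is hypothetical):

```typescript
// Read the base URL once and fail fast if it's missing, instead of
// silently falling back to localhost.
function apiBaseUrl(): string {
  const url = process.env.API_BASE_URL;
  if (!url) {
    throw new Error("API_BASE_URL is not set — refusing to guess a host");
  }
  return url.replace(/\/$/, ""); // normalize: no trailing slash
}
```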


Step 3: The Implementation Workflow

Even with rules in place, AI models drift. They'll follow the policy for the first feature, then quietly slip back into old habits by the third. The fix is to enforce a three-step loop for every feature being converted, with a human checkpoint at each stage.

Schema. The AI generates the Prisma model, SQL migration, or API route definition. You verify the database fields match what the feature actually needs before any frontend work happens.
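A sketch of the step-1 artifact as an API route handler (one of the three options above). `findUserById` stands in for the real Prisma or SQL query, and the route returns a 404 rather than placeholder data when the row is missing:

```typescript
interface RouteResult {
  status: number;
  body: unknown;
}

// Placeholder for the real database query (e.g. prisma.user.findUnique).
// Deliberately empty: wiring this to a real data source is the point.
async function findUserById(id: string): Promise<{ id: string; name: string } | null> {
  const rows: Record<string, { id: string; name: string }> = {};
  return rows[id] ?? null;
}

async function getUserRoute(id: string): Promise<RouteResult> {
  const user = await findUserById(id);
  if (!user) return { status: 404, body: { error: "User not found" } };
  return { status: 200, body: user };
}
```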

Integration. The AI replaces the mock frontend call with a real fetch, useSWR, or useQuery hook. You confirm no hardcoded data remains in the component — grep for the old mock variable name to be sure.
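The swap itself might look like this sketch: a typed fetcher replaces the hardcoded array, and the component (or a `useSWR`/`useQuery` hook) binds to it. `fetchJson` and the endpoint path are illustrative:

```typescript
// Before (the kind of code the audit flags):
// const data = [{ id: "1", name: "Mock User" }];

// After: a typed fetcher against the real contract.
async function fetchJson<T>(path: string): Promise<T> {
  const res = await fetch(`${process.env.API_BASE_URL}${path}`, {
    headers: { accept: "application/json" },
  });
  if (!res.ok) throw new Error(`${path} failed with ${res.status}`);
  return res.json() as Promise<T>;
}

// With SWR the component would call it as, e.g.:
// const { data, error, isLoading } = useSWR("/api/users/1", fetchJson);
```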

Validation. The AI writes a basic integration test that pings the real endpoint. You run it. If data flows end-to-end, the feature is done. If not, you don't move on.

The key is the human verification step. The AI is fast but not trustworthy on this loop — your job is to be the gate.


Step 4: A Concrete Example

Here's what this looks like applied to a single "Coming Soon" feature — say, a User Profile page that's been faking it with a mockUser constant.

I am replacing the mock User Profile page with real functionality.

  1. Database: Here is my PostgreSQL schema [insert schema].
  2. Backend: Create a Node.js API route that fetches this user data by ID.
  3. Frontend: Remove the mockUser object in Profile.tsx. Implement a real fetch call to the new API.
  4. Constraint: If the database returns null, show a "User Not Found" state. Do not fall back to placeholder data.

That last constraint is the one most people forget. Without it, the AI will helpfully "handle the edge case" by reintroducing fake data as a fallback — and you're right back where you started.
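Enforced in code, that constraint might look like this sketch: a 404 maps to an explicit not-found state, never to placeholder data. The endpoint and `User` shape are assumptions:

```typescript
interface User {
  id: string;
  name: string;
}

// There is no branch that fabricates a user: missing rows become an
// explicit "not-found" view the UI must render.
type ProfileView =
  | { kind: "found"; user: User }
  | { kind: "not-found" }
  | { kind: "error"; message: string };

async function loadProfile(id: string): Promise<ProfileView> {
  const res = await fetch(`${process.env.API_BASE_URL}/api/users/${id}`);
  if (res.status === 404) return { kind: "not-found" }; // renders "User Not Found"
  if (!res.ok) return { kind: "error", message: `Request failed: ${res.status}` };
  return { kind: "found", user: (await res.json()) as User };
}
```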


Why This Actually Works

The reason this framework works has nothing to do with magic words in the prompt. It works because it forces the AI to build in the correct dependency order: Database → API → Frontend.

When you start from the schema, the AI can't hallucinate data shapes — its code will literally break if the fields don't exist. When you generate the API route next, the frontend has a real contract to bind to. By the time the component is being written, there's nothing left to fake. The fake data has nowhere to hide because every layer above it is already real.

This is also why "just tell the AI to remove the mocks" fails. Removing mocks is the last step of a much larger process. If you skip the schema and the API, the AI will dutifully delete the mock array and replace it with... a different mock array. Or worse, an empty array that crashes the component.


The Takeaway

AI is exceptional at generating code that looks like a working application. It's much worse at generating code that is a working application — unless you give it the constraints that make faking it impossible.

The No-Mock Policy isn't really about prompting tricks. It's about understanding that AI tools mirror the rigor of the instructions they're given. Vague request, vague output. Architectural constraint, architectural output.

If your AI-built project is stuck in the smoke-and-mirrors phase, run the audit prompt this week. You'll be surprised how much of your "finished" app is held together by setTimeout and good intentions — and how fast it comes together once you stop letting the AI lie to you.
