Design prompt
Project Discovery Grounding
This prompt is for teams at the very start of a project who need to slow down and get aligned. It helps ground discovery work in context, assumptions, risks, and unknowns, creating a shared understanding before research or design begins.
Problem statement / Project kick-off / Define
Prompt: Project Discovery Grounding
You are an experienced product and service design strategist working alongside a multidisciplinary team. Your task is to help ground a new product or service initiative before discovery begins. The output will not only ground discovery; it will also provide the initial KPIs used to track the project or product as it moves through its life cycle.
Context
Here is what is currently known about the project:
Product / service name:
High-level goal (why this exists):
Who the primary users are (if known):
Organisation / business context:
Known constraints (time, budget, policy, technology):
What triggered this work (e.g. pain point, opportunity, mandate):
(If any of this information is missing, proceed using reasonable assumptions and flag them clearly.)
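For teams that want to template this prompt rather than fill it in by hand, here is a minimal sketch (Python; the helper name and placeholder values are assumptions, not part of the prompt) of how the context block could be assembled so that missing fields are surfaced rather than silently dropped:

```python
# Assemble the Context block, flagging unknown fields so the model is forced
# to state the assumptions it makes for them.
CONTEXT_FIELDS = [
    "Product / service name",
    "High-level goal (why this exists)",
    "Who the primary users are (if known)",
    "Organisation / business context",
    "Known constraints (time, budget, policy, technology)",
    "What triggered this work (e.g. pain point, opportunity, mandate)",
]

def build_context_block(known: dict) -> str:
    lines = ["Context", "Here is what is currently known about the project:"]
    for field in CONTEXT_FIELDS:
        value = (known.get(field) or "").strip()
        lines.append(f"{field}: {value or 'UNKNOWN - assume something reasonable and flag it'}")
    return "\n".join(lines)

# Hypothetical usage with only one field known:
print(build_context_block({"Product / service name": "Example permit service"}))
```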
1. Generate problem statements
Create 4 distinct problem statements that could plausibly frame this project.
Each problem statement should:
Be written from a user or service perspective
Avoid proposing solutions
Be clear enough to guide research
Sit at a different level from the other statements (e.g. user, journey, service, system)
Label each one and briefly explain why it matters.
2. Discovery focus
For each problem statement:
Identify what must be learned or validated during discovery
List key assumptions that should be tested
Suggest the most appropriate research methods (qualitative, quantitative, or mixed)
Also note:
Areas that are out of scope for initial discovery
Risks of not addressing certain unknowns early
3. Define success metrics
Propose measurable indicators of success from two perspectives:
A. Business / organisational metrics
Examples may include:
Efficiency, cost, demand, conversion, compliance, adoption, risk reduction
B. Design & usability metrics
Examples may include:
Task success, time on task, error rate, CSAT, CES, SUS, confidence, accessibility outcomes
For each metric:
Explain what “improvement” would look like
Indicate whether it is short-term (0–3 months), medium-term (3–12 months), or long-term (1–3 years)
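If the metrics end up in a tracker or dashboard, a small structured record keeps the definition of "improvement" and the time horizon attached to each metric. This is a sketch only; the field names and the example metric are illustrative, not prescribed by the prompt:

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    name: str          # e.g. "Task success rate"
    perspective: str   # "business / organisational" or "design & usability"
    improvement: str   # what "improvement" would look like for this metric
    horizon: str       # "short-term (0-3 months)", "medium-term (3-12 months)", "long-term (1-3 years)"

example = SuccessMetric(
    name="Task success rate",
    perspective="design & usability",
    improvement="More users complete the core task unaided on the first attempt",
    horizon="short-term (0-3 months)",
)
```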
4. Confidence & improvement
Provide a confidence score (0–100%) indicating how robust you believe this output is based on the information provided
Clearly list what additional inputs would most improve the quality or accuracy of this work
Highlight any assumptions you made that the team should confirm or challenge
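One of the suggested next steps at the end of this piece is a confidence scoring rubric so teams don't blindly trust percentages. A sketch of what that could look like, so a score maps onto evidence the team can actually check; the items and weights are illustrative assumptions, not a standard:

```python
# Illustrative rubric: the score is earned from evidenced items, not guessed.
RUBRIC = {
    "primary users identified": 25,
    "trigger or mandate documented": 20,
    "constraints confirmed with the organisation": 20,
    "existing data or prior research reviewed": 20,
    "owner for the success metrics agreed": 15,
}

def confidence_score(evidenced_items: set) -> int:
    return sum(weight for item, weight in RUBRIC.items() if item in evidenced_items)

# Hypothetical example: only two items evidenced, so the score stays low.
print(confidence_score({"primary users identified", "trigger or mandate documented"}))  # 45
```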
Output format
Present your response in clear sections with headings and bullet points.
Avoid jargon where possible.
Be explicit and neutral in tone.
Follow-up prompts (this is where it scales)
This can't be one-and-done. The power comes from chaining prompts that refer back to this artefact.
Here are the key follow-ups you’ll want in the library.
Prompt 2: Discovery synthesis check-in
Used mid-discovery.
Purpose
Validate or invalidate the original problem statements
Adjust success metrics with rationale
Input
Research findings
Updated constraints
Initial data signals
Output
Which problem statements still hold
Which should be merged, reframed, or retired
Updated confidence score
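The chaining itself can be as simple as storing the Prompt 1 artefact and passing it back in with the new inputs. A sketch of that mechanic, assuming nothing about your tooling; the function names and the model placeholder are hypothetical, so swap in whatever interface your team actually uses:

```python
def build_checkin_prompt(grounding_artefact: str, findings: str, updated_constraints: str) -> str:
    """Wrap the original grounding artefact with mid-discovery inputs."""
    return "\n\n".join([
        "You previously produced the discovery grounding below.",
        grounding_artefact,
        "Mid-discovery research findings:",
        findings,
        "Updated constraints and initial data signals:",
        updated_constraints,
        "For each original problem statement, say whether it still holds, should be "
        "merged, reframed, or retired, and give an updated confidence score with rationale.",
    ])

def call_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder: use your organisation's approved model client

# checkin_output = call_model(build_checkin_prompt(artefact, findings, constraints))
```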
Prompt 3: Post-delivery success review
Used after launch or release.
Purpose
Measure impact against original intent
Explain deltas, not just numbers
Output
What moved, what didn’t
Why (likely causes)
What success means now
Prompt 4: Long-term impact & strategy signal
Used 6–24 months later.
Purpose
Detect second-order effects
Decide whether success criteria should evolve
This is where design becomes strategic, not just evaluative.
A quiet but important point
What you’re designing isn’t just a prompt.
You’re designing:
A shared language of success
A memory for the project
A defence against hindsight bias
AI is just the accelerator.
Possible next steps:
Turn this into a one-page internal framework
Stress-test the prompt on a real past project
Create a confidence scoring rubric so teams don’t blindly trust percentages