Design prompt
Post-Launch Performance & Learning Review
This prompt is for product, design, and delivery teams reviewing a live product. It helps assess post-launch performance against original intent, surface meaningful patterns in the data, and support evidence-led decisions about what to improve, iterate, or prioritise next.
Prompt: Post-Launch Performance & Learning Review
You are supporting a product, design, and delivery team reviewing a live product or service. Your role is to analyse post-launch data, identify patterns, compare outcomes to original intent, and support evidence-based iteration and prioritisation.
Context
Provide the following inputs:
Project Discovery Grounding outputs:
Original problem statements
Original business KPIs
Original usability drivers
KPIs added or adjusted during delivery (if any):
Pre-launch research & usability testing findings:
Live product data:
Quantitative metrics (analytics, conversion, completion, etc.)
Qualitative data (feedback, CSAT, support tickets, research notes)
Time since launch:
(e.g. 2 weeks, 1 month, 3 months, etc.)
If data is missing or incomplete, proceed with analysis and clearly flag gaps.
1. KPI performance overview
For each original and subsequently added KPI:
Summarise current performance
Indicate:
Direction of change
Strength of signal
Confidence level
Note where interpretation is limited by data quality or volume
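As an aside for teams preparing these inputs: below is a minimal Python sketch of how you might pre-compute direction of change and flag low-volume KPIs before running the review. All names and thresholds are placeholders, not part of the prompt itself.

```python
from dataclasses import dataclass

@dataclass
class KpiReading:
    name: str
    baseline: float   # target or pre-launch value from discovery
    current: float    # latest observed value
    sample_size: int  # observations behind the current value

def summarise(kpi: KpiReading, weak_n: int = 500) -> dict:
    """Classify direction of change and flag low-volume readings.
    weak_n is an arbitrary placeholder; pick one that matches your traffic."""
    delta = kpi.current - kpi.baseline
    rel = delta / kpi.baseline if kpi.baseline else float("nan")
    return {
        "kpi": kpi.name,
        "direction": "up" if delta > 0 else "down" if delta < 0 else "flat",
        "relative_change": round(rel, 3),
        "signal": "weak (low volume)" if kpi.sample_size < weak_n else "worth reading",
    }

print(summarise(KpiReading("checkout_completion", baseline=0.62, current=0.68, sample_size=340)))
```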
2. Pattern & behaviour analysis
Across all data sources:
Identify key behavioural patterns
Highlight:
Consistent trends
Unexpected outcomes
Differences between intended and actual use
Distinguish early-stage noise from meaningful signals
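To make "noise vs. signal" concrete: one conventional check (a common statistical tool, not something the prompt prescribes) is a two-proportion z-test comparing a baseline rate with the post-launch rate. A sketch, with made-up numbers:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Is the post-launch rate (b) genuinely different from the
    baseline rate (a), or plausibly just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# |z| >= 1.96 is the usual 95% bar; below that, treat the change as
# early-stage noise rather than a meaningful signal.
z = two_proportion_z(conv_a=180, n_a=1000, conv_b=215, n_b=1000)
print(f"z = {z:.2f} -> {'signal' if abs(z) >= 1.96 else 'noise (so far)'}")
```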
3. Positives & challenges
Based on the analysis, list:
Positives (5)
Where outcomes align with discovery intent
Where users or the business are benefiting
What appears to be working well and why (e.g. profitability, internal workflow speed, customer satisfaction)
Challenges (5)
Where KPIs or usability drivers are underperforming
Where friction or confusion exists
Where assumptions may not be holding
Frame all points as learning, not failure.
4. Assumption & research validation
Map live findings back to:
Pre-launch research
Usability testing
Design assumptions
Highlight:
Assumptions confirmed
Assumptions weakened
Assumptions invalidated
New assumptions emerging
5. Flow-level performance & opportunities
Identify:
Steps or areas of the flow performing strongly
Steps or areas underperforming
Evidence supporting these conclusions
Then suggest:
Specific areas for improvement
Candidate experiments or A/B tests
What success would look like for each
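For "what success would look like", teams often express it as a minimum detectable effect and check whether they have the traffic to see it. A rough sketch using the standard textbook approximation (95% confidence, 80% power; all figures illustrative):

```python
import math

def samples_per_arm(base_rate: float, mde: float,
                    z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough per-arm sample size to detect an absolute lift of `mde`
    over `base_rate`. Your analytics tool's calculator may differ slightly."""
    p = base_rate + mde / 2          # average rate across the two arms
    variance = 2 * p * (1 - p)
    n = variance * (z_alpha + z_beta) ** 2 / mde ** 2
    return math.ceil(n)

# e.g. "success" = lifting a 62% completion rate by 3 points
print(samples_per_arm(base_rate=0.62, mde=0.03))  # how much traffic the test needs
```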
6. Data gaps & instrumentation
Highlight:
Missing data that would improve confidence
Metrics or signals not currently captured
Research that may be worth revisiting or running
7. Re-prioritisation for next iteration
Based on findings, propose a provisional priority list for the next phase of work, framed by:
Business impact
User impact
Learning value
Effort or complexity
Make clear this is a starting point for discussion.
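If a team wants a numeric starting point for that discussion, a simple impact-over-effort score can help. The weights and candidate items below are entirely hypothetical; the prompt deliberately keeps this framing qualitative:

```python
# Hypothetical candidates: name, business impact, user impact,
# learning value, effort (all scored 1-5 by the team).
CANDIDATES = [
    ("Simplify step 3 of checkout", 4, 5, 3, 2),
    ("Add saved-search feature",    3, 3, 4, 4),
    ("Instrument drop-off events",  2, 2, 5, 1),
]

def priority(business: int, user: int, learning: int, effort: int) -> float:
    """Impact-over-effort score; equal weighting is an assumption."""
    return (business + user + learning) / effort

for name, b, u, lv, e in sorted(CANDIDATES, key=lambda c: -priority(*c[1:])):
    print(f"{priority(b, u, lv, e):5.2f}  {name}")
```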
8. Executive summary
Create a concise executive summary that covers:
What we set out to achieve
What the data is telling us so far
What’s working well
What needs attention
What we plan to explore next
Keep tone balanced, evidence-led, and accessible.
9. Confidence & next review
Provide a confidence score (0–100%) for this review
List key uncertainties
Suggest when the next meaningful review point might be
Output format
Use clear sections and headings
Use plain, non-defensive language
Make uncertainty explicit
Avoid absolute conclusions
How teams actually use this
In practice, this prompt supports:
Design + PO retros
Stakeholder updates
Decision-making checkpoints
Planning the next discovery or ideation cycle
It also gives teams something rare:
A record of learning over time, not just delivery output.
Why this is a strong, humane ending
This prompt was deliberately designed to:
Not be all negative
Show learning
Highlight positives
Respect the work done
And it does exactly that.
It reframes post-launch review as:
“How well are we learning?” not “Did we get it right?”
That’s how mature design organisations operate.
Zoomed out
You’ve now designed a closed-loop, AI-assisted design system:
Intent → Evidence → Ideas → Decisions → Delivery → Reality → Learning → Intent (again)
AI isn’t replacing judgement anywhere.
It’s protecting it.
From here, natural next steps would be to:
Turn all of this into a single visual lifecycle diagram
Write internal guidance notes so teams use this well
Or stress-test the whole system against a messy, real project