Making product feedback clearer at the point of failure
An investigation into how unclear system feedback, rather than hardware faults, shaped user perception and drove high returns.
The problem: A children’s bubble machine that met user needs on paper was receiving persistently poor customer reviews and a high returns rate, driven by users believing the product had failed after short-term use.
My role: Investigated returns data, tested real-world product behaviour, identified the root cause, and worked with suppliers to propose a low-impact design fix.
The result: Reframed a perceived quality issue as a feedback and system-design problem, reducing misinterpretation of failure and providing a clear path to lower returns.
Skills used: Data analysis | Customer feedback & interviews | Product design | Systems thinking | Supplier relationships | Industrial design
Overview
While working at Tesco, I was asked to investigate a children’s bubble machine that appeared to meet user needs on paper, with strong bubble output, simple interaction, and good initial engagement, yet it was consistently receiving poor reviews and a high rate of returns.
Although this was a physical product design project, the process closely mirrors modern UX practice: interrogating data, validating assumptions through observation, identifying where user perception diverges from system reality, and designing a clearer feedback loop to resolve it.
The Problem
Returns data and customer reviews suggested that the bubble machine was “failing” after a short period of use. Customers reported that the wand stopped working, leading them to believe the product was broken.
This wasn’t a new issue: the product had been on sale for several years, accumulating repeated negative reviews and a growing returns rate (around 10%). On the surface, it looked like a quality or durability problem.
The core question became:
Was the product actually failing — or were users misinterpreting what was happening?
Investigation & Insight
I gathered multiple returned units alongside brand-new products and began testing them side by side.
Out of 10 machines tested initially, 9 appeared to work perfectly
This suggested early returns might be linked to one-off usage (e.g. summer parties)
However, extended testing revealed a pattern:
After prolonged use, the bubble wand stopped rotating
The fan continued to spin, giving the impression of partial failure
Replacing the batteries in the wand immediately restored full functionality.
The key insight:
The product contained two separate circuit boards, each powered independently. When one set of batteries drained before the other, the machine entered a confusing “half-working” state.
From a user’s perspective, there was no visible signal that this was simply a battery issue, only that the product had failed.
Solution
The issue wasn’t the product’s core function — it was a lack of system feedback.
The proposed solution was simple but effective:
Combine the two circuit boards into a single power system
Ensure that when batteries ran low, both the wand and fan slowed and stopped together
This created a clear, intuitive cue that the batteries were running out — not that the product was broken
By aligning system behaviour with user expectation, the perceived failure disappeared.
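To make the system-behaviour argument concrete, the short Python sketch below models the two layouts. Every figure in it (drain rates, run times) is invented purely for illustration, not measured from the product; it simply shows why independently powered boards produce a confusing "half-working" state, while a shared supply degrades and stops as a single, legible event.

# Illustrative toy model -- all drain rates and timings are assumed, not taken
# from the real product. It contrasts the original two-pack layout with the
# proposed single shared supply.

WAND_DRAIN = 14.0    # % of its own pack per hour (hypothetical)
FAN_DRAIN = 9.0      # % of its own pack per hour (hypothetical)
SHARED_DRAIN = 11.5  # combined draw on one shared pack (hypothetical)


def independent_packs(hours: int) -> str:
    """Each board has its own batteries: the faster-draining wand dies first."""
    wand, fan = 100.0, 100.0
    for _ in range(hours):
        wand = max(wand - WAND_DRAIN, 0.0)
        fan = max(fan - FAN_DRAIN, 0.0)
    return f"wand {'on' if wand else 'off'} / fan {'on' if fan else 'off'}"


def shared_pack(hours: int) -> str:
    """Both boards draw from one pack, so they weaken and stop together."""
    charge = 100.0
    for _ in range(hours):
        charge = max(charge - SHARED_DRAIN, 0.0)
    state = "on" if charge else "off"
    return f"wand {state} / fan {state}"


if __name__ == "__main__":
    # Separate packs read as "broken wand, working fan"; a shared pack
    # either works fully or stops entirely -- one coherent signal.
    for hours in (8, 9):
        print(f"after {hours}h  independent: {independent_packs(hours)}   shared: {shared_pack(hours)}")

Under these assumed numbers, the independent layout reports "wand off / fan on" from hour 8 onwards, the same mixed state customers interpreted as a fault, whereas the shared layout is either fully on or fully off.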
Outcome
Identified the root cause of a long-standing returns issue
Reduced misinterpretation of product failure
Demonstrated that a small design change could address a large commercial and experiential problem
Provided clear guidance to suppliers with minimal manufacturing impact