Kaiku: for a world of storytellers

A personal project exploring how AI can accelerate brand, UX, and build work, from idea to a live iPhone app.

Problem & Opportunity

It’s not just famous people who have stories worth telling. People keep journals, reflect on life, or want to capture memories, but most tools feel either too private or too overwhelming.

The opportunity

Create a cosy, story-first experience that supports people who struggle to start writing, while still letting confident writers publish quickly.

Research caveat

This project wasn’t driven by interviews or usability testing — it was primarily an exploration of using AI tools to speed up ideation, synthesis, brand exploration, and build execution (a workflow increasingly common in teams I’ve worked in).

Kaiku is a space to:

  • write short personal stories (like a journal, but lighter)

  • share them with the world (if you want)

  • read other people’s stories before bed — less doom-scroll, more calm curiosity

Tools used: ChatGPT · Builder.io · Supabase · Figma · GitHub · Fly.io · Stark · Midjourney · Pika · Runway · Typescales · Foundation: Colour Generator

Brand & Atmosphere

I started by describing the idea and an early persona to ChatGPT, then explored names and visual directions.

A key spark: whales pass knowledge down through generations — which led to the early concept name Echo, and a friendly whale mark (a bit like a “Twitter-simple” mascot).

But Echo felt oversaturated and slightly corporate. I wanted something cuter and more ownable, so I explored options (Hikari, Tomo, Ema, Koto…) and landed on Kaiku — the Finnish word for echo.

Name & logo exploration

With a rough whale sketch and a name, I needed a world around it.

Using Midjourney, I generated underwater / coral scenes to explore a bright, warm palette — then mirrored the vibe into a dark mode version. That quickly gave me:

  • core colours + accents

  • an atmosphere for UI backgrounds

  • colour directions to bring back into the logo

Brand identity

The logo (bringing it back to craft)

AI got me a starting point, but the output felt dated and not aligned with modern app branding.

So I returned to Figma and designed a cleaner, more contemporary whale mark — keeping the friendly “mascot” feel while making it app-icon ready.

Discovery (lightweight, intentional)

Rather than jumping into screens, I defined Kaiku through intent.

I created lightweight personas and user stories to anchor the MVP around emotional needs — especially for people who struggle with the blank page.

UX: Discovery to Wireframes to UI

Personas (AI-assisted, designer-led)

I started with a persona based on a real vibe: friends sharing old racing/karting stories around a campfire — which made me realise there must be endless “small but meaningful” stories everywhere.

I then described to ChatGPT where Kaiku could fit:

  • creative writing prompts

  • “reading instead of consuming”

  • parents recording memories for kids

  • capturing moments from a newborn’s early years

ChatGPT produced structured personas fast, filled gaps I’d missed, and suggested a few I hadn’t considered. It wasn’t one-click perfect — but the back-and-forth still saved a lot of time.

HMWs, user stories, and flows

I used the same approach: I fed AI my initial thoughts, let it expand, then I curated and adapted the output in Figma into:

  • a small set of How Might We statements

  • user stories for the MVP

  • core flows I could keep checking against during design

Style guide / design system

Using my palette + Figma plugins, I quickly built:

  • colour ramps

  • type scale

  • core components + variants

  • auto-layout driven templates
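
To make the scale concrete, here’s a rough TypeScript sketch of how the type scale and one colour ramp could be expressed as tokens. The ratio, step names, and hex values are illustrative placeholders, not the final Kaiku values.

```ts
// Illustrative design tokens: a modular type scale plus one colour ramp.
// Ratio, step names, and hex values are placeholders, not the real Kaiku palette.

const BASE_SIZE = 16;     // body text size in px
const SCALE_RATIO = 1.25; // "major third" modular scale

// Each step is BASE_SIZE multiplied (or divided) by the ratio, rounded to whole px.
const typeScale = {
  caption: Math.round(BASE_SIZE / SCALE_RATIO),      // 13
  body:    BASE_SIZE,                                // 16
  h3:      Math.round(BASE_SIZE * SCALE_RATIO),      // 20
  h2:      Math.round(BASE_SIZE * SCALE_RATIO ** 2), // 25
  h1:      Math.round(BASE_SIZE * SCALE_RATIO ** 3), // 31
};

// One hue in several tints/shades; dark mode mirrors the same ramp.
const coral = {
  100: "#FFE8E0",
  300: "#FFB89F",
  500: "#FF8A5C", // core accent
  700: "#C65A33",
  900: "#7A331A",
};

export const tokens = { typeScale, coral };
```

In Figma these live as colour and text styles; the code form is just a convenient way to sanity-check the maths.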


Wireframes to UI design

Making it real

Why Builder.io

I explored a few approaches (including Make + Firebase ideas), then landed on Builder.io because it let me translate Figma designs into a working product quickly.

I still had to adjust layouts at times — but overall the screens stayed close to the original design.

The “wow” moment

This was the most fascinating part: watching the idea become functional, with filters, swipe interactions, accounts, and saved stories, not just a prototype.

Stacking and shipping

Builder.io also helped bridge the scary bits (for me):

  • database + auth via Supabase

  • code + version control via GitHub

  • deployment via Fly.io

  • guidance written in genuinely beginner-friendly language (and it could interpret screenshots from other tools to unblock me)
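
For a flavour of the glue code involved, here’s a minimal Supabase auth sketch in TypeScript using the standard @supabase/supabase-js client. The project URL, anon key, and function names are placeholders; the real wiring was generated inside Builder.io.

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholder URL and anon key: the real values come from the Supabase project settings.
const supabase = createClient(
  "https://your-project.supabase.co",
  "public-anon-key"
);

// Create an account with email + password.
export async function signUp(email: string, password: string) {
  const { data, error } = await supabase.auth.signUp({ email, password });
  if (error) throw error;
  return data.user;
}

// Sign an existing user in and return their session.
export async function signIn(email: string, password: string) {
  const { data, error } = await supabase.auth.signInWithPassword({ email, password });
  if (error) throw error;
  return data.session;
}
```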

Learnings


1. The familiar padding problem

Even with AI and no-code, the classic dev/design mismatch still appears: spacing and layout polish took a lot of time — just like real product work.

2. Writing like a BA

To get the AI to build what I’d designed, I had to explain:

  • intended behaviour

  • edge cases

  • how changes affect other screens

  • what “good” looks like

This made me wonder if design roles will blend more with BA-style communication as AI build workflows mature.

3. Getting stuck in loops is real

Sometimes the AI forgets what you’ve already tried and sends you in circles.

My workaround: bring in a second AI, summarise the situation clearly, and paste the improved prompt back into Builder to break the loop.

4. Third-party + database setup is where AI helped most

This is where my project would usually stall. AI guidance made it possible to:

  • set up the database

  • wire up auth

  • deploy

  • understand what’s needed for an iPhone app release
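
As a last hedged sketch, this is roughly what reading and writing stories looks like with the same Supabase client. The stories table and its column names are assumptions for illustration, not Kaiku’s actual schema.

```ts
import { createClient } from "@supabase/supabase-js";

// Same placeholder credentials as the auth sketch.
const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

// Save a short story; "stories", "title", "body", and "is_public" are assumed names.
export async function publishStory(title: string, body: string, isPublic: boolean) {
  const { data, error } = await supabase
    .from("stories")
    .insert({ title, body, is_public: isPublic })
    .select()
    .single();
  if (error) throw error;
  return data;
}

// Fetch the latest public stories for the calm bedtime-reading feed.
export async function fetchPublicStories(limit = 20) {
  const { data, error } = await supabase
    .from("stories")
    .select("id, title, body, created_at")
    .eq("is_public", true)
    .order("created_at", { ascending: false })
    .limit(limit);
  if (error) throw error;
  return data;
}
```

Who can actually read or write which rows is governed by Supabase’s row-level security policies, which sit in the database rather than in this client code.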