
Koru Kids is a London-based startup in the childcare industry. When I joined the company as a lead product designer, it was at the concierge MVP stage, providing an after-school nanny service through manual, agency-style matching. My main focus was to build the foundation of a self-serve platform and reduce the matchmaking cost before scaling the product to other regions of the UK, to the US, and to other verticals (Baby & Toddlers, Nanny Share, etc.).

[Image: instant nanny search]

Our newly formed cross-functional Matchmaking Product Squad was responsible for the customer experience from user registration to the moment an introduction between a family and a nanny happens. So I started by studying the customer journey and the weak spots of the funnel.

[Image: pre-match nanny journey]

Below you can explore this case study broken down into the following sections:

Table of Contents

  1. Driven by User Behaviour

  2. MVP and Iterations

  3. Mental Models Fit

  4. Dilemma of Choice

  5. Response Rate

  6. Nanny Profile

  7. Results

  8. More Case Studies in Detail

1. Driven by User Behaviour

When I joined Koru Kids, the company had no proper measurement or tracking of user behavior, so we kicked off building a product experimentation environment almost from scratch.

We started using Hotjar for video recordings, Google Analytics (later Amplitude) for event tracking, Google Optimize for small front-end experiments, and Metabase (SQL) with Google Sheets (later Looker) for A/B tests.
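To give a sense of the setup, here's a minimal sketch of how one tracking call can fan out to all of these destinations at once; the `AnalyticsClient` interface, `EventBus`, and event names are hypothetical, not the actual Koru Kids schema or the vendors' SDKs.

```typescript
// Hypothetical shape of a product event's properties.
type EventProps = Record<string, string | number | boolean>;

// Anything that can receive a tracking call (GA adapter, Amplitude adapter, etc.).
interface AnalyticsClient {
  track(event: string, props?: EventProps): void;
}

// Fans a single product event out to every destination,
// so experiments read from one consistent event stream.
class EventBus implements AnalyticsClient {
  constructor(private readonly destinations: AnalyticsClient[]) {}

  track(event: string, props?: EventProps): void {
    this.destinations.forEach((d) => d.track(event, props));
  }
}

// Usage:
// const analytics = new EventBus([googleAnalyticsAdapter, amplitudeAdapter]);
// analytics.track('search_results_viewed', { resultsCount: 8 });
```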

[Image: analytics stack]

2. MVP and Iterations

Our first self-serve platform iteration was just a simple text-based version of a nanny listing page, and we released it to only 50 users as a soft launch.

After every release, we learned by studying quantitative data and talking to our customers.

Iteration after iteration, we improved the navigation, onboarding, nanny profile cards, and filters, and tailored the signup process for different customer segments.

[Image: platform iterations]

3. Mental Models Fit 

While constantly comparing conversion to our key metrics and running experiments, we were also trying to understand how to use concepts our users could easily grasp: how to build a natural flow and propose the actions, CTAs, and pages a user would expect at each stage of the journey.

For example, after one survey we learned that "Invite to chat" isn't the best next step for all our customers after they land on the search results page, and the concept of a "shortlist" emerged as a pattern.

[Image: shortlist concept]

We tested the new navigation in Maze alongside new labels, and "shortlist" was the best representation of what most users expected.

So we first introduced "Add to shortlist" as a fake-door test to validate the appetite. Then we renamed the main page to "Shortlist", built the functionality, and eventually swapped the main CTA from "Invite to chat" to "Shortlist". An A/B test then proved that with a proper sequence of pages, prompting a user to discover nannies in the right order, we not only improved usability but also boosted decision-making speed, improving conversion.
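To illustrate how lightweight a fake-door test can be, here's a rough sketch: the button only records intent and sets expectations. `track` and `showToast` are hypothetical stand-ins for our analytics call and a UI notification.

```typescript
// Hypothetical stubs standing in for the analytics bus and a toast component.
const track = (event: string, props?: Record<string, string>): void =>
  console.log('tracked:', event, props);
const showToast = (message: string): void => console.log(message);

// The "fake door": the shortlist feature doesn't exist yet,
// so a click only records demand and tells the user what's coming.
function onAddToShortlistClick(nannyId: string): void {
  track('shortlist_fake_door_clicked', { nannyId }); // record the intent only
  showToast('Shortlists are coming soon!'); // nothing behind the door yet
}
```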

4. Dilemma of Choice

We often focused on UX challenges rather than UI and visuals. One example is balancing between a lack of nannies on the search results page and an overwhelming list of options that causes analysis paralysis and blocks a user from converting to the next stage.

[Image: empty search results]

On the one hand, we knew that a nanny's travel time to a family's house directly affects the conversion to a successful match: a long journey usually leads to a negative nanny experience, declines, and breakups. On the other hand, the distribution of our nannies across London wasn't even. In some areas a family could have only 0-2 nannies, in others more than 100.

We ended up with implicit filtering, where the system applies a default travel time depending on the number of nannies in the area. For example, if a user has more than 10 nannies nearby, we decrease the travel time and filter out nannies that are too far away; when there are not enough options, we increase the travel time and allow more nannies to appear. Users could adjust this filter themselves later, but the system set it by default as long as the filter was untouched.

By doing that, we got much healthier search results, with 7-9 options to choose between. This was a golden number, proven the best in terms of conversion and UX (many families mentioned this number as perfect during user interviews).
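A minimal sketch of that implicit filter is below; the exact thresholds and minutes are hypothetical, while in reality the defaults were tuned toward that 7-9 results sweet spot.

```typescript
interface Nanny {
  id: string;
  travelMinutes: number; // estimated journey to the family's house
}

// Hypothetical defaults: the sparser the local supply, the wider the net.
function defaultTravelTime(nanniesInArea: number): number {
  if (nanniesInArea > 10) return 30; // dense area: keep commutes short
  if (nanniesInArea > 2) return 45; // moderate supply: relax a little
  return 60; // sparse area: allow longer journeys
}

function applyTravelTimeFilter(nannies: Nanny[], userSetLimit?: number): Nanny[] {
  // A limit the user set explicitly always wins; otherwise the system
  // applies a default based on local supply.
  const limit = userSetLimit ?? defaultTravelTime(nannies.length);
  return nannies.filter((n) => n.travelMinutes <= limit);
}
```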

[Image: travel time filter]

5. Response Rate

Another example of a big UX challenge (one that caused the most negative word of mouth) was users' unresponsiveness to a suggested match.

We had a hypothesis that introducing time-sensitivity to a match request would boost the response rate. So we intentionally started expiring matches after 3 days of no response and showing the user a message that the nanny could simply get booked by another family (which was true).
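Here's a sketch of that expiry rule, assuming illustrative field names rather than the production schema.

```typescript
const EXPIRY_DAYS = 3;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Hypothetical shape of a match request between a family and a nanny.
interface MatchRequest {
  sentAt: Date;
  respondedAt?: Date;
}

function isExpired(request: MatchRequest, now: Date = new Date()): boolean {
  if (request.respondedAt) return false; // answered requests never expire
  return now.getTime() - request.sentAt.getTime() > EXPIRY_DAYS * MS_PER_DAY;
}
// Expired matches are surfaced with a note that the nanny may already be booked.
```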

[Image: match expiry message]

We also ran some microcopy A/B tests to better engage users with the UI. For example, some users mentioned that “Decline” sounds rude, so we tested a softer “No thanks” button instead. We knew that users don't have much motivation to decline and sometimes prefer to just ignore a request. However, that feedback is important to the waiting side: they don't want to be "ghosted" and need to know when to continue the search and look for other matches.
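For illustration, a microcopy test like this can be assigned deterministically so a user always sees the same label across sessions; the hash function and the 50/50 split below are hypothetical.

```typescript
// Map a user id to a stable number in [0, 1) via a simple string hash.
function hashToUnit(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return (h % 1000) / 1000;
}

// Deterministic 50/50 bucket: the same user always gets the same copy.
function declineButtonLabel(userId: string): string {
  return hashToUnit(userId) < 0.5 ? 'Decline' : 'No thanks';
}
```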

[Images: “Decline” button replaced with “No thanks”]

Another problem we discovered: if one family invites too many nannies and then starts interviewing them all, Nanny "A" will likely be waiting for the family's interview feedback while the family holds off because they are interviewing Nanny "B" and Nanny "C". And if Nanny "C" has an interview with another family and waits for their decision, it creates webs of dependencies and causes a negative experience for everyone.

Eventually, we limited the number of invitations to 6. It wasn't obvious from the beginning that some user restrictions can actually improve the overall UX.
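A sketch of what such a cap looks like in code; the limit of 6 comes from the experiment above, while the data shapes and copy are illustrative.

```typescript
const MAX_ACTIVE_INVITES = 6;

// Hypothetical result type: either the invite may proceed, or we explain why not.
type InviteCheck = { ok: true } | { ok: false; reason: string };

function canSendInvite(activeInvites: number): InviteCheck {
  if (activeInvites >= MAX_ACTIVE_INVITES) {
    return {
      ok: false,
      reason:
        `You already have ${activeInvites} open invitations. ` +
        'Respond to your pending interviews before inviting more nannies.',
    };
  }
  return { ok: true };
}
```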

[Images: search modal before and after the invitation limit]

In a few months of experimentation, we achieved the following results:

  1. Converted hundreds of NO RESPONSE matches to RE-OPENED and eventually ACCEPTED, meaning we got more successful matches.

  2. Converted thousands of NO RESPONSE matches into DECLINED, meaning that instead of waiting for a response, a nanny or a family gets a notification and resumes their search faster.

  3. Families with the “No thanks” button responded to 6% more match requests than families with the “Decline” button.

  4. Conversion to response increased by 12% in total.

6. Nanny Profile

After building the search results page, our next biggest leverage on the matching journey was the nanny profile itself. I really enjoyed the design process of the nanny profile page since we followed the same iterative approach. Our first release was just a simple text-based page built from the nanny's PDF CV, which we used to collect on Google Drive during our nanny screening and vetting process. It didn't even have nanny photos.

Over time, we launched various A/B tests, conducted many user interviews, watched hundreds of Hotjar video recordings, ran thousands of SQL queries, created plenty of user events in analytics, and gradually built on top of the purely text-based version. Currently, the nanny profile page contains sections with rich content, ranked in the priority order in which a user would like to discover them.
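As a rough sketch of that ranking idea: each section carries a priority, and the page renders only populated sections in that order. The section names and priorities below are illustrative, not the real configuration; in practice the order came out of A/B tests, interviews, and recordings.

```typescript
interface ProfileSection {
  id: string;
  priority: number; // lower = shown earlier on the page
  hasContent: boolean;
}

// Hypothetical section configuration for a nanny profile.
const sections: ProfileSection[] = [
  { id: 'photo_and_intro', priority: 1, hasContent: true },
  { id: 'experience', priority: 2, hasContent: true },
  { id: 'availability', priority: 3, hasContent: true },
  { id: 'references', priority: 4, hasContent: false },
];

// Render only populated sections, highest priority first.
const orderedSections = sections
  .filter((s) => s.hasContent)
  .sort((a, b) => a.priority - b.priority);
```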

[Image: nanny profile page]

7. Results

For the company

To wrap up my 2 years of work at Koru Kids, I can say that we managed to build a stable (though still far from perfect) platform that now performs better than the manual matchmaking process we had before. I am proud of my contribution to the following:

  1. Significantly reduced the matching cost (up to 4x) by replacing loads of routine manual work with the system.

  2. Built the instant nanny search (self-serve) platform, allowing Koru Kids families to find a nanny by themselves.

  3. Improved the conversion from a newly registered family to a matched family by almost 100%.

  4. Set up a product feature performance measurement environment and planted the seeds of a product experimentation culture.

Matching Cost Decreased

CR% to Activated Users Increased (YoY)


For me

While the company was growing, I gained more experience and skills:

  1. Unlocked a new level in SQL, and in Metabase in particular.

  2. Discovered new tools for qualitative research, like Lookback and Maze, and for quantitative research, like Looker and Amplitude.

  3. Improved my product management skills: prioritization, stakeholder management, feature delivery facilitation, etc.

  4. Gained more practical experience in A/B testing and working with numbers.
