
2019 - 2021

@ Koru Kids


Koru Kids is a London-based startup in the childcare industry. When I joined the company as a Lead Product Designer, it was at the concierge-MVP stage, providing an after-school nanny service by matching families and nannies manually, like an agency. My main focus was to build the foundation of a self-serve platform and reduce the matching cost before scaling the product to other UK regions, the US, and other verticals (Babies & Toddlers, Nanny Share, etc.).


TL;DR


During my two years leading design for the Matching Squad at Koru Kids, I contributed to the following:

 

  • Product Design and Scale: Led the design and launch of the new nanny search platform to the London market, managing design performance and iterative product releases. Designed features such as instant search, nanny profiles, filters, ranking, nanny availability, and the shortlist.

  • Business Performance: Reduced the matching cost 4x.

  • Design Performance: Doubled the nanny match rate. Improved the conversion from newly registered family to matched family by almost 100%.

  • A/B Testing: Pioneered experimentation, setting up back-end tests with feature flags at first and later with Google Optimize. Conducted 20+ A/B tests of UX, UI, and microcopy.

  • User research: Conducted dozens of user interviews and multiple surveys.

  • Product Analytics: Introduced and mastered multiple data-insight tools, such as Hotjar, Amplitude, Metabase, and Looker.

Matching Cost Decreased

CR% to Activated Users Increased (YoY)

1. Customer Journey Mapping


Our newly formed cross-functional Matchmaking Product Squad was responsible for the customer experience from user registration to the moment an introduction between a family and a nanny happens. So I started by studying the customer journey and the weak spots of the funnel while creating a CJM.


2. Driven by User Behaviour


When I joined Koru Kids, the company didn't have tools to measure and track user behaviour. We kicked off building the product analytics and experimentation capability almost from scratch.

 

We started using Hotjar for video recordings, Google Analytics (later Amplitude) for event tracking, Google Optimize for small front-end experiments, and Metabase (SQL) with Google Sheets (later Looker) for A/B tests.


3. MVP and Iterations


Our first self-serve platform iteration was just a simple text-based version of the nanny listing page, and we released it to only 50 users as a soft launch.

 

After every release, we learned by studying user behaviour data and talking to our customers.

 

Iteration after iteration, we improved the navigation, onboarding, nanny profile cards, filters, and tailored the signup process for different customer segments.


4. Mental Model Fit


While measuring key metrics and running experiments, we also conducted various studies to learn how our users understand our UI concepts and what mental models they hold. We wanted to design an intuitive flow and suggest the actions, CTAs, and pages a user would expect at each stage of the journey.


For example, after one survey, we learned that "Invite to chat" wasn't the best next step for all our customers after they landed on the search results page, and the concept of a "shortlist" emerged as a pattern.


We tested the new navigation in Maze alongside new labels, and "shortlist" was the best representation of what most users expected.


So we first introduced "Add to shortlist" as a fake-door test and validated the appetite. Then we renamed the main page to "Shortlist", built the functionality, and eventually swapped the main CTA from "Invite to chat" to "Shortlist". An A/B test then proved that, with a proper sequence of pages prompting users to discover nannies in the right order, we not only improved usability but also sped up decision-making, which improved conversion.

5. Dilemma of Choice


Another example of a UX challenge was balancing between a lack of nannies on the search results page and an overwhelming list of options that can cause analysis paralysis, blocking a user from converting to the next stage.


On the one hand, we knew that a nanny's travel time to a family's house directly affects the conversion to a successful match: a long journey usually leads to a negative nanny experience, declines, and break-ups. But the distribution of our nannies across London wasn't even. In some areas a family could see only 0-2 nannies, while in others there were more than 100.


We designed smarter implicit filtering, where the system applies a default travel-time limit depending on the number of nannies in the area. For example, if a user has more than 10 nannies nearby, we decrease the travel time and filter out nannies that are too far away; when there are not enough options, we increase the travel time and allow more nannies to appear. Users could adjust this filter themselves later, but the system sets it by default.
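The logic above can be sketched in a few lines of Python. This is a minimal illustration, not the production implementation; the thresholds (30/60 minutes and a target pool of 10) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Nanny:
    name: str
    travel_minutes: int  # estimated travel time to the family's home

# Hypothetical defaults -- illustrative only, not the real thresholds.
TIGHT_LIMIT = 30   # minutes, used when the area has plenty of nannies
LOOSE_LIMIT = 60   # minutes, used when options are scarce
TARGET_POOL = 10   # switch point between "plenty" and "scarce"

def default_travel_filter(nannies: list[Nanny]) -> tuple[int, list[Nanny]]:
    """Pick a default travel-time limit based on local supply,
    then return it together with the filtered search results."""
    within_tight = [n for n in nannies if n.travel_minutes <= TIGHT_LIMIT]
    if len(within_tight) > TARGET_POOL:
        # Dense area: tighten the limit and hide far-away nannies.
        return TIGHT_LIMIT, within_tight
    # Sparse area: loosen the limit so more nannies appear.
    return LOOSE_LIMIT, [n for n in nannies if n.travel_minutes <= LOOSE_LIMIT]
```

The point of the design is that the user never sees an empty or overwhelming default page; the explicit travel-time control stays available for those who want it.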

 

By doing that, we got much better search results, with 7-9 options to choose from. We had learned from earlier user interviews that this was the optimal number.


6. Response Rate


One more example of a UX challenge was users' unresponsiveness to a suggested match, which caused the most negative word of mouth (WOM).


We had a hypothesis that if we made a match request time-sensitive, we would boost the response rate. So we intentionally started expiring matches after 3 days without a response and showing the user a message that the nanny could simply get booked by another family (which was true).
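The expiry rule boils down to a simple state check, sketched below. The state names are hypothetical, not the actual data model; only the 3-day window comes from the case study.

```python
from datetime import datetime, timedelta

EXPIRY_WINDOW = timedelta(days=3)  # expire a match after 3 days of silence

def match_status(sent_at: datetime, responded: bool, now: datetime) -> str:
    """Return the state of a match request (hypothetical state names)."""
    if responded:
        return "RESPONDED"
    if now - sent_at >= EXPIRY_WINDOW:
        # Triggers the "this nanny can get booked by another family" message.
        return "EXPIRED"
    return "PENDING"
```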


We also ran some microcopy A/B tests to better engage users with the UI. For example, some users mentioned that "Decline" sounded rude, so we tested a softer "No thanks" button instead. We knew users didn't have much motivation to decline and sometimes preferred to simply ignore a request. However, the feedback is important for the waiting side: nobody wants to be "ghosted", and people need to know when to resume their search and look for other matches.


Another problem was that if a family invites too many nannies and then starts interviewing them all, Nanny "A" is likely to be left waiting for the family's interview feedback while the family holds off because they are still interviewing Nanny "B" and Nanny "C". And if Nanny "C" is also interviewing with another family and waiting for their decision, it creates webs of dependencies and a negative experience for everyone.


Eventually, we limited the number of invitations to six. It wasn't obvious from the beginning that some user restrictions can improve the overall UX.


In a few months of experimentation, we achieved the following results:

 

  1. Converted hundreds of NO RESPONSE matches to RE-OPENED and eventually ACCEPTED, meaning we got more successful matches.

  2. Converted thousands of NO RESPONSE matches into DECLINED, meaning that instead of waiting for a response, a nanny or a family gets a notification and resumes their search faster.

  3. Families with the "No thanks" button responded to 6% more match requests than families with the "Decline" button.

  4. Conversion to response increased by 12% in total.
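At the time we analysed results like these in Google Sheets and SQL. As a rough illustration of how such an A/B comparison can be sanity-checked for statistical significance, here is a minimal two-proportion z-test; the conversion counts in the usage example are invented, not real Koru Kids data.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test.

    conv_a / n_a: conversions and sample size in variant A,
    conv_b / n_b: same for variant B. Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-distribution p-value via the error function (stdlib only).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 560/1000 responses with "No thanks" vs 500/1000 with "Decline".
z, p = two_proportion_z(560, 1000, 500, 1000)
```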

7. Nanny Profile


After building the search results page, our next biggest leverage point in the matching journey was the nanny profile itself. We followed the same iterative approach and designed the profile UI by adding elements step by step, learning from each release. Our first release was just a simple text-based page built from the nanny's CV in PDF, which we used to collect on Google Drive during our nanny screening and vetting process, without even any nanny photos.


Over time, we launched various A/B tests, conducted many user interviews, watched hundreds of Hotjar video recordings, ran lots of SQL queries, and created plenty of analytics events to iterate and refine the page. As a result, the nanny profile's sections were ordered to follow how a user would like to discover them.
