Now, more than ever, understanding user needs is incredibly important. While building a new product, it is crucial not only to design a functioning MVP but also to create a product that meets user needs. This is easier said than done, for two reasons: (1) user needs are dynamic, and (2) users may not articulate their needs accurately. Hence, it is crucial that we identify the right testers for dedicated alpha and beta testing before going to market.
At Userled, we carried out alpha testing in December 2022 and gathered critical feedback about our UX. As a user-centric company, we made it a key priority to revamp the UX in the beta version, simplifying the user journey. So when we invited our star alpha testers back for the beta phase, we could focus on other critical KPIs.
How beta testing for B2B SaaS is different
B2B and B2C SaaS products are designed to serve different primary user needs. Users of B2C products have a wide range of primary needs, from entertainment to learning. Due to low multi-homing and switching costs, keeping B2C users delighted is key. Users of B2B SaaS products, on the other hand, are primarily seeking operational efficiency, though they increasingly emphasise delight, UX, and speed to value. They focus more on how the core functionality of a product helps them on a day-to-day basis. See our post on how Userled transforms growth teams for more insights.
Hence, we decided to test Userled with innovative, growing companies adopting PLG and focusing on efficiency at an early stage.
Identifying the key elements of a successful beta launch
Clearly defined KPIs
Unlike alpha testing, where we sought to understand whether the product was valuable at all, in the beta phase we had clearly defined KPIs to measure how the product created value. The number of beta testers who successfully created and managed an in-product journey was our north star metric. This metric ensured that we focused on generating value for our users and didn't get derailed by vanity metrics such as the raw number of users we onboarded.
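As a minimal sketch of how such a north star metric could be computed, assuming user actions are logged as simple (userId, action) records (the event names and schema below are hypothetical, not Userled's actual data model):

```javascript
// Hypothetical event log: one record per user action.
// Names like "journey_created" are illustrative, not Userled's real schema.
const events = [
  { userId: "u1", action: "journey_created" },
  { userId: "u1", action: "journey_managed" },
  { userId: "u2", action: "journey_created" }, // created but never managed
];

// North star: distinct testers who BOTH created and managed a journey.
function northStar(log) {
  const created = new Set(
    log.filter((e) => e.action === "journey_created").map((e) => e.userId)
  );
  const managed = new Set(
    log.filter((e) => e.action === "journey_managed").map((e) => e.userId)
  );
  return [...created].filter((id) => managed.has(id)).length;
}

console.log(northStar(events)); // → 1 (only u1 completed both steps)
```

Counting users who complete both steps, rather than all sign-ups, keeps the metric tied to realised value instead of top-of-funnel volume.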
Omni-channel outreach strategy to identify the right beta testers
Everyone is aware of the cold start problem and the importance of testing with innovators and early adopters. But how do we identify them and get their attention?
We adopted a four-fold strategy to solve this problem:
Tap into our personal networks for the first few users
Collaborate with communities containing PLG enthusiasts
Create meaningful and intriguing content for potential users
Reach out offering 1:1 time to those expressing an interest
Feedback capture and synthesis
Feedback comes in two types: (a) implicit feedback and (b) explicit feedback. Both are complementary and critical to understanding user behaviour. To gather implicit feedback, we set up tracking of core metrics on our platform and created reports (Google Analytics is handy for this). This helped us observe engagement trends and user behaviour across different pages. While the metrics answer the question "What are our users doing?", they don't fully explain "Why are our users doing it?" Hence, we need explicit feedback. For an early-stage product, 1:1 calls are the best way to gather explicit feedback. We aimed to speak with every closed-beta user on a series of 2-3 calls at different stages of the user lifecycle.
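As a sketch of what the implicit-feedback instrumentation might look like, here is a small tracking helper that buffers events and forwards them to Google Analytics when gtag.js is present. The event names and parameters are hypothetical examples, not Userled's actual analytics schema:

```javascript
// Sketch: instrumenting implicit feedback during a beta.
// Event names below are illustrative, not Userled's real schema.

// Build a timestamped event payload.
function buildEvent(name, params) {
  return { event: name, params: { ...params, ts: Date.now() } };
}

// Buffer events locally so the sketch runs outside a browser too;
// in production you would rely on GA's own reporting instead.
const sent = [];
function track(name, params) {
  const payload = buildEvent(name, params);
  sent.push(payload);
  if (typeof gtag === "function") {
    gtag("event", name, params); // real GA4 call when gtag.js is loaded
  }
  return payload;
}

// Example: record funnel steps for creating an in-product journey.
track("journey_created", { template: "onboarding" });
track("journey_published", { template: "onboarding" });

console.log(sent.map((e) => e.event)); // logs the funnel steps that fired
```

Instrumenting each funnel step this way is what lets the reports show where users drop off, which is exactly the "what are our users doing?" question the 1:1 calls then follow up on.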
Gathering feedback is only the beginning. Synthesising this feedback into actionable stories and then prioritising it in the product roadmap completes our feedback loop.
Feedback can lead to (a) identification of a problem or (b) ideation for a new feature or (c) both. In all cases, it is important that we use feedback to validate some of our initial hypotheses about user interaction. For example, we expected users to be able to easily publish journeys on Userled. But we had key hypotheses to explain why some who signed up might not publish a journey. By validating which of these hypotheses was true, we aimed to prioritise the relevant features.
Continuously hypothesise so that you don’t lose sight of what you’re building 🙂