HYPOTHESIS & EXPERIMENT (PART 3): Experiment types, MVP and continuous discovery

Misael Neto
13 min read · Apr 19, 2021

In my last post, I wrote about assumptions and hypotheses and touched a little on experiments. I also discussed how you, your team and stakeholders should always be mindful of the differences between problem space and solution space (part 1).

So by now you’re convinced that before investing time and resources in development you should gather evidence about the product you and your company wish to build. If not, take a look at this article: https://www.cbinsights.com/research/startup-failure-reasons-top/. The original study referenced by the article has been cited by many as evidence of startups’ lack of focus on product-market fit, and shows that over 40% of startups fail because people build products nobody needs.

There’s a great quote I read in the book Lean Analytics: “Don’t sell what you can make, make what you can sell”. But how can we figure out if people will want to use or buy what you wish to build? Ask them, you might say! Well, it turns out it’s more complex than that. When you straight up ask potential customers what they want, they usually tell you what you want to hear, not what you and your team need to hear. There is a great example from Sony; this is the short version:

Sony’s conducting a focus group for a yellow ‘sport’ Walkman. After assembling their ‘man/woman on the street’ contingent, they ask them ‘Hey, how do you like this yellow Walkman?’ The reception’s great. ‘I love that yellow Walkman, it’s so sporty!’ ‘Man, would I rather have a sweet yellow Walkman instead of a boring old black one.’ While everyone’s clinking glasses, someone has the insight to offer the participants a Walkman on their way out. They can choose either the traditional black edition or the sporty new yellow edition: there are two piles of Walkmans on two tables on the way out. Everyone takes a black Walkman.

Check out the full story on Alex Cowan’s site.

So, the key takeaway is that people don’t always know or tell you what they want, and your job, as a product person, is to figure out how to conduct experiments so you can find out what to build. Also, what people want, what they desire, is only one of three dimensions of your product discovery efforts. The following image illustrates all of them really well and, in my humble opinion, is the most basic and solid framework for a Product Manager’s playbook.

IDEO’s three lenses for customer-centric discovery

But the question remains: how can we build the right product (one that is not only what customers want, but is also viable and feasible to build)?

Types of experiments

First of all, let’s define experiment. Here are some definitions I found just by googling:

  1. a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact.
  2. a course of action tentatively adopted without being sure of the eventual outcome.
  3. try out new concepts or ways of doing things.

Ok… so the concept may vary by definition, but to me, and for the sake of validating a product, the first definition seems more fitting. I don’t know about the scientific part of the definition, but you should know what you and your team want to validate/invalidate/demonstrate/test before going into an experiment. At the very least you need an assumption, or else:

  1. you leave room for more errors than you can afford in your analysis.
  2. your experiment may be designed wrong, or take too many resources to pull off.

For content on defining assumptions and hypotheses, take a look at my last post, or follow this great post on the subject: https://medium.com/@Kromatic/assumption-vs-hypothesis-to-the-death-df1ebc63e749. For the sake of examples, here are two from the previous link:

taken from @Kromatic’s post on assumption vs hypothesis
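
One way to keep your hypotheses honest is to force each one into a structured record with a belief, a metric and an explicit success threshold, so validation becomes a yes/no check instead of a feeling. Here is a minimal sketch of that idea; the class, field names and numbers are all made up for illustration, not taken from @Kromatic’s post:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable statement: a belief, a metric, and a bar that validates it."""
    belief: str               # what we think is true
    metric: str               # what we will measure
    success_threshold: float  # the minimum observed value that validates the belief

    def is_validated(self, observed: float) -> bool:
        # Validation is a simple comparison against the pre-committed threshold
        return observed >= self.success_threshold

h = Hypothesis(
    belief="Early customers will pay for manually curated grocery lists",
    metric="share of interviewed shoppers who pre-pay $10",
    success_threshold=0.3,  # hypothetical bar: 30% conversion
)
print(h.is_validated(0.45))  # an observed 45% conversion clears the bar
```

The point of writing the threshold down before running the experiment is that it prevents you from moving the goalposts after seeing the data.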

So what are some examples of experiments? Well, the lean startup movement has made a few concepts well known. One of them is the MVP. A minimum viable product (MVP) is a version of a product with just enough features to be usable by early customers, who can then provide feedback for future product development (taken from Wikipedia). This definition comes from Eric Ries and his book, The Lean Startup. It does not, however, mention anything about experiments. But Y Combinator stated the following about the MVP: “The MVP is analogous to experimentation in the scientific method applied in the context of validating business hypotheses. It is utilized so that prospective entrepreneurs would know whether a given business idea would actually be viable and profitable by testing the assumptions behind a product or business idea.” So, is the MVP an experiment?

Adapted from Dan Olsen’s version of the MVP from the book The Lean Product Playbook

By the way, there have been many definitions. Some have even extended the concept: Minimum Sellable Product, Minimum Marketable Product. Recently I have heard about Minimum Viable Knowledge. To me, these ideas overcomplicate a concept that’s already complicated and subjective enough. But instead of defining an MVP, maybe every one of us should try to define what it is not. Here are some of my attempts. An MVP is not:

  1. Version 1 of a product.
  2. An excuse to deliver a bad product.
  3. A permanent state of a product.
  4. Not an experiment. =] cheating with a double negative.

So yes, an MVP is an experiment. But what about the idea of continuously improving a product and forever being in beta mode? Well, beta is a version of a product. Continuously improving and changing your product is a requirement in today’s market, and it has been for some time now. But this doesn’t mean you should still call your product an MVP if it took a lot of resources to build (later in this post I discuss products that are built through iterations of MVPs and carry lots of tech debt). If your product needs to improve, you should focus on what needs to be done and own your challenges. As Master Yoda once said: “Named must your fear be, before banish it you can”.

So what are some examples of MVPs? Here are some types of MVP I’ve been studying and trying to implement over the past months:

Concierge MVP: involves manually helping your users accomplish their goals as a means of validating whether or not they have a need for what you’re offering. The way you implement this type of MVP is by simplifying the product and replacing automated components that would take long to implement with manual, non-automated labor (almost always human labor). This type of MVP not only saves time by not focusing on product building, it also allows you and your team to come into contact with your customers and get a close enough feeling of whether their needs are being met. Here are some of the benefits of the Concierge MVP, from this great post I found online:

  1. You get real contact with customers. Manuel learned a lot about how his grocery list service could incorporate allergies, diet preferences and health targets from creating lists and sharing them in person with customers, then seeing their reaction.
  2. You don’t need to build a site or product. This is fully transparent to the customer. You could say something like “this service will be $10 per month once I release the product. In the meantime, how would you like your own personal diet expert to create the ideal shopping list for you for $10?” Customers get a great deal and understand why. Manuel gets to learn very early in the process — before even building a product!
  3. Interacting with customers in the place where they use the product is called ethnographic interaction. The Concierge MVP yields this opportunity. Watching shoppers as they shop gives Manuel insight that he would not get just by talking with people about shopping.

There are also some disadvantages:

  1. A concierge service is a personal service delivered by an individual. The results of a concierge MVP experiment are not necessarily a completely accurate gauge of the underlying product idea because they can be biased by the likability of the concierge.
  2. The building of the product still needs to happen at some point. It’s great to do the learning before the product-building. But another set of lean cycles will be required when the product is built.

Wizard of Oz MVP: this is one of the fastest and most effective ways of testing hypotheses about whether proposed solutions will create value for customers. Because the testing is performed manually, entrepreneurs can quickly modify their MVPs and test a large number of hypotheses, finding the most effective solutions.

There are several advantages to the Wizard of Oz MVP approach. Here are some, taken from this great post I found online:

  1. In the early days, it is much cheaper to mimic a functioning system (by paying humans to do the work behind the scenes).
  2. It is possible to see much more quickly how a mature system would behave. The full Aardvark system and algorithms may take years to build, but the Wizard of Oz MVP gets feedback in weeks.
  3. The result is actual data generated from the user interacting with a system that they believe works perfectly. This data is far less prone to bias than the subjective thoughts of the Concierge in the Concierge MVP.

Single-Feature MVP: despite the name, a single-feature MVP doesn’t necessarily mean you should develop one single feature. An MVP should validate a core value of your business: desirability, viability, feasibility or other values not expressed by these fundamental principles. The idea of a single feature is to test the bare minimum while also adhering to another core principle of the MVP: minimizing waste. A single-feature MVP is not a proof of concept (PoC) or a prototype, at least not in my playbook. They diverge in target audience and main purpose. PoCs are a fantastic experiment, but they are not MVPs because they focus on validating feasibility for developers. Prototypes, on the other hand, should mostly be used to validate desirability (mainly UI) ideas with stakeholders, developers and a limited user group. A single-feature MVP should consider all three lenses of customer-centric discovery, but must always be focused on the end user, as should all MVPs. The main challenge with this type of MVP is not wasting resources: single-feature MVPs are presumed, by design, to be reimplemented and are expected to carry a lot of technical debt.

So what if you build an entire product on top of many iterations and many single-feature MVPs? Well, the principle should remain: if the entire product took a lot of time and resources to build, you probably shouldn’t call it an MVP. Refactoring your code and reimplementing what has been validated takes time and resources, so be mindful of the fact that single-feature MVPs come with tradeoffs.

Other types of experiments

In my last post I referenced a tweet from David J Bland that maps a list of experiments to one or more business values from IDEO’s three lenses framework. Most of them are designed to test an aspect of your product and are a great reference to keep in mind while practicing continuous discovery.

Here’s the slide he discusses in his post: https://twitter.com/davidjbland/status/1006963566027476992/photo/1

I’ve compiled articles on some of these types of experiments and came up with the list below. Besides all of these different types of experiments, there’s one I found to be helpful once you have defined your problems/opportunities and are ready to present your initial findings to stakeholders, before diving into solution discovery: Working Backwards, from the great folks at Amazon.

Before moving on, it’s important to say that most of these experiments work best while you’re still working on discovery. There are other types of experiments that you can use to test your hypotheses once your MVP or product is up and running. This topic will be discussed in later posts.

Continuous Discovery

Before I discuss continuous discovery, I’d like to point out that the terms opportunity and problem are used interchangeably throughout this post.

Teresa Torres has been evangelizing the concept of continuous discovery for quite some time now. She developed a guide for continuous discovery and a tool for doing so called the opportunity solution tree, which I briefly discussed in my last post. Her site, Product Talk, features a bunch of content on product discovery, and she even offers courses on the subject.

This is what ties parts 1 and 2 of this series of posts together. The opportunity solution tree is a way to map opportunities (or problems) and solutions to your experiments in a way that lets you keep track of and communicate your experiments to your team and stakeholders.

Teresa Torres’s continuous discovery approach and opportunity solution tree, taken from: https://www.producttalk.org/continuous-discovery/

The root of the tree starts with an outcome, which for those who use OKRs can be a Key Result. If you use the North Star Metric, that’s your outcome on the solution tree. If you’re interested in learning more about OKRs, check out Tim Herbig’s view on the subject: https://herbig.co/product-goals/. I will also discuss OKRs in later posts. If you don’t use any of these systems, you should still align/negotiate with your product leader or stakeholders on the outcome or outcomes you are pursuing that quarter.

Full disclaimer: from here on out, most of the processes I suggest are experimental, as I have not yet come to a conclusion about the effectiveness of the process below if implemented as presented. However, I can say that there is little to no process innovation here; it is mostly a mashup of frameworks and techniques from Teresa Torres, Tim Herbig, Cali (Renato Caliari) and Dan Olsen.

At the beginning of the quarter, you and your team should conduct a series of interviews with users and stakeholders to figure out the best opportunities to match your outcome. Actually, you should begin deriving your opportunities and suggesting outcomes to your stakeholders before the quarter begins.

Product Team’s Quarterly roadmap

The interviews, meetings, benchmarks and market research will generate insights into possible opportunities, which will help you and your stakeholders decide on the best outcomes for your product. If you use OKRs, this is a good way to propose OKRs instead of them being a top-down imposition.

Once you’ve settled on your outcomes, you can start building an opportunity solution tree. You can also try impact mapping; impact maps are very similar to opportunity solution trees, and Tim Herbig has a course on the subject: https://herbig.co/impact-mapping-training/. You should have a tree for each outcome. Miro has a template for opportunity solution trees: https://miro.com/templates/opportunity-solution-tree/
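
At its core, the opportunity solution tree is just that: a tree, with one outcome at the root, opportunities under it, solutions under each opportunity, and experiments as leaves. A minimal sketch of the structure might look like the following; the node labels and helper functions are my own illustration, not part of Torres’s material:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str  # one of: "outcome", "opportunity", "solution", "experiment"
    children: list["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def walk(node: Node):
    """Yield a node and all of its descendants, depth-first."""
    yield node
    for child in node.children:
        yield from walk(child)

def experiments_for(node: Node) -> list[str]:
    """Collect every experiment mapped anywhere under a node of the tree."""
    return [n.label for n in walk(node) if n.kind == "experiment"]

# One tree per outcome, as suggested above (contents are hypothetical)
outcome = Node("Increase weekly active shoppers", "outcome")
opp = outcome.add(Node("Shoppers forget items at the store", "opportunity"))
sol = opp.add(Node("Auto-generated shopping lists", "solution"))
sol.add(Node("Concierge MVP: hand-made lists for 10 users", "experiment"))

print(experiments_for(outcome))
```

The value of keeping this structure explicit is that any experiment can be traced back up through a solution and an opportunity to the outcome it serves, which is exactly what makes the tree useful for communicating with stakeholders.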

As your understanding of opportunities and solutions evolves, you should start mapping your experiments in order to validate your assumptions and hypotheses about opportunities and solutions. In my experience it is very hard to look for opportunities first and not think about solutions in the process. Most times, an idea from a user or stakeholder is a solution, not an opportunity or problem. If you’re using an empathy map or a value proposition canvas, a solution might emerge from the conversation. Write it down if you feel you have to, and then look for the opportunity at the appropriate time.

Sometimes your stakeholders might come up with a solution to a problem you believe shouldn’t be a priority. Try the five whys approach to discover the underlying opportunity, and encourage critical thinking to find better solutions to the real problem behind the suggested solution (part 1).

Teresa Torres encourages us to validate opportunities by having the product team begin their research based on the needs of the users, so that it can identify opportunities and design the solutions that will be implemented in the delivery track by the engineering team. To achieve the outcome, we must eliminate value risk (does the solution alleviate user pain and/or create gains?), business risk (is the solution economically and legally viable?), usability risk (is the solution attractive and desirable?) and technical risk (is the solution feasible?). Each opportunity or problem should have multiple solutions and experiments. The reason for multiple solutions is to avoid fixating on a single solution and overlooking alternative routes to product success.
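
One lightweight way to operationalize the four risks is to keep a per-solution checklist of which dimensions have already been addressed by a validating experiment, and which still need one. This is a sketch of that bookkeeping under my own assumptions, not a framework from Torres; the solution and the validated set below are hypothetical:

```python
# The four risk dimensions, in the order discussed above
RISKS = ("value", "business", "usability", "technical")

def open_risks(solution: dict) -> list[str]:
    """Return the risk dimensions not yet covered by a validating experiment."""
    validated = solution.get("validated_risks", set())
    return [r for r in RISKS if r not in validated]

solution = {
    "name": "Auto-generated shopping lists",
    # e.g. value validated via a Concierge MVP, usability via prototype tests
    "validated_risks": {"value", "usability"},
}

print(open_risks(solution))  # the dimensions still needing an experiment
```

A checklist like this makes it obvious when a team has run three desirability experiments but none touching viability or feasibility, which is a common blind spot.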

One thing to note about multiple solutions and experiments is that you have to set clear criteria while conducting experiments. Your experiment’s data and inputs should be reliable, and your execution should be adequate for the solution you are trying to validate, or else you’re just wasting time. In that case, it is better to trust a stakeholder’s gut about a problem; it should be more reliable. There’s bound to be some kind of study about that =], though I haven’t found it. Also, about experiments: Cali (Renato Caliari) said on a podcast from Product Guru’s that product teams should really focus on validating experiments, and that he doesn’t understand the concept of running lots of experiments every week. In fact, he says that teams that run lots of experiments every week are probably running bad experiments. He is also the guy that coined the term triple track agile, at least I think so. Take a look at his article on the subject here.

Here is my take on triple track agile:

Adapted from Cali (Renato Caliari)’s triple track agile

As I said, this entire section on continuous discovery is still subject to experimentation on my part, especially the concept of the quarterly team roadmap represented in the image above. This version of triple track agile is also a work in progress. I’ve been trying both this semester, and so far so good.

In my next post, I’ll try to demonstrate some of the frameworks you and your team can use to track your progress, prioritize your experiments and communicate the status of your discovery. I will also discuss funnel frameworks such as Pirate Metrics, AIDAOR and 0d-30d-90d.

Well, thanks for getting this far.


Misael Neto

Software Developer, former entrepreneur, product manager