The power of doing as thinking 🤔

The more I work with product teams, the more I fall in love with the continuous discovery process. I’ve found the best product teams are those that embrace learning which experiences will materially change their path forward and are not afraid to fail. This means they need a culture of learning, where they are not judged on successes or failures.

It’s more valuable to be wildly unsuccessful but learn several key things from that experience than to be successful and not know why. Reward teams for their learning, whether they succeeded or not, because they can now make decisions based on evidence, not opinions.

Whoever fails the most wins. - Seth Godin

If you fail too big you don’t get to learn any more, so it’s important to fail on a small scale and often. Prototypes give you a space to test out ideas for a while, long enough to get good at it by failing, without annoying your audience.

Doing is the best kind of thinking. - Tom Chi

To learn if something will be successful, I encourage you to think about how you can use prototyping in more places. A prototype helps to visualise an idea and to answer questions of desirability, feasibility, viability or usability. The power of prototyping is immense; from improving your thinking to communicating your ideas and helping you make better decisions.

The purpose of a product team should be to understand human behaviour and to see if what you build can modify that behaviour over time; to understand if an idea has the potential to change behaviour before you build it. Turn these un-validated assumptions into hypotheses that you can test with users, customers and key stakeholders.

The key to prototyping is being able to do it quickly. Focus on answering a specific question, evaluating the most promising ideas with a design that is adaptable to feedback and the understanding that most of it will be thrown away. This allows you to decide what to do fast; only if it is desirable, feasible and viable should you then design for scale.

When you have a good cadence of customer feedback and you’ve automated the recruitment process, you’ll want to have things ready to test to gather those key learnings. However, fight the urge to polish your prototypes; instead, test whatever the team are basing their decisions on that week in order to capture evidence and key learnings.

Testing your ideas in a way that generates reliable and actionable feedback will allow your team to iterate and turn assumptions into great solutions. In addition to optimising the experience, look for delightful moments where customers have a behavioural response to your product. Through the rigour of each experiment you run, you’ll learn something new about your product and your customers.

You probably know all of this already, but have you implemented design thinking into your product discovery? I would love for this to be the norm and to help those without a solid discovery strategy.

The deliberate practice of product discovery

How many days have passed since you last spoke to a customer?

Only in the movie Field of Dreams does “If you build it, he will come” hold true. People don’t just show up. All profitable, effective products begin by identifying and talking to their customers first. You’ve got to know who you are serving and what they’re doing before you start.

Just like everything in product, research should have its own success criteria and metrics to evaluate your research objectives. What cadence of research have you established for success?

If too much time passes between your customer interviews, then the decisions you make on a daily basis are based purely on assumptions. Learning and improving your product requires the deliberate practice of product discovery to see how people actually behave with your product offering.

When a product team develops a weekly habit of customer interviews, they don’t just get the benefit of interviewing more often; they also start visualising opportunities. The team improves their critical thinking skills and evaluates their solutions by testing their assumptions more often. They can minimise the risk of failing before they even start building anything. This results in connecting what is learnt from research with unbiased product decisions.

Deliberate interview questions

Asking the right interview questions will help uncover how your customers behave. To uncover the gap between what they say they do versus what they actually do requires us to ask the right types of questions. 

You can’t simply ask your customers about their behaviour and expect to get an accurate answer. Most will give you what sounds like a reasonable answer. You won’t know if they are telling you about their ideal behaviour or their actual behaviour. Nor will you know if they are simply telling you a coherent story that sounds true, but isn’t true in practice. In Thinking, Fast and Slow Daniel Kahneman argues confidence isn’t a good indicator of truth or reality. He argues, “Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it.” Not necessarily the truth.

If you build a product based on your customer’s ideal self, you might get the initial sale, but you’ll struggle to engage them, and you’ll churn through customer after customer. If you want to build a successful product, you need to understand your customer’s actual behaviour—their reality—not the story they tell themselves.

Instead of asking, “How often do you…?”, ask, “How many times did you… in the last week?” You can follow it up with a question like, “Is that typical?” This can help surface if the last time was unusual. If it was, ask about other instances. But don’t let your customer generalise. If they start off with, “Usually I …”, encourage them to tell you another story of a specific instance. You’ll get more reliable information.

Improve your active listening

This means focusing on what the customer is actually saying, not what we want to hear; thinking from their perspective, and not projecting our own experiences onto their words.

Try to understand their meaning; often the emotion or feeling in a message communicates as much, if not more, than the words that are used. Take note of the non-verbal cues, from the tone of voice to the clarity and rate of speech, and whether or not they hesitate.

Reflect back to the customer what you heard. This may not seem like a listening skill, but even when you focus on what is being said, you need to confirm with them that what you heard is in fact what they meant. If you aren’t sure what they mean, don’t assume or project your own assumptions. Instead, ask them to clarify.

Continuous discovery synthesis

Visual thinking is one of the most valuable parts of the creative process. It helps you think by drawing your ideas and seeing them in new ways, so you can continue to iterate.

Teams hear a lot of information from customers, identify many different opportunities, and generate multiple solutions. Mapping is a critical way for teams to synthesise all this information, agree on the key points, and create an action plan for which solutions they’d like to pursue and how they’ll test out their assumptions and experiment to validate those solutions.

There are many different maps to visualise insights. The Opportunity Solution Tree, created by Teresa Torres, illustrates the value of mapping around a shared understanding and communicating how you’ll reach the desired outcome.

Maps are living documents; they aren’t static. They should reflect what you currently know. If you are continuously learning, they should be continuously evolving. The opportunity solution tree should evolve week over week as the team learns about the opportunity space and explores solutions via prototyping and experimenting.

Automate the recruitment process

Teams that interview infrequently recruit by sending large, inefficient emails to their pool of candidates, only to get a handful of responses and some cancellations.

To recruit in a sustainable way for continuous discovery, teams need to automate the process of getting participants booked in weekly. For consumer websites, tools like Ethnio or Qualaroo recruit directly from your site. Depending on the volume of traffic, this can be automated to recruit live, or you can use Calendly to book interviews through a shared calendar.

There are lots of ways to automate this, and teams need to find what works best for them. The goal is to have participants scheduled on a regular basis. Over time you can refine the automated recruiting process, including screening questions to be more specific about the customer you’re targeting.

At Open Universities Australia we automate the recruitment process using Hotjar to survey participants directly from the website, triggering an automatic email with a booking link to Calendly.

How many days have passed since you last checked your assumptions? If it’s more than 10 days, you might want to start reserving some time to build your own deliberate product discovery practice.

The struggling product team

Five failures of a modern product team

  1. Failure to learn

  2. Narrow focus on the product 

  3. Long time between customer validation 

  4. Building the functional and skipping the delightful

  5. Designing for scale without being adaptable 

When I see product teams struggle, it’s not because of a lack of ability to ship. It’s often because success is rewarded and failure is avoided. By now, we are all probably well aware of the “the only failure is the failure not to learn” motto.

The problem isn’t that teams are unwilling to learn. All teams would welcome the insights to do better work. What teams get wrong is that they focus on shipping features and wait to see if success materialises. A culture of learning, however, increases the chance of success by understanding what customers actually want. (Not what they say they want.)

We need to be able to learn what has made something successful or not, to repeat and refine our efforts. It’s less gratifying than the praise of a successful feature, but the learnings are ultimately more valuable to the product. This problem stems from teams having a narrow focus and thinking the product is the code they write or the pixels their customer interacts with. Everyone who builds products should focus on the behaviour of their customers. The team’s job is to see how human behaviour is changed by what they build.

To understand how something is going to work in the real world, it first needs to be de-risked by conducting small cycles of feedback. Teams know they should be validating their work constantly. However, when customer validation is treated with the same success bias, they protect their ideas and often delay testing as late as possible. They’re likely to only build the leanest version of an MVP, as they have too many uncertainties.

The problem is they ship functionality at the cost of delightful moments with their customers, which is actually what the product is all about. Your product is not the search box, the algorithm or your user records; it is this magic moment, and your task is to understand which experiences your customers are finding delightful. These moments don’t happen by chance; they come from many rounds of talking to your customers, understanding their needs and creating an experience which goes beyond their basic expectations.

During customer feedback sessions look for moments where the customer's eyes light up in response to your product.

Successful teams create a cadence of continuous customer feedback by building non-negotiable feedback sessions into their sprints, where they test anything and everything: assumptions, prototypes and code. They test whatever the product team needs to move forward by learning, instead of getting stuck in a debate of right and wrong based on opinions.

Continuous testing is important because what gets the product from A to B will not necessarily be what gets it from B to C. A product that is designed with adaptability in mind will be able to pivot towards its vision. Most of the time, development is focused on stability and scalability. However, starting with a thin slice of the experience and an adaptable design that will all be thrown away allows teams to move fast and decide what to build, before then focusing on designing how to do it at scale.

As a UX design lead, I help product leaders build and scale their teams to meet the needs of their companies. Starting with the product strategy, the goal is to figure out how we can accelerate these products. Often that means asking whether the teams are working on the right problem. How does what they are working on cascade down from the product vision? Who are the right customers they should be talking to, and what can we do that will add the most value?

Saying no to good ideas

(Value / effort) x confidence = priority

When we’re looking at a backlog of potential features or a flood of new ideas to prioritise, we know every stakeholder, executive and member of the team will have a different opinion or preference. How do you choose what to do next?

When you know your why, it’s easier to say no to good ideas. If an idea doesn’t contribute to your objectives, it won’t bring you the value to meet your vision.

(Value / effort) x confidence = priority 

This formula, by Bruce McCarthy, prioritises ideas based on the value you expect they would contribute to your objectives, over the effort that would be required, multiplied by the confidence in both the value and the effort.

To fill out the formula, we need to engage the wider team and have an essential discussion about what these numbers mean. In the formula, value can be estimated as 1, 2 or 3 for low, medium or high, based on the combined value the idea has to both the company and the user. Effort uses a similar scale of 1, 2 or 3, representing whether it will be easy, medium or really hard to implement.

These two numbers are divided to produce a raw value-over-effort score. It’s here you can identify quick wins or longer initiatives and the perceived gains. However, as with all assumptions, we first need to validate these scores and increase certainty in knowing which idea will have the greatest impact on our outcomes.

Confidence, a measure of how much evidence we have versus how much we’ve had to guess, is scored from 0 to 1: a complete guess is 0.0; a little evidence is 0.25; fairly sure is 0.5; done a lot of research and very sure we’re right is 0.75; and absolutely, incontrovertibly sure is 1.0.

The confidence number penalises us when we’ve wrongly guessed the value or effort. It rewards teams that invest in research to increase confidence and deliver better-designed products. We can increase confidence scores by collecting research conducted with our customers, both quantitative and qualitative, and by gathering insights from experiments and proofs of concept. Start discovery on the high-value, low-effort items and validate the scores. Your learnings will increase confidence and prove whether the idea contains value, or uncover hidden complexities.
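As a sketch, the whole formula fits in a few lines of code. The ideas and scores below are hypothetical examples invented for illustration, not real backlog items:

```python
def priority(value: int, effort: int, confidence: float) -> float:
    """Bruce McCarthy's formula: (value / effort) x confidence.

    value and effort are scored 1 (low), 2 (medium) or 3 (high);
    confidence runs from 0.0 (complete guess) to 1.0 (certain).
    """
    return (value / effort) * confidence

# Hypothetical backlog: (idea, value, effort, confidence)
ideas = [
    ("Saved searches", 3, 1, 0.75),      # high value, easy, well researched
    ("AI recommendations", 3, 3, 0.25),  # high value, hard, little evidence
    ("Dark mode", 1, 2, 0.5),            # low value, fairly sure
]

# Rank the backlog by priority, highest first
for name, v, e, c in sorted(ideas, key=lambda i: priority(*i[1:]), reverse=True):
    print(f"{name}: {priority(v, e, c):.2f}")
```

Notice how low confidence drags down an otherwise high-value idea: it signals where discovery work would raise certainty before committing to build.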

Saying no to good ideas is an absolute must

Stay focused on outcomes and the ideas that will contribute to the vision. With this formula as a framework, ideas without confidence will have less priority, while those without value can simply be discarded because they don’t fit your vision.

As a UX design leader, I help product teams create better experiences to capture more value. Through design research, I increase confidence in assumptions to create better products that people are willing to change their behaviour for. I’m obsessed with crafting amazing digital experiences and helping Product Managers achieve their objectives.

Turn assumptions into hypotheses

For a value-based design process

It's a good habit to be curious about all aspects of the product and to ask the five whys. We all have unfounded beliefs, and these can often cause us to miss the obvious or take un-validated assumptions for granted, which may kick us in the arse later.

There are different types of assumptions. Core assumptions must be true for your solution to work. Unknown assumptions must be understood to reduce possible risk. Risky assumptions, if proven wrong, would cause the project to fail.

Assumptions may start as naive statements, but it’s this vulnerability that will help us form a good question. We can turn these assumptions into questions through a technique from IDEO called How Might We… Phrase the question as an open-ended statement to avoid stating a solution. These are opportunities that should align back to the desired outcome and goal as defined by the product vision.

Stand back from the questions and you will start to see similarities and connections. By taking a 10,000-foot view of your assumptions, you will start to have ideas to combine the How Might We’s into potential solutions and organise them into groups to review. It’s especially helpful if you can do this with your team and gain input from more people’s points of view. When it comes to choosing which question to tackle first, think about which is the riskiest assumption that could derail the whole project, alongside those that would have the greatest impact on your product or bring the most value.

After the prioritisation, it’s time to combine solutions with the How Might We question into a hypothesis. A hypothesis is a framework to clearly define the question, audience and solution, and to eliminate the assumption. There are different mixes of hypotheses, from building or prototyping software to services or other actions that are not software-related. It’s important to break all of your hypotheses down into more specific, actionable hypotheses that can be tracked in your project, but you may decide to separate your non-software-related hypotheses and track them separately.

The format of an actionable hypothesis follows these four steps:

We believe that doing, building, or creating this for these people will result in this outcome and we will know we’re right when we see this metric

Next up is developing an experiment so you can test your hypothesis. Our test will follow the scientific method, so it’s subject to collecting empirical and measurable evidence to obtain new knowledge. In other words, it’s crucial to have a measurable outcome for the hypothesis so we can determine whether it has succeeded or failed.

There are different experiments that you can run to validate your hypothesis, from qualitative methods like interviews, landing-page validation and usability testing, to quantitative data from surveys or analytics. Define what the experiment will be, and the outcomes that determine if the hypothesis is valid. A well-defined experiment should validate or invalidate the hypothesis.
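A minimal sketch of what a pre-defined, measurable success criterion could look like in code. The feature, metric, baseline and target lift here are invented for illustration:

```python
def hypothesis_validated(baseline: float, observed: float,
                         minimum_lift: float) -> bool:
    """Success criterion agreed *before* the experiment runs:
    the observed metric must beat the baseline by at least minimum_lift."""
    return observed >= baseline + minimum_lift

# Hypothetical example: "We believe that building saved searches for
# returning shoppers will result in more repeat visits, and we will know
# we're right when the weekly return rate rises from 12% to at least 15%."
result = hypothesis_validated(baseline=0.12, observed=0.16, minimum_lift=0.03)
print(result)  # an observed 16% beats the 15% target
```

Writing the criterion down up front, before any data comes in, is what stops the team from rationalising a weak result into a win after the fact.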

After defining the experiment, it’s time to think about design. The trap people often fall into is over-designing the experiment and thinking about too many scenarios. At this point you don’t need to have every detail thought through; rather, focus on designing just what needs to be tested. It needs just enough design to be believable, but no more. Only once the hypothesis has been proven should the polish be applied.

Hypothesis-driven experimentation will give you insight into your visitors’ behaviours. These insights will generate additional questions about your visitors and their experience, driving an iterative learning process.

If you’ve just learned that the result was positive, you may be excited to roll out the feature. That’s great! But did you learn anything that would make the solution better? If the hypothesis failed, don’t worry; you’ll have gained insights from the experiment to apply to the next. Through the rigour of each experiment you run, you’ll learn something new about your product and your customers.

I don’t expect any of this to be new to you. You probably know all of this already, but perhaps haven’t systematically implemented it into your product. I would love for this to be the norm and to help those without a solid discovery strategy.

What do you reckon?
