How to Implement Hypothesis-Driven Development


Think back to high school science class. Our teachers gave us a framework for learning: an experimental approach based on the best available evidence at hand. We were asked to make observations about the world around us, then attempt to form an explanation or hypothesis for what we had observed. We then tested this hypothesis by predicting an outcome that, based on our theory, would be achieved in a controlled experiment. If the outcome was achieved, we had support for our theory.

We could then apply this learning to inform and test other hypotheses by constructing more sophisticated experiments, and tuning, evolving, or abandoning any hypothesis as we made further observations from the results we achieved.

Experimentation is the foundation of the scientific method, which is a systematic means of exploring the world around us. Although some experiments take place in laboratories, it is possible to perform an experiment anywhere, at any time, even in software development.

Practicing Hypothesis-Driven Development [1] is thinking about the development of new ideas, products, and services – even organizational change – as a series of experiments to determine whether an expected outcome will be achieved. The process is iterated upon until a desirable outcome is obtained or the idea is determined to be not viable.

We need to change our mindset to view our proposed solution to a problem statement as a hypothesis, especially in new product or service development – the market we are targeting, how a business model will work, how code will execute and even how the customer will use it.

We do not do projects anymore, only experiments. Customer discovery and Lean Startup strategies are designed to test assumptions about customers. Quality Assurance is testing system behavior against defined specifications. The experimental principle also applies in Test-Driven Development – we write the test first, then use the test to validate that our code is correct, and succeed if the code passes the test. Ultimately, product or service development is a process to test a hypothesis about system behavior in the environment or market it is developed for.
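
To make the test-first idea concrete, here's a minimal sketch in Python; the `slugify` function, its expected behavior, and the file names are hypothetical examples rather than anything from the article. The test states the expectation before any implementation exists, and we succeed when the code makes it pass:

```python
# test_slugger.py -- written first; it encodes the expected behavior (hypothetical example)
from slugger import slugify

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# slugger.py -- written second, with the sole goal of making the test above pass
def slugify(text: str) -> str:
    """Lowercase the text and join words with hyphens."""
    return "-".join(text.lower().split())
```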

The key outcome of an experimental approach is measurable evidence and learning. Learning is the information we have gained from conducting the experiment. Did what we expect to occur actually happen? If not, what did and how does that inform what we should do next?

In order to learn we need to use the scientific method for investigating phenomena, acquiring new knowledge, and correcting and integrating previous knowledge back into our thinking.

As the software development industry continues to mature, we now have an opportunity to leverage improved capabilities such as Continuous Design and Delivery to maximize our potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, we can more rapidly test our solutions against the problems we have identified in the products or services we are attempting to build, with the goal of optimizing for solving the right problems rather than simply becoming a feature factory that continually builds solutions.

The steps of the scientific method are to:

  • Make observations
  • Formulate a hypothesis
  • Design an experiment to test the hypothesis
  • State the indicators to evaluate if the experiment has succeeded
  • Conduct the experiment
  • Evaluate the results of the experiment
  • Accept or reject the hypothesis
  • If necessary, make and test a new hypothesis

Using an experimentation approach to software development

We need to challenge the concept of having fixed requirements for a product or service. Requirements are valuable when teams execute a well-known or well-understood phase of an initiative and can leverage well-understood practices to achieve the outcome. However, when you are in an exploratory, complex, and uncertain phase, you need hypotheses.

Handing teams a set of business requirements reinforces a flawed order-taking approach and mindset: the business does the thinking and 'knows' what is right, and the purpose of the development team is to implement what they are told. But when operating in an area of uncertainty and complexity, all the members of the development team should be encouraged to think and share insights on the problem and potential solutions. A team simply taking orders from a business owner is not utilizing the full potential, experience, and competency that a cross-functional, multi-disciplined team offers.

Framing Hypotheses

The traditional user story framework is focused on capturing requirements for what we want to build and for whom, to enable the user to receive a specific benefit from the system.

As A…. <role>

I Want… <goal/desire>

So That… <receive benefit>

Behaviour-Driven Development (BDD) and Feature Injection aim to improve on the original framework by supporting communication and collaboration between developers, testers, and non-technical participants in a software project.

In Order To… <receive benefit>

As A… <role>

I Want… <goal/desire>

When viewing work as an experiment, the traditional story framework is insufficient. As in our high school science experiment, we need to define the steps we will take to achieve the desired outcome. We then need to state the specific indicators (or signals) we expect to observe that provide evidence that our hypothesis is valid. These need to be stated before conducting the test to reduce the bias of interpretation of results.

If we observe signals that indicate our hypothesis is correct, we can be more confident that we are on the right path and can alter the user story framework to reflect this.

Therefore, a user story structure to support Hypothesis-Driven Development would be:

We believe <this capability>

What functionality will we develop to test our hypothesis? By defining a 'test' capability of the product or service that we are attempting to build, we identify the functionality and hypothesis we want to test.

Will result in <this outcome>

What is the expected outcome of our experiment? What is the specific result we expect to achieve by building the ‘test’ capability?

We will have confidence to proceed when <we see a measurable signal>

What signals will indicate that the capability we have built is effective? What key metrics (qualitative or quantitative) will we measure to provide evidence that our experiment has succeeded and to give us enough confidence to move to the next stage?
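
As a sketch, the three prompts above (plus the stated assumptions) can be captured in a small record so every experiment is framed the same way. The field names are one possible encoding, not a standard, and the example values are taken from the booking-page story later in this article:

```python
from dataclasses import dataclass, field

@dataclass
class HypothesisCard:
    capability: str          # We believe <this capability>
    outcome: str             # Will result in <this outcome>
    signal: str              # We will have confidence to proceed when <measurable signal>
    assumptions: list = field(default_factory=list)  # stated before the test runs

card = HypothesisCard(
    capability="increasing the size of hotel images on the booking page",
    outcome="improved customer engagement and conversion",
    signal="5% increase in image viewers who book within 48 hours",
    assumptions=["image size is a meaningful driver of booking behavior"],
)
```

Recording the signal and the assumptions before the experiment runs mirrors the point above about reducing interpretation bias.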

The threshold you use for statistical significance will depend on your understanding of the business and context you are operating within. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period of time. Limits and controls need to be defined by your organization to determine acceptable evidence thresholds that will allow the team to advance to the next step.

For example, if you are building a rocket ship you may want your experiments to have a high threshold for statistical significance. If you are deciding between two different flows intended to help increase user sign up you may be happy to tolerate a lower significance threshold.
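
For a quantitative signal such as a conversion uplift, one common way to apply such a threshold is a two-proportion z-test; this is a sketch, and the sample sizes, conversion counts, and alpha below are illustrative assumptions you would replace with your own:

```python
from math import sqrt
from statistics import NormalDist

def uplift_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided two-proportion z-test: is variant B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - NormalDist().cdf(z)  # small p => uplift unlikely to be chance

alpha = 0.05  # your organization's evidence threshold; stricter for rocket ships
p = uplift_p_value(conv_a=480, n_a=10_000, conv_b=545, n_b=10_000)
print(f"p = {p:.4f} -> {'proceed' if p < alpha else 'keep testing'}")
```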

The final step is to clearly and visibly state any assumptions made about our hypothesis, to create a feedback loop for the team to provide further input, debate, and understanding of the circumstances under which we are performing the test. Are the assumptions valid, and do they make sense from a technical and business perspective?

Hypotheses, when aligned to your MVP, can provide a testing mechanism for your product or service vision. They can test the most uncertain areas of your product or service, in order to gain information and improve confidence.

Examples of Hypothesis-Driven Development user stories are:

Business story:

We Believe That increasing the size of hotel images on the booking page

Will Result In improved customer engagement and conversion

We Will Have Confidence To Proceed When we see a 5% increase in customers who review hotel images and then proceed to book within 48 hours.

It is imperative to have effective monitoring and evaluation tools in place when using an experimental approach to software development in order to measure the impact of our efforts and provide a feedback loop to the team. Otherwise, we are essentially blind to the outcomes of our efforts.

In agile software development, we define working software as the primary measure of progress. By combining Continuous Delivery and Hypothesis-Driven Development we can now define working software and validated learning as the primary measures of progress.

Ideally, we should not say we are done until we have measured the value of what is being delivered – in other words, gathered data to validate our hypothesis.

One example of how to gather data is to perform A/B testing to test a hypothesis and measure the change in customer behavior. Alternative testing options include customer surveys, paper prototypes, and user and/or guerrilla testing.

One example of a company we have worked with that uses Hypothesis-Driven Development is lastminute.com. The team formulated a hypothesis that customers are only willing to pay a maximum price for a hotel based on the time of day they book. Tom Klein, CEO and President of Sabre Holdings, shared the story of how they improved conversion by 400% within a week.

Combining practices such as Hypothesis-Driven Development and Continuous Delivery accelerates experimentation and amplifies validated learning. This gives us the opportunity to accelerate the rate at which we innovate while relentlessly reducing costs, leaving our competitors in the dust. Ideally, we can achieve one-piece flow: atomic changes that enable us to identify causal relationships between the changes we make to our products and services and their impact on key metrics.

As Kent Beck said, “Test-Driven Development is a great excuse to think about the problem before you think about the solution”. Hypothesis-Driven Development is a great opportunity to test what you think the problem is before you work on the solution.

We also run a workshop to help teams implement Hypothesis-Driven Development. Get in touch to run it at your company.

[1] Hypothesis-Driven Development by Jeffrey L. Taylor

What is hypothesis-driven development?


Uncertainty is one of the biggest challenges of modern product development. Most often, there are more question marks than answers available.


This fact forces us to work in an environment of ambiguity and unpredictability.

Instead of combatting this, we should embrace the circumstances and use tools and solutions that excel in ambiguity. One of these tools is a hypothesis-driven approach to development.

Hypothesis-driven development in a nutshell

As the name suggests, hypothesis-driven development is an approach that focuses development efforts around, you guessed it, hypotheses.

To make this example more tangible, let’s compare it to two other common development approaches: feature-driven and outcome-driven.

In feature-driven development, we prioritize our work and effort based on specific features we planned and decided on upfront. The underlying goal here is predictability.

In outcome-driven development, the priorities are dictated not by specific features but by broader outcomes we want to achieve. This approach helps us maximize the value generated.

When it comes to hypothesis-driven development, the development effort is focused first and foremost on validating the most pressing hypotheses the team has. The goal is to maximize learning speed over all else.

Benefits of hypothesis-driven development

There are numerous benefits of a hypothesis-driven approach to development; the main ones are continuous learning, an MVP mindset, and data-driven decision-making.

Continuous learning

Hypothesis-driven development maximizes the amount of knowledge the team acquires with each release.

After all, if all you do is test hypotheses, each test must bring you some insight.

Hypothesis-driven development centers the whole prioritization and development process around learning.

MVP mindset

Instead of designing specific features or focusing on big, multi-release outcomes, a hypothesis-driven approach forces you to focus on minimum viable solutions (MVPs).

After all, the primary thing you are aiming for is hypothesis validation. It often doesn't require scalability, a perfect user experience, or fully fledged features.


By definition, hypothesis-driven development forces you to truly focus on MVPs and avoid overcomplicating.

Data-driven decision-making

In hypothesis-driven development, each release focuses on testing a particular assumption. That test then brings you new data points, which help you formulate and prioritize the next hypotheses.

That's a truly data-driven development loop that leaves little room for HiPPOs (the highest-paid person's opinion).

Guide to hypothesis-driven development

Let’s take a look at what hypothesis-driven development looks like in practice. On a high level, it consists of four steps:

  • Formulate a list of hypotheses and assumptions
  • Prioritize the list
  • Design an MVP
  • Test and repeat

1. Formulate hypotheses

The first step is to list all hypotheses you are interested in.

Everything you wish to know about your users and market, as well as things you believe you know but don’t have tangible evidence to support, is a form of a hypothesis.

At this stage, I’m not a big fan of robust hypotheses such as, “We believe that if <we do something> then <something will happen> because <some user action>.”

To have such robust hypotheses, you need a solid enough understanding of your users, and if you do have it, then odds are you don’t need hypothesis-driven development anymore.

Instead, I prefer simpler statements that are closer to assumptions than hypotheses, such as:

  • “Our users will love the feature X”
  • “The option to do X is very important for the student segment”
  • “Exam preparation is an important and underserved need that our users have”

2. Prioritize

The next step in hypothesis-driven development is to prioritize all assumptions and hypotheses you have. This will create your product backlog.

There are various prioritization frameworks and approaches out there, so choose whichever you prefer. I personally prioritize assumptions based on two main criteria:

  • How much will we gain if we positively validate the hypothesis?
  • How much will we learn during the validation process?

Your priorities, however, might differ depending on your current context.
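
A minimal sketch of that scoring, reusing the example assumptions above; the 1-5 scales and the equal weighting of the two criteria are assumptions you would tune to your own context:

```python
# Score each assumption on the two criteria above (1-5 scales assumed).
assumptions = [
    {"text": "Our users will love the feature X",                  "gain": 4, "learning": 2},
    {"text": "Option X is very important for the student segment", "gain": 3, "learning": 5},
    {"text": "Exam preparation is an important, underserved need", "gain": 5, "learning": 4},
]

# Equal weighting of gain and learning is itself an assumption; adjust as needed.
backlog = sorted(assumptions, key=lambda a: a["gain"] + a["learning"], reverse=True)
for rank, item in enumerate(backlog, start=1):
    print(f"{rank}. {item['text']}")
```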

3. Design an MVP

Hypothesis-driven development is centered around the idea of MVPs — that is, the smallest possible releases that will help you gather enough information to validate whether a given hypothesis is true.

User experience, maintainability, and product excellence are secondary.

4. Test and repeat

The last step is to launch the MVP and validate whether the actual impact and consequent user behavior validate or invalidate the initial hypothesis.

Success isn't measured by whether the hypothesis turned out to be accurate, but by how many new insights and learnings you captured during the process.

Based on the experiment, revisit your current list of assumptions, and, if needed, adjust the priority list.

Challenges of hypothesis-driven development

Although hypothesis-driven development comes with great benefits, it’s not all wine and roses.

Let’s take a look at a few core challenges that come with a hypothesis-focused approach.

Lack of robust product experience

Focusing on validating hypotheses and the underlying MVP mindset comes at a cost. A robust product experience and great UX often require polish, optimization, and iteration, which go against speed-focused hypothesis-driven development.

You can’t optimize for both learning and quality simultaneously.

Unfocused direction

Although hypothesis-driven development is great for gathering initial learnings, eventually, you need to start developing a focused and sustainable long-term product strategy. That’s where outcome-driven development shines.

There’s an infinite amount of explorations you can do, but at some point, you must flip the switch and narrow down your focus around particular outcomes.

Over-emphasis on MVPs

Teams that embrace a hypothesis-driven approach often fall into the trap of an “MVP only” approach. However, shipping an actual prototype is not the only way to validate an assumption or hypothesis.

You can utilize tools such as user interviews, usability tests, market research, or willingness to pay (WTP) experiments to validate most of your doubts.

There’s a thin line between being MVP-focused in development and overusing MVPs as a validation tool.

When to use hypothesis-driven development

As you've most likely noticed, hypothesis-driven development isn't a multi-tool solution that can be used in every context.

On the contrary, its challenges make it an unsuitable development strategy for many companies.

As a rule of thumb, hypothesis-driven development works best in early-stage products with a high dose of ambiguity. Focusing on hypotheses helps bring enough clarity for the product team to understand where to even focus.

But once you discover your product-market fit and have a solid idea for your long-term strategy, it’s often better to shift into more outcome-focused development. You should still optimize for learning, but it should no longer be the primary focus of your development effort.

While at it, you might also consider feature-driven development as a next step. However, that works only under particular circumstances where predictability is more important than the impact itself — for example, B2B companies delivering custom solutions for their clients or products focused on compliance.

Hypothesis-driven development can be a powerful learning-maximization tool. Its focus on MVP, continuous learning process, and inherent data-driven approach to decision-making are great tools for reducing uncertainty and discovering a path forward in ambiguous settings.

Honestly, the whole process doesn't differ much from other development processes. The primary difference is that the backlog and priorities focus on hypotheses rather than features or outcomes.

Start by listing your assumptions, prioritizing them as you would any other backlog, and working your way top-to-bottom by shipping MVPs and adjusting priorities as you learn more about your market and users.

However, since hypothesis-driven development often lacks long-term cohesiveness, focus, and sustainable product experience, it’s rarely a good long-term approach to product development.

I tend to stick to outcome-driven and feature-driven approaches most of the time and resort to hypothesis-driven development if the ambiguity in a particular area is so hard that it becomes challenging to plan sensibly.


The 6 Steps that We Use for Hypothesis-Driven Development


One of the greatest fears of product managers is to create an app that flops because it's based on untested assumptions. After successfully launching more than 20 products, we're convinced that we've found the right approach for hypothesis-driven development.

In this guide, I'll show you how we validated the hypotheses to ensure that the apps met the users' expectations and needs.

What is hypothesis-driven development?

Hypothesis-driven development is a prototype methodology that allows product designers to develop, test, and rebuild a product until it's acceptable to users. It is an iterative measure that explores assumptions defined during the project and attempts to validate them with user feedback.

What you have assumed during the initial stage of development may not be valid for the users. Even if they are backed by historical data, user behaviors can be affected by specific audiences and other factors. Hypothesis-driven development removes these uncertainties as the project progresses. 


Why we use hypothesis-driven development

For us, the hypothesis-driven approach provides a structured way to consolidate ideas and build hypotheses based on objective criteria. It’s also less costly to test the prototype before production.

Using this approach has reliably allowed us to identify what, how, and in which order testing should be done. It gives us a deep understanding of how we prioritize features and how they're connected to the business goals and desired user outcomes.

We’re also able to track and compare the desired and real outcomes of developing the features. 

The process of Prototype Development that we use

Our success in building apps that are well-accepted by users is based on the Lean UX definition of hypothesis. We believe that the business outcome will be achieved if the user’s outcome is fulfilled for the particular feature. 

Here’s the process flow:

How Might We technique → Dot voting (based on estimated/assumptive impact) → converting into a hypothesis → define testing methodology (research method + success/fail criteria) → impact effort scale for prioritizing → test, learn, repeat.

Once the hypothesis is proven right, the feature is escalated into the development track for UI design and development. 


Step 1: List Down Questions And Assumptions

Whether it’s the initial stage of the project or after the launch, there are always uncertainties or ideas to further improve the existing product. In order to move forward, you’ll need to turn the ideas into structured hypotheses where they can be tested prior to production.  

To start with, jot the ideas or assumptions down on paper or a sticky note. 

Then, you’ll want to widen the scope of the questions and assumptions into possible solutions. The How Might We (HMW) technique is handy in rephrasing the statements into questions that facilitate brainstorming.

For example, if you have a social media app with a low number of users, asking, “How might we increase the number of users for the app?” makes brainstorming easier. 

Step 2: Dot Vote to Prioritize Questions and Assumptions

Once you’ve got a list of questions, it’s time to decide which are potentially more impactful for the product. The Dot Vote method, where team members are given dots to place on the questions, helps prioritize the questions and assumptions. 

Our team uses this method when we're faced with many ideas and need to eliminate some of them. We start by grouping similar ideas and use 3-5 dots to vote. At the end of the process, we'll have preliminary data on the possible impact and our team's interest in developing certain features.

This method allows us to prioritize the statements derived from the HMW technique; we only convert the top ones into hypotheses.

Step 3: Develop Hypotheses from Questions

The questions lead to a brainstorming session where the answers become hypotheses for the product. The hypothesis is meant to create a framework that allows the questions and solutions to be defined clearly for validation.

Our team follows a specific format in forming hypotheses. We structure the statement as follows:

We believe we will achieve [business outcome],

If [the persona],

Solves their need in [user outcome] using [feature].

Here's a hypothesis we've created:

We believe we will achieve DAU=100 if Mike (our proto persona) solves his need in recording and sharing videos instantaneously using our camera and cloud storage.
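
Since the statement is formulaic, it lends itself to a simple template; this is just a sketch, not tooling Uptech describes:

```python
def format_hypothesis(business_outcome: str, persona: str,
                      user_outcome: str, feature: str) -> str:
    """Render a hypothesis in the 'We believe...' structure shown above."""
    return (f"We believe we will achieve {business_outcome}, "
            f"if {persona} solves their need in {user_outcome} "
            f"using {feature}.")

print(format_hypothesis(
    business_outcome="DAU=100",
    persona="Mike (our proto persona)",
    user_outcome="recording and sharing videos instantaneously",
    feature="our camera and cloud storage",
))
```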


Step 4: Test the Hypothesis with an Experiment

It’s crucial to validate each of the assumptions made on the product features. Based on the hypotheses, experiments in the form of interviews, surveys, usability testing, and so forth are created to determine if the assumptions are aligned with reality. 

Each of the methods provides some level of confidence. Therefore, you don’t want to be 100% reliant on a particular method as it’s based on a sample of users.

It’s important to choose a research method that allows validation to be done with minimal effort. Even though hypotheses validation provides a degree of confidence, not all assumptions can be tested and there could be a margin of error in data obtained as the test is conducted on a sample of people. 

The experiments are designed in such a way that feedback can be compared with the predicted outcome. Only validated hypotheses are brought forward for development.

Testing all the hypotheses can be tedious. To be more efficient, you can use the impact effort scale. This method allows you to focus on hypotheses that are potentially high value and easy to validate. 

You can also work on hypotheses that deliver high impact but require high effort. Ignore those that deliver low impact but require high effort, and keep hypotheses with low impact and low effort in the backlog.
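
A sketch of that triage as code, assuming impact and effort are each scored on a 1-10 scale with 5 as the midpoint; the scales and cutoff are assumptions:

```python
def triage(impact: int, effort: int, cutoff: int = 5) -> str:
    """Place a hypothesis in an impact/effort quadrant (1-10 scales assumed)."""
    if impact > cutoff and effort <= cutoff:
        return "test first"        # high impact, low effort
    if impact > cutoff:
        return "worth the effort"  # high impact, high effort
    if effort <= cutoff:
        return "backlog"           # low impact, low effort
    return "ignore"                # low impact, high effort

print(triage(impact=8, effort=3))  # -> test first
```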

At Uptech, we assign each hypothesis clear testing criteria. We rank each hypothesis with a binary 'task success' and a subjective 'effort on task', where the latter is scored from 1 to 10.

While we're conducting the test, we also collect qualitative data such as user feedback. We have a habit of segregating the feedback into pros, cons, and neutral with color-coded stickers (red for cons, green for pros, blue for neutral).

The best practice is to test each hypothesis on at least five users.
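
A sketch of tabulating those criteria across the recommended five-user sample; the record layout and numbers are invented for illustration:

```python
# One record per participant: binary task success plus 'effort on task' (1-10).
sessions = [
    {"user": "u1", "success": True,  "effort": 3},
    {"user": "u2", "success": True,  "effort": 5},
    {"user": "u3", "success": False, "effort": 8},
    {"user": "u4", "success": True,  "effort": 4},
    {"user": "u5", "success": True,  "effort": 6},
]

success_rate = sum(s["success"] for s in sessions) / len(sessions)
mean_effort = sum(s["effort"] for s in sessions) / len(sessions)
print(f"task success: {success_rate:.0%}, mean effort on task: {mean_effort:.1f}/10")
```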

Step 5: Learn, Build (and Repeat)

The hypothesis-driven approach is not a single-ended process. Often, you’ll find that some of the hypotheses are proven to be false. Rather than be disheartened, you should use the data gathered to finetune the hypothesis and design a better experiment in the next phase.

Treat the entire cycle as a learning process where you’ll better understand the product and the customers. 

We've found the process helpful when developing an MVP for Carbon Club, an environmental startup in the UK. The app allows users to donate to charity based on the carbon footprint they produce.

In order to calculate the carbon footprint, we weighed two options:

  • Connecting the app to the users' bank account to monitor the carbon footprint based on purchases made.
  • Allowing users to take quizzes on their lifestyles.

Upon validation, we found that all of the users opted for the second option, as they were concerned about linking an unknown app to their bank account.

The result made us shelve the first assumption we'd made during pre-Sprint research. It also saved our client $50,000 and a few months of work, as connecting the app to the bank account would have required a huge effort.


Step 6: Implement Product and Maintain

Once you’ve got the confidence that the remaining hypotheses are validated, it’s time to develop the product. However, testing must be continued even after the product is launched. 

You should be on your toes as customers’ demands, market trends, local economics, and other conditions may require some features to evolve. 


Our takeaways for hypothesis-driven development

If there’s anything that you could pick from our experience, it’s these 5 points.

1. Should every idea go straight into the backlog? No, unless they are validated with substantial evidence. 

2. While it’s hard to define business outcomes with specific metrics and desired values, you should do it anyway. Try to be as specific as possible, and avoid general terms. Give your best effort and adjust as you receive new data.  

3. Get all product teams involved as the best ideas are born from collaboration.

4. Start with a plan consisting of two main parameters: criteria of success and research methods. Besides qualitative insights, you need to set objective criteria to determine if a test is successful. Use the Test Card to validate the assumptions strategically.

5. The methodology that we've recommended in this article works not only for products. We applied it at the end of 2019 for setting the strategic goals of the company and ended up with robust results and an engaged, aligned team.

You'll have a better idea of which features would lead to a successful product with hypothesis-driven development. Rather than vague assumptions, the consolidated data from users will provide a clear direction for your development team. 

As for the hypotheses that don't make the cut, improvise, re-test, and leverage for future upgrades.

Keep failing with product launches? I'll be happy to point you in the right direction. Drop me a message here.


What is hypothesis-driven development?

Hypothesis-driven development (HDD), also known as hypothesis-driven product development, is an approach used in software development and product management.

HDD involves creating hypotheses about user behavior, needs, or desired outcomes, and then designing and implementing experiments to validate or invalidate those hypotheses.


Why use a hypothesis-driven approach?

With hypothesis-driven development, instead of making assumptions and building products or features based on those assumptions, teams should formulate hypotheses and conduct experiments to gather data and insights.

This method assists with making informed decisions and reduces the overall risk of building products that do not meet user needs or solve their problems.

How do you implement hypothesis-driven development?

At a high level, here's a general approach to implementing HDD (a minimal code sketch follows the list):

  • Identify the problem or opportunity: Begin by identifying the problem or opportunity that you want to address with your product or feature.
  • Create a hypothesis: Clearly define a hypothesis that describes a specific user behavior, need, or outcome you believe will occur if you implement the solution.
  • Design an experiment: Determine the best way to test your hypothesis. This could involve creating a prototype, conducting user interviews, A/B testing, or other forms of user research.
  • Implement the experiment: Execute the experiment by building the necessary components or conducting the research activities.
  • Collect and analyze data: Gather data from the experiment and analyze the results to determine if the hypothesis is supported or not.
  • If the hypothesis is supported, you can move forward with further development. 
  • If the hypothesis is not supported, you may need to pivot, refine the hypothesis, or explore alternative solutions.
  • Rinse and repeat: Continuously repeat the process, iterating and refining your hypotheses and experiments to guide the development of your product or feature.
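
Here is that loop as a minimal, runnable Python sketch; the dictionary shape and the stubbed experiment function are assumptions standing in for your team's real research and build steps:

```python
from collections import deque

def run_experiment(hypothesis: dict) -> bool:
    """Placeholder: a prototype, user interviews, or an A/B test would go here."""
    return hypothesis.get("expected_support", False)

def hdd_loop(initial_hypotheses: list) -> None:
    queue = deque(initial_hypotheses)
    while queue:                                  # rinse and repeat
        h = queue.popleft()
        if run_experiment(h):                     # design, run, collect, analyze
            print("supported -> develop further:", h["idea"])
        elif not h.get("refined"):
            queue.append({**h, "refined": True})  # pivot/refine, then re-test
        else:
            print("not supported -> explore alternatives:", h["idea"])

hdd_loop([{"idea": "users want exam-prep content", "expected_support": True}])
```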

Hypothesis-driven development emphasizes a data-driven and iterative approach to product development, allowing teams to make more informed decisions, validate assumptions, and ultimately deliver products that better meet user needs.


Hypothesis-Driven Development (Practitioner’s Guide)

Table of Contents

  • What is hypothesis-driven development (HDD)?
  • How do you know if it's working?
  • How do you apply HDD to 'continuous design'?
  • How do you apply HDD to application development?
  • How do you apply HDD to continuous delivery?
  • How does HDD relate to agile, design thinking, Lean Startup, etc.?

What is hypothesis-driven development (HDD)?

Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here is for you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started.

After reading this guide and trying out the related practice you will be able to:

  • Diagnose when and where hypothesis-driven development (HDD) makes sense for your team
  • Apply techniques from HDD to your work in small, success-based batches across your product pipeline
  • Frame and enhance your existing practices (where applicable) with HDD

Does your product program feel like a Netflix show you’d binge watch? Is your team excited to see what happens when you release stuff? If so, congratulations- you’re already doing it and please hit me up on Twitter so we can talk about it! If not, don’t worry- that’s pretty normal, but HDD offers some awesome opportunities to work better.


Building on the scientific method, HDD is a take on how to integrate test-driven approaches across your product development activities- everything from creating a user persona to figuring out which integration tests to automate. Yeah- wow, right?! It is a great way to energize and focus your practice of agile and your work in general.

By product pipeline, I mean the set of processes you and your team undertake to go from a certain set of product priorities to released product. If you’re doing agile, then iteration (sprints) is a big part of making these work.

[Figure: the product pipeline, with a metric for each area]

How do you know if it's working?

It wouldn't be very hypothesis-driven if I didn't have an answer to that! In the diagram above, you'll find metrics for each area.

For your application of HDD to what we'll call continuous design, the metric to improve is the ratio of all your release content to the release content that meets or exceeds your target metrics on user behavior. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure?

For application development, the metric you're working to improve is basically velocity, meaning story points or, generally, release content per sprint. For continuous delivery, it's how often you can release. Hypothesis testing is, of course, central to HDD and to generally doing agile with any kind of focus on valuable outcomes, and I think it shares the metric on successful release content with continuous design.

[Figure: the 'F' formula, built from the components described below]

The first component is team cost, which you would sum up over whatever period you're measuring. This includes 'c$', total compensation plus loading (benefits, equipment, etc.), as well as 'g', the cost of the gear you use. That might be application infrastructure like AWS, GCP, etc., along with any other infrastructure you buy or share with other teams. For example, using a backend-as-a-service like Heroku or Firebase might push up your value for 'g' while deferring the cost of building your own app infrastructure.

The next component is release content, 'fe'. If you're already estimating story points somehow, you can use those. If you're a NoEstimates crew, and, hey, I get it, then you'd need to do some kind of rough proportional sizing of your release content for the period in question. The next term, 'rf', is optional, but this is an estimate of the time you're having to invest in rework, bug fixes, manual testing, manual deployment, and anything else that doesn't go as planned.

The last term, 'sd', is one of the most critical and is an estimate of the proportion of your release content that's successful relative to the success metrics you set for it. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure? Naturally, if you're not doing this it will require some work and changing your habits, but it's hard to deliver value in agile if you don't define value against actual user behavior.

Here’s how some of the key terms lay out in the product pipeline:

[Figure: the key 'F' terms laid out across the product pipeline]

The example here shows how a team might tabulate this for a given month:

[Table: example monthly tabulation of 'F' for a single team]

Is the punchline that you should be shooting for a cost of $1,742 per story point? No. First, this is for a single month and would only serve the purpose of the team setting a baseline for itself. Like any agile practice, the interesting part is seeing how your value for 'F' changes from period to period, using your team retrospectives to talk about how to improve it. Second, this is just a single team, and the economic value (ex: revenue) related to a given story point will vary enormously from product to product. There's a Google Sheets-based calculator that you can use here: Innovation Accounting with 'F'.
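
The formula itself appears as a figure in the original and didn't survive extraction, so the arithmetic below is only a plausible reconstruction from the component descriptions above (cost divided by successful release content); check it against the linked calculator before relying on it:

```python
def innovation_accounting_f(c_comp: float, g_gear: float,
                            f_e: float, r_f: float = 0.0, s_d: float = 1.0) -> float:
    """Cost per successful unit of release content for a period.

    c_comp: total loaded team compensation
    g_gear: infrastructure ('gear') cost
    f_e:    release content, e.g. story points
    r_f:    proportion of capacity lost to rework/manual toil (0..1)
    s_d:    proportion of release content that met its success metrics (0..1)

    NOTE: a plausible reconstruction, not the author's exact formula.
    """
    return (c_comp + g_gear) / (f_e * (1 - r_f) * s_d)

# Illustrative month: $60k compensation, $3k gear, 48 points, 20% rework, 75% successful.
print(round(innovation_accounting_f(60_000, 3_000, 48, r_f=0.2, s_d=0.75)))  # ~2188
```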

Like any metric, ‘F’ only matters if you find it workable to get in the habit of measuring it and paying attention to it. As a team, say, evaluates its progress on OKR (objectives and key results), ‘F’ offers a view on the health of the team’s collaboration together in the context of their product and organization. For example, if the team’s accruing technical debt, that will show up as a steady increase in ‘F’. If a team’s invested in test or deploy automation or started testing their release content with users more specifically, that should show up as a steady lowering of ‘F’.

In the next few sections, we’ll step through how to apply HDD to your product pipeline by area, starting with continuous design.


How do you apply HDD to 'continuous design'?

It's a mistake to ask your designer to explain every little thing they're doing, but it's also a mistake to decouple their work from your product's economics. On the one hand, no one likes someone looking over their shoulder, and you may not have the professional training to reasonably understand what they're doing hour to hour, even day to day. On the other hand, it's a mistake to charter a designer's work without a testable definition of success and not to collaborate around that.

Managing this is hard since most of us aren’t designers and because it takes a lot of work and attention to detail to work out what you really want to achieve with a given design.

Beginning with the End in Mind

The difference between art and design is intention- in design we always have one and, in practice, it should be testable. For this, I like the practice of customer experience (CX) mapping. CX mapping is a process for focusing the work of a team on outcomes–day to day, week to week, and quarter to quarter. It’s amenable to both qualitative and quantitative evidence but it is strictly focused on observed customer behaviors, as opposed to less direct, more lagging observations.

CX mapping works to define the CX in testable terms that are amenable to both qualitative and quantitative evidence. Specifically for each phase of a potential customer getting to behaviors that accrue to your product/market fit (customer funnel), it answers the following questions:

1. What do we mean by this phase of the customer funnel? 

What do we mean by, say, ‘Acquisition’ for this product or individual feature? How would we know it if we see it?

2. How do we observe this (in quantitative terms)? What’s the DV?

This comes next after we answer the question "What does this mean?". The goal is to come up with a focal single metric (maybe two), a 'dependent variable' (DV) that tells you how a customer has behaved in a given phase of the CX (ex: Acquisition, Onboarding, etc.).

3. What is the cut off for a transition?

Not super exciting, but extremely important in actual practice, the idea here is to establish the cutoff for deciding whether a user has progressed from one phase to the next or abandoned/churned.

4. What is our ‘Line in the Sand’ threshold?

Popularized by the book ‘Lean Analytics’, the idea here is that good metrics are ones that change a team’s behavior (decisions) and for that you need to establish a threshold in advance for decision making.

5. How might we test this? What new IVs are worth testing?

The ‘independent variables’ (IV’s) you might test are basically just ideas for improving the DV (#2 above).

6. What’s tricky? What do we need to watch out for?

Getting this working will take some tuning, but it’s infinitely doable and there aren’t a lot of good substitutes for focusing on what’s a win and what’s a waste of time.
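
One way to encode a CX-map row so the 'line in the sand' actually drives a decision is sketched below; the phase definition, DV, and threshold values are illustrative, not taken from the HVAC in a Hurry map:

```python
cx_map = {
    "Acquisition": {
        "meaning": "a field tech tries the parts-lookup feature for the first time",
        "dv": "weekly first-time lookups",  # the focal dependent variable
        "transition_cutoff": 1,             # >=1 lookup counts as entering the phase
        "line_in_sand": 50,                 # decision threshold, agreed in advance
    },
}

observed = {"Acquisition": 38}  # illustrative measurement
for phase, row in cx_map.items():
    ok = observed[phase] >= row["line_in_sand"]
    print(f"{phase}: {observed[phase]} {row['dv']} -> "
          f"{'persevere' if ok else 'test new IVs'}")
```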

The image below shows a working CX map for a company (HVAC in a Hurry) that services commercial heating, ventilation, and air-conditioning systems. And this particular CX map is for the specific ‘job’/task/problem of how their field technicians get the replacement parts they need.

[Figure: working CX map for HVAC in a Hurry's field-technician parts job]

For more on CX mapping, you can also check out its page: Tutorial: Customer Experience (CX) Mapping.

Unpacking Continuous Design for HDD

For unpacking the work of design/Continuous Design with HDD, I like to use the 'double diamond' framing of 'right problem' vs. 'right solution', which I first learned about in Donald Norman's seminal book, 'The Design of Everyday Things'.

I’ve organized the balance of this section around three big questions:

  • How do you test that you've found the 'Right Problem'?
  • How do you test that you've found demand and have the 'Right Solution'?
  • How do you test that you've designed the 'Right Solution'?


How do you test that you've found the 'Right Problem'?

Let's say it's an internal project- a 'digital transformation' for an HVAC (heating, ventilation, and air conditioning) service company. The digital team thinks it would be cool to organize the documentation for all the different HVAC equipment the company's technicians service. But, would it be?

The only way to find out is to go out and talk to these technicians! First, you need to test whether you're talking to someone who is one of these technicians. For example, you might have a screening question like: 'How many HVACs did you repair last week?'. If it's <10, you might instead be talking to a handyman or a manager (or someone who's not an HVAC tech at all).

Second, you need to ask non-leading questions. The evidentiary value of a specific answer to a general question is much higher than a specific answer to a specific question. Also, some questions are just leading. For example, if you ask such a subject 'Would you use a documentation system if we built it?', they're going to say yes, just to avoid the awkwardness and sales pitch they expect if they say no.

How do you draft personas? Much more renowned designers than myself (Donald Norman among them) disagree with me about this, but personally I like to draft my personas while I'm creating my interview guide and before I do my first set of interviews. Whether you draft or interview first is also of secondary importance if you're doing HDD- if you're not iteratively interviewing and revising your material based on what you've found, it's not going to be very functional anyway.

Really, the persona (and the jobs-to-be-done) is a means to an end- it should be answering some facet of the question 'Who is our customer, and what's important to them?'. It's iterative: draft, interview, revise, and repeat.

How do you draft jobs-to-be-done? Personally- I like to work these in a similar fashion- draft, interview, revise, and then repeat, repeat, repeat.

You'll use the same interview guide and subjects for these. The template is the same as the personas, but I maintain a separate (though related) tutorial for these:

  • A guide on creating Jobs-to-be-Done (JTBD)
  • A template for drafting jobs-to-be-done (JTBD)

How do you interview subjects? And, action! The #1 place I see teams struggle is at the beginning and it’s with the paradox that to get to a big market you need to nail a series of small markets. Sure, they might have heard something about segmentation in a marketing class, but here you need to apply that from the very beginning.

The fix is to create a screener for each persona. This is a factual question whose job is specifically and only to determine whether a given subject does or does not map to your target persona. For the HVAC in a Hurry technician persona (see above), you might have a screening question like: 'How many HVACs did you repair last week?'. If it's <10, you might instead be talking to a handyman or a manager (or someone who's not an HVAC tech at all).

And this is the point where (if I've made them comfortable enough to be candid with me) teams will ask me, 'But we want to go big- be the next Facebook.' And then we talk about how just about all those success stories where there's a product with, for all intents and purposes, a universal user base started out by killing it in small, specific segments and learning and growing from there.

Sorry for all that, reader, but I find all this so frequently at this point and it’s so crucial to what I think is a healthy practice of HDD it seemed necessary.

The key with the interview guide is to start with general questions where you're testing for a specific answer and then progressively get into more specific questions. Here are some resources:

  • An example interview guide related to the previous tutorials
  • A general take on these interviews in the context of a larger customer discovery/design research program
  • A template for drafting an interview guide

To recap, what’s a ‘Right Problem’ hypothesis? The Right Problem (persona and PS/JTBD) hypothesis is the most fundamental, but the hardest to pin down. You should know what kind of shoes your customer wears and when and why they use your product. You should be able to apply factual screeners to identify subjects that map to your persona or personas.

You should know what people who look like/behave like your customer who don’t use your product are doing instead, particularly if you’re in an industry undergoing change. You should be analyzing your quantitative data with strong, specific, emphatic hypotheses.

If you make software for HVAC (heating, ventilation and air conditioning) technicians, you should have a decent idea of what you’re likely to hear if you ask such a person a question like ‘What are the top 5 hardest things about finishing an HVAC repair?’

In summary, HDD here looks something like this:


01 IDEA: The working idea is that you know your customer and you're solving a problem/doing a job (whatever term feels like it fits for you) that is important to them. If this isn't the case, everything else you're going to do isn't going to matter.

Also, you know the top alternatives, which may or may not be what you see as your direct competitors. This is important as an input into focused testing demand to see if you have the Right Solution.

02 HYPOTHESIS: If you ask non-leading questions (like 'What are the top 5 hardest things about finishing an HVAC repair?'), then you should generally hear relatively similar responses.

03 EXPERIMENTAL DESIGN: You'll want an Interview Guide and, critically, a screener. This is a factual question you can use to make sure any given subject maps to your persona. With the HVAC repair example, this would be something like 'How many HVAC repairs have you done in the last week?', where you're expecting an answer >5. This is important because if your screener isn't tight enough, your interview responses may not converge.

04 EXPERIMENTATION: Get out and interview some subjects- but with a screener and an interview guide. The resources above have more on this, but one key thing to remember is that the interview guide is a guide, not a questionnaire. Your job is to make the interaction as normal as possible, and it's perfectly OK to skip questions or change them. It's also 1000% OK to revise your interview guide during the process.

05 PIVOT OR PERSEVERE: What did you learn? Was it consistent? Good results are: a) We didn't know what was on their A-list and what alternatives they are using, but now we do. b) We knew what was on their A-list and what alternatives they are using- we were pretty much right (doesn't happen as much as you'd think). c) Our interviews just didn't work/converge. Let's try this again with some changes (happens all the time to smart teams and is very healthy).

How do you test that you've found demand and have the 'Right Solution'?

By this, I mean: How do you test whether you have demand for your proposition? How do you know whether it's better enough at solving a problem (doing a job, etc.) than the current alternatives your target persona has available to them now?

If an existing team was going to pick one of these areas to start with, I'd pick this one. While they'll waste time if they haven't found the right problem to solve and, yes, usability does matter, in practice this area of HDD is a good forcing function for really finding out what the team knows vs. doesn't. This is why I show it as a kind of fulcrum between Right Problem and Right Solution.

This is not about usability and it does not involve showing someone a prototype, asking them if they like it, and checking the box.

Lean Startup offers a body of practice that’s an excellent fit for this. However, it’s widely misused because it’s so much more fun to build stuff than to test whether or not anyone cares about your idea. Yeah, seriously- that is the central challenge of Lean Startup.

Here's the exciting part: you can massively improve your odds of success. While Lean Startup does not claim to be able to take any idea and make it successful, it does claim to minimize waste- and that matters a lot. Let's just say that a new product or feature has a 1 in 5 chance of being successful. Using Lean Startup, you can iterate through 5 ideas in the space it would take you to build 1 out (and hope for the best)- this makes the improbable probable, which is pretty much the most you can ask for in the innovation game.

Build, measure, learn, right? Kind of. I'll harp on this since it's important and a common failure mode related to Lean Startup: an MVP is not a 1.0. As the Lean Startup folks (and Eric Ries' book) will tell you, the right order is learn, build, measure. Specifically–

Learn: Who your customer is and what matters to them (see Solving the Right Problem, above). If you don't do this, you'll be throwing darts with your eyes closed. Those darts are a lot cheaper than the darts you'd throw if you were building out the solution all the way (to strain the metaphor some), but far from free.

In particular, I see lots of teams run an MVP experiment and get confusing, inconsistent results. Most of the time, this is because they don’t have a screener and they’re putting the MVP in front of an audience that’s too wide-ranging. A grandmother is going to respond differently than a millennial to the same thing.

Build: An experiment, not a real product, if at all possible (and it almost always is). Then consider MVP archetypes (see below) that will deliver the best results and try them out. You’ll likely have to iterate on the experiment itself some, particularly if it’s your first go.

Measure: Have metrics and link them to a kill decision. The Lean Startup term is ‘pivot or persevere’, which is great and makes perfect sense, but in practice the pivot/kill decisions are hard, and as you design your experiment you should really think about what metrics and thresholds are actually going to convince you.

How do you code an MVP? You don’t. This MVP is a means to running an experiment to test motivation- so formulate your experiment first and then figure out an MVP that will get you the best results with the least amount of time and money. Since this is a practitioner’s guide, a note with regard to ‘time’: that’s both the time you’ll have to invest and how long the experiment will take to conclude. I’ve seen both matter.

The most important first step is just to start with a simple hypothesis about your idea, and I like the form of ‘If we [do something] for [a specific customer/persona], then they will [respond in a specific, observable way that we can measure]’. For example, if you’re building an app for parents to manage allowances for their children, it would be something like ‘If we offer parents an app to manage their kids’ allowances, they will download it, try it, make a habit of using it, and pay for a subscription.’
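To make that template a little more operational, here’s a minimal sketch in Python. The field names, persona, metric, and the 5% threshold are all illustrative assumptions, not part of the original guide:

```python
from dataclasses import dataclass

# Forcing the 'If we [do], then they will [respond]' template into something
# concrete enough to test. All names and numbers here are illustrative.
@dataclass
class Hypothesis:
    action: str              # what we offer or do
    persona: str             # for whom, specifically
    expected_response: str   # an observable, measurable behavior
    metric: str              # what we will measure
    fail_threshold: float    # decided in advance: below this, pivot or kill

allowance_app = Hypothesis(
    action="offer an app to manage kids' allowances",
    persona="parents of school-age children",
    expected_response="download, try, make a habit of it, subscribe",
    metric="trial-to-paid conversion rate",
    fail_threshold=0.05,  # e.g., kill if fewer than 5% of trials convert
)
```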

All that said, for getting started:
  • A guide on testing with Lean Startup
  • A template for creating motivation/demand experiments

To recap, what’s a Right Solution hypothesis for testing demand? The core hypothesis is that you have a value proposition that’s better enough than the target persona’s current alternatives that you’re going to acquire customers.

As you may notice, this creates a tight linkage with your testing from Solving the Right Problem. This is important because while testing value propositions with Lean Startup is way cheaper than building product, it still takes work and you can only run a finite set of tests. So, before you do this kind of testing I highly recommend you’ve iterated to validated learning on what you see below: a persona, one or more PS/JTBD, the alternatives they’re using, and a testable view of why your VP is going to displace those alternatives. With that, your odds of doing quality work in this area dramatically increase!

[Figure: Trent the Technician’s value proposition]

What’s the testing, then? Well, it looks something like this:

[Figure: hypothesis-driven development framework]

01 IDEA : Most practicing scientists will tell you that the best way to get a good experimental result is to start with a strong hypothesis. Validating that you have the Right Problem and know what alternatives you’re competing against is critical to making investments in this kind of testing yield valuable results.

With that, you have a nice clear view of what alternative you’re trying to see if you’re better than.

02 HYPOTHESIS : I like a cause and effect stated here, like: ‘If we [offer something to said persona], they will [react in some observable way].’ This really helps focus your work on the MVP.

03 EXPERIMENTAL DESIGN : The MVP is a means to enable an experiment. It’s important to have a clear, explicit declaration of that hypothesis and for the MVP to deliver a metric for which you will (in advance) decide on a fail threshold (see the sketch after step 05). Most teams find it easier to kill an idea decisively with a kill metric vs. a success metric, even though they’re literally different sides of the same threshold.

04 EXPERIMENTATION : It is OK to tweak the parameters some as you run the experiment. For example, if you’re running a Google AdWords test, feel free to try new and different keyword phrases.

05: PIVOT OR PERSEVERE : Did you end up above or below your fail threshold? If below, pivot and focus on something else. If above, great- what is the next step to scaling up this proposition?
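Returning to step 03’s fail threshold, here is a tiny sketch of the ‘decide the threshold in advance’ mechanic. All the numbers are invented:

```python
def pivot_or_persevere(observed: float, fail_threshold: float) -> str:
    # Deciding the threshold before the experiment keeps us honest when the
    # results come in ambiguous; kill and success metrics are two sides of it.
    return "persevere" if observed >= fail_threshold else "pivot"

# e.g., a smoke-test page converted 3.2% of ad clicks to signups, against a
# pre-registered 5% fail threshold:
print(pivot_or_persevere(observed=0.032, fail_threshold=0.05))  # -> pivot
```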

How does this relate to usability? What’s usability vs. motivation? You might reasonably wonder: If my MVP has something that’s hard to understand, won’t that affect the results? Yes, sure. Testing for usability and the related tasks of building stuff are much more fun and (short-term) gratifying. I can’t emphasize enough how much harder it is for most founders, etc., to push themselves to focus on motivation.

There’s certainly a relationship and, as we transition to the next section on usability, it seems like a good time to introduce the relationship between motivation and usability. My favorite tool for this is BJ Fogg’s Fogg Curve, which appears below. On the y-axis is motivation and on the x-axis is ‘ability’, which tracks with usability: the easier something is to do, the more able the user is to do it. If you imagine a point in the upper left, that would be, say, a cure for cancer: something you really want no matter how hard it is to deal with. On the bottom right would be something like checking Facebook- you may not be super motivated but it’s so easy.

The punchline is that there’s certainly a relationship but beware that for most of us our natural bias is to neglect testing our hypotheses about motivation in favor of testing usability.

[Figure: the Fogg Curve]

First and foremost, delivering great usability is a team sport. Without a strong, co-created narrative, your performance is going to be sub-par. This means your developers, testers, and analysts should be asking lots of hard, inconvenient (but relevant) questions about the user stories. For more on how these fit into an overall design program, let’s zoom out and again stand on the shoulders of Donald Norman.

Usability and User Cognition

To unpack usability in a coherent, testable fashion, I like to use Donald Norman’s 7-step model of user cognition:

[Figure: Norman’s 7-step model of user cognition]

The process starts with a Goal, and that goal interacts with an object in an environment, the ‘World’. With the concepts we’ve been using here, the Goal is equivalent to a job-to-be-done. The World is your application in whatever circumstances your customer will use it (in a cubicle, on a plane, etc.).

The Reflective layer is where the customer is making a decision about alternatives for their JTBD/PS. In his seminal book, The Design of Everyday Things, Donald Norman’s example is deciding whether to keep reading a book as the sun goes down. In the framings we’ve been using, we looked at understanding your customer’s Goals/JTBD in ‘How do you test that you’ve found the right problem?’, and we looked at evaluating their alternatives relative to your own (proposition) in ‘How do you test that you’ve found the right solution?’.

The Behavioral layer is where the user interacts with your application to get what they want- hopefully engaging with interface patterns they know so well they barely have to think about it. This is what we’ll focus on in this section. Critical here is leading with strong narrative (user stories), pairing those with well-understood (by your persona) interface patterns, and then iterating through qualitative and quantitative testing.

The Visceral layer is the lower-level visual cues that a user gets- in the design world this is a lot about good visual design and even more about visual consistency. We’re not going to look at that in depth here, but if you haven’t already I’d make sure you have a working style guide to ensure consistency (see Creating a Style Guide).

How do you unpack the UX Stack for Testability? Back to our example company, HVAC in a Hurry, which services commercial heating, ventilation, and A/C systems, let’s say we’ve arrived at the following tested learnings for Trent the Technician:

As we look at how we’ll iterate to the right solution in terms of usability, let’s say we arrive at the following user story we want to unpack (this would be one of many, even just for the PS/JTBD above):

As Trent the Technician, I know the part number and I want to find it on the system, so that I can find out its price and availability.

Let’s step through the 7 steps above in the context of HDD, with a particular focus on achieving strong usability.

1. Goal This is the PS/JTBD: Getting replacement parts to a job site. An HDD-enabled team would have found this out by doing customer discovery interviews with subjects they’ve screened and validated to be relevant to the target persona. They would have asked non-leading questions like ‘What are the top five hardest things about finishing an HVAC repair?’ and consistently heard that one such thing is sorting out replacement parts. This validates the hypothesis that said PS/JTBD matters.

2. Plan For the PS/JTBD/Goal, which alternative are they likely to select? Is our proposition better enough than the alternatives? This is where Lean Startup and demand/motivation testing are critical. This is where we focused in ‘How do you test that you’ve found the right solution?’, and the HVAC in a Hurry team might have run a series of MVPs to both understand how their subject might interact with a solution (concierge MVP) and gauge whether they’re likely to engage (Smoke Test MVP).

3. Specify Our first step here is just to think through what the user expects to do and how we can make that as natural as possible. This is where drafting testable user stories, looking at comp’s, and then pairing clickable prototypes with iterative usability testing is critical. Following that, make sure your analytics are answering the same questions but at scale and with the observations available.

4. Perform If you did a good job in Specify and there are not overt visual problems (like ‘Can I click this part of the interface?’), you’ll be fine here.

5. Perceive We’re at the bottom of the stack and looping back up from World: Is the feedback from your application readily apparent to the user? For example, if you turn a switch for a lightbulb, you know if it worked or not. Is your user testing delivering similar clarity on user reactions?

6. Interpret Do they understand what they’re seeing? Does it make sense relative to what they expected to happen? For example, if the user just clicked ‘Save’, do they know that whatever they wanted to save is saved and OK? Or not?

7. Compare Have you delivered your target VP? Did they get what they wanted relative to the Goal/PS/JTBD?

How do you draft relevant, focused, testable user stories? Without these, everything else is on a shaky foundation. Sometimes, things will work out. Other times, they won’t. And it won’t be that clear why/not. Also, getting in the habit of pushing yourself on the relevance and testability of each little detail will make you a much better designer and a much better steward of where and why your team invests in building software.

For getting started:
  • A guide on creating user stories
  • A template for drafting user stories

How do you find the relevant patterns and apply them? Once you’ve got great narrative, it’s time to put the best-understood, most expected, most relevant interface patterns in front of your user. Getting there is a process.

For getting started:
  • A guide on interface patterns and prototyping

How do you run qualitative user testing early and often? Once you’ve got something great to test, it’s time to get that design in front of a user, give them a prompt, and see what happens- then rinse and repeat with your design.

For getting started:
  • A guide on qualitative usability testing
  • A template for testing your user stories

How do you focus your outcomes and instrument actionable observation? Once you release product (features, etc.) into the wild, it’s important to make sure you’re always closing the loop with analytics that are a regular part of your agile cadences. For example, in a high-functioning practice of HDD the team should be interested in and reviewing focused analytics to see how they pair with the results of their qualitative usability testing.

For getting started:
  • A guide on quantitative usability testing with Google Analytics

To recap, what’s a Right Solution hypothesis for usability? Essentially, the usability hypothesis is that you’ve arrived at a high-performing UI pattern that minimizes cognitive load and maximizes the user’s ability to act on their motivation to connect with your proposition.

[Figure: the Right Solution usability hypothesis]

01 IDEA : If you’re writing good user stories , you already have your ideas implemented in the form of testable hypotheses. Stay focused and use these to anchor your testing. You’re not trying to test what color drop-down works best- you’re testing which affordances best deliver on a given user story.

02 HYPOTHESIS : Basically, the hypothesis is that ‘For [x] user story, this interface pattern will perform well, assuming we supply the relevant motivation and have the right assessments in place.’

03 EXPERIMENTAL DESIGN : Really, this means having a test set up that, beyond working, links user stories to prompts and narrative which supply motivation, and has discernible assessments that help you make sure the subject didn’t click in the wrong place by mistake.

04 EXPERIMENTATION : It is OK to iterate on your prototypes and even your test plan in between sessions, particularly at the exploratory stages.

05: PIVOT OR PERSEVERE : Did the patterns perform well, or is it worth reviewing patterns and comparables and giving it another go?

There’s a lot of great material and successful practice on the engineering management part of application development. But should you pair program? Do estimates or go NoEstimates? None of these is the right choice for every team all of the time. In this sense, HDD is the only way to reliably drive up your velocity, or f_e. What I love about agile is that fundamental to its design is the coupling and integration of working out how to make your release content successful while you’re figuring out how to make your team more successful.

What does HDD have to offer application development, then? First, I think it’s useful to consider how well HDD integrates with agile in this sense and what existing habits you can borrow from it to improve your practice of HDD. For example, let’s say your team is used to doing weekly retrospectives about its practice of agile. That’s the obvious place to start introducing a retrospective on how your hypothesis testing went and deciding what that should mean for the next sprint’s backlog.

Second, let’s look at the linkage from continuous design. Primarily, what we’re looking to do is move fewer designs into development through more disciplined experimentation before we invest in development. This leaves the developers to do things better and keep the pipeline healthier (faster and able to produce more content or story points per sprint). We’d do this by making sure we’re dealing with a user that exists, a job/problem that exists for them, and only propositions that we’ve successfully tested with non-product MVP’s.

But wait– what does that exactly mean: ‘only propositions that we’ve successfully tested with non-product MVP’s’? In practice, there’s no such thing as fully validating a proposition. You’re constantly looking at user behavior and deciding where you’d be best off improving. To create balance and consistency from sprint to sprint, I like to use a ‘UX map’. You can read more about it at that link, but the basic idea is that for a given JTBD:VP pairing you map out the customer experience (CX) arc broken into progressive stages that each have a description, a dependent variable you’ll observe to assess success, and ideas on things (independent variables or ‘IV’s’) to test. For example, here’s what such a UX map might look like for HVAC in a Hurry’s work on the JTBD of ‘getting replacement parts to a job site’.

[Figure: UX map for HVAC in a Hurry]
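Since the map itself is an image, here is the same idea expressed as data; a minimal sketch where the stages, dependent variables, and IVs are invented for illustration:

```python
# One JTBD:VP pairing; each CX stage gets a description, a dependent variable
# (DV) we observe to assess success, and candidate IVs to test.
ux_map = {
    "jtbd": "getting replacement parts to a job site",
    "stages": [
        {"stage": "discover part", "dv": "% of searches that find the part",
         "ivs": ["search by part number", "search by unit model"]},
        {"stage": "check availability", "dv": "% of lookups showing stock + price",
         "ivs": ["inline stock badge", "supplier callout"]},
        {"stage": "order part", "dv": "lookup-to-order conversion",
         "ivs": ["one-tap reorder", "saved job-site addresses"]},
    ],
}
```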

From there, how can we use HDD to bring better, more testable design into the development process? One thing I like to do with user stories and HDD is to make a habit of pairing every single story with a simple, analytical question that would tell me whether the story is ‘done’ from the standpoint of creating the target user behavior or not. From there, I consider focal metrics. Here’s what that might look like at HinH.

[Figure: user stories paired with analytical questions and focal metrics at HinH]
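In the same spirit, here is a hypothetical sketch of that story-to-question pairing as data. The story comes from the text above; the question and metric are illustrative:

```python
# One entry per user story: the single analytical question that tells us
# whether the story is 'done' behaviorally, plus a focal metric to watch.
story_checks = [
    {
        "story": "As Trent the Technician, I know the part number and I want "
                 "to find it on the system, so that I can find out its price "
                 "and availability.",
        "question": "Do technicians who search by part number actually reach "
                    "a price/availability view?",
        "focal_metric": "part-number search -> part-detail view rate",
    },
]
```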

For the last couple of decades, test and deploy/ops were often treated like a kind of stepchild to development- something that had to happen at the end of development and was the sole responsibility of an outside group of specialists. It didn’t make sense then, and now an integral test capability is table stakes for getting to a continuous product pipeline, which is at the core of HDD itself.

A continuous pipeline means that you release a lot. Getting good at releasing relieves a lot of energy-draining stress on the product team as well as creating the opportunity for rapid learning that HDD requires. Interestingly, research by outfits like DORA (now part of Google) and CircleCI shows teams that are able to do this both release faster and encounter fewer bugs in production.

Amazon famously releases code every 11.6 seconds. What this means is that a developer can push a button to commit code and everything from there to that code showing up in front of a customer is automated. How does that happen? For starters, there are two big (related) areas: Test & Deploy.

While there is some important plumbing that I’ll cover in the next couple of sections, in practice most teams struggle with test coverage. What does that mean? In principle, what it means is that even though you can’t test everything, you iterate toward test automation coverage that catches most bugs before they end up in front of a user. For most teams, that means a ‘pyramid’ of tests like you see here, where the x-axis is the number of tests and the y-axis is the level of abstraction of the tests.

[Figure: the test pyramid]

The reason for the pyramid shape is that the tests are progressively more work to create and maintain, and also each one provides less and less isolation about where a bug actually resides. In terms of iteration and retrospectives, what this means is that you’re always asking ‘What’s the lowest level test that could have caught this bug?’.

Unit tests isolate the operation of a single function and make sure it works as expected. Integration tests span two or more functions, and system tests, as you’d guess, more or less emulate the way a user or endpoint would interact with a system.
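As a minimal illustration of the bottom two layers (using pytest naming conventions; price_with_tax and quote_total are hypothetical functions, not from the original guide):

```python
def price_with_tax(price: float, tax_rate: float) -> float:
    return round(price * (1 + tax_rate), 2)

def quote_total(part_prices: list[float], tax_rate: float) -> float:
    # integrates two units: summing the parts and applying tax
    return price_with_tax(sum(part_prices), tax_rate)

def test_price_with_tax():   # unit test: isolates one function
    assert price_with_tax(100.0, 0.05) == 105.0

def test_quote_total():      # integration test: spans two functions
    assert quote_total([40.0, 60.0], 0.05) == 105.0
```

If test_quote_total fails while test_price_with_tax passes, the bug is probably in the summing logic; that isolation is exactly what the pyramid buys you.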

Feature Flags: These are a separate but somewhat complementary facility. The basic idea is that as you add new features, each has a flag that can enable or disable it. They start out disabled, and you make sure they don’t break anything. Then, on small sets of users, you can enable them and test whether a) the metrics look normal and nothing’s broken and b) closer to the core of HDD, whether users are actually interacting with the new feature.
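A toy sketch of that mechanic follows; this is not any particular flagging product, and the flag name, rollout percentage, and bucketing scheme are all invented:

```python
# Flags start disabled, then get enabled for a small slice of users so we can
# watch metrics and actual usage before a wider rollout.
FLAGS = {"new-part-search": {"enabled": True, "rollout_pct": 5}}

def flag_is_on(flag_name: str, user_id: int) -> bool:
    flag = FLAGS.get(flag_name, {"enabled": False, "rollout_pct": 0})
    if not flag["enabled"]:
        return False
    # deterministic bucketing: a given user gets a stable experience
    return (user_id % 100) < flag["rollout_pct"]

if flag_is_on("new-part-search", user_id=103):  # user 103 is in the 5% bucket
    print("render the new feature (and log exposure for the experiment)")
```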

In the olden days (which is when I last did this kind of thing for work), if you wanted to update a web application, you had to log in to a server, upload the software, and then configure it, maybe with the help of some scripts. Very often, things didn’t go according to plan, for the predictable reason that there was a lot of opportunity for variation between how the update was tested and the machine you were updating, not to mention how you were updating.

Now computers do all that- but you still have to program them. As such, deployment has increasingly become a job where you’re coding solutions on top of platforms like Kubernetes, Chef, and Terraform. The folks doing that work are (hopefully) working closely with developers. For example, rather than spending time and money on writing documentation for an upgrade, the team would collaborate on code/config that runs on the kind of platform I mentioned earlier.

Pipeline Automation

Most teams with a continuous pipeline orchestrate something like what you see below with an application made for this purpose, like Jenkins or CircleCI. The Manual Validation step you see is, of course, optional and not a prevalent part of truly continuous delivery. In fact, if you automate up to the point of a staging server or similar before you release, that’s what’s generally called continuous integration.

Finally, the two yellow items you see are where the team centralizes their code (version control) and the build that they’re taking from commit to deploy (artifact repository).

[Figure: continuous delivery pipeline]
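To make the stage ordering concrete, here is a toy sketch. This is not real Jenkins or CircleCI configuration; every function is an illustrative stub:

```python
# Toy stubs, for illustration only -- a real pipeline would be CI config plus
# your build tooling, not Python functions.
def compile_and_package(commit): return {"commit": commit}   # -> artifact repository
def run_tests(build): print("unit -> integration -> system tests pass")
def approved(build): return True                             # optional manual gate
def deploy(build, env): print(f"deployed {build['commit']} to {env}")

def run_pipeline(commit, manual_validation=False):
    build = compile_and_package(commit)   # version control in, artifact out
    run_tests(build)
    deploy(build, env="staging")          # stop here: continuous integration
    if manual_validation and not approved(build):
        return
    deploy(build, env="production")       # all the way: continuous deployment

run_pipeline("abc123")
```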

To recap, what’s the hypothesis?

Well, you can’t test everything, but you can make sure you’re testing what tends to affect your users, and likewise for the deployment process. I’d summarize this area of HDD as follows:

[Figure: the continuous delivery hypothesis]

01 IDEA : You can’t test everything and you can’t foresee everything that might go wrong. This is important for the team to internalize. But you can iteratively, purposefully focus your test investments.

02 HYPOTHESIS : Relative to the test pyramid, you’re looking to get to a place where you’re finding issues with the least expensive, least complex test possible- not an integration test when a unit test could have caught the issue, and so forth.

03 EXPERIMENTAL DESIGN : As you run integrations and deployments, you see what happens! Most teams move from continuous integration (a deploy-ready system that’s not actually in front of customers) to continuous deployment.

04 EXPERIMENTATION : In retrospectives, it’s important to look at the test suite and ask what would have made the most sense and how the current processes were or weren’t facilitating that.

05: PIVOT OR PERSEVERE : It takes work, but teams get there all the time- and research shows they end up both releasing more often and encountering fewer production bugs, believe it or not!

Topline, I would say it’s a way to unify and focus your work across those disciplines. I’ve found that’s a pretty big deal. While none of those practices are hard to understand, practice on the ground is patchy. Usually, the problem is having the confidence that doing things well is going to be worthwhile, and knowing who should be participating when.

My hope is that with this guide and the supporting material (and of course the wider body of practice), teams will get in the habit of always having a set of hypotheses, and that this will improve their work and their confidence as a team.

Naturally, these various disciplines have a lot to do with each other, and I’ve summarized some of that here:

[Figure: how the HDD disciplines relate]

Mostly, I find practitioners learn about this through their work, but I’ll point out a few big points of intersection that I think are particularly notable:

  • Learn by Observing Humans We all tend to jump on solutions and over-invest in them when we should be observing our user, seeing how they behave, and then iterating. HDD helps reinforce problem-first diagnosis through its connections to relevant practice.
  • Focus on What Users Actually Do A lot of things might happen- more than we can deal with properly. The good news is that by just observing what actually happens you can make things a lot easier on yourself.
  • Move Fast, but Minimize Blast Radius Working across so many types of org’s at present (startups, corporations, a university), I can’t overstate how important this is and yet how big a shift it is for more traditional organizations. The idea of ‘moving fast and breaking things’ is terrifying to these places, and the reality is with practice you can move fast and rarely break things/only break them a tiny bit. Without this, you end up stuck waiting for someone else to create the perfect plan or for that next super important hire to fix everything (spoiler: it won’t and they don’t).
  • Minimize Waste Succeeding at innovation is improbable, and yet it happens all the time. Practices like Lean Startup do not warrant that by following them you’ll always succeed; however, they do promise that by minimizing waste you can test five ideas in the time/money/energy it would otherwise take you to test one, making the improbable probable.

What I love about Hypothesis-Driven Development is that it solves a really hard problem with practice: all these behaviors are important, and yet you can’t learn to practice them all immediately. What HDD does is give you a foundation where you can see what’s similar across these disciplines and how your practice in one reinforces the others. It’s also a good tool to decide where you need to focus on any given project or team.

Copyright © 2022 Alex Cowan · All rights reserved.

Why hypothesis-driven development is key to DevOps


Opensource.com

The definition of DevOps offered by Donovan Brown is "the union of people, process, and products to enable continuous delivery of value to our customers." It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.


Reflecting on the past

Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.

In the days of waterfall, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.

[Figure: one waterfall release X with eight features]

Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are delivering value—but with a typical release cadence of six months to two years, the value of the features declines as the world continues to move on. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.

The introduction of agile allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond.

[Figure: agile delivery with releases X.1, X.2, and X.3]

Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. We are on the path of continuous delivery, focused on our key stakeholders: our users.

Using deployment rings and/or feature flags, we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases; and continuously pivot on learnings and feedback.

When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden).

[Figure: a release with feature flags 2, 4, and 8 toggled OFF]

Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). We can fine-tune the features (value) of each release after deploying to production.

Ring-based deployment limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel.

[Figure: ring-based deployment]

Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment.

Feature flags decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features.

[Figure: toggling feature flags on/off]

When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release.

See deploying new releases: feature flags or rings, what's the cost of feature flags, and breaking down walls between people, process, and products for discussions on feature flags, deployment rings, and related topics.

Adding hypothesis-driven development to the mix

Hypothesis-driven development is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them.

Template: We believe {customer/business segment} wants {product/feature/service} because {value proposition}.

Example: We believe that users want to be able to select different themes because it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement.

Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps:

  • Observe your user
  • Define a hypothesis and an experiment to assess the hypothesis
  • Define clear success criteria (e.g., a 5% increase in user engagement)
  • Run the experiment
  • Evaluate the results and either accept or reject the hypothesis (a minimal sketch of this step follows below)
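For the "define success criteria" and "evaluate" steps, here is a minimal sketch against the theme example above. The criteria come from the template; the observed numbers are invented:

```python
# Pre-declared criteria: >=50% of users pick a non-default theme AND a >=5%
# increase in user engagement.
def evaluate_theme_experiment(non_default_share: float, engagement_lift: float) -> str:
    meets_criteria = non_default_share >= 0.50 and engagement_lift >= 0.05
    return "accept hypothesis" if meets_criteria else "reject hypothesis"

print(evaluate_theme_experiment(non_default_share=0.62, engagement_lift=0.03))
# -> "reject hypothesis": theme adoption passed, but engagement missed the 5% bar
```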

Let's have another look at our sample release with eight hypothetical features.

[Figure: experiment outcomes for the eight features across releases X.1, X.2, and X.3]

When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. We do not want to carry waste that is not delivering value or delighting our users! The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. We only expose the features that passed the experiment and satisfy the users.
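Where the text mentions A/B testing feature 3 against variant 3.2, one simple way to judge whether a difference in usage is signal rather than noise is a two-proportion z-test. A sketch with invented counts:

```python
from math import sqrt

# Hypothetical A/B evaluation: feature 3 (A) vs. variant 3.2 (B), comparing
# the share of exposed users who actually used the feature.
def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(success_a=180, n_a=1000, success_b=240, n_b=1000)
print(f"z = {z:.2f}")  # ~3.29; |z| > 1.96 means the lift is unlikely to be noise at ~95%
```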

Hypothesis-driven development lights up progressive exposure

When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut.

But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). It is all about quality and delighting our users, as outlined in principles 1, 3, and 7 of the Agile Manifesto:

  • Our highest priority is to satisfy the customers through early and continuous delivery of value.
  • Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Working software is the primary measure of progress.

More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of.

The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: Transparency, Inspection, and Adaptation.


But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A/B testing to make decisions on feedback, such as likes/dislikes and value/waste.

Hypothesis-driven development:

  • Is about a series of experiments to confirm or disprove a hypothesis. Identify value!
  • Delivers a measurable conclusion and enables continued learning.
  • Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns!
  • Enables us to understand the evolving landscape into which we progressively expose value.

Progressive exposure:

  • Is not an excuse to hide non-production-ready code. Always ship quality!
  • Is about deploying a release of features through rings in production. Limit blast radius!
  • Is about enabling or disabling features in production. Fine-tune release values!
  • Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. Observe, sense, act!

What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback.


InVisionApp, Inc.

Inside Design

5 steps to a hypothesis-driven design process

March 22, 2018

Say you’re starting a greenfield project, or you’re redesigning a legacy app. The product owner gives you some high-level goals. Lots of ideas and questions are in your mind, and you’re not sure where to start.

Hypothesis-driven design will help you navigate through an unknown space so you can come out at the end of the process with actionable next steps.

Ready? Let’s dive in.

Step 1: Start with questions and assumptions

On the first day of the project, you’re curious about all the different aspects of your product. “How could we increase the engagement on the homepage?” “What features are important for our users?”

Related: 6 ways to speed up and improve your product design process

To reduce risk, I like to take some time to write down all the unanswered questions and assumptions. So grab some sticky notes and write all your questions down on the notes (one question per note).

I recommend that you use the How Might We technique from IDEO to phrase the questions and turn your assumptions into questions. It’ll help you frame the questions in a more open-ended way to avoid building the solution into the statement prematurely. For example, say you have an idea that you want to make riders feel more comfortable by showing them how many rides the driver has completed. You can rephrase the question as “How might we ensure riders feel comfortable when taking a ride?” and leave the solution part for a later step.

“It’s easy to come up with design ideas, but it’s hard to solve the right problem.”

It’s even more valuable to have your team members participate in the question brainstorming session. Having diverse disciplines in the room always brings fresh perspectives and leads to a more productive conversation.

Step 2: Prioritize the questions and assumptions

Now that you have all the questions on sticky notes, organize them into groups to make it easier to review them. It’s especially helpful if you can do the activity with your team so you can have more input from everybody.

When it comes to choosing which question to tackle first, think about what would impact your product the most or what would bring the most value to your users.

If you have a big group, you can Dot Vote to prioritize the questions. Here’s how it works: Everyone has three dots, and each person gets to vote on what they think is the most important question to answer in order to build a successful product. It’s a common prioritization technique that’s also used in the Sprint book by Jake Knapp—he writes, “The prioritization process isn’t perfect, but it leads to pretty good decisions and it happens fast.”

Related: Go inside design at Google Ventures

Step 3: Turn them into hypotheses

After the prioritization, you now have a clear question in mind. It’s time to turn the question into a hypothesis. Think about how you would answer the question.

Let’s continue the previous ride-hailing service example. The question you have is “How might we make people feel safe and comfortable when using the service?”

Based on this question, the solutions can be:

  • Sharing the rider’s location with friends and family automatically
  • Displaying more information about the driver
  • Showing feedback from previous riders

Now you can combine the solution and question, and turn it into a hypothesis. A hypothesis is a framework that can help you clearly define the question and solution, and eliminate assumptions.

From Lean UX

We believe that [sharing more information about the driver’s experience and stories]
For [the riders]
Will [make riders feel more comfortable and connected throughout the ride]

Step 4: Develop an experiment and test the hypothesis

Develop an experiment so you can test your hypothesis. Our test will follow the scientific method, so it depends on collecting empirical and measurable evidence in order to obtain new knowledge. In other words, it’s crucial to have a measurable outcome for the hypothesis so we can determine whether it has succeeded or failed.

There are different ways you can create an experiment, such as an interview, a survey, landing page validation, or usability testing. It could also be something that’s built into the software to get quantitative data from users. Write down what the experiment will be, and define the outcomes that determine whether the hypothesis is valid. A well-defined experiment can validate or invalidate the hypothesis.

In our example, we could define the experiment as “We will run X studies to show more information about a driver (number of rides, years of experience), and ask follow-up questions to identify the rider’s emotion associated with this ride (safe, fun, interesting, etc.). We will know the hypothesis is valid when more than 70% of participants identify the ride as safe or comfortable.”
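As a quick sketch of that decision rule (the responses below are invented; “safe” and “comfortable” count as positive):

```python
# Ten invented follow-up responses from study participants.
responses = ["safe", "fun", "comfortable", "safe", "boring",
             "safe", "comfortable", "safe", "safe", "safe"]

positive = sum(r in ("safe", "comfortable") for r in responses)
share = positive / len(responses)
verdict = "valid" if share > 0.70 else "not validated"
print(f"{share:.0%} positive -> hypothesis {verdict}")  # 80% positive -> valid
```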

After defining the experiment, it’s time to get the design done. You don’t need to have every design detail thought through. You can focus on designing only what needs to be tested.

When the design is ready, you’re ready to run the test. Recruit the users you want to target, have a time frame, and put the design in front of the users.

Step 5: Learn and build

You just learned that the result was positive and you’re excited to roll out the feature. That’s great! If the hypothesis failed, don’t worry—you’ll be able to gain some insights from that experiment. Now you have some new evidence that you can use to run your next experiment. In each experiment, you’ll learn something new about your product and your customers.

“Design is a never-ending process.”

What other information can you show to make riders feel safe and comfortable? That can be your next hypothesis. You now have a feature that’s ready to be built, and a new hypothesis to be tested.

Principles from The Lean Startup

We often assume that we understand our users and know what they want. It’s important to slow down and take a moment to understand the questions and assumptions we have about our product.

After testing each hypothesis, you’ll get a clearer path of what’s most important to the users and where you need to dig deeper. You’ll have a clear direction for what to do next.

by Sylvia Lai

Sylvia Lai helps startups and enterprises solve complex problems through design thinking and user-centered design methodologies at Pivotal Labs. She is the biggest advocate for the users; making sure their voices are heard is her number one priority. Outside of work, she loves mentoring other designers through one-on-one conversations. Connect with her through LinkedIn or Twitter.


Scrum and Hypothesis Driven Development

by Dave West


Scrum was built to better manage risk and deliver value by focusing on inspection and encouraging adaptation. It uses an empirical approach combined with self-organizing, empowered teams to effectively work on complex problems. And after reading Jeff Gothelf’s and Josh Seiden’s book “Sense and Respond: How Successful Organizations Listen to Customers and Create New Products Continuously”, I realized that the world is full of complex problems. This got me thinking about the relationship between Scrum and modern organizations as they pivot toward becoming able to ‘sense and respond’. So, I decided to ask Jeff Gothelf… Here is a condensed version of our conversation.


Sense & Respond was exactly this attempt to change the hearts and minds of managers, executives and aspiring managers. It makes the case that first and foremost, any business of scale or that seeks to scale is in the software business. We share a series of compelling case studies to illustrate how this is true across nearly every industry. We then move on to the second half of the book where we discuss how managing a software-based business is different. We cover culture, process, staffing, planning, budgeting and incentives. Change has to be holistic.

What you are describing is the challenge of ownership. Product Owner (PO) is the role in the Scrum Framework empowered to make decisions about what and when things are in the product. But disempowerment is a real problem in most organizations, with their POs not having the power to make decisions. Is this something you see when introducing the ideas of Sense and Respond?

There will always be situations where things simply have to get built. Legal and compliance are two great examples of this. In these low-risk, low-uncertainty situations, a more straightforward execution is usually warranted. That said, just because a feature has to be included for compliance reasons doesn’t mean there is only one way to implement it. What teams will often find is that there is actual flexibility in how these (actual) requirements can be implemented, with some being more successful and less distracting to the overall user experience than others. The level of discovery you would expend on these features is admittedly smaller, but it shouldn’t be thrown out altogether, as these features still need to figure into a holistic workflow.


Stratechi.com


“A fact is a simple statement that everyone believes. It is innocent, unless found guilty. A hypothesis is a novel suggestion that no one wants to believe. It is guilty until found effective.”

– Edward Teller, Nuclear Physicist

During my first brainstorming meeting on my first project at McKinsey, this very serious partner, who had a PhD in Physics, looked at me and said, “So, Joe, what are your main hypotheses?” I looked back at him, perplexed, and said, “Ummm, my what?” I was used to people simply asking, “what are your best ideas, opinions, thoughts, etc.” Over time, I began to understand the importance of hypotheses and the role they play in McKinsey’s problem solving: separating ideas and opinions from facts.

What is a Hypothesis?

“Hypothesis” is probably one of the top 5 words used by McKinsey consultants. And, being hypothesis-driven was required to have any success at McKinsey. A hypothesis is an idea or theory, often based on limited data, which is typically the beginning of a thread of further investigation to prove, disprove or improve the hypothesis through facts and empirical data.

The first step in being hypothesis-driven is to focus on the highest potential ideas and theories of how to solve a problem or realize an opportunity.

Let’s go over an example of being hypothesis-driven.

Let’s say you own a website, and you brainstorm ten ideas to improve web traffic, but you don’t have the budget to execute all ten ideas. The first step in being hypothesis-driven is to prioritize the ten ideas based on how much impact you hypothesize they will create.

[Figure: hypothesis-driven prioritization example]

The second step in being hypothesis-driven is to apply the scientific method to your hypotheses by creating the fact base to prove or disprove your hypothesis, which then allows you to turn your hypothesis into fact and knowledge. Running with our example, you could prove or disprove your hypothesis on the ideas you think will drive the most impact by executing:

1. An analysis of previous research and the performance of the different ideas
2. A survey where customers rank order the ideas
3. An actual test of the ten ideas to create a fact base on click-through rates and cost (see the sketch below)
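As a toy sketch of option 3 (all the click-through rates and costs below are invented), ranking the ideas on observed evidence rather than gut:

```python
# Invented test results for three of the ten traffic ideas.
results = {
    "guest blogging":   {"ctr": 0.031, "cost_per_click": 0.45},
    "paid search ads":  {"ctr": 0.024, "cost_per_click": 1.10},
    "email newsletter": {"ctr": 0.048, "cost_per_click": 0.12},
}

# Rank by observed click-through per dollar, i.e., the fact base, not the hunch.
ranked = sorted(results.items(),
                key=lambda kv: kv[1]["ctr"] / kv[1]["cost_per_click"],
                reverse=True)
for idea, stats in ranked:
    print(idea, stats)  # email newsletter first, paid search ads last
```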

While there are many other ways to validate the hypothesis on your prioritization, I find most people do not take this critical step in validating a hypothesis. Instead, they apply bad logic to many important decisions. An idea pops into their head, and then somehow it just becomes a fact.

One of my favorite lousy logic moments was a CEO who stated,

“I’ve never heard our customers talk about price, so the price doesn’t matter with our products, and I’ve decided we’re going to raise prices.”

Luckily, his management team was able to do a survey to dig deeper into the hypothesis that customers weren’t price-sensitive. Well, of course, they were and through the survey, they built a fantastic fact base that proved and disproved many other important hypotheses.

Why is being hypothesis-driven so important?

Imagine if medicine never actually used the scientific method. We would probably still be living in a world of lobotomies and bleeding people. Many organizations are still stuck in the dark ages, having built a house of cards on opinions disguised as facts, because they don’t prove or disprove their hypotheses. Decisions made on top of decisions, made on top of opinions, steer organizations clear of reality and the facts necessary to objectively evolve their strategic understanding and knowledge. I’ve seen too many leadership teams led solely by gut and opinion. The problem with intuition and gut is if you don’t ever prove or disprove if your gut is right or wrong, you’re never going to improve your intuition. There is a reason why being hypothesis-driven is the cornerstone of problem solving at McKinsey and every other top strategy consulting firm.

How do you become hypothesis-driven?

Most people are idea-driven, and constantly have hypotheses on how the world works and what they or their organization should do to improve. Though, there is often a fatal flaw in that many people turn their hypotheses into false facts, without actually finding or creating the facts to prove or disprove their hypotheses. These people aren’t hypothesis-driven; they are gut-driven.

The conversation typically goes something like “doing this discount promotion will increase our profits” or “our customers need to have this feature” or “morale is in the toilet because we don’t pay well, so we need to increase pay.” These should all be hypotheses that need the appropriate fact base, but instead, they become false facts, often leading to unintended results and consequences. In each of these cases, to become hypothesis-driven necessitates a different framing.

• Instead of “doing this discount promotion will increase our profits,” a hypothesis-driven approach is to ask “what are the best marketing ideas to increase our profits?” and then conduct a marketing experiment to see which ideas increase profits the most.

• Instead of “our customers need to have this feature,” ask the question, “what features would our customers value most?” And, then conduct a simple survey having customers rank order the features based on value to them.

• Instead of “morale is in the toilet because we don’t pay well, so we need to increase pay,” conduct a survey asking, “What is the level of morale?”, “What are potential issues affecting morale?”, and “What are the best ideas to improve morale?”

Beyond watching out for just following your gut, here are some of the other best practices in being hypothesis-driven:

Listen to Your Intuition

Your mind has taken the collision of your experiences and everything you’ve learned over the years to create your intuition, which are those ideas that pop into your head and those hunches that come from your gut. Your intuition is your wellspring of hypotheses. So listen to your intuition, build hypotheses from it, and then prove or disprove those hypotheses, which will, in turn, improve your intuition. Intuition without feedback will over time typically evolve into poor intuition, which leads to poor judgment, thinking, and decisions.

Constantly Be Curious

I’m always curious about cause and effect. At Sports Authority, I had a hypothesis that customers who received service and assistance as they shopped were worth more than customers who didn’t receive assistance from an associate. We figured out how to prove or disprove this hypothesis by tying surveys to transactional data of customers, and we found the hypothesis was true, which led us to a broad initiative around improving service. The key is you have to be always curious about what you think does or will drive value, create hypotheses, and then prove or disprove those hypotheses.

Validate Hypotheses

You need to validate and prove or disprove hypotheses. Don’t just chalk up an idea as fact. In most cases, you’re going to have to create a fact base utilizing logic, observation, testing (see the section on Experimentation ), surveys, and analysis.

Be a Learning Organization

The foundation of learning organizations is the testing of and learning from hypotheses. I remember my first strategy internship at Mercer Management Consulting when I spent a good part of the summer combing through the results, findings, and insights of thousands of experiments that a banking client had conducted. It was fascinating to see the vastness and depth of their collective knowledge base. And, in today’s world of knowledge portals, it is so easy to disseminate, learn from, and build upon the knowledge created by companies.


QuantStrat TradeR

Trading, quantstrat, R, and more.

Introduction to Hypothesis Driven Development — Overview of a Simple Strategy and Indicator Hypotheses

This post will begin to apply a hypothesis-driven development framework (that is, the framework written by Brian Peterson on how to do strategy construction correctly, found here) to a strategy I’ve come across on SeekingAlpha. Namely, Cliff Smith posted about a conservative bond rotation strategy, which makes use of short-term treasuries, long-term treasuries, convertibles, emerging market debt, and high-yield corporate debt–that is, SHY, TLT, CWB, PCY, and JNK. What this post will do is try to put a more formal framework on whether or not this strategy is a valid one to begin with.

One note: For the sake of balancing succinctness for blog consumption and to demonstrate the computational techniques more quickly, I’ll be glossing over background research write-ups for this post/strategy, since it’s yet another take on time-series/cross-sectional momentum, except pared down to something more implementable for individual investors, as opposed to something that requires a massive collection of different instruments for massive, institutional-class portfolios.

Introduction, Overview, Objectives, Constraints, Assumptions, and Hypotheses to be Tested:

Momentum. It has been documented many times. For the sake of brevity, I’ll let readers follow the links if they’re so inclined, but among them are Jegadeesh and Titman’s seminal 1993 paper, Mark Carhart’s 1997 paper, Andreu et al. (2012), Barroso and Santa-Clara (2013), Ilmanen’s Expected Returns (which covers momentum), and others. This list, of course, is far from exhaustive, but the point stands. Formation periods of several months (up to a year) should predict returns moving forward on some holding period, be it several months, or as is more commonly seen, one month.

Furthermore, momentum applies in two varieties–cross sectional, and time-series. Cross-sectional momentum asserts that assets that outperformed among a group will continue to outperform, while time-series momentum asserts that assets that have risen in price during a formation period will continue to do so for the short-term future.

Cliff Smith’s strategy depends on the latter, effectively, among a group of five bond ETFs. I am not certain of the objective of the strategy (he didn’t mention it), as PCY, JNK, and CWB, while they may be fixed-income in name, possess volatility on the order of equities. I suppose one possible “default” objective would be to achieve an outperforming total return against an equal-weighted benchmark, both rebalanced monthly.

The constraints are that one would need a sufficient amount of capital such that fixed transaction costs are negligible, since the strategy is a single-instrument rotation type, meaning that each month may have two-way turnover of 200% (sell one ETF, buy another). On the other hand, one would assume that the amount of capital deployed is small enough such that execution costs of trading do not materially impact the performance of the strategy. That is to say, moving multiple billions from one of these ETFs to the other is a non-starter. As all returns are computed close-to-close for the sake of simplicity, this creates the implicit assumption that the market impact and execution costs are very small compared to overall returns.

There are two overarching hypotheses to be tested in order to validate the efficacy of this strategy:

1) Time-series momentum: while it has been documented for equities and even industry/country ETFs, it may not yet have been formally documented for fixed-income ETFs and their corresponding mutual funds. In order to validate this strategy, it should be investigated whether the particular instruments it selects adhere to the same phenomenon.

2) Cross-sectional momentum: again, while this has been heavily demonstrated in the past with regard to equities, ETFs are fairly new, and of the five mutual funds Cliff Smith selected, the latest one only has data going back to 1997; the ability of less sophisticated investors to easily access diversified fixed-income markets is a relatively recent innovation.

Essentially, both of these can be tested over a range of parameters (1-24 months).

Another note: with hypothesis-driven strategy development, the backtest is to be *nothing more than a confirmation of all the hypotheses up to that point*. That is, re-optimizing on the backtest itself means overfitting. Any proposed change to a strategy should be done in the form of tested hypotheses, as opposed to running a bunch of backtests and selecting the best trials. Taken another way, this means that every single proposed element of a strategy needs to have some form of strong hypothesis accompanying it, in order to be justified.

So, here are the two hypotheses I tested on the corresponding mutual funds:

Essentially, in this case, I take a pooled regression (that is, I pool the five instruments together into one giant vector) and regress the cumulative sum of monthly returns over the formation period against the next month’s return. I also do the same thing using cross-sectional ranks for each month, performing a rank-rank regression. The sample I used was the five mutual funds (CNSAX, FAHDX, VUSTX, VFISX, and PREMX) from their inception to March 2009; since the data for the final ETF begins in April of 2009, I set aside the ETF data for out-of-sample backtesting.
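For concreteness, here is a minimal sketch of such a pooled regression and rank-rank regression in R. It assumes monthRets is an xts object of monthly returns for the five funds; the function body is an illustration of the technique, not the exact code behind the results:

require(xts)       # also loads zoo (rollapplyr, lag)
require(reshape2)  # melt()

returnRegression <- function(monthRets, nMonths) {
  # formation-period signal: trailing sum of monthly returns
  formation <- rollapplyr(monthRets, width = nMonths, FUN = sum)
  formation <- na.omit(lag(formation))      # shift so the signal predicts the *next* month
  ahead     <- monthRets[index(formation)]  # next month's returns, aligned to the signal

  # time-series test: pool all five instruments into one vector
  meltedAverage <- melt(data.frame(formation))
  meltedReturns <- melt(data.frame(ahead))

  # cross-sectional test: month-by-month ranks of the signal and of the future return
  meltedRankAvg     <- melt(data.frame(t(apply(formation, 1, rank))))
  meltedRankReturns <- melt(data.frame(t(apply(ahead, 1, rank))))

  # returns are roughly zero-centered, so drop the intercept; ranks are not, so keep it
  lmfit     <- lm(meltedReturns$value ~ meltedAverage$value - 1)
  rankLmfit <- lm(meltedRankReturns$value ~ meltedRankAvg$value)
  rbind(summary(lmfit)$coefficients, summary(rankLmfit)$coefficients)
}

Calling returnRegression(monthRets, nMonths = 2) gives the estimates and p-values for the two-month setting, and looping nMonths over 1:24 sweeps the full parameter range.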

Here are the results:

[Figure: regression estimates and p-values for the pooled time-series and rank-rank regressions over 1–24 month formation periods]

Of interest is that while much of the momentum literature specifies a reversion effect in time-series momentum at 12 months or greater, all the regression coefficients in this case (even up to 24 months!) proved to be positive, with the very long-term coefficients possessing more statistical significance than the short-term ones. Nevertheless, Cliff Smith’s chosen parameters (the two- and four-month settings) possess statistical significance at least at the 10% level. However, if one were highly conservative in rejecting strategies, that in and of itself may be reason enough to reject this strategy right here.

However, the rank-rank regression (that is, regressing the next month’s cross-sectional rank on the past n-month-sum cross-sectional rank) proved statistically significant beyond any doubt, with all p-values effectively zero. In short, there is extremely strong evidence for cross-sectional momentum among these five assets, extending out to at least two years. Furthermore, since SHY (or VFISX, the short-term treasury fund) is among the assets chosen and serves as a proxy for the risk-free rate, including it in the cross-sectional rankings adds an implicit condition: for an asset to be invested in (as this is a top-1 asset rotation strategy), it must outperform the risk-free asset; otherwise, by process of elimination, the strategy invests in the risk-free asset itself.

In upcoming posts, I’ll look into testing hypotheses on signals and rules.

Lastly, Volatility Made Simple has just released a blog post on the performance of volatility-based strategies for the month of August. Given the massive volatility spike, the dispersion in the performance of strategies is quite interesting. I’m happy that, in terms of YTD returns, the modified version of my strategy is among the top 10 for the year.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at [email protected], or through my LinkedIn, found here.

21 thoughts on “Introduction to Hypothesis Driven Development — Overview of a Simple Strategy and Indicator Hypotheses”

Ilya, good post. I have two questions: why are you not removing the intercept for the rankings as you do for the returns (y~x-1 vs y~x)? The estimates and probabilities actually refer to the intercept in the case of rankings. And why do you use averages of discrete returns rather than cumulative returns or averages of log returns?

Keep up the good work.

Hello Hugo,

Actually, I do use the p-value for the regression estimate. The second row is the regression estimate, not the intercept, which you can find accessed inside the loop here:

for(i in 1:24) {
  tmp <- returnRegression(monthRets, nMonths=i)
  pvals[[i]] <- tmp[1,4]
  estimates[[i]] <- tmp[1,1]
  rankPs[[i]] <- tmp[2,4]
  rankEstimates[[i]] <- tmp[2,1]
}

As for averages of discrete returns instead of cumulative returns: ROC is the difference between two points, so this gives me more data. But it’s most likely very similar in nature.

And I don't remove the intersect for ranking because returns are already zero-centered, ranks aren't, so I keep the intercept there.

Maybe I am missing something… The second row seems to be the intercept of the rank linear regression:

rbind(summary(lmfit)$coefficients, summary(rankLmfit)$coefficients)

                       Estimate  Std. Error   t value     Pr(>|t|)
meltedAverage$value  0.01829089 0.006436298  2.841835 4.643492e-03
(Intercept)          2.69224138 0.137225579 19.619093 4.568979e-66
meltedRankAvg$value  0.10258621 0.041375069  2.479421 1.344357e-02

Thanks for the explanation about why the intercept is needed.

> a <- rnorm(100)   # reconstructed assignments: two zero-centered random series
> b <- rnorm(100)
> lmfit <- lm(a ~ b)
> summary(lmfit)

Call:
lm(formula = a ~ b)

Residuals:
     Min       1Q   Median       3Q      Max
-2.56744 -0.76535  0.06351  0.76057  2.46539

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002372   0.105546  -0.022    0.982
b           -0.002547   0.113137  -0.023    0.982

Residual standard error: 1.047 on 98 degrees of freedom
Multiple R-squared:  5.17e-06,  Adjusted R-squared:  -0.0102
F-statistic: 0.0005067 on 1 and 98 DF,  p-value: 0.9821

The value is the second row of the coefficients.

Hope this helps.

Thanks for your post. I suggest adding the line of code

require(reshape2)

below the other “require” lines. When I first ran your script R complained about not finding the “melt” function.

It was in the returnRegression function already, but I moved it to the top.

I tested the code with a random portfolio and the rank-rank regression looks very similar. Any thoughts about that?

This was the code to generate the random rankings. I hope I got it right:

nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
nMonthAverage <- na.omit(lag(nMonthAverage))

random <- returns
for(i in 1:nrow(random)) {
  random[i,] <- runif(ncol(random))
}
nMonthAverage <- random

So you’re generating from a uniform distribution every month, and assuming it’s integer, then sure, you’re effectively doing the same thing.

Why do you subtract 1 when running the regression here?

lmfit <- lm(meltedReturns$value ~ meltedAverage$value - 1)

To remove the intercept. I am stating that I want to regress solely against the independent variable, not an intercept.

My stats knowledge isn’t great. How are you sure the intercept is zero here? I checked the qqplot and it looks fine but I don’t get the intuition. Thanks

I don’t understand your answer to Hugo. As you used rbind, the object tmp consists of three rows: the first row is the regression coefficient for meltedAverage; the second row is the intercept for meltedRankAvg; the third row is the regression coefficient for meltedRankAvg.

So I guess tmp[1,] and tmp[3,] are needed to show the regression coefficients.


Data-driven hypothesis development

Have you ever tried solving a difficult problem with no clear path forward? Perhaps the problem is poorly understood, or there are many ideas about what might work, and you are facing it without an approach to guide you. We’ve been there and lived this very scenario, and we’ll take you through an approach we’ve found to be very effective.

As Donald Rumsfeld once said, the problems we solve every day can be classified into four categories:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

Risk matrix: problems can be classified into four categories.

When working on problems with little data and high levels of risk (i.e., the “known unknowns” and “unknown unknowns”), it’s important to focus on finding the shortest path to the ‘correct’ solutions and eliminating the ‘incorrect’ solutions as soon as possible. In our experience, the best approach for solving these problems is to use hypotheses to focus your thinking and inform your decisions with data. This approach is known as data-driven hypothesis development.

What is data-driven hypothesis development?

Data-driven hypothesis development (DDHD) is an effective approach when facing complex “known unknown” and “unknown unknown” problems. There are four steps to the approach:

1. Define the goal using data

Problem solving starts with well-defined problems; however, we know that “known unknowns” and “unknown unknowns” are rarely well defined. Most of the time, the only thing we do know is that the system is broken and has many problems; what we don’t know is which problem is critical to helping us achieve the strategic business goal.

The key is to define the problem using data, thus bringing the needed clarity. State the problem, define the metrics upfront and align these with your goals.

Set up a dashboard to visualize and track all key metrics.

2. Hypothesize

Hypotheses are introduced to create a path to the next state. This requires a change in mindset; proposed solutions are viewed as a series of experiments done in rapid iterations until a desirable outcome is achieved or the experiment is proved not viable.

Hypothesis driven development card

One hypothesis is made up of one or many experiments. Each experiment is independent, with a clear outcome, criteria, and metrics. It should be short to build and short to test and learn from. Results should be a clear indicator of success or failure.

If the result of the experiment has a positive impact on the outcome, the next step would be to implement the change in production. 

If an experiment is proved not viable, mark it as a failure, track and share lessons learned. 

The capability to fail fast is pivotal. As we don’t know the exact path to the destination, we need the ability to quickly test different paths to effectively identify the next experiment.

Each hypothesis needs to be able to answer the question: when should we stop? At what point will you have enough information to make an informed decision? 

3. Fast feedback

Experiments need to be small and specific so that we can receive feedback in days rather than weeks. There are at least two feedback loops to build when a code change is involved:

An isolated testing environment: run the same set of test suites to baseline the metrics and compare them with the experiment’s results.

The production environment: once the experiment is proven in the testing environment, it needs to be further tested in production.

Fast feedback delivered through feedback loops is critical in determining the next step.

Fast feedback requires solid engineering practices like continuous delivery to accelerate experimentation and maximize what we can learn. We call out a few practices as examples (a small sketch of comparing an experiment’s metric against a baseline follows this list); different systems might require different techniques:

Regression testing automation: for an orphaned legacy system, it’s important to build a regression testing suite as the learning progresses (establish a baseline first, then evolve as you go), providing a safety net and early feedback when a change breaks something.

Monitoring and observability: monitoring is quite often a big gap in legacy systems, not to mention observability. Start with monitoring: you will learn how the system is functioning and utilizing resources, when it will break, and how it behaves in failure modes.

Performance testing automation: when performance is a concern, automate the performance testing so you can baseline the problem and run the tests continuously with every change.

A/B testing in production: set up the system with the basic ability to run the current version and the change in parallel, and to roll back automatically if needed.
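As a small illustration of the baselining idea, here is a minimal sketch in R; the metric (response time), the sample sizes, and all numbers are hypothetical:

# Minimal sketch: compare an experiment's metric against the baseline.
# In practice both samples would come from the isolated testing environment.
set.seed(42)
baseline   <- rnorm(30, mean = 480, sd = 40)  # baseline response times (ms), hypothetical
experiment <- rnorm(30, mean = 440, sd = 40)  # response times after the change (ms)

# Welch two-sample t-test: did the change reduce response time?
result <- t.test(experiment, baseline, alternative = "less")
result$p.value  # a small p-value is evidence the experiment improved on the baseline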

4. Incremental delivery of value

The value created by experiments, successful and failed, can be divided into three categories:

Tangible improvements on the system

Increased understanding of the problem and more data-informed decisions

Improved understanding of the system via documentation, monitoring, tests, etc.

It’s easy to treat successful experiments as the only value delivered. Yet in the complex world of “known unknowns” and “unknown unknowns”, the value of failed experiments is equally important, providing clarity in decision making.

Another often-ignored value delivered is the understanding of the problem and the system gained through data. This is extremely useful when there has been heavy loss of domain knowledge, providing a low-cost, lower-risk way to rebuild that knowledge.

Data-driven hypothesis development enables you to find the shortest path to your desired outcome, which in itself delivers value.

Data-driven hypothesis development approach

When facing a complex problem with many known unknowns and unknown unknowns, being data-driven serves as a compass, helping the team stay focused and prioritize the right work. A data-driven approach helps you deliver incremental value and lets you know when to stop.

Why we decided to use data-driven hypothesis development

Our client presented us with a proposed solution, a rebuild, asking us to implement their shiny new design, believing this approach would solve the problem. However, it wasn’t clear what problem we’d be solving by implementing a rebuild, so we set out to better understand the problem and learn whether the proposed solution was likely to succeed.

Once we looked at the underlying architecture and discussed how we might do it differently, we discovered there wasn’t a lot we would change. The architecture at its core was solid; there were just too many layers of band-aid fixes. There was low visibility, the system was poorly understood, and it had been neglected for many years.

DDHD would allow us to run short experiments, to learn as we delivered incremental value to the customer, and to continuously apply our lessons learned to have greater impact and rebuild domain knowledge.

Indicators data-driven hypothesis development might work for you

No or low visibility of the system and the problem

Little knowledge of the system exists within your organization

The system has been neglected for some time with band-aids applied loosely

You don’t know what to fix or where to start

You want to de-risk a large piece of work

You want to deliver value incrementally

You are looking at a complete rebuild as the solution

Our approach

1. Understand the problem and explore all options

To understand all sides of the problem, consider the underlying architecture, the customer, the business, and the issues being surfaced. In one activity, we recorded every known problem and discussed what we knew or didn’t know about it. This process involved people outside the immediate team; we gathered anyone who might have some knowledge of the system to join the problem discussion.

Once you have an understanding of the problem, share it far and wide. This is the beginning of your story; you will keep telling this story with data throughout the process, building interest, buy-in, support and knowledge. 

The framework we used to guide us in our problem discussion.

2. Define the goals using data

As a team, define your goals or the desired outcomes. What is it you want to achieve? Discuss how you will measure success. What are the key metrics you will use? What does success look like? Once you’ve reached agreement on this, you’ll need to set about baselining your metrics.

Define the goals using data

We used a template similar to the one above to share our goals and record the metrics. The goals were front and center in our daily activities: we talked about them in stand-up, included them on story cards, and shared them in our showcases, helping to anchor our thoughts and hold our focus. In an ideal world, you’ll see a direct line from your goal through to your organization's overarching objectives.

3. Hypothesize 

One of the reasons we were successful in solving the problem and delivering outstanding results for our client was that we involved the whole team. We didn’t have just one or two team members writing hypotheses and defining and driving experiments; every single member of the team was involved. To set your team up for success, align on the approach and how you’ll work together. Empower your team to write hypotheses from day one, no matter their role.

A table setting the goals, the approach, and what to deliver

We created templates to work from and encouraged pairing on writing hypotheses and defining experiments.

Hypothesis canvas

4. Experiment

Run small, data-driven experiments. One hypothesis can have one or many experiments. Experiments should be short to build and short to test. They should be independent and must be measurable.

Experiment template

5. Conclude the experiment

Acceptance criteria play a critical role in determining whether an experiment is successful. For successful experiments, build a plan to apply the changes. For all experiments, successful or not, revisit the remaining experiments with the new data you have collected and adjust accordingly; this could mean updating, stopping, or creating new experiments.

Every conclusion of an experiment is the starting point for planning the next step.

6. Track the experiment and share results

Use data to tell stories and share your lessons learned. Don’t just share this with your immediate team; share your lessons learned and data with the business and your stakeholders. The more they know, the more empowered they will feel too. Take people on the journey with you. Build an experiment dashboard and use it as an information radiator to visualize the learning.

Experiment tracking

Key takeaways

Our key takeaways from running a DDHD approach:

Use data to tell stories. Data was key in all of this. We used it in every conversation, every showcase, every brainstorming session. Data helped us to align the business, get buy-in from stakeholders, empower the team, and to celebrate wins. 

De-risk a large piece of work. We were asking our clients to trust us to fix the “unknown unknowns” over implementing a shiny new solution. DDHD enabled us to deliver incremental value, gaining trust each week and de-risking a piece of work with a potential 12-24 month timeframe and an equally big price tag.

Be comfortable with failure. We encouraged the team to celebrate the failed experiments as much as the successful ones. Lessons come from failure, failure enables decision making and through this we find the quickest path to the desired outcome. 

Empower the team to own the problem and the goals. Our success was a direct result of the whole team taking ownership of the problem and the goals. The team was empowered early on to form hypotheses and write experiments. Every time they learned something, it was shared back, and new hypotheses and/or experiments were formed.

Deliver incremental parcels of value. Keep focused on delivering small, incremental changes. When faced with a large piece of work and/or a system that has been neglected for some time, it can feel impossible to have an impact. We focused on delivering value weekly. Delivering value wasn’t just about getting something into the customers’ hands; it was also learning from failed experiments. Celebrate every step; it means you are inching closer to success.

We’ve found this to be a really valuable approach to dealing with problems that can be classified as ‘known unknowns’ and ‘unknown unknowns’ and we hope you find this technique useful too.


Open access
Published: 01 April 2024

Theoretical framework for mixed-potential-driven catalysis

Mo Yan, Nuning Anugrah Putri Namari, Junji Nakamura & Kotaro Takeyasu

Communications Chemistry volume 7, Article number: 69 (2024)


Subjects: Catalytic mechanisms, Electrocatalysis, Energy transfer

Mixed-potential-driven catalysis is expected to be a distinctive heterogeneous catalytic reaction that produces products different from those produced by thermal catalytic reactions, without the application of external energy. Electrochemically, the mechanism is similar to that of corrosion. However, a theory that incorporates catalytic activity as a parameter has not been established. Herein, we report a theoretical framework for mixed-potential-driven catalysis that includes the exchange current as a parameter of catalytic activity. The mixed potential and the partitioning of the overpotential were determined from the exchange currents by applying the Butler–Volmer equation at a steady state far from equilibrium. Mixed-potential-driven catalysis is expected to open new areas not only in the concept of catalyst development but also in the field of the energetics of biological enzymatic reactions.


Introduction

Heterogeneous catalysis is crucial for solving various problems related to the environment, energy, biology, and materials [1,2,3,4]. Generally, heterogeneous catalysis occurs thermally or electrochemically [5,6,7]. Recently, it has been suggested that thermal heterogeneous catalysis in fact includes electrochemical processes, leading to markedly different selectivity compared with conventional thermocatalysis [8,9,10]. In particular, electrode reactions that form mixed potentials, typified by corrosion phenomena, have attracted attention. In such systems, the anodic and cathodic half-reactions occur in pairs on a single catalyst surface, where a mixed potential is expected to form if the catalyst is electrically conductive and a suitable electrolyte is present near the active sites, as shown in Fig. 1a. Here, we introduce the concept of “mixed-potential-driven catalysis” for such catalytic systems. The characteristic point of mixed-potential-driven catalysis is that the anode and cathode catalysts are exposed to identical reactants, diverging from conventional electric cells, where distinct reactants are supplied to each electrode. Intriguingly, it has been reported that some heterogeneous catalytic reactions of gas molecules involve mixed-potential-driven catalysis [9,10,11,12,13,14,15,16]. For example, it has been reported that H2O2 is selectively produced on various monometallic and bimetallic catalysts through an anodic reaction H2 → 2H+ + 2e− paired with a cathodic reaction O2 + 2H+ + 2e− → H2O2 [9]. Mixed-potential-driven catalysis has also been suggested for the oxidation of formic acid [10] and hydroquinone [11]. The occurrence of a mixed-potential-driven reaction during 4-nitrophenol hydrogenation was also proposed previously [15]. More interestingly, the mixed-potential-driven mechanism can be produced by binary heterogeneous catalysts. The oxidation of alcohols (hydroxymethylfurfural) on Au-Pd binary catalysts appears to proceed via mixed-potential-driven catalysis [12,13,16]. It is also worth noting that ethanol is produced with surprisingly high selectivity by CO2 hydrogenation on CuPd binary powder catalysts in the presence of water, an unexpected product in thermal catalysis [14]. These reports strongly suggest that electrochemical processes play a role in controlling the activity and selectivity of heterogeneous catalysis without the need for external energy. Mixed-potential-driven catalysis is expected to open up a new category of heterogeneous catalysis in both basic research and industrial applications. However, the determining principle behind both the activity and selectivity, specifically the partitioning of the driving force for each half-reaction, has not been considered.

figure 1

a Electrons released in the oxidation reaction from reductant R1 to oxidant O1 are used in the reduction reaction from oxidant O2 to reductant R2. b Illustrative polarization curves for the cathodic and anodic half-reactions. The mixed potential is the point at which the net of the cathodic and anodic currents is zero.

Mixed-potential-driven reactions have mainly been discussed in the field of electrochemistry, but not in heterogeneous catalysis. The mixed potential theory was first introduced by Wagner and Traud in 1938 in corrosion science [17]. As shown in Fig. 1b, the basic principle can be understood in terms of the polarization curves of two electrochemical reactions described by the Butler–Volmer equation, where \(i_1\), \(i_2\) and \(\phi_1^{\mathrm{eq}}\), \(\phi_2^{\mathrm{eq}}\) represent the currents and equilibrium potentials of the two redox reactions \(\mathrm{R_1 \rightleftarrows O_1 + e^-}\) and \(\mathrm{O_2 + e^- \rightleftarrows R_2}\), with \(\phi_2^{\mathrm{eq}} > \phi_1^{\mathrm{eq}}\). When the two reactions proceed concurrently, \(i_1 + i_2 = 0\) is satisfied owing to the conservation of electric charge, forming a mixed potential \(\phi^{\mathrm{mix}}\). Here, \(\phi^{\mathrm{mix}} - \phi_1^{\mathrm{eq}}\) and \(\phi^{\mathrm{mix}} - \phi_2^{\mathrm{eq}}\) act as the overpotentials \(\eta_1^{\mathrm{mix}}\) and \(\eta_2^{\mathrm{mix}}\) of the two reactions [18]. In the literature [19–33], mixed potentials and reaction currents have been formulated for simple pair and parallel reactions with the effects of mass diffusion. Notably, an overpotential accelerates electrochemical reactions [33,34]. Therefore, the partitioning of the overpotential is essential, because a kinetically unfavorable half-reaction can be accelerated with a high overpotential by coupling it with a kinetically favorable half-reaction. It has been argued that, to achieve the same rate, a larger overpotential is required to drive an electrode reaction with a higher activation barrier [9]. Despite advances in the understanding of mixed-potential-driven catalysis, determining the overpotential as the driving force based on the catalytic activity has not been elucidated so far.

Mixed-potential-driven catalysis is classified as a non-equilibrium thermodynamic phenomenon. The chemical potential drop between the reactants and products becomes the driving force for the reaction, overcoming the activation energy and being converted into energy that increases the reaction rate [35]. When the equilibrium state is reached, the driving force becomes zero and is converted into the heat of reaction [36]. Prigogine constructed a theoretical framework based on entropy change to conserve energy [37,38]. However, it has not been explicitly stated that \(\mathrm{d_i}S\) (entropy production) corresponds to the overpotentials that promote the reaction in the case of mixed-potential-driven catalysis. In this paper, we extend Prigogine’s theory to mixed-potential-driven catalysis and present the kinetic equations. Enzymatic reaction systems in living organisms, such as glucose oxidase and lactate oxidase, may also proceed via mixed-potential-driven reactions, in which the anodic and cathodic reactions are paired [39,40,41,42,43]. Thus, the framework of mixed-potential-driven catalysis is fundamental for considering the energy pathways of how entropy is generated in the body, used to drive metabolic reactions, used to maintain body temperature as heat, and dissipated outside. The non-equilibrium theory of mixed-potential-driven catalysis is expected to improve our understanding of the energetics of biological systems.

In this study, we present an equation for the conversion of the Gibbs free energy drop between the cathodic and anodic half-reactions into overpotentials through the formation of a mixed potential. In particular, the equation explains how the catalytic activity plays a pivotal role in determining the mixed potential, the overpotentials, the reaction current, and the selection of the cathodic and anodic reactions. This equation formalizes the concept of mixed-potential-driven catalysis. The concept is important for the development of kinetically difficult catalytic reactions, for understanding the energy transfer of enzymatic reactions in living organisms, and for the non-equilibrium theory of chemical reactions.

Driving force of mixed-potential-driven catalysis

First, we show how the total driving force of the entire mixed-potential-driven catalytic reaction system is distributed to the overpotentials for accelerating the anodic and cathodic half-reactions, depending on the catalytic activity of the catalysts. We consider a mixed potential system, as shown in Fig. 2a, where we assume one-electron transfer processes of anodic reaction 1 and cathodic reaction 2 occurring at both components I and II of the catalyst.

figure 2

a Schematic of a mixed-potential-driven catalytic reaction occurring on a catalyst composed of components I and II. Cathodic and anodic half-reactions can occur in each of the components I and II. Electrons are transferred within and between the components. b Illustration of the four polarization curves for the cathodic and anodic half-reactions on catalyst components I and II. The mixed potential is the point at which the sum of the four currents is zero.

The net reaction is expressed by the following equation.
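Reconstructing from the half-reactions defined in the introduction, \(\mathrm{R_1 \rightleftarrows O_1 + e^-}\) (1) and \(\mathrm{O_2 + e^- \rightleftarrows R_2}\) (2), the net reaction presumably reads:

\[ \mathrm{R_1 + O_2 \rightarrow O_1 + R_2} \qquad (3) \]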

Electrochemically, microelectrodes I and II can be regarded as short-circuited, with both electrodes exposed to identical gas or liquid conditions, regardless of whether they are spatially separated. Unlike in ordinary electrochemical cells, the distinction between the anode and cathode is not fixed before the reaction starts. Consequently, Eqs. (1) and (2) each occur on the two catalyst components, leading to currents \(i_1^{\mathrm{I}}\), \(i_2^{\mathrm{I}}\), \(i_1^{\mathrm{II}}\), and \(i_2^{\mathrm{II}}\), where the equilibrium potential of reaction 1 (\(\phi_1^{\mathrm{eq}}\)) is assumed to be lower than that of reaction 2 (\(\phi_2^{\mathrm{eq}}\)). The potential difference \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) corresponds to the total driving force of the net reaction, Eq. (3).

To estimate the mixed potential and current at the mixed potential, it is necessary to analyze the polarization curve, which depends on the catalytic activity and is expressed by the Butler–Volmer equation. The currents of the electrochemical half-reactions (1) and (2) on components I and II are given by the Butler–Volmer equation with no mass-transfer effect:
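A standard Butler–Volmer form consistent with the definitions that follow is (shown for component I; Eqs. (6) and (7) are the analogous expressions for component II):

\[ i_1^{\mathrm{I}} = {i_1^{\mathrm{I}}}^0\left[e^{(1-\alpha_1)f(\phi-\phi_1^{\mathrm{eq}})} - e^{-\alpha_1 f(\phi-\phi_1^{\mathrm{eq}})}\right] \qquad (4) \]

\[ i_2^{\mathrm{I}} = {i_2^{\mathrm{I}}}^0\left[e^{(1-\alpha_2)f(\phi-\phi_2^{\mathrm{eq}})} - e^{-\alpha_2 f(\phi-\phi_2^{\mathrm{eq}})}\right] \qquad (5) \]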

where \(f = F/RT\), and \(F\), \(R\), and \(T\) are the Faraday constant, the gas constant, and the temperature, respectively. \(\alpha_1\) and \(\alpha_2\) are the transfer coefficients for reactions 1 and 2, respectively. \({i_1^{\mathrm{I}}}^0\), \({i_2^{\mathrm{I}}}^0\), \({i_1^{\mathrm{II}}}^0\), and \({i_2^{\mathrm{II}}}^0\) are the exchange currents for reactions 1 and 2 on components I and II, respectively. \(\phi - \phi_1^{\mathrm{eq}}\) and \(\phi - \phi_2^{\mathrm{eq}}\) are the overpotentials \(\eta_1\) and \(\eta_2\) for reactions 1 and 2, respectively. The exchange current \(i^0\) corresponds to the catalytic activity and determines the shape of the polarization curve [44]. Here, the mixed potential \(\phi^{\mathrm{mix}}\) is defined as the potential at which the net current is zero, as shown in Fig. 2b [17].
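The zero-net-current condition takes the form:

\[ i_1^{\mathrm{I}} + i_2^{\mathrm{I}} + i_1^{\mathrm{II}} + i_2^{\mathrm{II}} = 0 \quad \text{at} \quad \phi = \phi^{\mathrm{mix}} \qquad (8) \]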

By substituting Eqs. (4)–(7) into Eq. (8), one can calculate the mixed potential \(\phi^{\mathrm{mix}}\) numerically using practical values of the exchange currents, equilibrium potentials, and transfer coefficients. Alternatively, one can obtain the relationship among the mixed potential, overpotentials, and exchange currents from analytical solutions under the assumption of identical transfer coefficients (\(\alpha_1 = \alpha_2 = \alpha\)). One can then derive Eq. (9) for \(\phi^{\mathrm{mix}}\) (detailed derivation shown in Supplementary Note 1).
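In the far-from-equilibrium (Tafel) limit with \(\alpha_1 = \alpha_2 = \alpha\), a closed form consistent with the statements above is:

\[ \phi^{\mathrm{mix}} = (1-\alpha)\,\phi_1^{\mathrm{eq}} + \alpha\,\phi_2^{\mathrm{eq}} + \frac{1}{f}\ln\frac{{i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0} \qquad (9) \]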

The absolute values of the anodic and cathodic currents must be equal; this common value is the current at the mixed potential (\(i^{\mathrm{mix}}\)) for the net reaction.
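That is:

\[ i^{\mathrm{mix}} = i_1^{\mathrm{I}} + i_1^{\mathrm{II}} = -\left(i_2^{\mathrm{I}} + i_2^{\mathrm{II}}\right) \qquad (10) \]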

Substituting Eq. (9) into Eqs. (4), (5), and (10) gives \(i^{\mathrm{mix}}\).
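Under the same Tafel-limit assumption, this yields:

\[ i^{\mathrm{mix}} = \left({i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0\right)^{\alpha}\left({i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0\right)^{1-\alpha} e^{\alpha(1-\alpha)f\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right)} \qquad (11) \]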

\(i^{\mathrm{mix}}\) is thus a function of the driving force \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) and the exchange current of each reaction. When we define \(\phi^{\mathrm{mix}} - \phi_1^{\mathrm{eq}}\) and \(\phi^{\mathrm{mix}} - \phi_2^{\mathrm{eq}}\) as \(\eta_1^{\mathrm{mix}}\) and \(\eta_2^{\mathrm{mix}}\), respectively, \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) can be regarded as the sum of the overpotentials \(|\eta_1^{\mathrm{mix}}| + |\eta_2^{\mathrm{mix}}|\) that promote the catalytic reactions, as if applied from outside. The partitioning of \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) into the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) can be expressed using Eq. (9).
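Subtracting the equilibrium potentials from Eq. (9) gives the corresponding partitioning:

\[ |\eta_1^{\mathrm{mix}}| = \alpha\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) + \frac{1}{f}\ln\frac{{i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0} \qquad (12) \]

\[ |\eta_2^{\mathrm{mix}}| = (1-\alpha)\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) - \frac{1}{f}\ln\frac{{i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0} \qquad (13) \]

so that \(|\eta_1^{\mathrm{mix}}| + |\eta_2^{\mathrm{mix}}| = \phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\).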

It should be noted that in Eqs. (12) and (13) the total overpotential \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) is partitioned into \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) according to the exchange currents, i.e., the catalytic activity. The crucial factor influencing the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) is the ratio \(({i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0) : ({i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0)\).

Assuming that a single oxidation reaction and a single reduction reaction take place on each catalyst component (\({i_1^{\mathrm{I}}}^0\) and \({i_2^{\mathrm{II}}}^0\) remain, but \({i_2^{\mathrm{I}}}^0\) and \({i_1^{\mathrm{II}}}^0\) are zero), Eqs. (S2-2) and (S2-3) clearly show that the ratio \({i_1^{\mathrm{I}}}^0 : {i_2^{\mathrm{II}}}^0\) determines the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) (detailed discussion in Supplementary Note 2 and Supplementary Fig. 1). For example, if \({i_1^{\mathrm{I}}}^0 \ll {i_2^{\mathrm{II}}}^0\), then \(|\eta_1^{\mathrm{mix}}| \gg |\eta_2^{\mathrm{mix}}|\): \(|\eta_1^{\mathrm{mix}}|\) approaches \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) and \(|\eta_2^{\mathrm{mix}}|\) approaches zero. This example is significant for heterogeneous catalysis because the catalytically difficult reaction 1, with small \({i_1^{\mathrm{I}}}^0\), can be promoted by the larger overpotential \(|\eta_1^{\mathrm{mix}}|\), which approaches the total driving force \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) of the net reaction.

Overpotential partitioning depending on exchange current

To comprehend the physical meaning of overpotential partitioning, that is, the relationship between the overpotential and the exchange current, two approximation methods were adopted: the linear approximation of the Taylor expansion for small overpotentials, and the Tafel approximation for large overpotentials (see Supplementary Note 1 for the Tafel approximation; the error estimation is discussed in Supplementary Note 4, Supplementary Fig. 2, and Supplementary Table 1). In the linear approximation, for catalyst component I, the currents in Eqs. (4) and (5) are approximated as follows:
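To first order in the overpotential, Eqs. (4) and (5) reduce to:

\[ i_1^{\mathrm{I}} \approx {i_1^{\mathrm{I}}}^0 f\left(\phi-\phi_1^{\mathrm{eq}}\right), \qquad i_2^{\mathrm{I}} \approx {i_2^{\mathrm{I}}}^0 f\left(\phi-\phi_2^{\mathrm{eq}}\right) \qquad (14,\ 15) \]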

For the currents on catalyst component II, “I” in Eqs. (14) and (15) can be replaced by “II”. Combining the four equations for the currents with Eq. (8) yields \(\phi^{\mathrm{mix}}\).
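In this linear regime:

\[ \phi^{\mathrm{mix}} = \frac{\left({i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0\right)\phi_1^{\mathrm{eq}} + \left({i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0\right)\phi_2^{\mathrm{eq}}}{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0+{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0} \qquad (16) \]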

Equation (16) clearly shows that the mixed potential is determined by internal division with a ratio of \(({i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0) : ({i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0)\). Simultaneously, the current at the mixed potential is obtained as follows:
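\[ i^{\mathrm{mix}} = f\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right)\frac{\left({i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0\right)\left({i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0\right)}{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0+{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0} \qquad (17) \]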

\(i^{\mathrm{mix}}\) corresponds to the apparent catalytic activity in mixed-potential-driven catalysis, which is determined by the exchange currents and the driving force \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\).

In addition, the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) can be rewritten using \(\phi^{\mathrm{mix}}\) of Eq. (16).
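\[ |\eta_1^{\mathrm{mix}}| = \frac{{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0+{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0}\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) \qquad (18) \]

\[ |\eta_2^{\mathrm{mix}}| = \frac{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0+{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0}\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) \qquad (19) \]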

Then, the ratio of the overpotentials is expressed by:
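\[ \frac{|\eta_1^{\mathrm{mix}}|}{|\eta_2^{\mathrm{mix}}|} = \frac{{i_2^{\mathrm{I}}}^0+{i_2^{\mathrm{II}}}^0}{{i_1^{\mathrm{I}}}^0+{i_1^{\mathrm{II}}}^0} \qquad (20) \]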

Here, it is clear that the driving force of the entire reaction, \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\), is partitioned into the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) according to the sums of the exchange currents, \({i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0\) and \({i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0\), for reactions 1 and 2, i.e., the catalytic activities. Figure 3 shows a conceptual series electric circuit representing a mixed-potential-driven catalytic reaction, where \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) corresponds to the electromotive source due to reactions 1 and 2, and \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) correspond to the overpotentials of reactions 1 and 2 without external electric work.

figure 3

The internal total voltage \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) is due to the Gibbs free energy drop \(-\Delta G_{\mathrm{r}}\) across the entire mixed-potential-driven catalytic reaction. The charge-transfer resistances (\(r_1\) and \(r_2\)), proportional to the reciprocals of the exchange currents, play a role similar to electrical resistors in a circuit. The voltage drops, \(i^{\mathrm{mix}} r_1\) and \(i^{\mathrm{mix}} r_2\), signify the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\), following the voltage divider rule. The energy utilized for driving reactions 1 and 2 eventually transforms into Joule heat (\(\eta i\)).

Here, \(r_1\) and \(r_2\) are the so-called charge-transfer resistances, which depend on the catalytic activity and are proportional to \(1/({i_1^{\mathrm{I}}}^0 + {i_1^{\mathrm{II}}}^0)\) and \(1/({i_2^{\mathrm{I}}}^0 + {i_2^{\mathrm{II}}}^0)\) for half-reactions 1 and 2, respectively [28,44]. The partitioning of the driving force \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) into \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) in mixed-potential-driven catalysis follows the voltage divider rule in the series circuit, as \(i^{\mathrm{mix}} r_1\) and \(i^{\mathrm{mix}} r_2\). This implies that a larger overpotential is partitioned to processes with a higher charge-transfer resistance.

One can regard this as a short circuit in which no external work is done, and the Gibbs free energy term is converted into the overpotentials \(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\) away from equilibrium. Here, the driving force \(\phi_2^{\mathrm{eq}} - \phi_1^{\mathrm{eq}}\) corresponds to the Gibbs free energy change \(\Delta G_{\mathrm{r}}\) of the net reaction at the particular concentrations of the molecules involved at which the reaction proceeds [32,45,46].
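For one-electron half-reactions, the standard relation applies:

\[ -\Delta G_{\mathrm{r}} = nF\left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) \qquad (21) \]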

We assumed that half-reactions 1 and 2 are one-electron transfer reactions, so the number of moles of electrons exchanged in the half-reactions, \(n\), is equal to 1 and drops out of Eq. (21). This mechanism drives reactions efficiently by utilizing the overpotential to accelerate the forward reaction and decelerate the backward reaction [46]. This differs from thermocatalytic reactions, which use the driving force to accelerate both the forward and backward reactions. This is a non-equilibrium steady state, which is discussed in detail below. Note, however, that the energy used to drive reactions 1 and 2 is ultimately dissipated as Joule heat, expressed as:
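\[ \frac{\mathrm{d}Q}{\mathrm{d}t} = |\eta_1^{\mathrm{mix}}|\,i^{\mathrm{mix}} + |\eta_2^{\mathrm{mix}}|\,i^{\mathrm{mix}} = \left(\phi_2^{\mathrm{eq}}-\phi_1^{\mathrm{eq}}\right) i^{\mathrm{mix}} \qquad (22) \]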

Equation (22) indicates that the heat generated by each reaction is determined by the exchange current. By contrast, thermochemical reactions convert the Gibbs free energy directly into heat. This distinction is one of the keys to how mixed-potential-driven catalysis can efficiently accelerate reactions.

Direction of the current flow or electron transfer

In mixed-potential-driven catalysis, the direction of current flow or electron transfer is governed by the exchange current or catalytic activity. Understanding how electrons are transferred between the components is crucial for catalyst design. However, before starting the reaction, the anode and cathode components are unknown. After initiation of the reaction, the magnitude of the exchange current or catalytic activity determines the direction of the current flow or electron transfer and distinguishes between the anode and cathode. Essentially, the roles of components I and II are uncertain and interchangeable.

This uncertainty leads to three possible cases for the direction of the current flow, that is, for the designation of components I and II as the anode or cathode of the catalyst, as shown in Fig. 4. Case (a): overall, the anodic current predominates in component I and the cathodic current in component II. Case (b): the cathodic current predominates in component I and the anodic current in component II. Case (c): anodic and cathodic currents proceed in pairs within each of components I and II, resulting in no current flow between them. By substituting the approximate equations (the Tafel and linear approximation methods yield identical results) for the currents of reactions 1 and 2 on components I and II, the direction of the current flow in the three cases can be expressed in terms of the exchange currents as follows (detailed derivation shown in Supplementary Note 3):
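Consistent with the linearized currents above, the three cases presumably correspond to:

Case (a): \({i_1^{\mathrm{I}}}^0\,{i_2^{\mathrm{II}}}^0 > {i_2^{\mathrm{I}}}^0\,{i_1^{\mathrm{II}}}^0\)  (23)

Case (b): \({i_1^{\mathrm{I}}}^0\,{i_2^{\mathrm{II}}}^0 < {i_2^{\mathrm{I}}}^0\,{i_1^{\mathrm{II}}}^0\)  (24)

Case (c): \({i_1^{\mathrm{I}}}^0\,{i_2^{\mathrm{II}}}^0 = {i_2^{\mathrm{I}}}^0\,{i_1^{\mathrm{II}}}^0\)  (25)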

figure 4

a Component I is the anode and component II is the cathode, i.e., the current flows from II to I. b Component I is the cathode and component II is the anode, i.e., the current flows from I to II. c Current flows within each of components I and II, but no current flows between I and II.

Equations (23)–(25) indicate that the current flow direction is kinetically governed by the exchange current ratio, or catalytic activity. The exchange current values are sensitive to substance concentrations and pH, as reported in the literature [47]. Controlling the current direction by adjusting the exchange currents can help researchers harness the benefits of the internal electric field of the catalyst and enhance selectivity for the desired products.

Non-equilibrium thermodynamics for mixed-potential-driven catalysis at steady-state

Herein, non-equilibrium thermodynamics at steady state is discussed for mixed-potential-driven catalysis based on the entropy production concept proposed by Prigogine. The starting point of Prigogine’s theory is to express the change in entropy as the sum of two parts:
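\[ \mathrm{d}S_{\mathrm{sys}} = \mathrm{d_e}S_{\mathrm{sys}} + \mathrm{d_i}S_{\mathrm{sys}} \qquad (26) \]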

where \(\mathrm{d}S_{\mathrm{sys}}\) is the total variation in the entropy of a system, \(\mathrm{d_e}S_{\mathrm{sys}}\) is the entropy change of the system owing to the exchange of matter and energy with the exterior, and \(\mathrm{d_i}S_{\mathrm{sys}}\) is the entropy produced by the irreversible processes inside the system [38]. The entropy production term \(\mathrm{d_i}S_{\mathrm{sys}}\) can serve as a basis for the systematic description of the irreversible processes occurring in a system, and \(\mathrm{d_i}S_{\mathrm{sys}}\) is always non-negative. Moreover, in the steady state, the time derivative of the system entropy, \(\mathrm{d}S_{\mathrm{sys}}/\mathrm{d}t\), is zero; that is, the entropy spontaneously generated inside the system is balanced by the flow of entropy exchanged with the outside [38,48]:
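\[ \frac{\mathrm{d}S_{\mathrm{sys}}}{\mathrm{d}t} = 0 \quad\Longrightarrow\quad \frac{\mathrm{d_i}S_{\mathrm{sys}}}{\mathrm{d}t} = -\frac{\mathrm{d_e}S_{\mathrm{sys}}}{\mathrm{d}t} \qquad (27) \]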

For chemical processes in a closed system at constant pressure and temperature, the rate of entropy production can be expressed in terms of the Gibbs free energy [38]:
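\[ \frac{\mathrm{d_i}S_{\mathrm{sys}}}{\mathrm{d}t} = -\frac{1}{T}\frac{\mathrm{d}G_{\mathrm{sys}}}{\mathrm{d}t} = -\frac{\Delta G_{\mathrm{r}}}{T}\frac{\mathrm{d}\xi}{\mathrm{d}t} \ge 0 \qquad (28) \]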

where \(\mathrm{d}G_{\mathrm{sys}}\) is the change in the total Gibbs free energy of the reaction system, \(\xi\) is the extent of reaction, \(\mathrm{d}\xi/\mathrm{d}t\) is the rate of the reaction, and \(-\Delta G_{\mathrm{r}}\) is the driving force of the net reaction, corresponding to the affinity \(A\) in Prigogine’s textbook (defined as \(-\mathrm{d}G_{\mathrm{sys}}/\mathrm{d}\xi\) and shown in Supplementary Fig. 3). In an electrical conduction system, the rate of entropy production corresponds to the Joule heat (per unit time):
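\[ \frac{\mathrm{d_i}S_{\mathrm{sys}}}{\mathrm{d}t} = \frac{1}{T}\frac{\mathrm{d}Q'}{\mathrm{d}t} = \frac{VI}{T} \qquad (29) \]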

where \(V\) is the potential difference across the entire conductor, \(I\) is the conventional electric current, and \(\mathrm{d}Q'\) is the Joule heat generated by the electric current [38,49,50].

The equations above are generally found in textbooks. Applying them to mixed-potential-driven catalysis allows us to describe the energy conversion pathway within the framework of non-equilibrium thermodynamics at steady state, as follows (detailed derivation shown in Supplementary Note 5):
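A form consistent with the description below is:

\[ T\frac{\mathrm{d_i}S_{\mathrm{sys}}}{\mathrm{d}t} = -\Delta G_{\mathrm{r}}\frac{\mathrm{d}\xi}{\mathrm{d}t} = \left(|\eta_1^{\mathrm{mix}}| + |\eta_2^{\mathrm{mix}}|\right) i^{\mathrm{mix}} = \frac{\mathrm{d}Q}{\mathrm{d}t} \qquad (30) \]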

where \(\mathrm{d}Q\) denotes the Joule heat generated by the reaction. Equation (30) can be illustrated using a closed, isothermal, and isobaric mixed-potential-driven catalytic reaction system at steady state, as depicted in Fig. 5. We may consider the surroundings of the reaction system to be enclosed by rigid adiabatic walls, meaning that the surroundings remain at equilibrium throughout; that is, the temperature, pressure, and chemical potentials remain constant [48]. Clearly, the theory of mixed-potential-driven catalysis can be categorized as a non-equilibrium theory. Note that mixed-potential-driven catalysis provides a mechanism of internal driving-force transformation in which the Gibbs free energy drop of the net reaction (\(-\Delta G_{\mathrm{r}}\)) is converted into the overpotentials of the two half-reactions (\(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\)) inside the reaction system. Thus, it can be concluded that mixed-potential-driven catalysis converts the Gibbs free energy driving force into internal electric energy and finally into Joule heat.

figure 5

The surroundings are enclosed by rigid adiabatic walls, completely isolated from the external world, a common experimental approximation. At steady state, the “internal” entropy created in the reaction system (\(T\,\mathrm{d_i}S_{\mathrm{sys}}/\mathrm{d}t\)) exactly balances the “exchange” entropy flowing to the surroundings (\(-T\,\mathrm{d_e}S_{\mathrm{sys}}/\mathrm{d}t\)) and is dissipated as heat (\(\mathrm{d}Q/\mathrm{d}t\)) in the surroundings. At any particular time in mixed-potential-driven catalysis, the Gibbs free energy drop of the net reaction (\(-\Delta G_{\mathrm{r}}\)) is transformed into the overpotentials (\(|\eta_1^{\mathrm{mix}}|\) and \(|\eta_2^{\mathrm{mix}}|\)), which serve to accelerate each half-reaction, and is ultimately dissipated as Joule heat to the surroundings through the exchange entropy.

Mixed-potential-driven catalysis occurs when the anodic and cathodic reactions are short-circuited in an appropriate electrolyte, and the difference in Gibbs free energy between the anodic and cathodic reactions is converted into overpotentials that promote both reactions. In this study, we generalized the theory of mixed-potential-driven catalysis to include catalytic activity as a parameter. We formulated the relationship between the Gibbs free energy and the overpotential using the exchange current as the measure of catalytic activity. The present theoretical analysis has clearly demonstrated how the mixed potential is determined, how the overpotential is partitioned, and how the anode and cathode are selected by the exchange current. Although the present theoretical framework is fundamental and constructed on a simple model, many additional effects must be taken into account for further development and application in the future.

In principle, the theoretical framework of mixed-potential-driven catalysis can be applied to both solid-gas and solid-liquid interfaces, where an electrolyte is necessary to convey ions. One open issue is how the overpotential is applied to electrode reactions at solid-gas and solid-liquid interfaces. At present, we consider that the overpotential in mixed-potential-driven catalysis corresponds to an electric double layer (EDL) at the catalyst surface, where electrochemical reactions are accelerated or decelerated. The nanoscale EDL at the interface may play a large role: the shape of the local electric field of the EDL is determined by the concentrations and distributions of cations, anions, and electrons, depending on the overpotential, and that local electric field should critically influence the reaction kinetics. Therefore, it is important to study the local structure of the EDL at gas-solid and liquid-solid interfaces. Furthermore, as the size of the electrode decreases, a strong electric field may be generated; thus, it is necessary to clarify the relationship among the overpotential, electrode structure, and EDL structure. Recent studies have reported that EDLs at spatially distant cathodes and anodes change in an intrinsically coupled manner [51]. Future research will employ both experimental and theoretical studies of the EDL in mixed-potential-driven catalysis.

The mass transport effect is not included in the present theoretical model because the main aim of this study was to show that catalytic activity chiefly determines the mixed potential. However, it is necessary to consider the non-linear mass transport effect, in addition to the Butler–Volmer equations, to determine the current value. The position of the mixed potential and the reaction rate shift depending on the mass transport effect (Supplementary Fig. 5), as discussed in Supplementary Note 7. Even more complex electron transfer numbers, transfer coefficients, and the co-occurrence of thermal reactions must also be considered in a kinetic model of mixed-potential-driven catalysis. In actual catalytic reaction systems, these additional effects combine in an extremely complex manner. Therefore, it is necessary to build on studies of relatively simple systems to approach real catalytic reactions that involve extremely complex elements.

Another important aspect of mixed-potential-driven catalysis is non-equilibrium thermodynamics. Mixed-potential-driven catalysis will be particularly important in the energetics of the enzymatic reactions of biological systems (discussed in Supplementary Note 6 and Supplementary Fig. 4). As described above, the Gibbs free energy drop, or uncompensated heat (\(\mathrm{d_i}S_{\mathrm{sys}}\)), is first converted into overpotential and then into heat. This energy conversion is a characteristic feature of non-equilibrium thermodynamics and is expected to contribute greatly to the future development of non-equilibrium thermodynamics itself. The energy conversion is particularly important in the enzymatic reactions of biological systems because the mechanism of thermogenesis is expected to be closely linked to the present non-equilibrium theory [52].

Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Hutchings, G. J. Heterogeneous gold catalysis. ACS Cent. Sci. 4 , 1095–1101 (2018).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Bell, A. T. Impact of nanoscience on heterogeneous catalysis. Science 299 , 1688–1691 (2003).

Article   CAS   PubMed   Google Scholar  

Lin, L. et al. Heterogeneous catalysis in water. JACS Au 1 , 1834–1848 (2021).

Wieckowski, A. & Neurock, M. Contrast and synergy between electrocatalysis and heterogeneous catalysis. Adv. Phys. Chem. 2011 , 907129 (2011).

Article   Google Scholar  

Boettcher, S. W. & Surendranath, Y. Heterogeneous electrocatalysis goes chemical. Nat. Catal. 4 , 4–5 (2021).

Fortunato, G. V. et al. Analysing the relationship between the fields of thermo- and electrocatalysis taking hydrogen peroxide as a case study. Nat. Commun. 13, 1–7 (2022).

Koshy, D. M. et al. Bridging thermal catalysis and electrocatalysis: catalyzing CO2 conversion with carbon-based materials. Angew. Chem. Int. Ed. 60, 17472–17480 (2021).

Zope, B. N., Hibbitts, D. D., Neurock, M. & Davis, R. J. Reactivity of the gold/water interface during selective oxidation catalysis. Science 330, 74–78 (2010).

Adams, J. S., Kromer, M. L., Rodríguez-López, J. & Flaherty, D. W. Unifying concepts in electro- and thermocatalysis toward hydrogen peroxide production. J. Am. Chem. Soc. 143, 7940–7957 (2021).

Ryu, J. et al. Thermochemical aerobic oxidation catalysis in water can be analysed as two coupled electrochemical half-reactions. Nat. Catal. 4, 742–752 (2021).

Howland, W. C., Gerken, J. B., Stahl, S. S. & Surendranath, Y. Thermal hydroquinone oxidation on Co/N-doped carbon proceeds by a band-mediated electrochemical mechanism. J. Am. Chem. Soc. 144, 11253–11262 (2022).

Huang, X. et al. Au–Pd separation enhances bimetallic catalysis of alcohol oxidation. Nature 603, 271–275 (2022).

Zhao, L. et al. Insights into the effect of metal ratio on cooperative redox enhancement effects over Au- and Pd-mediated alcohol oxidation. ACS Catal. 13, 2892–2903 (2023).

Takeyasu, K., Katane, Y., Miyamoto, N., Yan, M. & Nakamura, J. Experimental verification of mixed-potential-driven catalysis. e-J. Surf. Sci. Nanotechnol. 21, 164–168 (2022).

An, H., Sun, G., Hülsey, M. J., Sautet, P. & Yan, N. Demonstrating the electron-proton-transfer mechanism of aqueous phase 4-nitrophenol hydrogenation using unbiased electrochemical cells. ACS Catal. 12, 15021–15027 (2022).

Daniel, I. T. et al. Kinetic analysis to describe co-operative redox enhancement effects exhibited by bimetallic Au-Pd systems in aerobic oxidation. Catal. Sci. Technol. 13, 47–55 (2022).

Wagner, C. & Traud, W. On the interpretation of corrosion processes through the superposition of electrochemical partial processes and on the potential of mixed electrodes. Corrosion 62, 844–855 (2006; English translation of the 1938 original).

Spiro, M. & Ravno, A. B. Heterogeneous catalysis in solution. Part II. The effect of platinum on oxidation-reduction reactions. J. Chem. Soc. 78–96 (1965).

Smirnov, E., Peljo, P., Scanlon, M. D. & Girault, H. H. Interfacial redox catalysis on gold nanofilms at soft interfaces. ACS Nano 9, 6565–6575 (2015).

Gray, D. & Cahill, A. Theoretical analysis of mixed potentials. J. Electrochem. Soc. 116, 443 (1969).

Spiro, M. Heterogeneous catalysis in solution. Part 17.—Kinetics of oxidation–reduction reaction catalysed by electron transfer through the solid: an electrochemical treatment. J. Chem. Soc. Faraday Trans. 1: Phys. Chem. Condens. Phases 75, 1507 (1979).

Miller, D. S. & McLendon, G. Quantitative electrochemical kinetics studies of ‘microelectrodes’: catalytic water reduction by methyl viologen/colloidal platinum. J. Am. Chem. Soc. 103, 6791–6796 (1981).

Bindra, P. & Roldan, J. Mechanisms of electroless metal plating. III. Mixed potential theory and the interdependence of partial reactions. J. Appl. Electrochem. 17, 1254–1266 (1987).

Power, G. P., Staunton, W. P. & Ritchie, I. M. Mixed potential measurements in the elucidation of corrosion mechanisms-II. Some measurements. Electrochim. Acta 27, 165–169 (1982).

Spiro, M. Catalysis by noble metals of redox reactions in solution. Catal. Today 17, 517–525 (1993).

Mills, A. Heterogeneous redox catalysts for oxygen and chlorine evolution. Chem. Soc. Rev. 18, 285–316 (1989).

Bockris, J. O. M. & Khan, S. U. M. Surface Electrochemistry: A Molecular Level Approach (Springer, 1993).

Kodera, T., Kita, H. & Honda, M. Kinetic analysis of the mixed potential. Electrochim. Acta 17, 1361–1376 (1972).

Michael, B. Y. & Spiro, M. Heterogeneous catalysis in solution. Part 22.—Oxidation–reduction reactions catalysed by electron transfer through the solid: theory for partial and complete mass-transport control. J. Chem. Soc. Faraday Trans. 1: Phys. Chem. Condens. Phases 79, 481–490 (1983).

Spiro, M. & Griffin, P. W. Proof of an electron-transfer mechanism by which metals can catalyse oxidation-reduction reactions. J. Chem. Soc. D: Chem. Commun. 262b–263 https://doi.org/10.1039/C2969000262B (1969).

Zhou, H., Park, J. H., Fan, F. R. F. & Bard, A. J. Observation of single metal nanoparticle collisions by open circuit (mixed) potential changes at an ultramicroelectrode. J. Am. Chem. Soc. 134, 13212–13215 (2012).

Bockris, J. O. & Reddy, A. K. N. Modern Electrochemistry 2B: Electrodics in Chemistry, Engineering, Biology and Environmental Science (Kluwer Academic Publishers, 2001).

Peljo, P. et al. Redox electrocatalysis of floating nanoparticles: determining electrocatalytic properties without the influence of solid supports. J. Phys. Chem. Lett. 8, 3564–3575 (2017).

Miller, D. S., Bard, A. J., McLendon, G. & Ferguson, J. Catalytic water reduction at colloidal metal “microelectrodes”. 2. Theory and experiment. J. Am. Chem. Soc. 103, 5336–5341 (1981).

de Donder, T. & van Rysselberghe, P. Thermodynamic Theory of Affinity: A Book of Principles (Stanford University Press, 1936).

Nernst, W. The New Heat Theorem: Its Foundations in Theory and Experiment (Dover Publications, 1969).

Prigogine, I. Introduction to Thermodynamics of Irreversible Processes (John Wiley & Sons, 1968).

Kondepudi, D. & Prigogine, I. Modern Thermodynamics: From Heat Engines to Dissipative Structures (John Wiley & Sons, 1998).

Smith, L. A., Glasscott, M. W., Vannoy, K. J. & Dick, J. E. Enzyme kinetics via open circuit potentiometry. Anal. Chem. 92, 2266–2273 (2020).

Cammann, K. & Rechnitz, G. A. Exchange kinetics at ion-selective membrane electrodes. Anal. Chem. 48, 856–862 (1976).

Kao, W. C. et al. The obligate respiratory supercomplex from Actinobacteria. Biochim. Biophys. Acta Bioenerg. 1857, 1705–1714 (2016).

Freguia, S., Virdis, B., Harnisch, F. & Keller, J. Bioelectrochemical systems: microbial versus enzymatic catalysis. Electrochim. Acta 82, 165–174 (2012).

Bertholet, A. M. & Kirichok, Y. Mitochondrial H+ leak and thermogenesis. Annu. Rev. Physiol. 84, 381–407 (2022).

Bard, A. J. & Faulkner, L. R. Electrochemical Methods: Fundamentals and Applications (John Wiley & Sons, 2001).

Lazzari, L. Encyclopaedia of Hydrocarbons 485–505 (Eni: Istituto della Enciclopedia italiana, 2005).

O’Hayre, R., Cha, S.-W., Colella, W. & Prinz, F. B. Fuel Cell Fundamentals (John Wiley & Sons, 2016).

Percival, S. J. & Bard, A. J. Ultra-sensitive potentiometric measurements of dilute redox molecule solutions and determination of sensitivity factors at platinum ultramicroelectrodes. Anal. Chem. 89, 9843–9849 (2017).

Caplan, S. R. & Essig, A. Bioenergetics and Linear Nonequilibrium Thermodynamics: The Steady State (Harvard University Press, 1983).

Newman, J. & Balsara, N. P. Electrochemical Systems (John Wiley & Sons, 2004).

Bockris, J. O., Conway, B. E., Yeager, E. & White, R. E. Comprehensive Treatise of Electrochemistry Volume 3: Electrochemical Energy Conversion and Storage (Plenum Publishing Corporation, 1981).

Huang, J., Chen, Y. & Eikerling, M. Correlated surface-charging behaviors of two electrodes in an electrochemical cell. Proc. Natl Acad. Sci. USA 120, 1–7 (2023).

Namari, N., Yan, M., Nakamura, J. & Takeyasu, K. Overpotential-derived thermogenesis in mitochondrial respiratory chain. ChemRxiv https://doi.org/10.26434/chemrxiv-2023-366tt-v2 (2024).

Acknowledgements

This work was supported by JSPS Grant-in-Aid for Scientific Research (KAKENHI) Grant Number 23H05459; JST support for the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2106; the Project for University-Industry Cooperation Strengthening in Tsukuba; and the TRiSTAR Program, a Top Runner Development Program Engaging Universities, National Labs, and Companies. M.Y., N.A.P.N., J.N., and K.T. thank Prof. Hiroaki Suzuki for fruitful discussions on the mixed potential.

Author information

Authors and affiliations

Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8573, Japan

Mo Yan & Nuning Anugrah Putri Namari

Department of Materials Science, Faculty of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8573, Japan

Junji Nakamura & Kotaro Takeyasu

Tsukuba Research Centre for Energy and Materials Science, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8573, Japan

International Institute for Carbon-Neutral Energy Research (I²CNER), Kyushu University, 744 Motooka, Nishi-ku, Fukuoka-shi, Fukuoka, 819-0395, Japan

Junji Nakamura

R&D Center for Zero CO2 Emission with Functional Materials, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8573, Japan

Kotaro Takeyasu

Contributions

J.N. conceived the concept. J.N. and K.T. supervised the project. M.Y. and K.T. derived the equations. M.Y., K.T., and J.N. designed the figures. M.Y., N.A.P.N., J.N., and K.T. wrote the paper.

Corresponding authors

Correspondence to Junji Nakamura or Kotaro Takeyasu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Communications Chemistry thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Peer review file

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Yan, M., Namari, N.A.P., Nakamura, J. et al. Theoretical framework for mixed-potential-driven catalysis. Commun Chem 7, 69 (2024). https://doi.org/10.1038/s42004-024-01145-y

Received: 25 December 2023

Accepted: 11 March 2024

Published: 01 April 2024

DOI: https://doi.org/10.1038/s42004-024-01145-y
