Product Manager Framework: HIPE

The 4 most important factors when evaluating growth opportunities

About the author

Jeff Chang (@JeffChang30) is a growth technical leader at Pinterest and an angel investor. If your startup is looking for an angel investor who can help with all things growth, please send over an email!

Introduction

The two most important skills in growth are finding great opportunities and executing with high velocity. In this blog post, we’ll talk about the first one: finding great opportunities. When new members join a growth team, their initial mindset is usually to take on the projects given to them and execute them well. However, to have more impact, growth team members need to expand their scope and deliver growth end to end, from ideation to execution to analysis. This blog post covers an essential part of the ideation process: evaluating new ideas!

Talking about evaluating ideas before sourcing them is important because learning how to evaluate will shape how you look for opportunities. There are many factors that you can consider when evaluating opportunities, but I boil it down to four main factors: Hypothesis, Investment, Precedent, and Experience. It’s easy to remember these four with the acronym HIPE (sounds like hype!).

Hypothesis

Why will this idea have a significant impact on metrics?

You should have a good hypothesis as to why certain metrics will change. Your hypothesis should take into account the opportunity size, which is the number of users who might be affected by the feature. It doesn’t matter how good an idea is if very few people will be affected by it in the first place. Most hypotheses fall into one of two categories: increasing intent or decreasing friction.

Examples:

  • Increasing intent: Highlighting our unique features will increase the intent of the user to sign up, and therefore increase signups. The increase in signups will be significant because 1 million users per day visit this page.

  • Decreasing friction: Removing this extra step in the new user flow will decrease the number of steps it takes to get to a key product feature, which will increase activation rates. The increase will be significant because 1 million signups go through this flow every day.
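To make the opportunity-size part of a hypothesis concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a real figure:

```python
# Rough opportunity sizing: how many extra conversions could this idea drive?
# All numbers here are illustrative assumptions.

daily_visitors = 1_000_000      # users who see this page per day (opportunity size)
baseline_signup_rate = 0.05     # current signup conversion on the page
expected_relative_lift = 0.02   # hypothesized +2% relative improvement

extra_signups_per_day = daily_visitors * baseline_signup_rate * expected_relative_lift
print(f"Expected extra signups per day: {extra_signups_per_day:,.0f}")  # -> 1,000
```

Halve `daily_visitors` and the expected gain halves with it, which is why the same idea can be worth running on one surface and not another.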

Investment

How much time will we have to invest in this project?

Growth is all about making smart time investments. It is better to work on ten 1-day projects with an expected value of 1K each than one 10-day project with an expected value of 5K. One reason growth is tough is that if your experiment fails, all that hard work gets undone and you are back to the previous state, plus some learnings. So knowing the possible metrics gains of an experiment is not enough; you have to weigh them against the time investment. Another thing to consider is not just the time to develop a feature, but the time to maintain it. For example, although country-specific signup flows might be a great opportunity to significantly increase signup rates in specific countries, the maintenance cost of keeping multiple flows bug-free over time is high, so running that kind of experiment may not be worth it.

Examples:

  • The time investment for this project is 1 day, plus a few hours every month to maintain.

  • The time investment for this project is 1 month, plus a few days every month to maintain.
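To make the time-investment math explicit, here is a minimal Python sketch that compares projects by expected value per engineer-day, including maintenance. The helper function and all figures are illustrative assumptions:

```python
# Compare projects by expected value per engineer-day, including maintenance.
# All figures are illustrative assumptions.

def value_per_day(expected_value, build_days, monthly_maintenance_days=0.0,
                  horizon_months=12):
    """Expected value per engineer-day over the given horizon."""
    total_days = build_days + monthly_maintenance_days * horizon_months
    return expected_value / total_days

small = value_per_day(expected_value=1_000, build_days=1, monthly_maintenance_days=0.1)
large = value_per_day(expected_value=5_000, build_days=10, monthly_maintenance_days=1.0)
print(f"1-day project:  ~{small:,.0f} per day invested")   # ~455
print(f"10-day project: ~{large:,.0f} per day invested")   # ~227
```

Under these assumptions, the small projects return roughly twice as much value per day invested, and the maintenance term is exactly what makes ideas like country-specific signup flows look worse than they first appear.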

Precedent

Is there a precedent for this working in the past?

This factor looks at past experiments run by your team or elsewhere in the industry. In the beginning, you won’t have any previous results of your own to look at, so you will have to see what works in the industry, but over time you should rely mainly on your own past experiment results. Your own results are much more valuable than industry results because every product has a different set of features and customers.

Short rant on industry benchmarks

I almost never trust industry benchmark metrics because the variance is usually very large. For example, if I google “email industry open rates”, the first link tells me the industry standard falls between 20-30%. Does that mean I should expect open rates between 20-30%? No: I could get a 10% open rate and that wouldn’t be crazy, or I could get over 50%.

In fact, my email open rates have been over 50%, but that is because of the kind of audience I have. I tell my visitors exactly what they will get by subscribing (notifications when I publish new posts), and I make subscribing completely optional. Since subscribers know exactly what they will receive, and that is all they receive, they have high intent to open. If I instead required an email subscription to read posts, my open rates would probably be significantly lower, because some users would have signed up to get access to a blog post, not because they wanted emails. To summarize, industry benchmarks vary widely with context and are rarely much use for evaluating how good your metrics are.

Examples:

  • In the past, we tried an experiment on another similar page and it increased signup conversion rate by 10%.
  • In the past, we tried an experiment on another similar email and it increased open rate by 10%.
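Precedent is easiest to apply when past results are written down somewhere queryable. Below is a minimal sketch of such a log in Python; the field names and numbers are my own invention, not from the post:

```python
# A lightweight log of your own past experiments, used to set expectations
# for a similar new idea. Records and numbers are made up for illustration.

past_experiments = [
    {"surface": "signup_page", "change": "highlight unique features", "relative_lift": 0.10},
    {"surface": "signup_page", "change": "shorten the form",          "relative_lift": 0.04},
    {"surface": "email",       "change": "new subject line",          "relative_lift": 0.10},
]

def expected_lift(surface):
    """Average relative lift from your own past experiments on a similar surface."""
    lifts = [e["relative_lift"] for e in past_experiments if e["surface"] == surface]
    return sum(lifts) / len(lifts) if lifts else None

print(f"expected lift: {expected_lift('signup_page'):.0%}")  # 7% -- a prior grounded in your own product
```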

Experience

Is this change a good user experience?

When working on growth, a common problem is focusing on one “north star” metric and making ship decisions solely on that metric. If you optimize for only one metric, your experience will usually trend toward an extreme that performs very well for that metric. For example, if you are optimizing for subscribers, the highest-performing experience might be one that aggressively blocks core user features, but that is not a good user experience. How do you determine what is a good experience and what isn’t?

It can seem subjective, as people usually have different notions of what a “good experience” is. To add to the issue, company employees are usually pretty different from customers, so it’s hard to know exactly what users think. One way (of many) to judge experience is to look at quality metrics, such as long-term retention and value-generating user actions. For example, if you use an aggressive upsell to increase subscriptions, but users who don’t subscribe still retain well and perform just as many value actions, perhaps it’s not that bad of an experience. However, if users who don’t subscribe show a significant drop in retention, the user experience is likely poor.
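As a sketch of this quality-metrics approach, the Python below compares 28-day retention of non-subscribers in an aggressive-upsell group against a control group. The data shapes and the tolerance threshold are illustrative assumptions:

```python
# Judge the experience with a quality metric: does the aggressive upsell hurt
# retention for users who saw it but did not subscribe? Data is illustrative.

def retention_rate(users):
    """Share of users retained at day 28."""
    return sum(u["retained_28d"] for u in users) / len(users)

def upsell_hurts_experience(treated_non_subscribers, control_users, tolerance=0.02):
    """Flag a likely bad experience if non-subscribers in the treatment group
    retain noticeably worse than control."""
    drop = retention_rate(control_users) - retention_rate(treated_non_subscribers)
    return drop > tolerance

control = [{"retained_28d": True}] * 70 + [{"retained_28d": False}] * 30
treated = [{"retained_28d": True}] * 60 + [{"retained_28d": False}] * 40
print(upsell_hurts_experience(treated, control))  # True: a 10-point retention drop
```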

Example:

  • Showing a non-dismissible modal immediately when a page loads is likely a bad experience.

Conclusion

Take experiment evaluation seriously. In growth, working on the right projects instead of the wrong ones usually makes at least a 10x difference in experiment impact. It’s normal to have many experiments that don’t beat the control group and therefore have no impact, but you want some major successes to balance those out. Great experiment evaluation is a required skill for any high-impact growth team.

Source: growthengblog.com
