
A Complete Guide to Growth Experimentation in 2023

Look at growth as a mindset, not a playbook.

Growth is a combination of clean data infrastructure, excellent customer experiences, and rapid experimentation. Achieving sustainable growth requires an understanding of all stages of the customer journey, and a commitment to ruthlessly improving & optimising each step.

A growth experimentation framework provides the visibility, prioritisation & insights to do so. Three pillars are critical to its success: culture, process, and skills.

Whilst there are best-practice principles that can be applied across businesses, there isn't an out-of-the-box framework you can copy-paste into your company.

It takes time and effort to centralise & tailor your processes, keep teams aligned and build a culture that values experimentation.

Why experiment?

  • Sift through a sea of ideas to prioritise your next moves
  • Skyrocket growth while maximising return-on-spend
  • Rapidly generate insight into your product & customers
  • Get executive buy-in for your product road-map
  • Build a culture of high-velocity optimisation

Getting over the first hurdle

Kicking off growth experimentation requires a significant investment into team alignment & coaching.

The usual internal mentalities holding companies back are:

A siloed approach to teams

Sales is purely sales, product is purely product, marketing is purely marketing (and so on). There is little collaboration across disciplines. In this set-up, teams often compete with each other internally (e.g. on separate internal KPIs) and share little insight between them.

Fear of failure

Where avoiding poor results is more important than success, teams can find solace in traditional marketing and generic growth experiments. Naturally, these risk-averse choices deliver less meaningful takeaways.

Common pitfalls

Such mentalities often lead to the following:

Businesses often mistake A/B testing on their website for growth experiments

Most of the time it is not: the majority of businesses simply don't have sufficient traffic to run micro-optimisations to a meaningful result in a reasonable timeframe (see the sketch below). Instead, they should focus on high-impact experiments.
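
As a rough illustration, here is a minimal sample-size sketch using the standard two-proportion z-test approximation (95% confidence, 80% power by default); the baseline conversion rate, uplift and traffic figures are purely hypothetical.

```python
import math

def sample_size_per_variant(baseline, relative_uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion z-test
    (defaults: 95% confidence, 80% power)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance_sum / (p2 - p1) ** 2)

# Illustrative figures: a 3% baseline conversion rate, a 5% relative uplift
# and 1,000 visitors/day split evenly across two variants.
n = sample_size_per_variant(baseline=0.03, relative_uplift=0.05)
days = math.ceil(2 * n / 1_000)
print(f"~{n:,} visitors per variant, roughly {days} days at 1,000 visitors/day")
```

At those (hypothetical) numbers the test needs over 200,000 visitors per variant and more than a year of traffic, which is exactly why micro-optimisations rarely pay off for smaller sites.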

Businesses think experiments must be successful at all costs

Often this even leads to 'shifting the goalposts' mid-campaign in order to show success, which minimises an organisation's opportunity for learning and improvement.

Businesses think they need long planning cycles for one-off campaigns, with high production values

Long planning cycles lead to high production values, which lead to high costs, which in turn demand higher ad spend to justify the investment: a vicious cycle.

Businesses suffer paralysis by analysis

Afraid of making mistakes, employees pass decisions up the chain, slowing down the speed with which they can be made.

Businesses have insufficient analytics infrastructure to measure test performance

Being unable to measure the performance of experiments slows down any experimentation culture.

Companies held back by these hurdles struggle to implement a build-measure-learn cycle that lets them grow their understanding of customers, markets and trends at high speed. On one end, start-ups don't have the luxury of relying on institutional knowledge built up over years or even decades; to survive, they need to learn much faster. On the other end, the business landscape for established corporates is changing rapidly: whether it's marketing channels, the competitive environment or consumer preferences, they need to stay on top of these changes.

Three factors are critical to introducing experimentation successfully: Culture, Process, and Skills/Tools. 

Growth culture

A growth culture should exist across all areas of the business—from product and engineering through to sales and customer service. However, the growth team needs to have the authority and capability to prioritize and implement experiments across the business.

Growth is everyone’s job

Growth is not only about customer acquisition; it also (or even mostly) involves customer retention, the various stages of activation (e.g. from trial to paying customer to advocate), pricing strategy and of course product development. This means everyone can and should contribute their ideas. Within the growth team itself, all areas of the business should be represented, as otherwise you may miss the critical insight that is obvious to one team but not another. The growth team can then prioritise, derive and run experiments from these ideas and coordinate the implementation of validated ones.

Celebrate ‘failure’ as a learning opportunity

Fear of failure and of being wrong stifles ideas and leads to teams shifting goalposts to demonstrate success, robbing the team and the business of the ability to do better next time. Framing is important here: as long as a growth experiment generates knowledge, it did not fail. To encourage this thinking, we suggest categorising experiments as 'accepted' or 'rejected' upon their conclusion, instead of 'success' or 'failure.'

Ask for forgiveness, not permission.

Speed is fundamental: the faster you validate or reject your hypotheses, the faster you generate the insight that drives further growth. One of the major barriers in larger organisations is red tape and the fear of making decisions without cover from 'above' (this goes hand-in-hand with fear of failure). This slows down the experimentation process significantly. To address this, the growth team should have clearly defined authority, and a testing budget without return expectations, within which to run experiments.

Keep in mind: if you make 100 decisions a day and get only 50% right, instead of making 10 decisions and getting 100% right, you still end up with 50 good decisions a day instead of 10, so you're moving at 5x the speed of your competitor. And this assumes you're not even learning from the incorrect decisions.

Automation-driven > campaign-driven

We want our efforts to accumulate over time, rather than driving one-off results.

Solve problems. Be creative. Collaborate.

Growth hacking is about solving problems. Each experiment needs to minimise cost—in both time and money—and maximise the insight generated by the experiment. At the same time, identifying high-impact tests (more about this in the process section below) requires creativity and collaboration across team boundaries.

Building culture is a slow process. Buy-in and leading by example from senior management are critical to making it possible at all. At the same time, constraining the culture shift to the growth team while the rest of the business continues as usual can lead to counterproductive friction between teams.

Growth process overview

There are a number of different schools of thought around team structure and processes. We’ve found the following building blocks to work well.

A full-time growth team

A dedicated team to manage the processes, infrastructure, testing backlog, implementation and analysis. This should cover growth, marketing, product, engineering and analytics experts. Representatives from sales, customer service, operations and other teams should be included in addition to their primary roles. For these 'part-time growth members,' it is important that they have sufficient time carved out of their primary role to actively contribute to the growth team.

An Experiment board

To outline and track your experiments: those in the backlog, planned, active & completed. We like to use ClickUp, but this works just as well in Monday, Asana, JIRA or even Google Sheets when you're starting out. There are some specialised solutions, but these are usually overkill. We'd generally recommend using the same project management solution your teams are already using.

Sprints

Sprint planning is very helpful in focusing the team on high testing velocity. However, the sprint duration is highly dependent on how quickly you are able to generate a sufficiently large sample for meaningful statistical analysis. Some experiments may have to run through multiple sprints, but your sprint duration should be set to allow the majority of your experiments to complete within a single sprint. At the end of each sprint, the team should come back together for a retrospective to discuss not only the experiment outcome and learnings, but also what improvements can be made to the experiment and implementation processes.

Guiding principles & frameworks for ideation, documentation & leadership.

The meat of the growth experimentation process. Let's deep dive below.

Growth process deep-dive

Part 1 - Experimentation Ideation

Experiment ideation is a mix of art & science.

Inspiration should be drawn from:

  • Customer data
  • Customer feedback
  • Previous experiments
  • Competitors / Research
  • Intuition 

These factors will form the basis of our experiment rationale. While ideating, it is crucial to flag limitations around data collection and understanding. For example: we may have strong opinions, but do we have the data to back them up?

Your approach to ideation should differ, depending on whether you're looking through a product-led or marketing-led lens.

Product-led Ideation

What do our customers love?  What is our main benefit?
  • Most used features
  • Products / features with the best customer feedback
  • USPs
  • How can we double down? 
  • Where is the opportunity? 
  • What can we add, and what can we expect in return? 
What are our / our customers' pain-points? How can we improve our product?
  • Missing features
  • Missing products (growth / tech stack)
  • Missing products (service-offering)
  • Efficiency bottlenecks (sales / support)
  • What do you think is the root cause of the problem? 
  • How can we overcome it? What return on investment can we expect from overcoming it?

Marketing / Sales-led Ideation

What elements of the funnel (AAARRR) do we need to improve?
  • Who is targeted
  • What is going to be created
  • Where is the idea implemented
  • When in the customer journey 
What are our customers' behaviours? For example:
  • What features do they use the most?
  • What pages do they visit?
  • How often do they engage?
  • What do they buy? How much do they spend?
  • When do they engage (are there time/day or other patterns)?
What are the characteristics of our best customers? For example:
  • What source were they acquired from?
  • What device do they use?
  • What is their demographic background? Where do they live?
  • What other products/services do they use?
What events cause people to abandon the product? For example:
  • What pages have the highest exit rates?
  • Are there bugs preventing certain actions?
  • How is the product/service priced relative to competitors?
  • What is their customer journey? How much time do they spend before abandoning?
What are our North Star Metrics / important growth metrics?
  • Revenue (e.g. ARR, GMV) - The amount of money being generated
  • Customer growth (e.g. paid users, market share) - The number of users who are paying
  • Consumption growth (e.g. messages sent, nights booked) - The intensity of usage of your product
  • Engagement growth (e.g. MAU, DAU) - Number of active users
  • Growth efficiency (e.g. LTV/CAC, margins, ops efficiency) - Efficiency at which you spend vs make $$ (see the sketch after this list)
  • User Experience (e.g. NPS) - How enjoyable customers find the product experience
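
To make the growth-efficiency metric concrete, here is a minimal sketch of one common way to calculate an LTV/CAC ratio for a subscription business; the formula variant and every figure below are illustrative assumptions, not a prescription.

```python
def simple_ltv(monthly_arpu, gross_margin, monthly_churn):
    """One common approximation: margin-adjusted monthly revenue times
    the average customer lifetime (1 / monthly churn)."""
    return monthly_arpu * gross_margin / monthly_churn

def blended_cac(acquisition_spend, new_customers):
    """Blended cost to acquire a customer over a given period."""
    return acquisition_spend / new_customers

# Illustrative figures only
ltv = simple_ltv(monthly_arpu=50, gross_margin=0.8, monthly_churn=0.04)   # 1,000
cac = blended_cac(acquisition_spend=30_000, new_customers=100)            # 300
print(f"LTV/CAC = {ltv / cac:.1f}")  # ~3.3
```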

Part 2 - Documenting Ideas as Experiments

We divide our experiments into:

  • Diagnostic Experiments: Experiments to provide better insight into the product, market, personas & customer behaviours. Often these experiments are run to address limitations we face in the ideation phase.
  • Incremental Growth Experiments: Narrow experiments to validate an assumption/idea (e.g. around a feature) or experiments focusing on specific conversion elements within the funnel.
  • Radical (‘Big Bang’) Experiments: High-risk, high-reward experiments - something that requires extensive research, planning & resource allocation & has impact across the entire funnel.

Diagnostic & Incremental Experimentation

Experiments should be listed in an experiments space in a project management board (e.g. Monday, ClickUp) and defined using RICE methodology. Everyone across the business is encouraged to submit experiments. When adding experiment ideas, follow the 'Documenting Experiments' guidelines below.

Radical Experimentation

Think of these as particularly bold experiments, strategies or products that you hypothesise can have an astronomical impact on growth. Usually, Big Bangs are inspired by diagnostic or incremental growth experiments.

Time will need to be allocated across the quarter for planning and execution. For product-led Big Bangs, the team should follow a typical MVP process (mock-ups, interviews, build, test, optimise). The objective should be to build a minimum valuable product, not a minimum viable product (our learnings & preparation should give us the confidence to impress users from the get-go).

Documenting Experiments

Name of Experiment = Hypothesis

  1. Hypothesis: What do you think will happen? A clear, short statement without justification or explanation. At the end of the experiment you’ll need to be able to validate or invalidate this hypothesis. This is what goes in the title of your experiment.
  2. Description/Rationale: Why do you think this? Provide an explanation of the hypothesis: what supporting material makes you think it is correct? Write it out in this description, and attach any supporting material in the comment/update section below.
  3. Implementation: Describe the quickest and cheapest way to test the hypothesis.
  4. Success Definition: The KPIs and threshold to be reached before the hypothesis is counted as validated. These KPIs can be relative (e.g. an A/B test), absolute (e.g. we need to achieve a certain CAC to consider this channel effective) & even qualitative (good feedback). Generally, the more measured & scientific we can be the better, but it's important not to feel limited in the early days of testing.
  5. RICE Score: Reach, Impact, Confidence, Effort. Rate each factor from 1-10, then average them. Higher scores are better; for effort, a higher score means easier and a lower score means harder.

As is generally the case with experimentation, the RICE methodology is a mix of art & science. Wherever possible, default to more quantifiable measures (e.g. confidence should correlate with data rather than intuition, and the experiment should be scoped well enough to support the effort score with time estimates), as sketched below.
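
As a minimal sketch of the scoring described above (four 1-10 ratings averaged, with effort rated as ease so that higher means easier), the helper below and its example ratings are purely illustrative.

```python
def rice_score(reach, impact, confidence, effort_ease):
    """Average of four 1-10 ratings, as described above.
    Effort is rated as ease: 10 = very easy, 1 = very hard."""
    ratings = {"reach": reach, "impact": impact,
               "confidence": confidence, "effort_ease": effort_ease}
    for name, value in ratings.items():
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10")
    return sum(ratings.values()) / len(ratings)

# Illustrative example: broad reach, decent impact, data-backed confidence, fairly easy to run
print(rice_score(reach=8, impact=6, confidence=7, effort_ease=9))  # 7.5
```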

Before it enters CLOSED you must add to the experiment:

  1. Results: The quantitative outcome of the experiment based on the KPIs defined in the acceptance criteria and whether the threshold was reached.
  2. Learning: A qualitative interpretation of the result, key takeaways and new questions/hypotheses the experiment has resulted in.
  3. Outcome: Validated, Invalidated or Unclear

Structuring Your Experiment Board

In our experience, no two experiment boards are ever the same. Each head of growth & team will have preferences in tooling, structure & layout. Tools like ClickUp are great as there are endless permutations of 'views' you can customise.

Depending on whether you are a young startup looking for product-market fit, or an established scale-up or corporate looking to optimise & skyrocket growth, you may choose to incorporate additional features. For example, adding the 4-fit framework (Market, Product, Channel, Model) or a variation thereof to your board is a great way to frame your early experiments against your core burden-of-proof questions. Furthermore, you may choose to add labels for lifecycle stage, experiment type and so on.


The end goal is to create something extremely clear and user-friendly. Everyone in the team (including new starters) should be able to navigate a clear list of completed experiments and see whether each was validated or invalidated. A head of growth should be able to easily groom and prioritise the backlog.

Part 3 - Leading a Growth Team

Practical Tips

A Head of Growth's role is primarily project management.

Sprints should run weekly or fortnightly. Ideation sessions can be held on an ad-hoc basis.

The growth team should be composed of a growth lead to coordinate the process as well as marketing generalists/specialists, BI/analytics, engineering, sales and customer success. This composition can be adapted to your company setup (standalone pod vs matrix vs mixed).

Completed growth experiments should be documented in a company-wide knowledge base and results shared with the wider business, for example by:

1 - Setting up a wins distribution list/email newsletter for sharing experiment results and business impact with interested parties (this can include only successful tests or failed ones as well)

2 - Publishing experiment results to company dashboards 

All employees should be encouraged to add ideas to the idea pipeline. The growth lead will vet ideas (only to make sure all required information is included and they are specific enough) and coordinate with the ideator if more information is required.

Before the Growth Meeting

Head of Growth reviews activity:

  • Experimental velocity vs goal (tempo)
  • Coordinate the update of key metrics
  • Gather data about tests that were concluded
  • High-level assessment of last week's experiments & results
  • Review & update new experiments: prioritise/rank them on the basis of their finalised RICE scores

For product-led initiatives, the aim is to divide time between doubling down on what users love and addressing what's holding them back.

Leading a Growth Meeting

The purpose of the growth meeting is not ideation!

Metrics review and update focus area (15min)

  • North-star and other key growth metrics
  • Key positive factors
  • Key negative factors
  • Growth focus area (AARRR)

Review last week's testing activity (10min)

  • Tempo (vs goal)
  • How many "up next" experiments were not launched, and why

Key lessons learned from growth experiments (15min)

  • Preliminary results for just-concluded experiments
  • Conclusive results of finalised experiments and implications for future action

Select growth tests for current cycle (15min)

  • Members give an overview of the ideas they nominated
  • Brief discussion and selection of experiments for next week
  • Each selected experiment is assigned an appropriate 'owner'
  • Experiments worthy of testing but not ready for next week are slated for launch, pending timeline input from the impacted teams
  • Selected ideas need to fit the current growth focus

Growth of idea pipeline (5min)

  • Recognise top contributors

Skills and Tools

Finally, running successful growth experiments requires the ability to measure experiment results accurately. This is best achieved with a solid analytics infrastructure.

Growth teams need to combine a wide array of skill sets, from marketing, copywriting and psychology to analytics and engineering. These skills don't need to be combined at expert level in a single individual, but all team members should have a good technical and analytical understanding.

Many growth experiments consist of combining existing systems in novel ways to create value. Take the early-days Airbnb hack of posting its listings on Craigslist: this required 'hacking' together a solution that could convert an Airbnb listing into a Craigslist post and publish it on the platform. To be truly creative, growth teams need to understand what is technically possible and what third-party solutions are out there.

These golden-ticket growth hacks are only possible with a robust growth process, culture and infrastructure. They result from shifting away from the traditional approach, moving away from campaign-driven promotions and towards layered, automated strategies. Throw in some hustle, and at the very least you are sure to achieve fantastic, sustainable, long-term growth. Throw in some out-of-the-box thinking, good timing and a little luck for good measure, and you are in the perfect position to pull off your one-in-a-million growth hack.

Happy hacking!

Leave it with us.

Data-driven growth and analytics consulting for scale-ups & established corporates
Let’s talk