Teach yourself growth marketing: How to perform growth experimentation through A/B testing
Without customers, there can be no business. So how do you drive new customers to your startup and keep existing customers engaged? The answer is simple: Growth marketing.
As a growth marketer who has honed this craft for the past decade, I’ve been exposed to countless courses, and I can confidently attest that doing the work is the best way to learn the skills to excel in this profession.
I am not saying you need to immediately join a Series A startup or land a growth marketing role at a large corporation. Instead, I have broken down how you can teach yourself growth marketing in five easy steps:
Setting up a landing page.
Launching a paid acquisition channel.
Booting up an email marketing campaign.
Running growth experiments through A/B testing.
Deciding which metrics matter most for your startup.
In this fourth part of my five-part series, I’ll take you through a few standard A/B tests to begin with, then show which tests to prioritize once you have assembled a large enough list. Finally, I’ll explain how to run these tests with minimal external interference. For the entirety of this series, we will assume we are working on a direct-to-consumer (DTC) athletic supplement brand.
A crucial difference between typical advertising programs and growth marketing is that the latter employs heavy data-driven experimentation fueled by hypotheses. Let’s cover growth experimentation in the form of A/B testing.
It is important to consider secondary metrics and not always rely on a single metric for measuring impact.
How to properly do A/B tests
A/B testing, or split testing, is the process of sending traffic to two variants of something at the same time and analyzing which one performs better.
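To make "which one performs better" concrete, here is a minimal sketch of how you might check whether a difference in conversion rates between two email variants is statistically meaningful, using a standard two-proportion z-test. The function name and the sample numbers are hypothetical, not from the article:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    meaningfully different from variant A's, given the sample sizes?"""
    p_a = conv_a / n_a          # variant A conversion rate
    p_b = conv_b / n_b          # variant B conversion rate
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: 5,000 emails per variant,
# 400 conversions on A vs. 460 on B.
z = two_proportion_z(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
# |z| > 1.96 corresponds to significance at the 95% confidence level.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

The practical takeaway: a variant that "looks" better can still be noise; decide your sample size and significance threshold before the test starts, not after.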
There are hundreds of ways to invalidate an A/B test, and I’ve witnessed most of them while consulting for smaller startups. During my tenure leading the expansion of rider growth at Uber, we relied on advanced internal tooling just to ensure that our tests ran almost perfectly. One of these tools was a campaign name generator that kept naming consistent so that we could analyze accurate data once a test concluded.
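You don’t need Uber-scale tooling to get the benefit of consistent naming. A sketch of the idea (the naming scheme, field names, and example values below are illustrative, not Uber’s actual convention):

```python
from datetime import date

def campaign_name(channel, audience, variant, test_id, when=None):
    """Build a consistent campaign name so results can be grouped
    reliably at analysis time. Scheme: channel_audience_test_variant_date."""
    when = when or date.today()
    parts = [
        channel,
        audience,
        f"test{test_id}",
        f"var-{variant}",
        when.strftime("%Y%m%d"),
    ]
    # Lowercase and replace spaces so names are safe to filter on later.
    return "_".join(p.lower().replace(" ", "-") for p in parts)

campaign_name("meta", "US Athletes", "A", 42, date(2022, 7, 1))
# e.g. "meta_us-athletes_test42_var-a_20220701"
```

Even a helper this small prevents the classic failure mode of ad-hoc names ("summer_test_FINAL_v2") that make post-test analysis guesswork.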
Some important factors to consider when running A/B tests:
Do not run tests with multiple variables.
Ensure traffic is being split correctly.
Set a single primary metric to measure before the test begins.
The most common reason for tests getting invalidated is confounding variables. At times it isn’t obvious, but even testing different creatives in two campaigns that have different bids can skew results. When setting up your first A/B test, ensure there’s only one difference between the two email campaigns or datasets being tested.
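The "ensure traffic is being split correctly" point is also worth making concrete. One common, simple approach (a sketch, not a prescription; the function and parameters are hypothetical) is to hash each user ID together with the test name, so assignment is stable across sessions and independent across tests:

```python
import hashlib

def assign_variant(user_id, test_name, split=0.5):
    """Deterministically assign a user to variant A or B.
    Hashing user_id together with test_name keeps a user's assignment
    stable for this test, while different tests split independently."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return "A" if bucket < split else "B"

# The same user always lands in the same variant for a given test.
assert assign_variant("user-123", "subject-line-test") == \
       assign_variant("user-123", "subject-line-test")
```

A deterministic split like this avoids a subtle confound: if users are re-randomized on every visit, the same person can see both variants, and your measured difference no longer reflects a clean comparison.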
Teach yourself growth marketing: How to perform growth experimentation through A/B testing by Ram Iyer originally published on TechCrunch