Split-testing is important for maximizing results…
You’ve heard that before, and you’ll hear it again. But let’s be honest, you don’t really do it, do you?
Yeah, I thought not.
Don’t feel bad… the truth is that hardly anybody does.
Split-testing can be complicated. You need the technology and a statistics degree to make sense of the results. Not to mention that all you can really test is weird stuff like button colors and headlines, right?
Actually, no! You can split-test a few simple things, and see a big improvement to your bottom line.
Split testing? Say what?!
Let’s start with a quick review: split testing (or A/B testing) is about testing two different variations of a page to see which does a better job of getting your audience to do what you want them to do. For example:
- Which headline variation gets more people to read the page.
- Which button text gets more people to sign up for your mailing list.
- Which product picture gets more people to buy your product.
That’s all there is to it.
And yes, sometimes these tests can have a huge impact on your bottom line, as DIYthemes discovered with their email sign-up form.
To run these tests, you need technology that can serve up the different variations of a page to your visitors, and a way to analyze the results to know when they’re statistically significant (i.e., the difference isn’t just a coincidence).
And as for doing the analysis, programs like Visual Website Optimizer tell you whether or not your split test was significant. I also created a split test checker in Excel [edit: link removed] that helps you figure out whether or not your split tests are statistically significant.
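If you’re curious what that significance check actually looks like under the hood, here is a minimal sketch of a two-proportion z-test, the standard way to compare two conversion rates (the function name, the example numbers, and the 95% threshold are my illustrative assumptions, not taken from any particular tool):

```python
import math

def split_test_significant(visitors_a, conversions_a,
                           visitors_b, conversions_b,
                           z_threshold=1.96):
    """Two-proportion z-test: True if the difference in conversion
    rates between variations A and B is significant at ~95% confidence."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled conversion rate under the assumption of no real difference
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    # Standard error of the difference between the two rates
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    return abs(z) >= z_threshold

# 2,000 visitors per variation: A converts at 3.0%, B at 4.5%
print(split_test_significant(2000, 60, 2000, 90))  # True
```

Notice that the same 3.0% vs. 4.5% gap is *not* significant with only 200 visitors per variation, which is exactly why you can’t call a test early just because one number looks bigger.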
Okay, now that we’re up to speed, let’s talk about the difference between split tests (that everybody talks about) and a split testing strategy (that people don’t talk about, but should be doing).
Split Tests vs. a Split Testing Strategy
Most people talk about individual tests—how changing the color of this or that button magically increased conversions by 4,281%.
And yes, that sometimes happens, but in most cases, switching button colors won’t make a very big difference at all.
So what should you be testing?
You don’t need to test everything. You need to test your assumptions.
Assumption #1: What People Want
Yes, that’s right. You developed a product or service because you thought people wanted what you’re selling.
That’s an assumption.
Maybe your product is great, and maybe it’s selling very well, but maybe you could do even better.
So why not run a split test to see if people would rather have something else?
Don’t worry, you don’t have to actually build another product to run this test: you can test with webinars before the product is even built.
Assumption #2: What It’s Worth
That’s right! You made an assumption when you set the price, too.
In fact, you made several assumptions. Assumptions about…
- How valuable your offering is to your audience.
- How much your audience can afford to spend.
- What a competitive price in your marketplace would be.
- What price would send the most compelling message about your offering.
Get the idea?
You could be right, but you could also be wrong. Sometimes you’ll find that a higher price not only fails to reduce demand, but actually increases it, by improving the audience’s perception of your product.
So how do you test price?
The easiest way is to set up a squeeze page and send your audience to it. Create two variations of the page with two different prices, and don’t mention the price anywhere else.
Then watch the data—and dollars—roll in. 😉
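One practical detail when serving two prices: a returning visitor should always see the same price, or the test data gets muddied (and the visitor gets confused). A common way to do that is to bucket visitors deterministically by hashing an ID, rather than flipping a coin on every page load. A minimal sketch, where the variant names and the 50/50 split are my illustrative assumptions:

```python
import hashlib

def assign_variant(visitor_id, variants=("price_a", "price_b")):
    """Deterministically bucket a visitor into one page variation.
    Hashing the ID (e.g. a cookie value) means the same visitor
    always lands in the same bucket across visits."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always gets the same price page
print(assign_variant("visitor-123") == assign_variant("visitor-123"))  # True
```

Split-testing tools handle this bucketing for you behind the scenes; the sketch just shows why the assignment is stable rather than random on every visit.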
Assumption #3: The Pitch
The third assumption that you made was about the most effective way for you to sell.
But, just like with your other assumptions, you could be wrong.
Maybe you’ll do better if you use longer copy. Or maybe shorter copy would be a better idea. Maybe a sideways sales letter? Maybe just a video?
We can brainstorm and guess all day long, but the only way you’ll know for sure is to test.