I’m going to start with a simple question. Don’t panic, this isn’t a pop quiz! It will be very easy to answer.
Do you want more donations for your organization?
I bet it’s pretty safe to assume that you answered yes. If you answered no, this article probably isn’t for you. More donations can be achieved through conversion rate optimization – increasing the number of conversions (donations) from the visitors who are already exploring your site.
Conversion rate optimization can be incredibly effective at helping you achieve your goals, sometimes with only a little effort. During the 2012 Obama campaign, Kyle Rush used conversion rate optimization to increase donations by 49%! Considering that, on average, 21% of nonprofit revenue comes from donations, conversion rate optimization can have a significant impact on your bottom line.
So what is the conversion rate anyway?
The conversion rate is the percentage of visitors to your site who complete some sort of goal. In this case, our goal will be making a donation. For example, if 100 people come to your site and 3 of them make a donation, your conversion rate is 3%. Conversion rate optimization is the practice of increasing the percentage of visitors to your site who make a donation.
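For the numerically inclined, the arithmetic above fits in a couple of lines (the function name here is just for illustration, not from any particular tool):

```python
def conversion_rate(visitors, donations):
    """Return the percentage of visitors who completed the goal (a donation)."""
    return donations / visitors * 100

# The example from the text: 100 visitors, 3 donations.
print(conversion_rate(100, 3))  # 3.0
```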
Alright, I’m sold! Where do I start?
Now that you know what conversion rate optimization is, we can get into the nitty gritty of how to do it! The best way to increase your conversion rate is through A/B testing.
In the words of Optimizely, one of the leading providers of A/B testing software, “A/B testing is a method of comparing two versions of a webpage or app against each other to determine which one performs better.” Essentially, you take a webpage that you believe could perform better and make changes to the page in an effort to convince more visitors to donate. You then compare it to the original page, and if the new version produced more donations, you know it’s safe to implement the changes permanently. By running these tests over and over again, you can increase your conversion rate significantly.
This concept is common in your everyday life, too! Have you ever tried two foods next to each other to see which was better? Or maybe you take different routes to work to see which one is faster? These are both (generous) examples of A/B tests.
I understand what an A/B test is now, how do I run one?
Running an effective A/B test requires a hypothesis. If we make changes “just because,” we will learn nothing about what makes your visitors donate, or more importantly, what might be keeping them from donating.
If you don’t already have analytics set up on your site, you should have that implemented as soon as possible. If you don’t know how, contact Visceral; we can help! If you do, dig through the data to come up with a hypothesis. Are your visitors not making it to your donation form at all? Maybe you need to improve your calls to action across your site. Are a lot of visitors going to the donation page but not making a donation? Maybe you’re asking them for personal information they simply aren’t willing to share.
Dig around in your analytics data and see where the biggest bottleneck is and try to figure out why your visitors stop there. This will be the basis of your hypothesis.
The most common formula for creating a hypothesis is to fill in the blanks of the following sentence:
“By changing ____________ into __________, it will ___________.”
- By changing the call to action from “submit” into “help people today,” it will increase the number of people who go to the donations page.
- By changing the donation form into one with fewer fields, it will reduce the number of people who leave the donation page.
Note: It’s important that you only test one thing at a time. If you test multiple things at the same time, you won’t know which change made the difference. It’s even possible that one of the changes improved conversions and the other reduced conversions and cancelled each other out. You would never know! So when you create your hypothesis, only choose one element to change.
I have a hypothesis now, how do I test it?
Okay, now you have a hypothesis. You know what to test and why. More importantly, you know what you’re looking for! An increase in conversions? More visits to the donation pages? More newsletter signups?
Running the test is easy, or at least, with the right tools it is. Setting up a page to do this all by yourself would be a nightmare, but thankfully, many people have created tools to make this downright simple. I recommend Optimizely. It has an intuitive interface that makes it super simple to use, and they have even recently changed their pricing model to make it free for beginners. Other A/B testing tools you could use are Unbounce, Visual Website Optimizer, or Maxymiser (each of these has its own pros and cons).
These tools will take all the difficult technical work out of the test for you and leave you free to hypothesize and succeed.
One important thing to note is that the test and the control run simultaneously (the control is what the original page would do without any changes). Lots of factors can affect the results of a test, such as seasonality, types of visitors, time of day, etc. It’s important to remove as many of these variables as possible so that any difference in results can be attributed only to your change. Because of this, traffic to your site is split: some visitors are sent to the “test” page and others are shown the original. That way, both versions face the same conditions, and no outside variable can invalidate your conclusion.
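As a rough illustration (not how any particular tool actually implements it), the traffic split works something like this: each visitor is hashed into a stable bucket, so both versions run at the same time and the same person always sees the same version on repeat visits.

```python
import hashlib

def assign_version(visitor_id, split=0.5):
    """Bucket a visitor into the control or the test version.

    Hashing the ID makes the assignment stable: a returning visitor
    always lands in the same group they were in before.
    """
    digest = hashlib.md5(str(visitor_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "test" if bucket < split * 100 else "control"

# Both versions run simultaneously; each visitor lands in one group.
print(assign_version("visitor-42"))
```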
Hooray! I’m running a test! I’m done now, right?
No, but we are almost there. The most important question of all remains: “When is it over?” If you ran your test only on a Saturday and Sunday and ended it there, chances are high that those days weren’t representative of your typical traffic. Knowing when to end the test is important.
This requires statistical analysis, but before you fall asleep, know that the tools above take care of the actual calculations for you. However, it’s still important to know what you’re looking for and why it’s significant.
The statistical concept used to know when a test is complete is the confidence level (you may also see it called statistical significance). It’s a calculation that tells you how likely it is that your result reflects a real difference rather than random chance. Tests generally shoot for 95%–98% confidence. Roughly speaking, 95% confidence means that if you ran the test 100 times, you would expect the same result about 95 times.
This also ensures that your sample is big enough, which is equally important. If you had 10,000 visitors every month and you surveyed five of them, you wouldn’t consider their opinions representative of all of your visitors. It’s the same with A/B testing: you need enough data to confidently infer how the rest of your visitors would act.
Okay, now I’m done, right?
Yes, you did it! You’ve run your first A/B test. The beauty of it is that with a hypothesis and free testing software, you can increase your conversions by leaps and bounds, and who can say no to that?
Did it seem almost too easy? Well, to be frank, it kind of was. There is a LOT more to know about A/B testing: how to avoid confounding variables, how to create strong hypotheses, and how to decide when a test is done. But this is a good start for a beginning tester!
I’ll leave you with one more very important thought for the beginning tester. A test that saw a reduction in the conversion rate ISN’T a failure, so keep your chin up, soldier! If you took the time to create a hypothesis based on your user data and ran a test with only one element changed, you’ve still learned something new about what makes your visitors donate (or not donate).