The ROI of Reporting is $0


Scenario #1: Reporting Data

Let's pretend that I sent you an email nicely communicating that you earned $75,000 last year. In this email, I broke down how $65k was from your salary, $5k was from your bonus and $5k was from some stock dividends.

How much is that email worth to you?

What if I included pie charts and graphs that compare your income to the average person in your state and showed your income growth over the last 5 years? What’s it worth to you now?

My best educated guess would be something close to $0.

Actually, it would probably have negative value because I wouldn’t figure that out for free. My time is worth something after all.

 

Scenario #2: Using Data to Drive Action

Now, instead of just telling you what happened last year, what if I also told you that your three co-workers, who are not nearly as capable or experienced as you, earned 30% more than you did? Also, I just looked up that old extra car that you haven't driven in 4 years, and you could probably get about $10k for it on eBay – not to mention the savings on registration and insurance once you get rid of it.

What is this new information worth? Well, if you successfully ask for and receive a 30% raise on your $65k salary (about $19.5k) and sell that car for $10k, my email was worth nearly $30k (before taxes)!

 

There’s Only Value in Analytics if You Act on It

Everybody has web analytics nowadays. Google gives it away for free (or at least in exchange for your data) and even provides you with an easy-to-read dashboard right out of the box. But if all you’re doing with your data is simply reviewing it and keeping score, then I hope you’re not paying too much for it. There’s little value in that.

Now, if you hire an analyst to dig into your data and discover something that could help increase your web conversion or optimize your ad campaign mix, that’s worth something. Assuming you act on this newfound information, this analytics stuff can have some serious value.

Of course, that analyst probably won't be free, but if you don't hire her, you'd never know what to improve. You might simply keep doing what you're doing, missing a ton of opportunities along the way. Or worse yet, you might take action based on somebody's opinion and make a bad move that hurts your business.

 

What About A/B Testing?

What if you asked for a raise and didn't get it? You could still try selling that old car and gain something, but that's not really the point. At the very least, if you don't get the raise, you've now learned that you're underpaid and what you may be worth in the job market. You have new knowledge that gives you options and puts you in a better position. If you never tried for that raise because you didn't know what you were worth, you'd be stuck where you were, never knowing what you're missing. That's called being a sucker.

Testing is the smartest action to take when you have an insight that may help you reach your objectives faster. Even if you don't get what you hoped for, at least you'll learn that it wasn't a good idea and won't go down that path.

Get a win or avoid a loss – it’s still some ROI in my book.

Why a Failed A/B Test is a Great Thing


In Failing Forward, John C. Maxwell describes how failing is a necessary step to becoming truly successful. It made me think of how many companies that start A/B testing encounter “unsuccessful” tests and let that experience become a roadblock to building a successful optimization program. These “failures” can even lead to a loss of credibility throughout the organization, which makes it difficult to expand or even keep going.

In reality, a “failed” A/B test can be as good as finding a new experience that increases conversion and is a much better option than deploying a new experience without an A/B test. However, just like being successful requires having the right perception of failure, gaining something from a “failed” test requires a clear understanding of WHY it’s actually a good thing.

 

A “FAILED” A/B TEST CAN SAVE YOU

What’s better, a company that reports no growth in their quarterly report or one that reports negative growth?

To put it another way, what’s more likely to get you fired? Failing to get that new $10 million client or losing a once loyal $10 million client?

The benefit of running an A/B test where your test experience is not as good as your original is that it reveals the bad experience. If it weren't for the test, you could have just deployed that bad experience. Testing saves you from your own and others' bad ideas!

Whether you conduct an A/B test or just deploy a new experience without a test, the same three outcomes are possible:

  1. The new experience will have a higher conversion rate than the old experience.
  2. The new experience will have a lower conversion rate than the old experience.
  3. The new experience will have the same conversion rate as the old experience.

As you can see, 2 out of the 3 possibilities don't favor finding a better experience. This means you have to be prepared to face a situation where your test doesn't yield a better experience and be able to communicate to others why it's still a good thing, especially at the beginning, when you're trying to gain momentum and build a culture that embraces learning.

In an A/B test where your new experience is worse than the original, you will realize that you are better off with what you already have. You can then turn off the test and continue with your existing experience with minimal impact to your business. If you set up a well-designed experiment with proper tracking, you can even isolate exactly what doesn’t work which can lead to other improvements across your website or new ideas, but that’s a topic for another post.
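If you want to put numbers on "worse," here's a minimal sketch in plain Python (the traffic and conversion counts are hypothetical) of the two-proportion z-test that testing tools typically compute for you. A z-score beyond roughly ±1.96 means the difference is statistically significant at the 95% confidence level.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical test: original converts 500/10,000; challenger 430/10,000.
p_a, p_b, z = two_proportion_z(500, 10000, 430, 10000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}")
# z ≈ -2.35: the challenger is significantly worse. Turn the test off,
# keep the original, and be glad you never deployed B to everyone.
```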

 

PLAYING WITH FIRE: DEPLOYING WITHOUT A TEST

Some companies just like to pull the trigger without testing and then try to analyze the before-and-after results. If you deploy something without an A/B test, you subject your entire audience to an experience that may or may not be better than what you already had. Because you're likely trying to run a business, this can have serious financial implications. There's another word for this: it's called gambling.

To make matters worse, when you try to compare new-experience data to old-experience data, it's difficult to determine how much of the difference is due to the actual experience change. When you compare two different time periods, many other factors come into play that make it impossible to really understand the impact. Here is just a small sample of those differences:

  • Seasonality
  • Different prices and discounts
  • Varying competitor tactics
  • Weather changes
  • Holidays
  • Important news around the country/world
  • Technological problems at your company
  • Tracking differences
  • The list goes on!

In a worst-case scenario, some of these factors may even create the perception that a new, inferior experience is better than the original one. This creates a false positive that can be very detrimental if you don’t realize the mistake early on.
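A toy simulation (hypothetical rates, plain Python) shows how easily this false positive happens: suppose underlying demand rises 20% in the "post" period (say, holiday season) while the new experience is actually 10% worse. The pre/post comparison still declares victory.

```python
import random

random.seed(42)

def conversions(rate, visitors):
    """Simulate how many of `visitors` convert at the given underlying rate."""
    return sum(random.random() < rate for _ in range(visitors))

OLD_RATE = 0.050              # old experience, off-season baseline
NEW_RATE = 0.050 * 1.2 * 0.9  # 20% seasonal lift, experience itself 10% worse

pre  = conversions(OLD_RATE, 20000)
post = conversions(NEW_RATE, 20000)

print(f"pre-launch:  {pre / 20000:.2%}")   # ~5.0%
print(f"post-launch: {post / 20000:.2%}")  # ~5.4%, a false positive: the
# "lift" is pure seasonality, masking an experience that converts 10% worse
```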

Depending on your level of risk aversion, avoiding a disaster can be just as valuable as finding an improvement, if not more so. And although avoiding a major mistake never gets the glory of improving conversion, those who know better realize it is an invaluable ability to have.

The Website Redesign That’s Hurting Your Business


Does your newly redesigned website look and work great but have a lower conversion rate than your previous website? Drawing on years of consulting and working on in-house teams, I can tell you that it happens ALL THE TIME.

Redesigns usually upgrade many things about your website but can inadvertently hurt other critical factors. The new website usually IS better in many ways but introduces one or more detrimental changes. In addition, because so many things change in the process, it can be difficult to isolate and clearly understand issues.

Here’s a typical scenario:

  1. Company X hires a digital agency or assigns an in-house team to rebrand and redesign its website.
  2. This team does a ton of research, leverages customer personas, creates visitor journeys and designs a beautiful website full of “best practices” on a platform that is faster and more advanced than the old site.
  3. Company X then has a ceremonious new website launch only to see web orders do their best Blockbuster Video impression.
  4. Company X spends the next year trying to recover from this mishap by spending more on media to offset their lower conversion rate and trying to resolve the problems.

If this has happened to you, you're not alone; many companies make this mistake and lose a ton of money in the process. If you have yet to make this mistake, then this is a good post to read. Follow these five steps, and you'll never have to suffer through this situation:

 

#1. Build a project plan that INCLUDES OPTIMIZATION

What typically happens: Company X starts planning for the launch of its new website by clearly defining the scope of the project. This includes time and resources for research, design, content, production, quality assurance, launching the site and even post-launch bug fixes. It is a necessary step that helps allocate teams, uncover gaps, estimate cost and create a schedule needed to set expectations with stakeholders or clients.

The problem: Most of the time, this project planning assumes that the new site will be successful immediately and that the team will no longer be needed post-launch. It is a huge gamble (often a multi-million dollar bet) that the new site will bring in more revenue than the old site. It doesn't take into consideration that the new website may reduce conversion rates. Any attempt to include optimization work after planning has ended is then seen as adding scope and is difficult to get approved.

Suggested solution: With any website redesign, the expectation must be set in the early project planning/scoping stage that optimization efforts will be needed. If they turn out not to be needed, you can use that time to try to make a great site even better, but don't count on it. Everyone, up to the CEO or VP sponsoring the project, must understand that launching a new site is not the objective. The objective is to launch a website that increases revenue, leads, content consumption or whatever user action is most critical to your company.

If you have already passed the project planning stage and have not included optimization, it will be painful to get the project plan updated to include it. However, it will be much more painful to have the team scrambling to find a remedy to poor conversion after the new site is live.

 

#2. Minimize your risk via A/B TESTING

What typically happens: Company X sets a hard launch date. On this date, the old website ceases to exist and the new website is shown to all visitors. For better or worse, they take the plunge.

The problem: In this scenario, all visitors are exposed to the new website and the old website is history. Again, this is a huge gamble, yet many people don't consider the possible disaster waiting ahead. If the new website has a lower conversion rate, it will have a negative effect on your ENTIRE user population, leading to huge losses that can be difficult to recover from. Even if you diligently analyze website performance pre- and post-launch, you still expose yourself to extreme risk. Additionally, pre/post comparisons include many environmental differences, making a fair comparison impossible.

Suggested solution: Before deploying the new website to everyone who reaches your domain, run an A/B test with only 10%-15% of your visitors. During the test, measure all critical KPIs and analyze any differences between version A and version B. Many companies cringe at this idea, and agencies that aren't experienced with this practice hate it: A/B testing makes timelines harder to estimate (you can't know exactly how many test iterations will be needed) and quickly exposes whether the new website is better or worse than the old one. This tactic gives you a glimpse of your future conversion rate before you open the floodgates, and gives you the opportunity to optimize beforehand or at least know what you're in for.
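Your testing tool will handle the split for you, but conceptually it's simple. Here's a minimal sketch (assuming a `visitor_id` that would come from a first-party cookie, a hypothetical setup) of a deterministic 10% split, so each returning visitor always sees the same version:

```python
import hashlib

def assign_site(visitor_id: str, new_site_share: float = 0.10) -> str:
    """Deterministically bucket a visitor; the same ID always maps to the same site."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF    # stable value in [0, 1]
    return "new_site" if bucket < new_site_share else "old_site"

# The assignment never flips between sessions, so the test stays clean:
for vid in ("visitor-001", "visitor-002", "visitor-003"):
    print(vid, "->", assign_site(vid))
```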

 

#3. Have a good MEASUREMENT plan

What typically happens: Nothing out of the ordinary. Most companies with at least a little online marketing knowledge have reports that track advertising campaigns, visitor count, conversions, conversion rate and maybe even the most popular pages.

The problem: Regular reports usually do not provide you with actionable insights. If conversion rate decreases after a redesign, it could be for 100 different reasons. It's similar to getting a fever – you know something is wrong, but you don't know exactly why. If you have only regular reports when you launch a new website, you may know what happened as a result of the new site, but you'll have no idea why.

Suggested solution: Prepare to do some serious investigative analysis as soon as that new website has any user data. Before you even start the redesign, do the following things:

  • Assign your best analyst or analysts to the task of understanding how the new website performs compared to the old website and why.
  • Create a measurement plan that clearly defines how the new website will be evaluated against the old one (a simple sketch follows this list). This will help you and all stakeholders understand how success will be measured.
  • Make sure everything you plan on measuring and investigating has proper tracking. There are few things worse than starting an important analysis only to find out that there’s no tracking or it’s inaccurate.
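As an illustration (the KPIs, event names and thresholds here are all hypothetical), a measurement plan can even be written down as structured data, so nothing is ambiguous when the analysis starts:

```python
# Hypothetical measurement plan: each KPI gets an explicit definition,
# a tracking source, and the comparison that will decide success.
measurement_plan = [
    {"kpi": "conversion_rate",
     "definition": "orders / unique visitors",
     "tracking": "analytics event: purchase_complete",
     "success": "new site no more than 2% below old site"},
    {"kpi": "lead_form_submits",
     "definition": "completed lead forms / unique visitors",
     "tracking": "analytics event: lead_submit",
     "success": "new site >= old site"},
    {"kpi": "page_load_time_p75",
     "definition": "75th percentile page load time in seconds",
     "tracking": "real user monitoring beacon",
     "success": "new site <= old site"},
]

for row in measurement_plan:
    print(f"{row['kpi']}: success means {row['success']}")
```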

 

#4. Analyze your DATA and let it guide you

What typically happens: The new website launches and conversion rate decreases. To offset the lower conversion rate and stop the revenue bleeding, extra media is purchased to bring more visitors to the site – this may maintain revenue at the cost of reducing your ROI. Additionally, a design/user experience “expert” is brought in to fix the problem. Company X then implements creative changes dictated by the expert and crosses its fingers once again, hoping to increase conversion.

The problem: In this situation, nobody truly knows why conversion has decreased and guesses are often made, which lead to poor attempts at fixing the situation. Sure, design and user experience experts can help, but without truly analyzing site usage patterns, you are severely decreasing your chances of success.

Suggested solution: Take the time to clearly understand how visitors are using the new website. Bring in that star analyst to investigate the situation and draw insights from the usage data or visitor feedback (surveys are a great tool for this). Then allow those insights to steer that design/user experience expert in the right direction. Nothing should be guessed at this point – the company is already losing money, and another bad bet can destroy your credibility and do further damage.
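Here's a hypothetical sketch of the kind of comparison that star analyst might start with: step-to-step funnel rates for each version. The step whose rate collapsed tells you where to focus the investigation.

```python
# Hypothetical funnel counts pulled from analytics for each site version.
funnels = {
    "old_site": {"home": 100000, "product": 42000, "cart": 9000, "order": 4500},
    "new_site": {"home": 100000, "product": 45000, "cart": 9200, "order": 3100},
}

steps = list(funnels["old_site"])
for site, counts in funnels.items():
    print(site)
    for prev, step in zip(steps, steps[1:]):
        rate = counts[step] / counts[prev]
        print(f"  {prev} -> {step}: {rate:.1%}")

# Step-by-step rates localize the problem: here cart -> order collapsed on
# the new site (50.0% vs 33.7%), pointing the investigation at checkout.
```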

 

#5. Have a FLEXIBLE SCHEDULE

What typically happens: A timeline is created with hard dates in order to fit other projects into the year’s schedule. The new website must be launched on a specific date and all work on it must stop at a certain point so that the people on that team can start working on other important projects.

The problem: What if the new website doesn't convert as well as the old one, and you haven't yet figured out how to bring it back to par? Will the team simply abandon the project so that they can begin work on another project? Having such hard timelines on such an important project puts you between a rock and a hard place – you'll either have to live with the new website's poor conversion or put off some other important project.

Suggested solution: The expectation should be set early that the new website project does not end until the new website is at least as effective as the old one. Therefore, the team working on this project should be given ample time to run multiple optimization tests, diligently analyze performance and implement something that doesn't hurt the business. It should be a special team without a hard schedule. This is difficult for some companies to accept because it complicates resource planning. However, in this case, the objective is more important than a schedule. A company should always have a roadmap, but timelines should not compromise project success for the sake of making a specific date.

 

In summary

Launching a new website that hurts conversion is worse than having done nothing at all, but I'm not suggesting that you shouldn't try to innovate and redesign as needed. I'm suggesting that there's a way to do it right. Set the expectation early that optimization will be required. Run an A/B test prior to launch. Have a good measurement plan. Analyze user data and let it guide you – don't guess. And finally, have a flexible schedule that allows you to launch something great – success is more important than meeting a deadline.

Optimizing for Customer Lifetime Value

A/B split testing, when accompanied by a sound process and methodology, is an effective way of optimizing conversion rates. However, for many companies that have subscription business models, conversion is only the beginning of a longer relationship with customers. For these companies, Customer Lifetime Value (CLV) is the most critical KPI of all.

CLV CHALLENGES IN A/B TESTING

Measuring CLV of an A/B test on your website can be a challenge for various reasons. The following are some examples:

  • Using conversion rate as an indicator
    A conversion rate increase does not always indicate an increase in CLV. A company may increase conversion by testing a discount off a new customer’s first month, but if these new customers are less likely to renew, it can actually hurt CLV.
  • Testing period vs. CLV period
    CLV can take months to accurately calculate. Among other things, companies must consider conversion rate, average order value, renewal rate (i.e. “stick rate”) and future upsells (a simplified CLV calculation is sketched after this list). It is rare and often impractical for a website A/B test to run for multiple months or years. Usually A/B tests last only a few weeks, which can make it difficult to determine the connection between the content tested on the site and its long-tail effect.
  • Extreme sensitivity to slight inaccuracies
    Many of the popular testing platforms use JavaScript to swap and deliver content. Because browser settings and connection speed can affect JavaScript, sometimes it doesn't execute 100% as desired, and you can get a small amount of unwanted effects. Because CLV is affected by compounding factors such as renewals and upsells, it is extremely sensitive to undesired testing effects caused by JavaScript weirdness or test setup mistakes.
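To make the first two points concrete, here's a deliberately simplified CLV sketch with hypothetical numbers: an experience that wins on conversion can still lose badly on CLV once renewal rates enter the picture.

```python
def simple_clv(monthly_margin, renewal_rate, months=24):
    """Expected margin over a horizon, decayed by the monthly renewal rate."""
    clv, retained = 0.0, 1.0
    for _ in range(months):
        clv += retained * monthly_margin
        retained *= renewal_rate
    return clv

# Hypothetical: experience B's discount wins more conversions, but those
# customers renew at 80% instead of 90%.
clv_a, clv_b = simple_clv(32, 0.90), simple_clv(32, 0.80)
print(f"CLV A: ${clv_a:,.2f}")   # ~$294 per customer
print(f"CLV B: ${clv_b:,.2f}")   # ~$159 per customer
# Even a 20% conversion lift for B (e.g. 6.0% vs 5.0%) doesn't close a
# per-customer value gap this large: 0.060 * 159 < 0.050 * 294.
```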

TIPS TO CREATING A TEST BUILT FOR CLV MEASUREMENT

Working with a client that is seasoned in the discipline of direct marketing and testing, we used the following tactics to create A/B tests that make it possible to accurately measure the impact on CLV:

  • Use unique IDs for each test experience.
    To measure the CLV of an A/B test, tie conversions to renewals and upsells with a unique ID per test experience. For example, your website can assign one unique ID to experience A orders and a different one to experience B orders. These unique IDs should show up in your transaction data and also be passed to your customer tracking system. The IDs can be used to tie CLV back to specific test experiences and continue measuring test impact long after the test has been deactivated (see the sketch after this list).
  • Don’t trust summary reports. Analyze detailed results.
    All testing platforms provide you with summary reports of how each test experience performs against the control. However, this type of reporting lacks the detail required to determine whether you're looking at clean results or to analyze based on the unique ID previously described. Some popular platforms provide detailed transaction data that contains fields like product IDs or descriptions, the number of products per order, revenue per order, transaction IDs and other fields that you can customize. With this level of detail you can review results carefully and identify any discrepancies that may inaccurately influence your CLV.

    [Screenshot: detailed test data in Test&Target can be found in the “Type” drop-down under “Audit”]

  • Be creative and work around the tech imperfections.
    When small data inaccuracies have a big impact on long-term ROI, a small amount of bad data is unacceptable. For some of our clients, a 0.5% shift in conversion can have as much as a $20 million impact! Recently we were struggling with a testing platform inaccurately assigning unique IDs about 5% of the time: IDs from Experience A were being applied to orders in Experience B and vice versa. To solve this problem, we used Test&Target to split traffic and reload test pages with a unique campaign code per experience. The campaign code was then fed into a content management system that displayed the test experience associated with the code. That unique ID was connected to users in the appropriate test experience and passed along with order information. The result: the 5% inaccuracy was reduced to 0.1%.
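Here's a minimal sketch of the payoff from those unique IDs (the file names and columns are hypothetical stand-ins for your transaction and customer-system exports): every order carries its experience ID, so renewals booked months later still roll up to the experience that originally acquired the customer.

```python
import csv
from collections import defaultdict

# Hypothetical exports:
#   orders.csv:   order_id,experience_id,revenue   (from the test period)
#   renewals.csv: order_id,revenue                 (booked months later)

def read_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

experience_of_order = {}
revenue_by_experience = defaultdict(float)

for row in read_rows("orders.csv"):
    experience_of_order[row["order_id"]] = row["experience_id"]
    revenue_by_experience[row["experience_id"]] += float(row["revenue"])

for row in read_rows("renewals.csv"):
    exp = experience_of_order.get(row["order_id"])
    if exp:   # the renewal ties back to the experience that converted the customer
        revenue_by_experience[exp] += float(row["revenue"])

for exp, total in sorted(revenue_by_experience.items()):
    print(f"{exp}: ${total:,.2f}")
```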

IN SUMMARY

If your business decisions are made based on CLV, then that is the KPI that needs to be measured in any optimization effort. Other KPIs like conversion rate and revenue are good indicators, but ultimately they are only inputs to CLV.

By using unique IDs per experience and sharing them with your customer tracking systems, you can tie everything together and continue to review performance after your test period has ended. Analyzing detailed test transaction data can help you solidify your data integrity or uncover any inaccuracies that may have otherwise led to poor decisions. And finally, being creative with your test setup and deployment can help you overcome imperfect out-of-the-box testing solutions.

What Types of Optimization Tests Are There?

Recently one of my colleagues asked me about the different types of conversion optimization tests that are possible. I started to explain the differences between A/B and multivariate testing, but he quickly stopped me. That was not what he wanted to know.

What he WANTED was to learn about the different types of things that can be tested on a specific web page or process. Some simple examples: testing the hero image, testing the button or even testing entire redesigns. I think this is a question at least a few people out there have, so this post contains a list of different test types along with their pros and cons.

  • Redesign Tests
    In a redesign test everything is fair game and can be changed. You can change, move, remove and add whatever your heart desires in the name of making the page better.
    • Pros: Redesign tests are great for when you know you have a bad experience and need to try something completely different. At that point, incremental optimization is not going to cut it. To paraphrase one of my favorite authors, Seth Godin, sometimes what you need is to start over, not to optimize something that is bad. You could be climbing up a short hill and completely missing the giant mountain next to you (Yahoo vs. Google is a perfect example of this).
    • Cons: When you change too many things at once, it's practically impossible to learn what helped or hurt your page. You may have used a new hero image that increased conversion by 10% but changed an important description that decreased conversion by 15%. Since you changed both at once, all you would know is that you had a net decrease in conversion of 5%, which would cause you to disregard an awesome hero image. Redesigns can be great when done right, but you give up a lot of learning, so they should be used sparingly.
    • Tip: If you are going to do a redesign, make sure you maximize this opportunity by doing a lot of due diligence in understanding what your visitors want. Don't just assume that you can think like a visitor. A lot is on the line, and you should make the most of all the work you are doing. Take the time to really dive into your web analytics, do usability testing, read surveys and go through use cases.
  • Description Tests
    Words matter, A LOT. Changing what you say in a page title, photo caption, call to action or product description can have dramatic effects with practically no creative assets needed.
    • Pros: Description tests are a great way to incrementally improve a page that you don’t think is that bad. Additionally, since most of these tests involve testing some kind of html text, you aren’t likely to need a designer. They are also pretty easy to implement because changing text usually isn’t too technically complicated.
    • Cons: A lot of the time, people don't actually read on the Internet, so a common outcome of description tests is seeing no conversion difference. To make an impact, you need to make sure that you are testing descriptions people are actually reading.
    • Tip: Look through your web analytics to identify any keywords that may drive significant traffic to your page or look through your own internal search from that page to identify any popular keywords. You are more likely to get visitors’ attention if you use keywords or phrases that you know people are interested in.
  • Promotional Tests
    In promotional tests you can experiment with different prices, promotions and the way you position promotions to determine what visitors respond to most and how much they respond.
    • Pros: Promotional tests can be very useful in determining an optimal price for your products and how to best position promotions. It gives you the freedom to try promotions on a sample population and ensure that you offer only the most effective one to the entire population.
    • Cons: Promotional tests where you offer discounts can be trickier to interpret. In most cases, larger discounts create higher conversion rates, but if your discounts grow faster than your conversion gains, you may start eating into your profits. It takes more sophisticated monitoring and analysis to ensure that you are making a good business decision. Additionally, you can really upset your potential customers if you're not careful. Be wary of offering different pricing to different people; some big companies have received a lot of bad PR for these types of tactics.
    • Tip: Try providing the same discount to everyone but positioning it differently in your test variations. You may find that your customers respond more to a percentage discount (e.g. 25% off) than to a flat dollar discount (e.g. get $50 off $200). Or it's possible that visitors are insensitive to $9.99 off but are more likely to convert if you offer $10 off.
  • Image Tests
    Sometimes just using a different image on your page can make big differences. If you’re a vacation-planning site, lifestyle images of people having fun may work better than a nice shot of the beach. On the other hand, if you’re selling furniture, visitors may want to see the piece of furniture by itself.
    • Pros: I think all tests can be fun, but these tend to be especially easy and fun. Imagery is something that people actually pay attention to, and sometimes the right image can really move the needle in the right direction.
    • Cons: I can’t really think of any cons to image testing. But if you can, please comment!
    • Tip: Many times, less is more. If you have a page with 3 smaller images, try testing one bigger image that makes an impact. When there are too many things competing for your attention, nothing stands out.
  • Design Tests
    This is where you try to optimize by testing colors, font types and sizes, shapes or spacing. These tests are very popular among design-centric brands.
    • Pros: Design-oriented tests can not only help improve your conversion, they can also let you test different design prototypes to ensure they don't hurt conversion before you release them to the world. Remember that avoiding the implementation of something that hurts conversion is probably more important than finding something that improves conversion.
    • Cons: Design-oriented tests usually require help from a designer, which can make them more resource-intensive. Additionally, it can be hard to find evidence that changing things like button color or font size is likely to increase conversion, so you're sometimes left creating less data-driven test variations.
  • Targeting Tests 
    Many will say that targeting is not the same as testing. I agree, but you can test your targeting approach to ensure you are being as effective as you can be. For example, you may be targeting based on a referring keyword, but conversion rate might be higher if you targeted based on visitors being Mac users.
    • Pros: Targeting tests can really help you capture hard-to-reach customers by fine-tuning your personalization tactics. These types of tests are the next level once you have harvested all the low-hanging fruit with your other testing.
    • Cons: Testing different targeting tactics is not as straightforward as testing visual items and requires a pretty deep understanding of your visitors. You will also need to invest in a tool that allows you to target because free tools such as Google Analytics Experiments do not do this.
    • Tip: For the most part, don't listen to anyone who says they tried targeting and it doesn't work. Except in rare situations, targeting will help increase conversion if you find a recipe that resonates with users. As I mentioned in the cons section, good targeting requires a deep understanding of your visitors, but when you figure it out you can really provide a great experience and improve conversion.

These are some of the more popular types of tests, but it is probably not an exhaustive list. I welcome any of my fellow testing practitioners to add to it or provide their own perspective.

Google Website Optimizer vs. Google Experiments (Google Analytics Feature)

Some people may be worried and wondering what they will gain or lose by having to move from Google Website Optimizer to setting up tests within Google Analytics (i.e. Google Experiments).

I recently read a great post by Dennis van der Heijden, CEO of Convert.com, that provides details on the differences. Here’s a summary (the details are on his post):

Features that are not available in Google Analytics Experiments:

  • Multivariate testing
  • Conversion rate calculation based on visitors (changed to visits)
  • Removing low performing variations from your test
  • Access to Google’s My Client Center (MCC) for agencies
  • Testing longer than three months
  • Testing more than five variations
  • Having more than 12 active tests
  • Copying a test
  • Pausing a test

Here are some of the new features in Google Analytics Experiments:

  • Improved: Slick split URL testing for A/B testing
  • Improved: Email notifications of tests
  • New: One-tag solution for tests
  • New: Google Analytics goal integration
  • New: Advanced Segment Report on test results
  • Existing: URL variables and REGEX support

As I mentioned, this is only a summary, and there are a lot of good details in the full article. If you want to read the whole enchilada and get all the details, visit Dennis’ conversion optimization blog.

*I have no affiliation with convert.com and am simply sharing because it’s a great piece of information.

How Testing Works – An Explanation with Pictures!

[Infographic: How A/B Testing Works]

Too many people don't equate A/B testing with making more money, and that's a mistake. To someone unfamiliar with the concept, it can be difficult to truly understand its value. So above is my attempt to explain it in a way that helps me understand things better: with pictures and arrows.

Before Testing
You have your original page. This page gets a percentage of your visitors to purchase something. For better or worse, this is what you've been working with, and chances are that it can be better.

During a Test

  • In real time, present different variations of the same page to your visitors; you will need a testing technology to do this and they range in price from $0 to more than most can afford.
  • “In real time” is important! It ensures that you fairly compare your test page variations to your original and eliminates outside influences such as seasonality, sales promotions or even events going on in the world.
  • Track what happens using your testing or web analytics program. Without data you have nothing, so make sure you keep track of the conversion rate for each variation (a minimal sketch of this loop follows).
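For the curious, the whole loop fits in a few lines. This sketch simulates it end to end with made-up conversion rates (5.0% for the original, 5.8% for the variation); a real setup would persist the assignment in a cookie and record the events in your analytics tool.

```python
import random
from collections import Counter

VARIATIONS = ("original", "variation_b")
TRUE_RATES = {"original": 0.050, "variation_b": 0.058}  # made-up underlying rates

views, purchases = Counter(), Counter()

def serve():
    """Real-time assignment: each visitor is randomly shown one variation."""
    page = random.choice(VARIATIONS)
    views[page] += 1
    return page

for _ in range(50000):            # simulated visitors
    page = serve()
    if random.random() < TRUE_RATES[page]:
        purchases[page] += 1      # track the conversion against that variation

for page in VARIATIONS:
    print(f"{page}: {purchases[page] / views[page]:.2%}")
```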

After the Test
Select the best variation and make it your new page that every visitor sees. If you found something that converts better than your original page, congratulations! Now you're generating more leads or revenue than you were before while investing the same amount in your efforts to drive traffic to your site. For you finance people, this directly equals a higher ROI.

Keep it in Perspective
When you test, you are taking a risk, kind of. It is possible that one or all of your test variations won't be as effective as your control. However, without testing you'd never know whether something is better or worse. The risk only lasts as long as the test runs, and if you don't find anything better, you go back to what you had – no harm, no foul. At the very least you avoided making a horrible decision.

The Best Part
The best part is when you find something better… and if you keep at it, you definitely will. You may nail it on your first try or may have to run 10 tests before you find something better. Remember that if you don’t find a better experience, you can always go back to your original and the loss is minimal. But when you do find a better experience, the gain is one that keeps paying dividends until you find something even better!

New Homepage on Autodesk.com


I really am proud to work at a company like Autodesk that has embraced testing and is genuinely trying to create products and experiences that customers want. I know this because they allow me and my team to experiment with ways to make the site better for our visitors.

Today the company launched a new homepage. And unlike most websites in the world, they didn't just ask the design team to create one based on the most important person's opinion. They allowed our team to really analyze and understand what people were doing on the homepage and to try to figure out what they want. Based on that analysis, we came up with a hypothesis, then an approach, and then went to our user experience team with guidelines based on that approach. And they rocked it!

All in all, we tested 5 different experiences, and not all of them worked so well. But we did find one that users loved! It showed in the increased click-through rate from the homepage and in the number of people who were able to find their free trial. Now, I'm definitely not saying it's perfect, but what you see on the homepage of www.autodesk.com is tried and true.

Autodesk is willing to experiment, which means they are destined to find something better than what existed before.

Disclaimer: The views expressed here are my own and do not reflect those of Autodesk Inc.

Scott Olivares – A/B Testing and Web Optimization Blog

I'm finally starting a blog. I had to, because I could not take hearing from all the usability “experts” and “gurus” anymore, and I figured I'd give my perspective to anyone who will listen.

First I’ll start by saying that there are no experts. That is why I love testing. It allows you to be wrong and can help validate some great ideas.

When it comes to creating something for the people of the world, nobody has a magic formula; there is no map to follow. There are some really cool things out there, but are any of them perfect? And if something is pretty awesome, it probably took a while to get to that point. The closest we have to a company that makes perfect products might be Apple, but then again there are plenty of people who choose Android or even a Windows (gasp!) phone over an iPhone. So even Apple is not perfect, right? If they were perfect, the only people buying something other than their devices would be those who purposely want something inferior or can't afford the ridiculous price tag.

My point is that whether you're building a product or creating a web page, nobody knows exactly what people want, and 99.9% of the time, whatever is created can be better. In fact, I am a strong believer in Sturgeon's law, which states that “90% of everything is crap.” Just look around a bit and you'll soon believe it too. I hope my blog doesn't fall into that category!

Anyway, I'm putting it out there: I am not an expert or a guru, and I can't stand people who call themselves that. I throw up in my mouth a little every time someone calls me one.

I'm good at analyzing information so that the ideas I have stand a decent chance of being good ones. I know how to set up an experiment to test those ideas. And I'm good at accepting that I'm going to be wrong a lot. And that's okay, because as my favorite author, Seth Godin, says, “it's okay to say ‘this might not work’.”

This blog is about testing and optimization. I will contribute my best ideas, thoughts and methodologies, but please don't take them as fact. They are simply things that have worked for me, that I have tried or that I am curious about. If you have a better way, share it, challenge the status quo, challenge me. The point is that we all learn.