A/B Testing

Design and implement an A/B test to determine the efficacy of potential improvements to an online site or mobile app while specifying metrics to measure.
Course Website: www.udacity.com
Description

Summary

This course will cover the design and analysis of A/B tests, also known as split tests, which are online experiments used to test potential improvements to a website or mobile application. Two versions of the website are shown to different users - usually the existing website and a version with a potential change. Then, the results are analyzed to determine whether the change is an improvement worth launching. This course will cover how to choose and characterize metrics to evaluate your experiments, how to design an experiment with enough statistical power, how to analyze the results and draw valid conclusions, and how to ensure that the participants of your experiments are adequately protected.

Expected Learning

A/B testing, or split testing, is used by companies like Google, Microsoft, Amazon, eBay/PayPal, Netflix, and numerous others to decide which changes are worth launching. By using A/B tests to make decisions, you can base your decisions on actual data, rather than relying on intuition or HiPPOs - Highest Paid Person's Opinions! Designing good A/B tests and drawing valid conclusions can be difficult. You can almost never measure exactly what you want to know (such as whether users are "happier" on one version of the site), so you need to find good proxies. You need sanity checks to make sure your experimental set-up isn't flawed, and you need to use a variety of statistical techniques to make sure the results you're seeing aren't due to chance. This course will walk you through the entire process. At the end, you will be ready to help businesses small or large make crucial decisions that could significantly affect their future!

Syllabus

Lesson 1: Overview of A/B Testing

This lesson will cover what A/B testing is and what it can be used for. It will also cover an example A/B test from start to finish, including how to decide how long to run the experiment, how to construct a binomial confidence interval for the results, and how to decide whether the change is worth the launch cost.
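
As a minimal illustration of the confidence-interval step, the sketch below builds a binomial confidence interval around an observed click-through probability using the normal approximation. The click and pageview counts are made-up numbers, not the course's worked example.

```python
import math

# Hypothetical experiment results (illustrative numbers, not from the course).
clicks = 55       # users who clicked in the experiment group
pageviews = 1000  # users who saw the page

p_hat = clicks / pageviews                       # observed click-through probability
se = math.sqrt(p_hat * (1 - p_hat) / pageviews)  # standard error of a binomial proportion
z = 1.96                                         # z critical value for a 95% interval

lower, upper = p_hat - z * se, p_hat + z * se
print(f"95% CI for click-through probability: [{lower:.4f}, {upper:.4f}]")
```

If the interval sits entirely above the rate needed to justify the launch cost, the change looks worth launching; if the interval straddles that threshold, the experiment needs more data.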

Lesson 2: Policy and Ethics for Experiments

This lesson will cover how to make sure the participants of your experiments are adequately protected and what questions you should be asking regarding the ethicality of experiments. It will cover four main ethics principles to consider when designing experiments: the risk to the user, the potential benefits, what alternatives users have to participating in the experiment, and the sensitivity of the data being collected.

Lesson 3: Choosing and Characterizing Metrics

One of the most important and time-consuming pieces of designing an A/B test is choosing and validating metrics to use in evaluating your experiment. This lesson will cover techniques for brainstorming metrics, what to do when you can't measure what you want directly, and characteristics you should consider when validating your metrics.
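
As a small illustration of the proxy idea: you can't log user happiness directly, but you can log clicks. The sketch below uses a hypothetical event log to contrast two closely related click metrics, click-through rate (clicks per pageview) and click-through probability (unique users who click per unique user who views), which diverge when some users click repeatedly.

```python
# Hypothetical event log: (user_id, action); actions are "view" or "click".
events = [
    (1, "view"), (1, "click"), (1, "click"),  # user 1 clicked twice
    (2, "view"),
    (3, "view"), (3, "click"),
]

views = sum(1 for _, a in events if a == "view")
clicks = sum(1 for _, a in events if a == "click")
viewers = {u for u, a in events if a == "view"}
clickers = {u for u, a in events if a == "click"}

ctr = clicks / views                  # click-through rate: counts repeat clicks
ctp = len(clickers) / len(viewers)    # click-through probability: at most one per user

print(f"Click-through rate: {ctr:.2f}")         # 3/3 = 1.00
print(f"Click-through probability: {ctp:.2f}")  # 2/3 = 0.67
```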

Lesson 4: Designing an Experiment

This lesson will cover how to design an A/B test. This includes how to choose which users will be in your experiment and control groups, the different online definitions of a "user", and what effects different choices will have on your experiment. It will also cover when to limit your experiment to a subset of your entire user base, how to calculate how many events you will need in order to draw strong conclusions from your results, and how this translates into how long to run the experiment. Finally, the lesson will cover how various design decisions affect the size of your experiment, so you will know which decisions to revisit if you need results more quickly.
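
As a rough sketch of the sample-size calculation, the function below implements a standard two-proportion power formula; the baseline rate, minimum detectable effect, and default significance/power values are illustrative assumptions, not the course's exact sizing tool.

```python
from scipy.stats import norm

def required_sample_size(p_base, min_effect, alpha=0.05, power=0.8):
    """Approximate users needed per group to detect an absolute change of
    `min_effect` in a baseline rate `p_base` with a two-sided test.
    A standard two-proportion power calculation (illustrative sketch)."""
    p_new = p_base + min_effect
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # critical value for the desired power
    pooled = (p_base + p_new) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p_base * (1 - p_base) + p_new * (1 - p_new)) ** 0.5) ** 2
    return numerator / min_effect ** 2

# E.g. baseline conversion of 10%, smallest change worth detecting is 2 points:
print(round(required_sample_size(0.10, 0.02)))  # roughly 3,800 users per group
```

Dividing the required count by your daily traffic gives a first estimate of how long the experiment must run; tightening alpha or raising power grows the experiment, which is why those design decisions are worth revisiting when results are needed quickly.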

Lesson 5: Analyzing Results

This lesson will cover how to analyze the results of your experiments. Step one is always to run some sanity checks so that you can catch problems with your experiment set-up. Then, you will learn how to check conclusions with multiple methods, including a hypothesis test on the effect size and a binomial sign test, if you get results that surprise you. You will also learn how measuring multiple metrics for the same experiment can make analysis difficult, and some techniques for handling multiple metrics. Finally, you will learn about several analysis "gotchas", and what to do if you see them, including how Simpson's Paradox can affect A/B tests, and why even statistically significant results might disappear when you launch.
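
As a minimal sketch of the binomial sign test mentioned above: treat each day of the experiment as a coin flip and ask how surprising the observed win/loss split would be if the change truly had no effect. The day counts here are hypothetical, and the sketch assumes scipy 1.7+ (which provides scipy.stats.binomtest).

```python
from scipy.stats import binomtest

# Hypothetical day-by-day comparison: experiment beat control on 9 of 12 days.
days_experiment_won = 9
total_days = 12

# Sign test: under the null hypothesis of no effect, each day is a fair coin flip.
result = binomtest(days_experiment_won, total_days, p=0.5, alternative="two-sided")
print(f"Sign test p-value: {result.pvalue:.4f}")  # ~0.146, not significant at 0.05
```

A result like this, disagreeing with a significant effect-size test, is exactly the kind of discrepancy the lesson teaches you to investigate rather than ignore.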

Final Project: Design and Analyze an A/B Test

Make design decisions for an A/B test, including which metrics to measure and how long the test should be run. Analyze the results of an A/B test that was run by Udacity and recommend whether or not to launch the change.

Required Knowledge

This course requires introductory knowledge of descriptive and inferential statistics. If you haven't learned these topics, or need a refresher, they are covered in the Udacity courses Inferential Statistics and Descriptive Statistics.

Prior experience with A/B testing is not required, and neither is programming knowledge.


Pricing: Free
Level: Intermediate
Duration: 4 weeks
Educator: Carrie Grimes
Organization: Google