Google Ads Experiments: Running Valid A/B Tests (2026 Guide)

"I think we should switch to Target CPA bidding." "I feel like Broad Match might work better now."
Opinions are expensive. Data is cheap. Google Ads Experiments (formerly Drafts & Experiments) lets you validate a major change without betting your entire campaign's performance on it.
In this "Mega-Authority" guide, we cover:
- The Methodology: Science applied to marketing.
- The Setup: Creating a Cookie-Split test.
- What to Test: Bidding, Match Types, Creative.
- Interpreting Results: Statistical Significance (P-Value).
Part 1: The Financial Impact of Testing
Imagine you switch your main campaign from Manual CPC to Maximize Conversions. It tanks. CPA doubles. You lose $5,000 in a week. You panic and switch back.
Now imagine you tested it on 50% of traffic. It tanks. You lost $2,500. You stop the test. Or, it succeeds. It lowers CPA by 30%. You roll it out confidently.
Experiments are your "Sandbox." They protect your downside while unlocking upside.
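The downside protection above is simple arithmetic; a minimal sketch using the figures from the example (a change that would lose $5,000 per week at full rollout):

```python
def capped_loss(full_rollout_loss: float, traffic_split: float) -> float:
    """Loss if the change tanks, limited to the experiment's traffic share."""
    return full_rollout_loss * traffic_split

# Figures from the example above: $5,000/week lost at 100% rollout
full_loss = 5000.0
print(capped_loss(full_loss, 1.0))   # full switch: 5000.0
print(capped_loss(full_loss, 0.5))   # 50/50 experiment: 2500.0
```

The experiment caps your worst case at the split percentage while leaving the full upside available once you apply the winner.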
Part 2: Theory - Cookie Split vs Search Split
Google uses a Cookie-Based Split.
- User A falls into the "Control" bucket. They always see the original campaign.
- User B falls into the "Experiment" bucket. They always see the test campaign.
This ensures data integrity. It prevents a user from seeing Ad A in the morning and Ad B in the evening, which would muddy the conversion attribution.
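Google's actual assignment mechanism is internal, but the cookie-split idea can be illustrated with a deterministic hash: the same visitor ID always lands in the same bucket, so nobody sees both arms. (The `assign_arm` function and the 50% split here are illustrative assumptions, not Google's implementation.)

```python
import hashlib

def assign_arm(user_id: str, experiment_split: float = 0.5) -> str:
    """Deterministically bucket a user: same ID -> same arm, every time.
    Illustrative only -- Google's real cookie-split logic is internal."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100 / 100  # stable value in [0, 1)
    return "experiment" if bucket < experiment_split else "control"

# The same visitor sees the same campaign, morning or evening:
print(assign_arm("cookie-abc123"))
print(assign_arm("cookie-abc123"))  # identical to the first call
```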
Part 3: Framework - The Testing Hierarchy
Don't test random things. Test high-impact levers.
| Priority | Test Type | Potential Impact |
|---|---|---|
| 1 | Bidding Strategy (e.g., Manual vs tCPA) | High (20-50%) |
| 2 | Match Type (e.g., Phrase vs Broad) | High (Volume vs Efficiency) |
| 3 | Landing Page (URL A vs URL B) | Med/High (CRO) |
| 4 | Ad Copy (RSA Assets) | Low/Med (CTR) |
Part 4: Execution - Setting Up a Test
Let's test Manual CPC vs Target CPA.
- Campaigns → Experiments → All Experiments.
- Click "+" → Custom Experiment.
- Base Campaign: select your current "Search - Generic" campaign.
- Suffix: "Experiment - tCPA".
- Configuration:
  - Change the bidding strategy to Target CPA.
  - Set the target (use your historical 30-day average CPA).
- Split: 50% (recommended).
- Schedule: start date tomorrow; no end date (end it manually).
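The recommended target above is simply your 30-day cost divided by conversions; a quick sketch (the spend and conversion figures are invented for illustration):

```python
def thirty_day_avg_cpa(cost: float, conversions: int) -> float:
    """Historical CPA over the lookback window = total cost / total conversions."""
    return cost / conversions

# Hypothetical last-30-day figures for "Search - Generic"
target = thirty_day_avg_cpa(cost=6000.0, conversions=120)
print(round(target, 2))  # 50.0 -> set the tCPA target near $50
```

Setting the target at (or slightly above) your historical average gives the algorithm a realistic goal; an aggressive target out of the gate tends to throttle volume.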
Part 5: The "Don't Touch" Rule
Once an experiment is live, DO NOT TOUCH IT. Do not change the budget. Do not add keywords. Do not change ads.
If you change variables mid-test, you invalidate the results. Smart Bidding's learning phase typically takes about 7 days; plan on another 14-30 days of data collection after that. Patience is the skill here.
Part 6: Interpreting Results
After 30 days, check the experiment dashboard. Google shows a confidence interval for each metric and flags statistically significant differences (the star/asterisk icon next to the metric).
- Metric: Conv. / Cost (ROAS) or Cost / Conv. (CPA).
- Result: "Experiment outperformed Base by +15% (95% Confidence)." -> APPLY.
- Result: "No significant difference." -> END. (The change isn't worth it).
- Result: "Experiment underperformed." -> END. (Good thing you didn't switch!).
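Google computes the significance for you, but the underlying idea is a standard two-proportion test; here is a minimal sketch in pure Python (the click and conversion counts are invented for illustration, and Google's exact methodology may differ):

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates (z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Invented example: experiment 140/2000 conversions vs control 100/2000
p = two_proportion_p_value(140, 2000, 100, 2000)
print(f"p = {p:.4f}")  # below 0.05 -> significant at 95% confidence
```

If p falls below 0.05, the difference is significant at 95% confidence; otherwise, treat it as "no significant difference" and end the test.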
How to Apply: Click "Apply Experiment." You can choose to:
- Update Original: Converts the base campaign to the new settings. (Preserves history).
- Convert to New: Pauses base, creates new campaign. (Resets history - Avoid).
Part 7: Summary & Checklist
If you aren't testing, you aren't growing.
Your Action Plan:
- Identify a campaign that has plateaued.
- Hypothesize a change (e.g., "Broad Match with Smart Bidding will get more volume").
- Launch a 50/50 experiment today.
- Wait 4 weeks.
Be the scientist.

About the Author
Performance marketing specialist with 6 years of experience in Google Ads, Meta Ads, and paid media strategy. Helps B2B and Ecommerce brands scale profitably through data-driven advertising.