Ads Management
AdsManagement.co by TwoSquares
Professional paid ads management for predictable growth.

© 2026 AdsManagement.co. All rights reserved.


Google Ads Experiments: Running Valid A/B Tests (2026 Guide)

2026-01-28
4 min read
Kiril Ivanov
Performance Marketing Specialist

"I think we should switch to Target CPA bidding." "I feel like Broad Match might work better now."

Opinions are expensive. Data is free. Google Ads Experiments (formerly Drafts & Experiments) is the only way to validate a major change without risking the performance of your entire campaign.

In this "Mega-Authority" guide, we cover:

  1. The Methodology: Science applied to marketing.
  2. The Setup: Creating a Cookie-Split test.
  3. What to Test: Bidding, Match Types, Creative.
  4. Interpreting Results: Statistical Significance (P-Value).

Part 1: The Financial Impact of Testing

Imagine you switch your main campaign from Manual CPC to Maximize Conversions. It tanks. CPA doubles. You lose $5,000 in a week. You panic and switch back.

Now imagine you tested it on 50% of traffic. It tanks. You lost $2,500. You stop the test. Or, it succeeds. It lowers CPA by 30%. You roll it out confidently.

Experiments are your "Sandbox." They protect your downside while unlocking upside.
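The downside math above is simple enough to sketch in a few lines (the dollar figures are the hypothetical ones from the scenario, not benchmarks):

```python
# Hypothetical scenario from above: a bad bidding change that would
# burn $5,000/week if rolled out to all traffic at once.
def downside(weekly_loss: float, traffic_share: float) -> float:
    """Loss exposure when only a fraction of traffic runs the new variant."""
    return weekly_loss * traffic_share

full_rollout = downside(5000, 1.0)  # switch everything at once
experiment = downside(5000, 0.5)    # 50/50 experiment split

print(f"Full rollout risk: ${full_rollout:,.0f}/week")
print(f"50% experiment risk: ${experiment:,.0f}/week")
```

Half the traffic means half the exposure, and you can end the test the moment the data turns against you.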


Part 2: Theory - Cookie Split vs Search Split

Google uses a Cookie-Based Split.

  • User A falls into the "Control" bucket. They always see the original campaign.
  • User B falls into the "Experiment" bucket. They always see the test campaign.

This ensures data integrity. It prevents a user from seeing Ad A in the morning and Ad B in the evening, which would muddy the conversion attribution.
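Google doesn't publish its exact mechanism, but a deterministic cookie-based split behaves like the hash-bucketing sketch below (the hashing scheme is illustrative, not Google's actual implementation):

```python
import hashlib

def bucket(user_cookie: str, experiment_id: str, split: float = 0.5) -> str:
    """Deterministically assign a user to Control or Experiment.

    Hashing the cookie together with the experiment ID means the same
    user always lands in the same bucket for the life of the test,
    which is what keeps conversion attribution clean.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_cookie}".encode()).hexdigest()
    # Map the first 8 hex chars to a number in [0, 1) and compare to the split.
    fraction = int(digest[:8], 16) / 0x100000000
    return "experiment" if fraction < split else "control"

# The same cookie always gets the same answer, morning or evening:
assert bucket("cookie-abc", "tcpa-test") == bucket("cookie-abc", "tcpa-test")
```

Because assignment depends only on the cookie and the experiment ID, User A can never drift into User B's bucket mid-test.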


Part 3: Framework - The Testing Hierarchy

Don't test random things. Test high-impact levers.

Priority  Test Type                                  Potential Impact
1         Bidding Strategy (e.g., Manual vs tCPA)    High (20-50%)
2         Match Type (e.g., Phrase vs Broad)         High (Volume vs Efficiency)
3         Landing Page (URL A vs URL B)              Med/High (CRO)
4         Ad Copy (RSA Assets)                       Low/Med (CTR)

Part 4: Execution - Setting Up a Test

Let's test Manual CPC vs Target CPA.

  1. Campaigns → Experiments → All Experiments.
  2. Click + → Custom Experiment.
  3. Base Campaign: Select your current "Search - Generic".
  4. Suffix: - Experiment - tCPA.
  5. Configuration:
    • Change Bidding Strategy to Target CPA.
    • Set Target (use your historical 30-day average).
  6. Split: 50% (Recommended).
  7. Schedule: Start Date (Tomorrow). End Date (None - manually end it).
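It helps to write the whole configuration down before clicking through the UI. The plain-data sketch below mirrors the steps above; the field names are illustrative, not Google Ads API objects:

```python
# Illustrative config matching the setup walkthrough. The $42 target
# is a placeholder: use your own historical 30-day average CPA.
experiment_config = {
    "base_campaign": "Search - Generic",
    "name_suffix": "- Experiment - tCPA",
    "changes": {
        "bidding_strategy": "TARGET_CPA",
        "target_cpa": 42.00,
    },
    "traffic_split": 0.5,  # 50% recommended
    "start": "tomorrow",
    "end": None,           # no end date: end it manually after evaluation
}
```

Writing it out this way also gives you a record of exactly one variable changing, which matters for the rule in the next section.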

Part 5: The "Don't Touch" Rule

Once an experiment is live, DO NOT TOUCH IT. Do not change the budget. Do not add keywords. Do not change ads.

If you change variables mid-test, you invalidate the results. The "Learning Phase" takes 7 days. The "Data Collection" phase takes 14-30 days. Patience is the skill here.


Part 6: Interpreting Results

After 30 days, check the dashboard. Google will show a "Confidence Interval" (Star icon or Blue/Grey text).

  • Metric: Conv. / Cost (ROAS) or Cost / Conv. (CPA).
  • Result: "Experiment outperformed Base by +15% (95% Confidence)." → APPLY.
  • Result: "No significant difference." → END. (The change isn't worth it.)
  • Result: "Experiment underperformed." → END. (Good thing you didn't switch!)

How to Apply: Click "Apply Experiment." You can choose to:

  1. Update Original: Converts the base campaign to the new settings. (Preserves history).
  2. Convert to New: Pauses base, creates new campaign. (Resets history - Avoid).

Part 7: Summary & Checklist

If you aren't testing, you aren't growing.

Your Action Plan:

  1. Identify a campaign that has plateaued.
  2. Hypothesize a change (e.g., "Broad Match with Smart Bidding will get more volume").
  3. Launch a 50/50 experiment today.
  4. Wait 4 weeks.

Be the scientist.

Kiril Ivanov

About the Author

Performance marketing specialist with 6 years of experience in Google Ads, Meta Ads, and paid media strategy. Helps B2B and Ecommerce brands scale profitably through data-driven advertising.

