Getlago

Feb 20 / 5 min read

How to Run Pricing Experiments Without Breaking Your Billing System

Finn Lobsien

Executive Summary

Pricing experimentation is essential for optimizing SaaS pricing models, but billing systems often turn tests into engineering projects. This guide explains how to design, run, and measure pricing experiments safely, without disrupting existing customers, creating billing errors, or requiring engineers to babysit every change. Lago is built to support safe, repeatable pricing tests from day one: it provides plan overrides, rich customer metadata, and composable charge models to run experiments as configuration, not code.

What this guide covers:

  • Why billing systems block pricing experiments for SaaS pricing models
  • The five pricing experiments every SaaS team should run
  • Architecture patterns to isolate experiments from production billing
  • Metrics and statistical guidelines to measure impact

Reference: pricing decisions are a high-leverage lever for product-led SaaS companies [1].


Why Most Companies Don’t Experiment with SaaS Pricing Models

Pricing is among the highest-leverage levers for SaaS monetization, but the technical friction of changing plans, migrating subscriptions, and handling grandfathering prevents experiments. The bottleneck is almost always the billing system.

The Billing System Bottleneck

Common constraints:

  • Plans are treated as immutable: changing a plan affects every customer on it
  • No easy way to show variant pricing to a subset of new signups
  • Rollback is manual and error-prone
  • Billing data rarely includes experiment membership for measurement

Result: experiments become high-risk, high-effort projects and rarely happen.

The Cost of Not Experimenting

Without systematic experiments, companies commonly:

  • Underprice valuable enterprise segments
  • Overprice early-stage segments that churn
  • Miss packaging or hybrid opportunities that drive NRR

For practical models and implementation approaches to hybrid and usage-based SaaS pricing models, see Lago’s guides on hybrid pricing and complex billing systems: What are hybrid pricing models and how do they work? and SaaS Billing Systems That Handle Complex Pricing Models.


The 5 Pricing Experiments Every SaaS Company Should Run

Each experiment below is described with the business objective, measurement approach, run-length guidance, and the minimal billing capabilities required.

Experiment #1: New Customer Price Testing

  • What: Show different prices to cohorts of new signups.
  • Why: Lowest risk—doesn't affect existing customers; validates whether current pricing leaves value uncaptured.
  • How: 2–3 variants (control, +15%, +30%); random assignment at signup; measure conversion, time-to-first-payment, first-month churn.
  • Duration: 4–8 weeks (depending on traffic).
  • Tech needed: plan overrides or lightweight plan variants; customer metadata recording.

What to watch for:

  • Consistent exposure (same visitor sees same variant)
  • Avoid seasonal peaks
  • Correct attribution from signup to invoice

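A sketch of variant assignment for this experiment (the experiment name, variant labels, and split below are hypothetical): hashing the visitor and experiment IDs together gives consistent exposure, so the same visitor always lands in the same variant.

```python
import hashlib

# Illustrative variants: control, +15%, +30% (names are hypothetical).
VARIANTS = ["control", "plus_15", "plus_30"]

def assign_variant(visitor_id: str, experiment_id: str) -> str:
    """Deterministic bucketing: hashing visitor + experiment IDs means
    repeated visits always resolve to the same variant."""
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Storing the returned variant in customer metadata at signup is what later makes attribution from signup to invoice possible.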

Experiment #2: Usage Threshold Testing

  • What: Vary free-tier or paywall thresholds (e.g., 1k vs 5k vs 10k API calls).
  • Why: Free/paid boundary is highly impactful for conversion and activation.
  • How: Configure per-customer usage limits, track free-to-paid conversion and NRR.
  • Duration: 2–3 billing cycles.
  • Tech needed: per-customer usage limits, real-time metering, threshold alerts.

What to watch for:

  • Run for multiple cycles to capture retention effects
  • Monitor support volume when customers hit limits
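A minimal sketch of the per-customer limit check, assuming the application meters usage and the threshold comes from the customer's assigned variant (the field names and the 80% alert point are illustrative):

```python
from dataclasses import dataclass

@dataclass
class UsageLimit:
    customer_id: str
    free_units: int  # variant-specific free tier, e.g. 1_000 / 5_000 / 10_000

def check_usage(limit: UsageLimit, units_used: int) -> str:
    """Return a coarse state the application can act on."""
    if units_used >= limit.free_units:
        return "over_limit"          # trigger paywall / upgrade prompt
    if units_used >= 0.8 * limit.free_units:
        return "approaching_limit"   # send a threshold alert; watch support volume
    return "ok"
```

The "approaching_limit" state is where threshold alerts fire, which is also the moment to monitor support ticket volume.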

Experiment #3: Packaging Experiments (Feature Bundling)

  • What: Move features between tiers or create add-ons.
  • Why: Packaging drives perceived value and expansion.
  • How: Entitlements decoupled from billing; route new signups into packaging variants; measure tier distribution, upgrade rate, feature adoption.
  • Duration: 3+ months for statistically useful expansion signals.
  • Tech needed: entitlements system separate from invoice generation.
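One way to decouple entitlements from invoice generation is a lookup keyed by packaging variant and tier, so features can move between tiers without touching billing configuration; the variants and feature names below are hypothetical:

```python
# Entitlements live outside the billing engine: moving a feature between
# tiers is a change to this mapping, not to any plan or invoice logic.
ENTITLEMENTS = {
    "packaging_a": {"starter": {"api_access"},
                    "pro": {"api_access", "sso"}},
    "packaging_b": {"starter": {"api_access", "sso"},
                    "pro": {"api_access", "sso", "audit_log"}},
}

def has_feature(variant: str, tier: str, feature: str) -> bool:
    """Gate features by the customer's packaging variant and tier."""
    return feature in ENTITLEMENTS.get(variant, {}).get(tier, set())
```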

Experiment #4: Discount & Incentive Testing

  • What: Test onboarding incentives (first month free, percent-off, credits).
  • Why: Short-term conversion lift vs. long-term LTV trade-offs.
  • How: Coupon/credit variants; measure conversion, discount redemption, LTV, retention after discount.
  • Duration: 3–12 months (LTV impact requires time).
  • Tech needed: coupon/credit functionality, cohort LTV reporting.

What to watch for:

  • Discount-driven cohorts may churn once promos expire
  • Track 12-month cohorts when possible
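A back-of-the-envelope comparison (hypothetical numbers) shows why the short-term conversion lift must be weighed against retention: a promo can convert better yet produce less revenue per signup once promo-driven churn is factored in.

```python
def expected_revenue_per_signup(conversion_rate: float,
                                monthly_revenues: list[float]) -> float:
    """Expected revenue per signup = conversion rate x realized cohort revenue."""
    return conversion_rate * sum(monthly_revenues)

# Hypothetical cohorts: control converts at 5% and retains 12 months;
# the promo cohort converts at 8%, gets month 1 free, and churns at month 5.
control = expected_revenue_per_signup(0.05, [50.0] * 12)
promo = expected_revenue_per_signup(0.08, [0.0] + [50.0] * 4)
```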

Experiment #5: Billing Model Experiments

  • What: Subscription vs. usage-based vs. hybrid models.
  • Why: Highest-impact test; validates which billing model aligns buyer willingness to pay and product value.
  • How: Route cohorts to different billing models; measure revenue per customer, churn, expansion, and unit economics.
  • Duration: 6+ months and large samples.
  • Tech needed: billing system that supports multiple billing models and unified reporting.

For practical examples of hybrid models and pricing templates, consult Lago’s playbook on usage-based pricing [2] and the overview of SaaS billing models [3].


Architecture for Safe Pricing Experiments

The Experiment Isolation Pattern

Principle: layer experiments on top of the billing engine so the billing engine remains unaware of test logic.

Layered design (summary):

  1. Experiment Assignment — application logic stores variant in customer metadata.
  2. Billing Configuration — apply overrides, coupons, credits per subscription.
  3. Billing Engine — invoices and payments proceed normally.
  4. Experiment Analytics — join experiment metadata with invoices for measurement.

This pattern reduces plan sprawl, supports safe rollback, and preserves consistent invoice behavior.
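The four layers above can be sketched with an in-memory stand-in for an API-first billing client; the method names (`update_customer`, `create_subscription`) and the override shape are illustrative assumptions, not any specific billing API.

```python
class FakeBilling:
    """In-memory stand-in for an API-first billing client (hypothetical API)."""
    def __init__(self):
        self.customers = {}
        self.subscriptions = []

    def update_customer(self, customer_id, metadata):
        self.customers.setdefault(customer_id, {}).update(metadata)

    def create_subscription(self, customer_id, plan, override=None):
        sub = {"customer_id": customer_id, "plan": plan, "override": override}
        self.subscriptions.append(sub)
        return sub

def create_experimental_subscription(billing, customer_id, plan,
                                     experiment_id, variant, amount):
    # Layer 1: experiment assignment lives in customer metadata, not in the plan.
    billing.update_customer(customer_id,
                            {"experiment_id": experiment_id, "variant": variant})
    # Layer 2: a subscription-level override leaves the shared plan untouched.
    # Layers 3-4: the billing engine invoices normally, and analytics later
    # joins invoices back to the metadata written above.
    return billing.create_subscription(customer_id, plan,
                                       override={"amount": amount})
```

Because the billing engine never sees experiment logic, rollback is just removing the override and clearing the metadata.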

Customer Metadata Pattern

  • Store experiment_id and variant in customer metadata at creation.
  • Join customers → invoices → metrics for cohort revenue and retention analysis.
  • A simple SQL join over these tables yields revenue-by-variant reporting.
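As a sketch of that join, assuming a minimal two-table schema (customers carrying experiment metadata, invoices referencing customers; names and amounts are illustrative), a revenue-by-variant query might look like:

```python
import sqlite3

# Toy schema and data for a revenue-by-variant query (all values hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id TEXT PRIMARY KEY, experiment_id TEXT, variant TEXT);
    CREATE TABLE invoices  (customer_id TEXT, amount_cents INTEGER);
    INSERT INTO customers VALUES ('c1','price-test','control'), ('c2','price-test','plus_15');
    INSERT INTO invoices  VALUES ('c1', 4900), ('c2', 5600), ('c2', 5600);
""")

# Join customers -> invoices, grouped by variant, for cohort revenue.
rows = con.execute("""
    SELECT c.variant,
           COUNT(DISTINCT c.id)        AS customers,
           SUM(i.amount_cents) / 100.0 AS revenue,
           SUM(i.amount_cents) / 100.0 / COUNT(DISTINCT c.id) AS revenue_per_customer
    FROM customers c
    JOIN invoices i ON i.customer_id = c.id
    WHERE c.experiment_id = 'price-test'
    GROUP BY c.variant
""").fetchall()
```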

Handling Grandfathering

Options:

  • Hard grandfathering (keep forever)
  • Soft grandfathering (time-limited)
  • Value-based migration (only migrate if new price is lower or equal)

Recommendation: prefer subscription-level overrides over creating legacy plans.
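The value-based option reduces to a one-line rule, sketched here with illustrative prices: migrate only when the customer would pay the same or less, otherwise keep the old price as a subscription-level override.

```python
def migrated_price(old_price: float, new_price: float) -> float:
    """Value-based migration: move the customer only if the new price
    is lower or equal; otherwise retain the old price as an override."""
    return new_price if new_price <= old_price else old_price
```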


Measuring Pricing Experiments: The Metrics That Matter

Primary metrics:

  1. Revenue per customer (monthly ARPU) — segment by cohort and billing model.
  2. Conversion rate (visitor→paid or free→paid).
  3. Customer lifetime value (LTV) — requires 6–12 months; compute by cohort.
  4. Net Revenue Retention (NRR) — capture expansion behavior.

Secondary metrics:

  • Time to first value (TTFV)
  • Support ticket volume (billing confusion signal)
  • Upgrade/downgrade rates
  • Price sensitivity surveys (Van Westendorp) to complement A/B tests

Statistical guidelines (rules of thumb):

  • Conversion tests: thousands of visitors per variant for 5% baseline conversion
  • Revenue tests: hundreds of paying customers per variant for meaningful revenue delta
  • Retention/LTV: 6–12 months for final decisions; monitor early leading indicators at 3 months
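The conversion rule of thumb follows from a standard two-proportion sample-size approximation (here at 5% significance and 80% power); this is a generic statistical formula, not tied to any particular tool, and the z-values are the usual one-sided defaults.

```python
import math

def sample_size_per_variant(baseline: float, lift: float,
                            alpha_z: float = 1.96, power_z: float = 0.84) -> int:
    """Approximate per-variant sample size to detect `lift` over `baseline`
    conversion (normal approximation to the two-proportion z-test)."""
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    n = ((alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
          + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 5% -> 6% conversion lift needs several thousand visitors per variant.
n = sample_size_per_variant(0.05, 0.01)
```

This is why smaller expected lifts, or revenue metrics with high variance, push required sample sizes up quickly.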

Common mistakes:

  • Peeking too frequently (false positives)
  • Ignoring seasonality or segmentation (SMB vs. enterprise)
  • Testing multiple variables simultaneously

For practical guidance on SaaS pricing models and strategy, see Stripe and ProfitWell resources [4] [5].


Common Pitfalls and Practical Fixes

  • Plan sprawl → Use per-subscription overrides and metadata
  • Forgetting existing customers → Use subscription-level overrides instead of plan-level edits
  • No control group → Keep a clear control cohort (e.g., 70/30 split)
  • Testing too many variables → Change one variable at a time or sequence experiments
  • Ignoring unit economics → Track cost-to-serve and margin alongside conversion

Building a Pricing Experimentation Rhythm

Quarterly cadence:

  • Month 1: Hypothesize and size experiments
  • Month 2: Execute with instrumentation and safety checks
  • Month 3: Analyze, decide, and roll out or iterate

Cross-functional responsibilities:

  • Product: hypothesis and success criteria
  • Engineering: implementation and observability
  • Finance: measurement validation and compliance
  • Sales & Support: qualitative feedback and ticket monitoring

Billing infrastructure must support plan overrides, metadata, invoice previews, webhooks, and API-first configuration to enable this cadence.


FAQ (Short)

Q: How many pricing experiments per year?

A: Aim for 4–6 meaningful experiments (one per quarter); avoid overlapping major experiments.

Q: Can enterprise pricing be A/B tested?

A: Use list-price, discount-guideline, and packaging experiments rather than blind A/B tests; coordinate with sales.

Q: What if customers discover price differences?

A: Test on new customers where possible, use promotions, and keep variant ranges reasonable.

Q: Can tax/VAT differ across variants?

A: Tax is computed on invoice amounts; tax provider integration should remain independent of experiments.


Conclusion & Next Steps

Pricing experimentation is a core competency for modern SaaS companies. Start with low-risk new-customer price tests, instrument revenue by cohort, adopt the experiment isolation pattern, and iterate on packaging, thresholds, and billing models. When pricing is treated as configuration, with plan overrides, customer metadata, and composable charge models, experiments become a routine operational capability, not an engineering hardship.

Ready to run safe pricing experiments with an API-first billing platform that supports hybrid and usage-based SaaS pricing models? Start with Lago: getlago.com

External references:

  • [1] Sequoia Capital: Pricing your product
  • [2] Lago: playbook on usage-based pricing
  • [3] Lago: overview of SaaS billing models
  • [4] Stripe: SaaS pricing models overview
  • [5] ProfitWell: pricing strategy guide


