This is going to be a different kind of company blog post. No growth hacks. No "10 tips to improve your ROAS." Just an honest explanation of why we built this thing and what we believe.
The problem that wouldn't leave us alone
We spent years watching the same pattern repeat across Shopify stores.
A merchant launches. They start running ads. Meta says ROAS is great. Google says ROAS is great. Revenue is growing. Everything looks fantastic in the dashboards.
Then they look at their bank account. The profit margins don't match the ROAS. Cash is tighter than the numbers predict. They work harder, spend more on ads, and the gap gets wider.
They try new analytics tools. Triple Whale. Northbeam. Elevar. Each one promises clarity. Each one rearranges the same data into a different dashboard with a different attribution model. None of them answer the question that actually matters:
"Are these ads causing sales, or am I paying for customers who would have bought anyway?"
That question — the incrementality question — haunted us. Not because it's technically hard to answer (the methodology has existed for decades), but because nobody was making the answer accessible to the merchants who need it most.
What we found when we looked
The more we researched, the worse it got.
The tools that make this possible — Haus, Measured, Incrementality.com — charge $24,000 to $60,000 per year. They're designed for brands spending $100K+ monthly on ads with dedicated data science teams.
A Shopify merchant spending $8,000/month on ads — the kind of merchant who might be wasting $2,300/month without knowing it — has no access to these tools. At those prices, the measurement technology alone would consume a quarter to more than half of their entire ad budget.
So they're stuck trusting platform numbers. Numbers that are structurally inflated by three mechanisms we've written about extensively: creative fatigue, brand cannibalization, and channel overlap.
The irony is cruel: the merchants who can least afford to waste money are the ones with the least ability to detect it.
What we decided to build
We didn't set out to build another attribution tool. The world has plenty of those. We set out to build something that would answer the incrementality question for merchants who can't afford enterprise measurement platforms.
The core design principles came from the problem itself:
Principle 1: Test, don't just estimate
Attribution tools estimate which channel "gets credit" for a sale. We wanted to prove whether the sale was caused by ads at all. That means holdout experiments — not attribution models.
But we also recognized that experiments take time and not every finding needs one. So we built a two-tier system: pattern-based estimates for quick insights, with the option to validate any estimate through a real experiment. The confidence badge system — ESTIMATED vs. TESTED — emerged from this principle.
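To make that concrete, here's a minimal sketch of how a two-tier finding might be modeled in Python. The names (`WasteFinding`, `Badge`, `validate_with_experiment`) and the numbers are illustrative, not our actual schema:

```python
from dataclasses import dataclass
from enum import Enum


class Badge(Enum):
    ESTIMATED = "ESTIMATED"  # tier 1: pattern-based estimate, no experiment yet
    TESTED = "TESTED"        # tier 2: validated by a real holdout experiment


@dataclass
class WasteFinding:
    """One insight about a campaign, always carried as a range, never a point."""
    campaign_id: str
    monthly_waste_low: float   # lower bound of estimated wasted spend, $/month
    monthly_waste_high: float  # upper bound
    badge: Badge = Badge.ESTIMATED

    def validate_with_experiment(self, low: float, high: float) -> None:
        """Upgrade the finding once a holdout experiment has measured the waste."""
        self.monthly_waste_low, self.monthly_waste_high = low, high
        self.badge = Badge.TESTED


finding = WasteFinding("meta-prospecting", 1800.0, 3100.0)  # quick estimate
finding.validate_with_experiment(2100.0, 2600.0)            # after "Prove It"
```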
Principle 2: Honest over impressive
This might be the most controversial design decision we made. Instead of showing precise numbers that look authoritative, we show ranges with confidence badges. Instead of saying "you're wasting $2,400," we say "we estimate you may be wasting $1,800–$3,100 (ESTIMATED)."
The second statement is less impressive. It's harder to put on a pitch deck. It doesn't have the satisfying finality of a single number. But it's honest. And when a merchant is deciding whether to cut a $3,000/month campaign based on our analysis, they deserve to know whether that analysis is a pattern-based estimate or an experimentally validated fact.
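In code, that's the difference between printing a point and printing a range with its badge. A tiny sketch (the function name is hypothetical; the wording mirrors the example above):

```python
def render_waste(low: float, high: float, badge: str) -> str:
    """Present a waste estimate as an honest range, never a bare number."""
    return f"we estimate you may be wasting ${low:,.0f}–${high:,.0f} ({badge})"


print(render_waste(1800, 3100, "ESTIMATED"))
# -> we estimate you may be wasting $1,800–$3,100 (ESTIMATED)
```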
We lose some merchants to competitors who show bigger, cleaner numbers without qualifiers. We're okay with that. The merchants we keep trust us, and trust is the only sustainable competitive advantage in analytics.
Principle 3: Built for the merchant, not the analyst
Enterprise incrementality tools are designed for data science teams. They require experiment design expertise, statistical literacy, and comfort with terms like "power analysis" and "significance threshold."
We wanted a Shopify merchant — someone who's an expert at running their business, not at running statistical experiments — to be able to validate their ad spend and read the results in plain language. That means:
- "Tap Prove It" instead of "Configure experiment parameters"
- "Your ads drove 60% of claimed conversions" instead of "Incremental lift coefficient: 0.6, p < 0.03"
- "This finding needs more data" instead of "Failed to reject null hypothesis at α = 0.05"
The methodology behind the scenes is rigorous. Power analysis determines sample sizes. Statistical significance is calculated at p < 0.05. The math is the same as what P&G and Nike use. But the interface speaks merchant, not statistician.
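For readers who want to see what that looks like under the hood, here's a sketch of both steps using the open-source statsmodels library (not necessarily our production code). The conversion rates, group sizes, and 80% power target are assumptions for illustration; only the p < 0.05 threshold comes from the paragraph above:

```python
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# Before the test: power analysis picks the sample size.
baseline = 0.030  # assumed conversion rate in the holdout (no-ads) group
expected = 0.040  # assumed exposed-group rate, the smallest lift worth detecting
effect = proportion_effectsize(expected, baseline)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.80)
print(f"Need ~{math.ceil(n)} visitors per group")

# After the test: significance check, then translated into merchant language.
conversions = [412, 305]       # exposed group, holdout group (illustrative)
visitors = [10_000, 10_000]
_, p_value = proportions_ztest(conversions, visitors)

exposed_rate = conversions[0] / visitors[0]
holdout_rate = conversions[1] / visitors[1]
if p_value < 0.05:
    # Share of the exposed group's conversions actually caused by ads.
    lift = (exposed_rate - holdout_rate) / exposed_rate
    print(f"Your ads drove {lift:.0%} of claimed conversions")
else:
    print("This finding needs more data")  # not "failed to reject H0"
```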
Principle 4: Radical transparency
We published our entire methodology. Every algorithm, every confidence threshold, every limitation. A competitor could read it and build the same thing. We did this on purpose.
If our methodology doesn't survive scrutiny, we don't deserve customers. If it does — if data scientists and engineers read it and say "yes, this is sound" — then transparency becomes a trust signal that no competitor's marketing page can match.
We also list our limitations honestly. Small stores may not reach statistical significance. Our estimates are approximations, not certainties. We only support two ad platforms currently. We don't install a pixel, which means we have less data than some competitors.
Every analytics company emphasizes strengths and buries limitations. We list both, side by side, because merchants deserve to make informed decisions about the tools they use.
What we believe about this market
The digital advertising industry has a transparency problem that's been tolerated for too long.
Ad platforms report inflated numbers because their business models incentivize it. Attribution tools rearrange credit but don't question whether the conversions are real. Agencies optimize for metrics that make their work look good, not metrics that drive actual profit.
The merchants caught in the middle — spending $5K, $10K, $20K per month — are making budget decisions on numbers they can't verify. They suspect the numbers are inflated. They joke about it with other merchants. But they have no practical way to find the truth.
We believe that's going to change. Not because of us specifically, but because the cost of incrementality testing is falling, the methodology is becoming accessible, and merchants are increasingly sophisticated about measurement.
The question isn't whether self-serve incrementality testing will become standard for Shopify merchants. It's when. We intend to be first.
An invitation
If you've read this far, you're probably one of two types of people.
You're either a merchant who's suspected for a while that your ROAS numbers don't add up — and you're curious whether there's finally a practical way to find out. In that case, start with our free tools: the Ad Waste Calculator gives you an industry-average estimate in 30 seconds. No signup required.
Or you're someone who builds, invests in, or advises Shopify merchants — and you're interested in how incrementality testing fits into the ecosystem. In that case, our methodology page is the best place to start. It's transparent by design.
Either way, we built Ripplux because we believe merchants deserve to know the truth about their ad spend. Not a prettified version of the truth. Not a version that makes our dashboard look impressive. The actual truth, with honest confidence levels and clear limitations.
That's the company we're building. No false precision. No inflated metrics. Just the closest thing to the truth that current methodology can provide, at a price point that makes it accessible to every Shopify merchant who needs it.
Thank you for reading. We're glad you're here.
— Rami Omran, Founder