Prepared for TooTimid

Conversion Rate Optimization Proposal

Prepared by Clean Commit · April 2026

The one-page version

The pitch: We don't start with button colors and layout tweaks. We start with the changes that directly affect your unit economics — pricing, offer structure, shipping thresholds, bundles, post-purchase upsells. These Tier 1 experiments consistently produce 15-40% lifts and resolve faster than surface-level changes. Once the economics are optimized, we layer in Tier 2 structural improvements.

What we've already done: We built a psychographic customer profile from 80+ of your real customer reviews, and our team has completed a preliminary site analysis identifying seven specific Tier 1 experiment opportunities for your store.

What you get: Based on three comparable engagements, an expected 12-month impact of a 15 to 20% conversion rate lift and $1M+ in cumulative revenue.

Price: performance-based. Month one is performance-only with no retainer — you pay nothing unless an experiment wins. Month two onward, $3,000 monthly floor or one month of measured uplift, whichever is higher. Capped at $10,000. No lock-in.

Proof: 21 experiments documented in this proposal across 11 clients, with real results. Three full case studies from comparable engagements (8x to 15x ROI).

Next step: a 30-minute call with Tim Davidson to walk through your current metrics together, confirm the opportunity size, and answer anything outstanding. tim@cleancommit.io.

How We Prioritize What to Test

Not all experiments are created equal

Most CRO agencies start with the easy stuff. Button colors, headline copy, badge placement. Those changes are low-risk and fast to ship, but they rarely move the needle in a meaningful way.

We take a different approach. We classify every experiment into three tiers based on how directly it affects your unit economics, and we prioritize accordingly.

The framework

| Tier | What it changes | Expected impact | Examples |
|---|---|---|---|
| Tier 1 | What the customer buys, pays, or receives | 15-40%+ lift | Pricing, shipping thresholds, bundles, offers, subscription models, post-purchase upsells |
| Tier 2 | How the customer gets to the purchase | 8-20% lift | Navigation, checkout flow, cart architecture, search, cross-sell placement, page structure |
| Tier 3 | How existing elements look, read, or feel | 2-8% lift | Copy, colors, layout, imagery, badges, trust signals, social proof styling |

This isn't guesswork. It's backed by meta-analyses across thousands of experiments. Wharton's study of 2,732 tests confirmed that pricing and offer experiments produce the largest effect sizes of any category. Browne & Jones analyzed 6,700 experiments and found that 90% of tests produce less than 1.2% RPV lift. The only way to consistently break through the noise is to test at the proposition level first.

Our approach for TooTimid: We start with Tier 1. These experiments finish faster, produce larger effects, and compound more aggressively. Once the big levers are optimized, we blend in Tier 2 changes. Tier 3 comes last, if at all, and only when the higher tiers are exhausted.

The simple test: would the customer's bank statement look different? If yes, it's Tier 1.

Winning Experiment Examples

10 Tier 1 and 11 Tier 2 experiments we've run across our client base, with the results.

| # | Tier | Experiment | Client | Key Result |
|---|---|---|---|---|
| 1 | T1 | Price increase on hero SKUs | One Quiet Mind | +42.5% CVR, +33.4% RPV |
| 2 | T1 | Free shipping threshold optimization | AFTCO | +22% AOV, +8% net revenue |
| 3 | T1 | Starter bundle introduction | Codeword | +31% AOV |
| 4 | T1 | Gift with purchase vs flat discount | AFTCO | +24% RPV |
| 5 | T1 | Subscribe & save on consumables | Gum of Gods | +18% RPV, 2.4x reorder rate |
| 6 | T1 | Discount removal on flagship | AnyAge Wear | +19% margin, +38% checkout rate |
| 7 | T1 | Spend-and-save threshold tiers | Marsh Wear | +26% AOV |
| 8 | T1 | Post-purchase one-click upsell | HashStash | +21% AOV, 18% acceptance rate |
| 9 | T1 | Starter kit for new customers | Overland Addict | +34% new visitor CVR |
| 10 | T1 | Volume discount incentive in cart | Marsh Wear | +28% RPV |
| 11 | T2 | Desktop sticky navbar | AFTCO | +9.6% RPV |
| 12 | T2 | Homepage UGC carousel | Codeword | +6.1% CVR, -10.5% bounce |
| 13 | T2 | Cross-sell pop-up at add-to-cart | Marsh Wear | +29% RPV, +13% AOV |
| 14 | T2 | Free gift callout on PDP | Peluva | +18.8% RPV |
| 15 | T2 | Homepage reskin with category cards | Overland Addict | +90% CVR |
| 16 | T2 | Product card differentiation | Gum of Gods | +11.5% CVR |
| 17 | T2 | Single column collection layout | AnyAge Wear | +6.6% ATC rate |
| 18 | T2 | Mobile navigation redesign | Q30 | +19.2% CVR, +22.1% RPV |
| 19 | T2 | Popup redesign & delay | Q30 | +7.9% CVR, +14.3% ATC |
| 20 | T2 | Cart vs quiz checkout flow | Gum of Gods | +44.1% RPV |
| 21 | T2 | Sale countdown timer | BetterGuards | +12% CVR, +8% RPV |

Tier 1: The experiments that change your economics

These experiments don't make the site look prettier. They change what the customer pays, receives, or how the offer is structured. They're harder to implement and require more conviction, but they consistently produce the largest, fastest results.

T1-1. Price Increase on Hero SKUs — One Quiet Mind

Result: +42.5% CVR, +33.4% RPV
Duration: 20 days, 13,843 visitors

Tested a 15% price increase on three flagship weighted pillow SKUs. Conversion rate went up, not down. The original price was anchoring the product as "cheap," and the target audience associated higher price with higher quality.

Control: Original pricing on the flagship Weighted Pillow.
Variant: 15% price increase. Conversion went up.

For TooTimid: Your premium vibrators and toys could be underpriced relative to what your customers expect to pay for quality. A price test on your top 3-5 SKUs would tell us immediately whether you're leaving margin on the table.

T1-2. Free Shipping Threshold Optimization — AFTCO

Result: +22% AOV, +8% net revenue
Duration: 28 days, 18,400 visitors

Tested raising the free shipping threshold from $79 to $99. Pushed customers to add one more item to qualify. Average overshoot was 25-30% above the new threshold.

Control: Free shipping on orders $79+.
Variant: Threshold raised to $99. Customers added more to qualify.

For TooTimid: Your current free shipping threshold is $59. Testing a higher threshold ($79 or $99) could meaningfully lift AOV. Your catalogue is deep enough that customers can easily add complementary items to reach a higher bar.

T1-3. Starter Bundle Introduction — Codeword

Result: +31% AOV
Duration: 35 days, 4,200 visitors

Introduced a "Complete Kit" bundle on the PDP — the hero product plus matching accessories at a combined price 12% below buying separately. Positioned as the default recommended option, not an afterthought in a sidebar widget.

Control: Standard PDP with a single product.
Variant: "Complete Kit" bundle as the recommended purchase.

For TooTimid: Couples kits, first-timer starter kits, or "date night" bundles would map directly to your two largest customer segments — couples (35%) and first-time explorers (25%). Bundles reduce decision paralysis and increase AOV in a single move.

T1-4. Gift With Purchase vs Flat Discount — AFTCO

Result: +24% RPV
Duration: 30 days, 22,000 visitors

Replaced a sitewide 15% discount code with a free branded accessory (retail value $25) on orders over $75. The gift with purchase outperformed the discount across every metric — conversion, AOV, and margin.

Control: 15% off sitewide with code.
Variant: Free branded accessory on orders $75+.

For TooTimid: You already include a free gift with every order, but you're also running a permanent 50% sitewide discount code. Testing whether the free gift alone drives comparable results could recover significant margin.

T1-5. Subscribe & Save on Consumables — Gum of Gods

Result: +18% RPV, 2.4x reorder rate
Duration: 42 days, 3,800 visitors

Added a subscribe & save option on the PDP for consumable products — 10% discount on recurring orders with a toggle between one-time and subscription. Subscription set as the default selection.

Control: One-time purchase only.
Variant: Subscribe & save toggle with 10% recurring discount.

For TooTimid: Lubricants, toy cleaners, and other consumables are natural candidates for subscription. These products run out and need replenishing — a subscribe & save model generates predictable recurring revenue at zero acquisition cost.

T1-6. Discount Removal on Flagship — AnyAge Wear

Result: +19% gross margin, +38% checkout rate
Duration: 21 days, 6,500 visitors

Removed the permanent discount code from the hero product and tested it at full price with stronger value messaging. Checkout completions actually increased because removing the discount code field eliminated the "let me go find a code" abandonment loop.

Control: Permanent sale pricing with discount code.
Variant: Full price with value-led messaging. Margin recovered, checkouts went up.

For TooTimid: You're running a permanent "SEXY50" code for 50% off sitewide. Testing what happens when the discount disappears — replaced with value messaging and the free gift offer — could be one of the single highest-impact changes on your store.

T1-7. Spend-and-Save Threshold Tiers — Marsh Wear

Result: +26% AOV
Duration: 45 days, 12,300 visitors

Replaced a flat 10% discount with tiered spend-and-save thresholds: spend $100 save 10%, spend $150 save 15%, spend $200 save 20%. Most customers aimed for the middle tier, overshooting their original cart value by 25-40%.

Control: Flat 10% discount on all orders.
Variant: Three tiers with escalating rewards and cart progress bar.

For TooTimid: Tiered spend-and-save could replace the blanket 50% code. It gives customers a reason to add more items while maintaining healthier margins at every tier.

T1-8. Post-Purchase One-Click Upsell — HashStash

Result: +21% AOV, 18% acceptance rate
Duration: 30 days, 5,100 visitors

Added a one-click upsell page between checkout completion and the thank-you page. Offered complementary products with a "Buy 1 Get 1 40% Off" incentive, purchasable with a single tap — no re-entering payment details. 18% of customers took the offer.

Control: Standard post-purchase page with no recommendations.
Variant: Post-purchase upsell with BOGO 40% off offer. 18% acceptance.

For TooTimid: Post-purchase upsells are especially powerful in your category because the customer has already committed — they've overcome the privacy anxiety and entered payment details. Adding a complementary item at that point is frictionless. We're not sure if you're currently running post-purchase upsells, but this is something we'd like to experiment with — trying different combinations of products and offers.

T1-9. Starter Kit for New Customers — Overland Addict

Result: +34% new visitor CVR
Duration: 28 days, 8,200 visitors

Created a $49 "First Timer Kit" — a curated selection of entry-level products bundled at a slight discount. Targeted at new visitors from paid ads. Reduced decision paralysis for first-time buyers who didn't know where to start.

Control: New visitors land on the standard homepage with full product grid.
Variant: Curated "First Timer Kit" landing page for new visitors.

For TooTimid: 25% of your customers are first-time explorers. A "New to This? Start Here" kit — curated, priced under $50, with the free gift included — gives first-timers a safe, low-commitment entry point. Starter kit buyers have 3.1x higher 12-month LTV across our client base.

T1-10. Volume Discount Incentive in Cart — Marsh Wear

Result: +28% RPV
Duration: 21 days, 9,400 visitors

Added a "Buy 2, Get 15% Off" incentive badge directly on the product card in the cart, paired with a cross-sell carousel at the bottom. Encouraged customers to add a second item from the same category.

Control: Standard cart without volume incentive.
Variant: "Buy 2, Get 15% Off" badge + cross-sell carousel.

For TooTimid: Volume incentives work well with accessories and consumables where cost of goods is low. Testing a structured offer vs your current 50% flat discount would tell us whether structured offers drive better unit economics.

Tier 2: The experiments that change how customers buy

Tier 2 experiments change the structure of the buying experience — how customers discover, navigate, and move through the funnel. They're the changes that make the existing value proposition easier to find and act on.

T2-1. Desktop Sticky Navbar — AFTCO

Result: +9.6% RPV
Duration: 11 days, 19,935 sessions

Made the desktop navigation bar sticky so it stays visible while scrolling.

Control: Nav disappeared on scroll.
Variant: Sticky nav stays pinned. +9.6% RPV.

For TooTimid: Your site has a large catalogue across many categories. Persistent navigation helps visitors browse without losing their place.

T2-2. Homepage UGC Carousel — Codeword

Result: +6.1% CVR, -10.5% bounce rate
Duration: 23 days, 2,834 sessions

Added a "Your Story, Our Hats" user-generated content section. Real customers wearing the product.

Control: Brand photography only.
Variant: UGC section added. Bounce rate dropped 10.5%.

For TooTimid: UGC is tricky in your category for privacy reasons, but curated lifestyle content or anonymous review highlights could serve the same trust-building function.

T2-3. Cross-Sell Pop-Up at Add-to-Cart — Marsh Wear

Result: +29% RPV, +13% AOV
Duration: 62 days, 5,921 sessions

Added a "Pairs well with" pop-up showing complementary products when a customer adds to cart.

Control: Standard cart drawer, no cross-sell.
Variant: "Pairs well with" pop-up. +13% AOV.

For TooTimid: Complementary items (lube, cleaner, batteries, accessories) are natural add-ons at the point of commitment.

T2-4. Free Gift Callout on PDP — Peluva

Result: +18.8% RPV

Added a "Get free socks!" callout with product image directly above the Add to Cart button.

Control: No mention of free gift on PDP.
Variant: Free gift callout above the Add to Cart button. +18.8% RPV.

For TooTimid: Your free gift with every order is buried. Surfacing it on the PDP with the retail value visible would give first-time buyers an extra nudge.

T2-5. Homepage Reskin — Overland Addict

Result: +90% CVR (0.3% to 0.57%)
Duration: 30 days, 97% confidence

Replaced a product-heavy homepage with a lifestyle hero and "Shop by Category" grid.

Control: Product-heavy, no clear path for new visitors.
Variant: Lifestyle hero + category cards. CVR nearly doubled.

For TooTimid: Your homepage has the highest bounce rate on the site. Guided entry with clear category paths would reduce choice paralysis.

T2-6. Product Card Differentiation — Gum of Gods

Result: +11.5% CVR
Duration: 29 days, 1,636 sessions

Added feature callouts and benefit bullet points to collection page product cards.

Control: Identical-looking product cards.
Variant: Differentiated with features and benefits. +11.5% CVR.

For TooTimid: Your collection pages show multiple products with confusing prices. Cleaner product cards with clear differentiation would reduce friction.

T2-7. Single Column Collection Layout — AnyAge Wear

Result: +6.6% ATC rate
Status: Live, ~4,000 sessions

Switched mobile collection from two-column grid to single-column with full-width lifestyle photos.

Control: Two-column grid, small images.
Variant: Single column, full-width photos. +6.6% ATC.

For TooTimid: In your category, product images need to do heavy lifting. More visual real estate on mobile would improve browse-to-click rates.

T2-8. Mobile Navigation Redesign — Q30

Result: +19.2% CVR, +22.1% RPV
Duration: 22 days, 27,676 sessions

Redesigned mobile navigation to highlight three main products at the top with images and descriptions.

Control: Plain text menu.
Variant: Product cards with images at top. +19.2% CVR.

For TooTimid: Your mobile navigation needs to guide visitors through an unfamiliar catalogue. Visual category cards at the top would reduce guesswork.

T2-9. Popup Redesign & Delay — Q30

Result: +7.9% CVR, +14.3% ATC rate
Duration: 30 days, 23,328 sessions

Redesigned the promotional popup from a generic split-screen layout to a mobile-optimized, product-focused design. Combined with a 60-second delay.

Control: Desktop-optimized popup, appeared immediately.
Variant: Mobile-first design with 60-second delay. +7.9% CVR.

For TooTimid: If you're running popups that fire on page load, delaying them and redesigning for mobile could reduce the "close and leave" reflex for first-time visitors.

T2-10. Cart vs Quiz Checkout Flow — Gum of Gods

Result: +44.1% RPV
Duration: 12 days, 92% confidence

Replaced the standard browse-and-add-to-cart flow with a guided quiz that recommends products based on customer answers.

Control: Standard browse-and-add-to-cart flow.
Variant: Guided quiz with personalized recommendations. +44.1% RPV.

For TooTimid: A "What's right for me?" quiz could be one of the highest-impact changes for your store. 25% of your customers are first-time buyers facing decision paralysis.

T2-11. Sale Countdown Timer — BetterGuards

Result: +12% CVR, +8% RPV
Duration: 14 days, 16,200 sessions

Added a sticky countdown timer bar to the top of the site during a clearance sale. Urgency tied to a real event, not a fake evergreen countdown.

Control: Standard announcement bar, no urgency.
Variant: Sticky countdown timer tied to a real clearance event.

For TooTimid: Countdown timers work best tied to real events. Tying a timer to genuine limited-time offers creates urgency without cheapening the brand.

Why These Results Apply to TooTimid

A fair question after reading those experiment results is whether we're just showing brands that were already growing. The honest answer is no.

When we started working with Q30, Marsh Wear and Codeword, every one of them was investing in traffic and pushing harder on growth. Whether sales would follow at the same rate was an open question. That is the exact stage where CRO does its best work, and it's where TooTimid is today.

You're spending around $200K a month on ads. You have 400,000 visitors coming through your store every month. The traffic engine is built and running. The question is how much of that traffic turns into revenue, and what's quietly leaking out of the funnel before it gets to checkout.

CRO is not a growth engine on its own. It needs traffic to operate on. You already have that part. The job is closing the gap between the traffic you're paying for and the revenue you're capturing from it. That gap is what we solve, so as your traffic grows, sales grow at the same rate or faster.

There's one more thing that makes your store a strong fit for this kind of work. Your customers are anxious buyers. They're buying something personal, potentially embarrassing, and they need to trust the site before they'll commit. That's a psychological friction problem, and psychological friction is exactly what our testing framework is built to identify and reduce. Every experiment we run on your store will be grounded in how your specific customers think, feel, and decide.

Your Tier 1 Backlog

We've already invested time understanding your customers and your store. Between the psychographic customer profile we built from 80+ of your real customer reviews and a preliminary site analysis from our team, we have a clear picture of where the highest-leverage Tier 1 opportunities are.


Tier 1 experiments we'd run

1. Discount structure test. Your permanent "SEXY50" code for 50% off is the single biggest lever we'd want to test against. We'd run a controlled experiment: current 50% discount vs free gift only (no discount code) vs tiered spend-and-save thresholds. The goal is to find out whether the discount is actually driving conversions, or whether you're giving away margin on customers who would have bought anyway.

2. Price point testing on hero SKUs. We'd test price increases on your top 5-10 products. The research says 54% of brands find a better price point, and 59% of the time it's a lower price — but 41% of the time, a higher price converts better. We won't know until we test.

3. Free shipping threshold optimization. Test your current $59 threshold against higher values ($79, $99) paired with a progress bar in the cart. Your catalogue is deep enough that customers can easily add complementary items to hit a higher bar — lube, cleaner, lingerie, accessories.

4. Bundle introduction. Couples kits, first-timer starter kits, category bundles. Positioned as the recommended purchase, not a sidebar widget. Bundles reduce decision paralysis for first-time buyers while lifting AOV.

5. Post-purchase one-click upsell. We're not sure if you're currently running post-purchase upsells, but this is something we'd like to experiment with. A single-tap upsell page between checkout and order confirmation, where the customer has already overcome the privacy anxiety and entered payment details. We'd try different combinations of products and offers to find the highest-converting post-purchase flow.

6. Gift with purchase value reframe. Test making the free gift's retail value visible on every product page and in the cart. "You're getting a FREE [product] worth $45!" This reframes the purchase as a better deal without discounting the primary product.

7. Subscription and LTV opportunities. We'd look for opportunities to build lifetime value through a subscribe & save model on consumable products (lube, toy cleaner), or a dripped-out package offer. This might not be straightforward given how particular customers are about their product choices in this category, but we would actively look for opportunities to explore it regardless. Even a modest subscription uptake on consumables would generate predictable recurring revenue at zero incremental acquisition cost.

The timeline

Month 1. Deep diagnostic (we need access to Shopify, GA4, Klaviyo). Validate assumptions. Ship the first 2-3 Tier 1 experiments — discount structure test, price point test, shipping threshold test. These are the fastest to set up and the most likely to produce large, measurable results quickly.

Month 2-3. Launch bundles, post-purchase upsells, and the free gift reframe. Blend in the first Tier 2 experiments (cart simplification, homepage guided entry, add-to-cart visibility).

Month 4+. Expand based on what the first three months teach us. The full Tier 2 backlog is ready. Tier 3 changes (copy, imagery, layout polish) come after T1 and T2 are optimized.

Case Study: Q30

+$504K Revenue and 67% Higher Conversion on 27% Less Traffic

The Headline Numbers

| Metric | 2024 | 2025 | Change |
|---|---|---|---|
| Net Revenue | $2.58M | $3.09M | +$504K (+20%) |
| Conversion Rate | 0.92% | 1.53% | +67% |
| Add to Cart | 20,399 | 29,573 | +45% |
| Sessions | 1,223,544 | 899,092 | -27% |
| Returns | 2,365 | 1,808 | -24% |

Revenue growth on less traffic. Better traffic quality plus a dramatically better on-site experience.

Q30. Total sales up 36% and conversion rate up 40% year-on-year, on 11% fewer sessions.

The Brand

Q30 makes the Q-Collar, a $199 FDA-cleared neck device that reduces brain movement during head impacts. The challenge: selling a science-backed $199 product to anxious parents who've never heard of the category.

Why This Matters for TooTimid

Different product, same buyer psychology. Q30 and TooTimid share the traits that matter most for CRO: high-anxiety buyers making a considered purchase in an unfamiliar category, where trust and education are the difference between a bounce and a sale.

| Dimension | Q30 | TooTimid |
|---|---|---|
| #1 Driver | Security (94/100) | Security (90/100) |
| Core objection | "Does this actually work?" | "Is this site safe and discreet?" |
| Buyer type | System 2 (research-heavy) | System 2 (research-heavy, high neuroticism) |
| Key friction | Product education gap | Privacy anxiety + choice paralysis |

The three insights that drove Q30's results — understanding who the real buyer is, recognizing they're deliberate researchers, and learning that simplification can hurt when the audience needs more information — are directly applicable to TooTimid. Your customers need reassurance and guidance, not a stripped-back experience.

The Three Insights That Changed Everything

  1. The real buyer is a parent, not an athlete. 60% of purchases were by parents and grandparents. The entire website was positioned for athletes and pros.
  2. These are System 2 buyers. Deliberate, sceptical, information-hungry researchers who won't buy until they've read enough proof to reconcile their doubt.
  3. Simplification hurts this audience. We tested a simplified PDP layout. CVR dropped 9%, revenue dropped 11%. The audience wanted more information, not less.


What the Client Said

Charlie Kunze

"Tim and the Clean Commit team have been my secret weapon. I didn't have time to keep looking for ways to improve our store, and they've found optimizations I wouldn't have thought of. They're super responsive and require very little oversight."

Charlie Kunze, Director of Marketing, Q30 Innovations

Case Study: Marsh Wear

$590K Revenue Impact and 30% CVR Lift in 12 Months

The Headline Numbers

| Metric | Before | After | Change |
|---|---|---|---|
| Conversion rate | 1.83% | 2.38% | +30.3% |
| Average order value | $99 | $114 | +14.8% |
| Monthly revenue | $308K | $741K | +140.7% |

Conservative annualised revenue impact: $590,458 (projected at 75% of measured test outcomes, 18 implemented winners, 37 tests over 12 months).

Marsh Wear. Year-on-year growth across our engagement. Orders chart shows the compounding effect of 37 experiments.

The Brand

Premium outdoor apparel. Fishing, hunting, camping, boating lifestyle clothing. Around $5M/year on Shopify, 75%+ mobile traffic, conversion rate stuck below 2%. Owned by AFTCO, a brand we'd already been running a full CRO program on.

The Insight That Unlocked It

The marketing team was constantly updating the site, but every change was a guess. Layers of technical debt, no measurement, 75% of traffic on a mobile experience built as a desktop afterthought.

The counterintuitive finding from 40 hours of diagnosis: the biggest wins came from making products look better and feel more desirable, not from reducing friction. Marsh Wear's customers are driven by brand belonging and product desire. They want UGC, real photography, the feeling of "I want to wear that." What they don't want is urgency tactics, which cheapen the brand.

Top Winners

| Test | RPV Lift | Annual CII |
|---|---|---|
| Enhanced Search Results | +14.7% | $296K |
| Mini Cart Redesign | +9.9% | $36K |
| Discount Price Styling | +10.0% | $32K |
| Product Card Redesign | +9.3% | $30K |
| Mobile Menu Redesign | +10.7% | $22K |
| Hand-Picked Cross-Sells | +29.0% (AOV +13%) | $13K |

The Test Worth Highlighting

Most cross-sell implementations use algorithmic "frequently bought together" recommendations. We manually selected every product pairing. Fishing shirt with a specific hat. Jacket with matching gloves. Cheap, complementary, curated by humans who understood the products.

Result: +29% RPV, +13% AOV. Highest per-visitor revenue lift in the program. Timing plus relevance beats algorithms.

What the Client Said

Casey Sandoval

"Kamila, Tim and WK from the Clean Commit team are awesome. They run a tight ship and their program has been one of the main factors behind our growth this year."

Casey Sandoval, eCommerce Director, Marsh Wear

Case Study: Codeword

$915K Revenue Impact. $2M to $3.87M in One Year.

The Headline Numbers

| Metric | Before | After | Change |
|---|---|---|---|
| Conversion rate | 2.28% | 2.69% | +18.2% |
| Average order value | $113 | $146 | +28.6% |
| Monthly revenue | $212K | $287K | +35.5% |

Conservative annualised revenue impact: $915,128 (projected at 75% of measured test outcomes, 11 implemented winners, 35 tests over 12 months). Year-over-year gross revenue: $2.05M to $3.87M. +88.6%.

Codeword. Total sales up 49% year-on-year with conversion rate up 17%, on just 5% more sessions.

The Brand

Custom hat company. Order a single embroidered hat with no bulk minimum. Customers type in text, choose a style, pick placement. Around 85 to 90% of hats get customized, so the customizer is the product experience.

The Bottleneck

Conversion stuck at around 2% with no clear path forward. The off-the-shelf customizer plugin couldn't be A/B tested, had limited styling options, looked visually cheap and was completely locked down. For a store where 85%+ of customers have to use it to buy anything, that wasn't a minor UX issue. It was a revenue ceiling.

Top Winners

| Test | RPV Lift | CVR Lift | Annual CII |
|---|---|---|---|
| Customizer Rebuild | +32.6% | +6.8% | $375K |
| Condensed Product Gallery | +62.9% | +23.9% | $164K |
| Review-Based FAQs | +33.2% | +8.4% | $81K |
| Input-First Mobile Customizer | +12.0% | +2.8% | $57K |
| Enhanced Mobile Customizer | +21.7% | +3.0% | $54K |

The Test Worth Highlighting

The customizer preview was blank by default. Customers stared at an empty hat mockup, trying to imagine what their text would look like.

We added one thing. Placeholder text in the preview. "YOUR TEXT HERE" shown on the hat by default.

Result: +15.1% CVR, +9.4% RPV. One line of copy. 15% conversion lift. This is what research-backed CRO looks like.

The Customizer Rebuild

The biggest win wasn't a traditional A/B test. It was rebuilding the customizer plugin from scratch and then testing the new one against the old one.

New customizer: better styling, cleaner UI, mobile-first, real-time preview with zero lag, every element testable going forward. It also integrated with Nate's embroidery machines, automating a workflow that was previously manual.

+32.6% RPV, +6.8% CVR, +24.3% AOV. $375K annual impact from a single experiment.

What the Client Said

Nate Montgomery

"Our conversion rate is already up 10-15% just in a month or two of working with them. If you're on the fence, just do it. You will not regret it. They're a great team, they really work to understand you and your particular business."

Nate Montgomery, Founder, Codeword (video testimonial)

Expected Outcomes

What you can expect

We've run comparable engagements across multiple ecommerce brands over the last 14 months, and the pattern is consistent. Structured CRO testing on stores with strong traffic produces meaningful, measurable revenue growth.

We're going to be conservative with these projections. We haven't been inside your analytics yet, and we don't know your exact AOV or revenue baseline. What we can do is show you what a realistic range looks like based on what we've seen across similar engagements.

Three scenarios over 12 months

Based on TooTimid's metrics (around 400K sessions/month, ~2% CVR, estimated $75 AOV, baseline ~$600K/month from the site):

| | Conservative | Expected | Optimistic |
|---|---|---|---|
| Based on | Slowest comparable engagement | Average across three of our CRO clients | In line with our best performing stores |
| CVR improvement | +10–15% (to ~2.2–2.3%) | +15–20% (to ~2.3–2.4%) | +25–30% (to ~2.5–2.6%) |
| Monthly revenue lift | +$60K–$90K | +$90K–$120K | +$150K–$180K |
| 12-month cumulative | +$720K–$1.08M | +$1.08M–$1.44M | +$1.8M–$2.16M |
Note from Tim

These projections assume static traffic and static ad spend. They also assume an estimated $75 AOV which we'll validate once we have access to your analytics. The actual numbers could shift in either direction once we see the real baseline.
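For transparency, the arithmetic behind these ranges is simple enough to sketch. Every input below is one of the estimates stated above (400K sessions/month, ~2% CVR, ~$75 AOV), not a validated figure:

```python
# Scenario math for the projection table, using the estimated
# baselines from this proposal (all to be validated in the diagnostic).

SESSIONS_PER_MONTH = 400_000   # stated traffic estimate
BASELINE_CVR = 0.020           # ~2% conversion rate
EST_AOV = 75.0                 # estimated $75 AOV, unvalidated

def monthly_revenue_lift(relative_cvr_lift: float) -> float:
    """Incremental monthly revenue from a relative CVR improvement,
    holding traffic and AOV static (as the note above assumes)."""
    baseline = SESSIONS_PER_MONTH * BASELINE_CVR * EST_AOV  # ~$600K/month
    improved = baseline * (1 + relative_cvr_lift)
    return improved - baseline

# Expected scenario: +15% to +20% relative CVR lift
low = monthly_revenue_lift(0.15)   # ~$90K/month
high = monthly_revenue_lift(0.20)  # ~$120K/month
# Over 12 months: roughly $1.08M to $1.44M cumulative
```

The conservative and optimistic columns fall out of the same formula with their respective CVR ranges.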

What Our Clients Say

"CR has gone up roughly 800% since we started working on the store… which is pretty neat."

Rachael Nelson, eCommerce Manager, Peluva

"Conversion rate went up almost 300%."

Sarah Smyth, Australian Black Worms

"Fantastic, communicative, and made constant progress."

Tim Ruswick, GameDev.tv

Our Process

How We Get These Results

Every engagement runs the same three-phase cycle.

Phase 1: Diagnose (Weeks 1 to 3)

25 to 40 hours of deep analysis before we touch anything. We're hunting for the real reasons people buy, and the quiet reasons they don't, across your store, your reviews, the wider web and the category at large.

Phase 2: Test & Validate (Ongoing)

5 to 10 A/B experiments running concurrently at all times, each one pressure-tested against The 11 Pillars of Buying Psychology before it goes live. Winners get implemented, losers get dissected so the next test lands harder.

Phase 3: Implement & Scale (Ongoing)

Winning experiments are permanently built into your store, each one lifting the baseline the next test builds on. Monthly reporting ties every experiment back to revenue impact.

The Math of Compounding Wins

100 experiments per year × 30% win rate = 30 winners a year, or around 2.5 every month.

Roughly 30 permanent improvements per year, each one raising the baseline the next test builds on. A single winning test on Marsh Wear's mobile menu generated +$109,400 in annual revenue, and that was one of thirty that year.
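As a back-of-envelope check on the compounding claim (the win rate is the figure stated above; the 0.5% average lift per winning test is an illustrative assumption, not a guarantee):

```python
tests_per_year = 100
win_rate = 0.30
avg_lift_per_win = 0.005   # assumed 0.5% average revenue lift per winner

winners = tests_per_year * win_rate   # 30 winners per year
per_month = winners / 12              # ~2.5 winners per month
# Each win is built in permanently, so lifts multiply rather than add
compounded = (1 + avg_lift_per_win) ** winners - 1

print(per_month)             # 2.5
print(f"{compounded:.1%}")   # 16.1% – compounded baseline lift after a year
```

Even modest individual wins stack into a double-digit annual lift, which is why volume matters as much as any single result.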

Price

Overview

We work on a performance-based model. Our fee is tied directly to the revenue our experiments generate. If the experiments don't produce results, you don't pay. If they do, you pay a fair share of the value we created.

Month one: performance only

Month one, there is no retainer. You pay only for results. At the end of the month, we calculate the incremental revenue generated by experiments that reached statistical significance. The performance fee equals one month of that measured incremental revenue. If no experiments produce a positive result, no fee is charged.

Month two onward: $3,000 minimum + performance

From month two, a monthly minimum of $3,000 applies. This is a floor, not a cap.

If the performance fee for the month is less than $3,000, you pay $3,000. If the performance fee exceeds $3,000, you pay the performance fee. You pay the higher of the two, not both.

The minimum ensures both parties share the risk. You receive a dedicated CRO team working on your store every month. We receive a baseline that covers a portion of the effort involved, even in months where experiments don't produce measurable wins. In those months, you still benefit from the research, learnings, and strategic direction.

A worked example

[Screenshot: Intelligems experiment result, control vs variant revenue]
Example from one of our live experiments. Variant vs control revenue pulled directly from Intelligems.

In the example above, the variant generated $14,893 and the control generated $13,493. The difference is roughly $1,400. That's what we'd charge as the performance fee — one month of the measured uplift.

Every month after that, the $1,400 in extra revenue is yours. The experiment keeps running, the lift keeps compounding, and we earn nothing on it past that first charge.
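The fee rules above reduce to a few lines. This sketch assumes the $10,000 cap from the one-page summary applies uniformly in every month, which is our reading of the terms as stated:

```python
def monthly_fee(measured_uplift, month):
    """Fee under the proposed model: month 1 is performance-only;
    from month 2 you pay the higher of the $3,000 floor or one month
    of measured uplift. Capped at $10,000 (assumed to apply in all months)."""
    if month == 1:
        fee = measured_uplift          # no retainer, performance only
    else:
        fee = max(3_000, measured_uplift)  # floor OR performance, never both
    return min(10_000, fee)

print(monthly_fee(1_400, 1))   # 1400 – the Intelligems example above
print(monthly_fee(1_400, 3))   # 3000 – floor applies
print(monthly_fee(15_000, 3))  # 10000 – cap applies
```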

How performance is measured

Revenue impact is measured through the agreed testing platform (Intelligems). Every experiment is an A/B test: a portion of your traffic sees the original (control) and a portion sees our change (variant). Because both groups are drawn from the same pool of visitors at the same time, external factors affect both groups equally. The measured difference isolates the impact of our work.

An experiment qualifies for billing when it reaches at least 90% probability to beat baseline on the primary revenue metric, with directional support from at least one higher-powered funnel metric (add-to-cart rate or checkout commencement rate).

If an experiment doesn't reach 90%, we declare it flat. From there it's a mutual call whether to roll the change out anyway or bin it. If you choose to ship it before it reaches significance, it qualifies for billing — if you're shipping it, you're agreeing it has value.
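The "90% probability to beat baseline" threshold can be illustrated with a simple statistic. This is not Intelligems' exact methodology, just a normal-approximation sketch of the same idea, with made-up visitor counts:

```python
import math

def prob_variant_beats_control(conv_a, n_a, conv_b, n_b):
    """Approximate probability that the variant's true conversion
    rate exceeds the control's, given observed conversions and visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference between the two observed rates
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a) / se
    # Phi(z): normal CDF, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical test: 20,000 visitors per arm, 2.0% vs 2.3% conversion
p = prob_variant_beats_control(400, 20_000, 460, 20_000)
print(round(p, 2))  # 0.98 – above the 90% billing threshold
```

At smaller traffic volumes the same 0.3-point gap would fall below 90% and be declared flat, which is why sample size and test duration matter so much.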

Commitment

There's no hard lock-in. This agreement is month-to-month. Either party may terminate by providing 21 days' written notice.

We strongly recommend sticking with us for at least three months before drawing conclusions. That's the minimum time for us to learn your customers well enough to stop running broad experiments and start running the sharp, customized ones that tend to produce the biggest wins.

If we go a few months without a real win, we're the first ones who'll tap you on the shoulder. We're not incentivized by the minimum. The minimum is close to a wash for us. We're incentivized by the big wins. If those aren't happening, the shared incentive isn't there and we'll tell you straight.

Next Steps

You've seen the case studies, the process, the pricing and the specific plan we've already started building for your site. The only real decision from here is whether to start.

Book a 30-Minute Call

We'll walk through your current metrics, pressure-test the plan against anything we haven't seen yet and confirm the opportunity size.

Tim Davidson
tim@cleancommit.io

What Happens Next

  1. 30-minute call. We review your metrics together, confirm the opportunity size and answer anything outstanding.
  2. Agreement and kickoff. Paperwork within 24 hours. Kickoff within a week.
  3. Week 1 to 2. Deep diagnostic — full access to your Shopify, GA4, Klaviyo. We build the real baseline.
  4. Week 2 to 4. First experiments go live. Discreet guarantee strip, ATC button test, cart simplification.
  5. Month 2. Results from the first batch inform the next wave. Retainer kicks in.

Total elapsed time from signed agreement to live tests: 14 days.

Capacity

We currently have room for two new engagements this quarter. If we're at capacity when you reach out, we'll tell you straight and offer a start date rather than overcommit.

One More Thing

If the answer is "not yet," the most useful thing we can do is send you the customer insight report (Appendix A) as a standalone document. We built it from 80+ of your real customer voices — their anxieties, their motivations, what almost stopped them from buying. It's yours either way. Either the work speaks for itself, or it doesn't.

Your Benchmarks

Where you stand today

Note from Tim

We don't have access to your Shopify console yet, so the figures below are based on what you've shared with us and what we can observe externally. They could be off, so take these projections as directional rather than precise.

The point here isn't a precise baseline. It's showing you roughly what the trajectory looks like when a brand like yours runs a structured CRO program.

| Metric | Current (estimated) | Shopify Health & Beauty Benchmark | Clean Commit average CVR lift (6 months) |
|---|---|---|---|
| Conversion Rate | ~2.0% | ~2.5–3.5% | +15–20% |

What we've already identified

Note from Tim

This isn't gospel. We need to go much deeper on the analysis once we have access to your analytics. But between the customer research we've already done and our team's preliminary review of your store, we've identified several structural issues worth testing against.

From our preliminary analysis and customer research (80+ customer voices across Trustpilot, Bizrate, and BBB):

  1. Privacy anxiety at every step. Your customers' #1 psychological driver is Security (90/100). Discreet shipping and billing guarantees exist, but they're not visible enough at the moments that matter most — product pages, cart, and checkout. Every step a visitor takes without seeing reassurance is a step closer to leaving.
  2. Choice paralysis on the homepage. Your homepage has the highest traffic and the highest bounce rate across all devices and sources. First-time visitors in an unfamiliar category need a clear path in, not a wall of options.
  3. Product card confusion. Multiple prices displayed on collection pages with overlapping discount codes. When a visitor can't tell what something actually costs, they don't buy.
  4. Cart page friction. Too many distractions and upsells overwhelming customers at the moment of commitment. The 77% drop-off from add-to-cart to checkout suggests the cart is actively pushing people away.
  5. Add-to-cart visibility. The primary conversion action blends into the site's color scheme. Three out of four mobile visitors may never notice it.

These are the kind of structural problems that a CRO program is designed to solve, and every one of them is testable.

Who are Clean Commit?

Clean Commit has been around since 2018 and is considered one of Australia's leading conversion rate optimization agencies. Our team is spread globally across Europe, America and Australia. We help Shopify brands turning over between $2M and $50M in revenue who have hit a growth ceiling.

We're a small team made up of experts in their fields. Senior project managers who have worked on large enterprise software platforms and infrastructure rollouts. Senior developers with a decade of experience designing web systems, UI and UX. Analysts with tertiary backgrounds in psychology, analytics and statistical analysis. Because we're all experts in our respective fields, we look at websites through a different lens than other teams.

We're focused on one thing. We're not a full-service agency. We don't do ads, email marketing or social media. What we do is scientific testing, customer analysis and conversion rate optimization for Shopify. It's our specialty and we know it inside and out.

By the Numbers

Brands optimized: 106+
A/B tests run: 1,000+ with real traffic and statistical rigor
Revenue generated (last 12 months): $1.5M in measured, attributable lift

The Team

A small, senior team. You work directly with us, not a layer of account managers.

Tim Davidson, Founder & Lead Strategist

Wojciech Kaluzny (WK), Co-Founder & Lead Engineer

Kamila Kucharska, Project Manager

Patryk Michalski, Senior Web & UX Designer

Cormac Quaid, Shopify Engineer

Borisa Krstic, Shopify & React Engineer

What makes us different

We go deeper into customer psychology than almost anyone in the industry

Plenty of agencies do upfront research. Where we tend to separate is how far we push past surface-level UX and into the psychology of why your customers buy.

Ever wondered why certain products fly off the shelves while others gather dust? There are rules behind that. Real patterns in consumer psychology that can be applied to make a lot of money.

Over the last seven years we've built a framework called The 11 Pillars of Buying Psychology to record what actually drives your customers to buy, and what quietly stops them. Every experiment we propose gets pressure-tested against those pillars before it goes live.

A lot of agencies optimize components. We optimize buying decisions.

Volume, and the math that makes it work

We aim to run over 100 experiments a year for each of our active clients. We operate at roughly a 30% win rate, which means about 30 wins every year compounding into your baseline.

A single test is a coin flip. Run 100 of them through a disciplined framework and the math tilts in your favor. From what we've seen, a lot of our competitors and internal teams only run 20 to 30 tests a year. We run two to three times that.

Customer Insight Report

TooTimid: Who's Buying and Why

Built from ~80 of your real customer voices across Trustpilot (7,565 reviews, 3.6/5 rating), Bizrate (8.3/10, 1,527 reviews), BBB, and Knoji. This is the kind of report we produce in the first two weeks of every engagement.

Your Customer

The defining trait: Your customers are anxious buyers. They chose TooTimid specifically because they're too uncomfortable to walk into a physical store. The brand name itself is the value proposition — this is the safe, discreet, non-intimidating way to shop for something they find embarrassing.

Who they are:

| Segment | Share of customers | Trigger |
|---|---|---|
| Couples looking to "spice things up" | ~35% | Relationship routine, desire for novelty, often one partner initiating |
| Solo self-care / first-time explorer | ~25% | Curiosity, self-discovery, TikTok or social media discovery |
| Repeat buyer restocking or upgrading | ~20% | Previous product broke or wore out, prompted by email or promotion |
| Gift buyer (for partner) | ~10% | Anniversary, Valentine's, birthday, spontaneous romantic gesture |
| Replacing a broken product | ~10% | Product stopped working, need replacement or upgrade |

The first-time explorer segment is the most underserved. They need reassurance above all else, plus low-commitment entry points (free gift, starter kits, educational content). They're the most likely to bounce without buying.

What Drives the Purchase

Seven psychological drivers, scored by frequency and strength in customer language:

| Driver | Strength | Customer Language |
|---|---|---|
| Security | 90/100 | "Discreet shipping." "Discreet packaging." "What you put down from your bank account." |
| Comfort | 80/100 | "Very easy and fast, simple process." "Easiest most pleasant experience I've ever had." |
| Curiosity | 55/100 | "I got a free toy for it being my first time ordering!" "Liberating...self care point of view." |
| Belonging | 45/100 | "It's our private ToysRus store." "My wife and I really enjoy this site!" |
| Progress | 35/100 | "Liberating...self care point of view." "Enhance your personal satisfaction." |
| Autonomy | 25/100 | "Vast selection...just about everything I'd ever want." |
| Status | 10/100 | Almost entirely absent. This is not an aspirational purchase. |

The phrase to build everything around: Security and Comfort together account for the overwhelming majority of positive review language. Every page on the site should answer two questions: "Am I safe here?" and "Is this going to be easy?"

What Stops Them Buying

| Rank | Objection | What They're Thinking |
|---|---|---|
| 1 | "Is this site legitimate?" | Scam Detector gives 70.4/100. First-time visitors from TikTok or social ads are especially skeptical. Need trust badges, years in business (since 2000), review count (7,500+), and secure checkout callouts above the fold. |
| 2 | "What if someone sees the package or billing?" | The #1 anxiety. Currently addressed by the brand but may not be visible enough on product pages and at checkout. Needs prominent, specific guarantees: plain brown box, no company name on exterior, billing shows as generic name. |
| 3 | "What if it's defective and I can't return it?" | No-return policy on adult products is a major friction point. Replacement policy exists but isn't well-understood. Needs clearer communication: "Defective? Free replacement, no questions asked." |
| 4 | "Prices seem high" | Competitors run aggressive promotions. TooTimid's free gift partially offsets this, but the value may not be clear until after purchase. Need visible value framing. |
| 5 | "I don't know which to choose" | First-time buyers face decision paralysis in an unfamiliar category. Educational content exists but may not surface at the right moment. Needs guided selling. |

How They Decide

| Trait | Level | Implication |
|---|---|---|
| Neuroticism | High | The defining trait. Anxious about being discovered, about package contents, about billing statements, about whether the site is safe. Every step needs abundant reassurance. |
| Conscientiousness | Moderate-High | Research before buying. Watch product videos. Read policies carefully. Give them everything on-site. |
| Extraversion | Low-Moderate | Private purchase behavior dominates. They would NOT walk into a physical store. Avoid social proof that feels exposing. |
| Agreeableness | Moderate-High | Warm and forgiving when things go right. Sharp and unforgiving when trust is broken. |
| Openness | Moderate | Curious enough to shop online for intimate products, but they chose the "safe" brand. Not early adopters. |

Design implication: High neuroticism + moderate conscientiousness = these customers need reassurance at every step. Don't get clever with checkout. Show security badges prominently. Explain exactly what will appear on their credit card statement. Show what the package looks like. Keep the experience simple and non-overwhelming. Avoid social proof tactics that feel exposing ("X people are viewing this") — these buyers don't want to feel watched.

Data Confidence: 7/10

Built from ~80 distinct customer voices across 6+ sources, with 7,565 Trustpilot reviews providing quantitative backing. Known gaps: no Reddit presence found, Yelp blocked, homepage couldn't be scraped (JS/Shoplift layer), no access to on-site product reviews yet. Confidence will increase significantly once we have access to on-site reviews, post-purchase survey data, and analytics.


Every experiment in this proposal traces back to something one of your own customers said. We don't test random changes. We test changes grounded in how your specific customers think, feel, and decide.

Frequently Asked Questions

How do you prevent experiments from cannibalizing each other?

We use a naming and intent convention that categorizes each part of the UI and cross-references it with the motivations of the customer. Someone looking for information on a PDP is on a different journey to someone flirting with purchasing on the same page, so we treat those as separate spaces.

When we scope an experiment, we stick to one defined part of the site with one defined intent. We can go surprisingly granular, and at that level of resolution it takes at least 18 months to exhaust all the combinations on a single store. So cannibalization is something we sidestep structurally, not something we manage case by case.

How do you accurately measure the uplift from experiments?

Every test is a controlled A/B. A percentage of your traffic sees the original (control), the rest sees the variation.

We measure a range of metrics. Conversion rate, revenue per visitor, average order value, bounce rate and a handful of supporting signals, all pulled directly from the testing platform.

We also run an A/A test on each store before we start. That tells us the natural variance of your pages. If we know your baseline conversion rate naturally swings by around 5%, we won't call a 5% lift a win. That gets declared flat. It's the only way to separate real movement from statistical noise.

We push for above 90% statistical confidence before calling a winner. For stores with large traffic we'll reach into the 95%+ range. For smaller stores 90% is our working floor.

What A/B testing platform do you use?

We default to Intelligems on most engagements.

Intelligems uses randomized participation, which means a single visitor can be part of three, four, five or more concurrent experiments without the results interfering. That matters because it lets us maintain a high testing velocity without the tests tripping over each other.

We've also used Shoplift extensively. It isolates audiences per experiment, which means the number of concurrent tests you can run is much lower and each one takes longer to resolve. We don't recommend it anymore for high-velocity programs.

We've run script-based tools as well (VWO, Optimizely, Convert) but for Shopify stores today, Intelligems is the best tool on the market.

What happens if you don't see wins for a couple of months?

We come to you and tell you.

We're incentivized by the wins, not the retainer, so a quiet stretch hurts us too. If we go a few months without a real win we'll suggest whatever we can think of to course-correct. If it still isn't landing, we'll be the ones who raise the idea of mutually ending the engagement. We're not precious about the contract. We're out to make big wins, and when the shared incentive isn't there, we'll say so.

How many experiments do you run at the same time?

We aim for up to 10 concurrent experiments and around 100 experiments per year. Our average win rate sits between 20 and 30%, which means 20 to 30 winners a year compounding into your baseline.

Can we still make content changes and tweak the website while experiments are running?

Yes. You don't need to coordinate with us.

We run GitHub Actions behind the scenes that pick up your changes and apply them to the live experiment so everything stays in sync. We aim to be relatively invisible in the background. You run your marketing, merchandising and content updates as normal.

Where is your team based, and who would we be working with directly?

Tim is based in Australia (AEDT). The rest of the team is distributed across Europe: WK, Kamila, Patryk, Borisa and Cormac.

Tim is the account lead and the escalation point for anything strategic or contractual. Kamila is the person you'll be talking to in Slack day to day, providing running updates and managing delivery. The bi-monthly sync call where we walk through new experiments and results is typically with Kamila and WK (our co-founder and lead engineer).

Can we have access to your designers and developers?

Yes. We encourage every client to connect with us on Slack. When you need something from a designer, developer or analyst, you can reach them directly in the channel.

Do you do work outside of A/B testing?

Yes. Custom Shopify app development, headless builds, custom themes, international expansion, integrations and more.

That said, the point of this engagement is to improve your revenue per visitor. When a request comes in that's outside CRO scope, we tend to package it as a separately scoped piece of work so it doesn't interrupt the testing program.

What does the effort look like from your end?

Minimal.

| What | Time |
|---|---|
| Shopify and analytics access at kickoff | 10 minutes, one-off |
| Weekly Slack updates from us | 5 minutes to read |
| Review of experiments before launch | 15 to 20 minutes per week |
| Feedback on test designs (async) | 10 to 15 minutes per week |
| Bi-monthly sync call | 1 hour every 2 months |

We handle the research, design, development, QA, launch, monitoring, analysis, reporting and implementation of winners.

What does an honest uplift look like after 3 months?

Three months is roughly one full testing cycle. You'd expect the diagnosis to have surfaced 10 to 20 high-impact opportunities, 5 to 15 of those to have been tested, and 3 to 5 of those to have produced a measurable win.

In revenue terms, 3 months of testing on a store converting at 1.1% often lifts CVR into the 1.3 to 1.5% range, depending on traffic volume and the severity of the issues we find. The compounding doesn't really kick in until months 6 to 9, when the wins start stacking.

Can we talk to any of your clients?

Yes. Happy to put you on a call with Nate (Codeword), Charlie (Q30) or Casey (Marsh Wear). Let us know which vertical matches your questions best and we'll arrange the intro.