Dell Technologies · E-Commerce · A/B Test · Case Study

Dell Accessories SNP —
hypothesis-driven, outcome-validated.

Dell's Accessories homepage was a high-traffic entry point that wasn't converting. Rather than redesigning on instinct, I structured the work as a formal A/B test — defining what 'better' meant before designing anything, so the results would be unambiguous.

Company
Dell Technologies
My role
Solo Lead Product Designer
Scope
UX · Hypothesis · A/B Test · Delivery
Timeline
2021 – 2022
Recipe A vs. Recipe B — side by side
The two recipes side by side. Every Recipe B difference is deliberate and maps directly to a tested hypothesis rather than intuition.
↑ RPV
Revenue Per Visitor — financial improvement under Recipe B
↑ AOV
Average Order Value — driven by system bundle sales
↓ Exit
Fewer users left without engaging under Recipe B
The problem

High traffic, underperforming conversion, no clear theory for why.

The Dell Accessories homepage had accumulated design decisions over time without a coherent strategy. It was getting significant traffic but converting poorly. Nobody had a confident theory for what was wrong — which meant any redesign without a proper test structure would just be replacing one set of guesses with another.

Before touching the design, I defined success metrics across three categories: financial (Revenue Per Visitor, Average Order Value), engagement (time on page, scroll depth), and customer experience (masthead usage, exit rate). Recipe B would need to improve across all three to be validated.
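
For concreteness, the financial and CX metrics reduce to simple arithmetic. A minimal sketch in Python, using toy session records; the field names and values are illustrative, not Dell's instrumentation:

```python
# Illustrative only: toy session records. Field names are hypothetical,
# not Dell's actual analytics schema.
sessions = [
    {"revenue": 0.0,   "ordered": False, "exited_without_engaging": True},
    {"revenue": 129.0, "ordered": True,  "exited_without_engaging": False},
    {"revenue": 0.0,   "ordered": False, "exited_without_engaging": False},
    {"revenue": 899.0, "ordered": True,  "exited_without_engaging": False},
]

visitors = len(sessions)
revenue = sum(s["revenue"] for s in sessions)
orders = sum(s["ordered"] for s in sessions)
exits = sum(s["exited_without_engaging"] for s in sessions)

rpv = revenue / visitors                     # Revenue Per Visitor
aov = revenue / orders if orders else 0.0    # Average Order Value
exit_rate = exits / visitors                 # share who left without engaging

print(f"RPV ${rpv:.2f} | AOV ${aov:.2f} | exit rate {exit_rate:.0%}")
```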

Why the structure mattered
When you define metrics after a redesign, you're selecting the ones that support the decision you've already made. Defining them first removes that option. The test either validates the hypothesis or it doesn't — and you learn something either way.
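
A decision rule defined in advance can be made literal. The sketch below assumes per-visitor revenue samples for each recipe and a conventional significance threshold; the data, the threshold, and the choice of Welch's t-test are illustrative, not the analysis Dell actually ran:

```python
# A sketch of a pre-registered decision rule, not Dell's actual analysis.
# Revenue samples, thresholds, and the choice of test are all hypothetical.
from statistics import mean
from scipy import stats

control_rpv = [0.0, 129.0, 0.0, 0.0, 899.0]    # Recipe A: revenue per visitor
variant_rpv = [0.0, 249.0, 0.0, 1299.0, 0.0]   # Recipe B: revenue per visitor

# Welch's t-test: treat the RPV lift as real only if it is unlikely
# to be sampling noise.
_, p_value = stats.ttest_ind(variant_rpv, control_rpv, equal_var=False)

# The rule is fixed before any screen is designed: Recipe B is validated
# only if every pre-registered category moves the right way. Engagement
# and CX checks would come from the same instrumentation (omitted here).
financial_ok = p_value < 0.05 and mean(variant_rpv) > mean(control_rpv)

print("hypothesis validated" if financial_ok else "hypothesis not validated")
```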
How I approached it

Audit against the metrics. Build hypotheses. Design to test them.

I audited the existing page against each metric category — mapping where the current design was likely creating friction, missing intent signals, or failing to surface products users were ready to buy. Each identified problem became a specific, testable hypothesis. Recipe B was designed to test those hypotheses.

Hypothesis map
Annotated audit: friction points traced to hypotheses, and hypotheses to Recipe B changes
The page audited against each metric category. Each problem annotated with the hypothesis it produced and the corresponding Recipe B change.
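
The map itself is visual, but its shape is simple: one row per audit finding, tracing friction to hypothesis to change to the metric that judges it. The entries below are invented for illustration, not the real audit:

```python
# Invented entries sketching the shape of the hypothesis map: each audit
# finding traces to one hypothesis, one Recipe B change, and the metric
# that will judge it.
hypothesis_map = [
    {
        "friction": "accessories shown without system context",
        "hypothesis": "many visitors arrive with full-system purchase intent",
        "recipe_b_change": "surface system bundles at the top of the page",
        "judged_by": "AOV",
    },
    {
        "friction": "hierarchy unclear on arrival",
        "hypothesis": "users exit because no path matches their intent",
        "recipe_b_change": "restructure hierarchy around arrival intent",
        "judged_by": "exit rate",
    },
]
```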

One hypothesis drove the biggest changes: users visiting the Accessories homepage were often ready to buy systems — full setups including laptop and peripherals — not individual accessories. The existing page didn't support that intent at all. Recipe B surfaced system bundles prominently and restructured the hierarchy around how users actually arrived.

Recipe A — original page with annotated friction points
The control page. Friction annotated: unclear hierarchy, accessories without system context, no path for users with system purchase intent.
Recipe B — redesign with hypothesis-driven changes annotated
Each change annotated against the hypothesis it tests. System bundles surfaced; hierarchy restructured around user intent.
Key decisions
Metrics defined before design began
Agreeing on what 'better' means before a single screen is designed removes the ability to rationalize after the fact. The data either validates the hypothesis or it doesn't.
Required improvement across all three metric categories
A redesign that improved revenue per visitor but pushed users away faster would have been a short-term gain with long-term costs. All three categories — financial, engagement, CX — had to move together. That constraint shaped design decisions throughout.
Surfaced system bundles based on the AOV hypothesis
The bet that users were ready to buy systems, not just accessories, was the biggest change in Recipe B. The AOV improvement confirmed it — and changed how the team thought about the purpose of the page going forward.
What I took from this
Metrics defined before design are worth more than metrics collected after. One is testing a hypothesis. The other is confirming a decision you've already made.
A/B testing is a design skill. Structuring a test that actually answers the question you're asking — with the right metrics and the right hypotheses — requires as much thought as the redesign itself.
Intent signals in the data are worth pursuing. The system bundle hypothesis came from looking at what users were actually doing — not what the product team assumed they were there for.