Role
Lead UX designer responsible for the end-to-end experience of an AI-assisted detection workflow, including access states, creation flows, and results interpretation.

Scope
Defined how AI-generated content is presented, attributed, and integrated into analyst workflows. Partnered with product and engineering to align UX decisions with technical constraints and rollout strategy, and led post-launch refinement to ensure the experience was demo-ready, trustworthy, and safe to scale.

NARC is an AI-assisted workflow that allows users to submit text or URLs and receive structured detection insights that can be added to a security knowledge base and used by analysts. My role was to design the UX that made AI-generated outputs understandable, attributable, and safe to act on—balancing trust, clarity, and evolving technical constraints in a high-stakes enterprise environment.

Impact at a Glance

  • Enabled confident demos and sales conversations for an AI-assisted feature in a high-stakes enterprise product

  • Introduced explicit provenance cues that distinguish AI-generated from user-owned content

  • Supported a safe, feature-flagged rollout with fewer late-stage UX issues and rework

  • Established UX patterns for AI-generated artifacts reused across the platform

Problem

Enterprise security teams increasingly rely on AI-assisted tools to interpret large volumes of unstructured information and turn it into usable detections. NARC was created to support this workflow by allowing users to submit text or URLs and receive AI-generated detection insights that could be incorporated into a shared knowledge base and used by analysts.

The UX challenge was trust. Analysts needed to understand where AI-generated content came from, what they owned, and whether it was safe to act on—while avoiding dense, overwhelming result displays. In parallel, many customers didn’t have access to the feature, requiring a non-enabled experience that communicated value without misleading users. From a business standpoint, the experience had to be demo-ready, credible, and resilient to ongoing technical changes to support a controlled rollout and validate commercial viability.

The Challenge

Establishing trust in AI-generated outputs
Analysts needed to understand where AI-generated content came from, what it represented, and whether it was safe to act on—without overstating certainty or obscuring limitations.

Ambiguous ownership and discoverability
AI-created artifacts (references, detections, relationships) didn’t always behave like user-created objects, creating confusion around where they lived in the system and how users should interact with them.

Designing for users without access
Many customers didn’t have the feature enabled. The experience needed to communicate value clearly without implying availability or relying on fragile mock content that could quickly become outdated.

Dense, relationship-heavy results
Results combined multiple object types and relationships. The challenge was making these outputs scannable and interpretable without hiding important context or overwhelming users.

Evolving technical constraints
Backend and processing changes affected timing, data shape, and system behavior. UX decisions had to adapt without breaking user expectations or introducing misleading states.

Design Decisions & Tradeoffs

Designed for trust over confidence

Emphasized provenance and context rather than presenting AI outputs as definitive, reducing the risk of over-trust while keeping results actionable.

Separated feature value from feature access

Created a clear non-enabled experience that explains capability without simulating live results, avoiding user confusion and maintenance risk.

Treated AI-generated outputs as first-class objects

Ensured system-created artifacts behaved consistently with user-created objects across the product, reinforcing ownership and discoverability.

Optimized dense results for sense-making

Grouped and paginated content to support fast scanning without hiding important relationships.

Aligned UX with system constraints

Adapted as backend changes shifted the timing and visibility of results in the UI, designing for AI latency, abandonment, and user control without breaking trust or flow.

What Changed Because of This Work

20,000+
Procedure Sightings created using NARC-extracted intelligence
  • Turned AI results into trustworthy, actionable product artifacts
  • Enabled safer rollout and more confident demos for an AI feature
  • Established reusable UX patterns for future AI-assisted workflows

Key Learnings

  • AI UX succeeds when trust, not novelty, is the goal
  • Access states and edge cases shape adoption more than features
  • Systems thinking beats screen-by-screen optimization
