Playground · AI-Assisted Guidance · Trust Design

Nefeli — dress with intention.

An exploratory AI outfit guidance system built around a single question: how do you make AI feel genuinely helpful — rather than authoritative — in a domain where there are no objectively correct answers?

Type
Personal side project
Focus
AI trust · Explainability · User control
Domain
Personal styling · Subjective guidance
Status
Live →
UI screenshot
Nefeli UI — outfit guidance with visible reasoning. Every suggestion includes a plain-language explanation of why, not just what.
What it is

A trust-first approach to AI guidance in a subjective domain.

Nefeli is an AI-assisted outfit guidance tool shaped by one design question: how do you make AI feel helpful rather than authoritative when there are no objectively correct answers? Most AI styling tools either overwhelm users with options, issue recommendations without explanation, or demand so much input that the effort outweighs the value.

Nefeli explores a different approach: calm guidance, contextual reasoning, and clear limits on what the AI claims to know.

The design question at the center
How does AI guidance stay supportive rather than prescriptive when the domain is inherently subjective — and when being wrong doesn't just waste a user's time but affects how they feel about themselves?
Design decisions
Visible reasoning over confident recommendations
Every suggestion comes with a plain-language explanation of why — what context it used, what it was optimizing for, what it doesn't know. This keeps users in control of how much weight to give the output.
User-set constraints come before AI preferences
Users can specify what matters to them — comfort, context, style direction — and the AI works within those constraints rather than asserting a taste. Assistance, not influence.
No scores, no rankings, no 'best' option
Comparative ranking in subjective domains creates false hierarchy. Nefeli presents options without implying one is objectively better — the framing is contextual, not competitive.
Honest uncertainty over false confidence
When the AI doesn't have enough context to make a confident suggestion, it says so — rather than hedging with qualified confidence that's ultimately meaningless.
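The four decisions above imply a particular shape for the guidance data itself. A minimal sketch of that shape in TypeScript — all names here are hypothetical, invented for illustration, not taken from Nefeli's actual codebase:

```typescript
// Hypothetical sketch of the trust patterns described above.

// User-set constraints the guidance must work within (assistance, not influence).
interface Constraints {
  comfort?: string;        // e.g. "prefer soft fabrics"
  context?: string;        // e.g. "outdoor wedding, evening"
  styleDirection?: string; // e.g. "minimal, muted colors"
}

// Every suggestion carries its reasoning, not just its content.
interface Suggestion {
  outfit: string;
  rationale: {
    contextUsed: string[];  // what context the suggestion drew on
    optimizedFor: string[]; // what it was trying to satisfy
    unknowns: string[];     // what it had no signal about
  };
}

// No scores, no rankings: either unordered options, or an honest
// statement that there isn't enough context to suggest anything.
type Guidance =
  | { kind: "options"; suggestions: Suggestion[] } // deliberately unranked
  | { kind: "insufficient-context"; missing: string[] };

function guide(constraints: Constraints): Guidance {
  // Honest uncertainty: if the occasion is unknown, say so rather
  // than hedging with qualified confidence.
  if (!constraints.context) {
    return { kind: "insufficient-context", missing: ["occasion"] };
  }
  return {
    kind: "options",
    suggestions: [
      {
        outfit: "linen shirt with tailored trousers",
        rationale: {
          contextUsed: [constraints.context],
          optimizedFor: [constraints.comfort ?? "general comfort"],
          unknowns: ["color preferences"],
        },
      },
    ],
  };
}
```

The discriminated union is the point: the type system makes "I don't know" a first-class answer, and there is simply no field in which to store a score or a "best" flag.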
Why this matters to my work

The trust problems in Nefeli are the same ones I work on in NARC — just at lower stakes. When analysts use NARC to evaluate AI-generated security findings, the same questions apply: Does the interface communicate the basis for the recommendation? Does it make the AI's limitations visible? Does it keep the human in control of interpretation?

Building Nefeli gave me a lower-stakes environment to prototype trust design patterns — provenance, explanation, constraint — that I then brought to production AI work. The domain doesn't matter; the design problems do.