A playground for exploring
complex systems — carefully.
Intentional explorations outside the constraints of a roadmap — testing how structure, language, and UX can make ambiguity understandable without oversimplifying it. The same design problems I work on professionally, at lower stakes.
An exploratory AI outfit guidance system built around a single question: how do you make AI feel helpful rather than authoritative in a domain with no objectively correct answers? The same trust design problems as NARC, at lower stakes.
A free environmental data map for tracking wildfire smoke, air quality, heat, and nearby facilities over time. The same design problem as Coverage Insight — how do you present complex, incomplete data to people making real decisions — with environmental signals instead of ATT&CK coverage.
These explorations are intentionally scoped. They're spaces for testing design judgment without the pressure to ship or optimize. What matters here isn't completeness — it's how systems are framed, how uncertainty is handled, and how trust is designed. The playground is where I pressure-test patterns I bring to production AI and data work.