LLM-Based AI Platform

My Role

Design Architect

Duration

7 months

Tools

Figma, Sketch

Overview

/Challenge

  • Setting User Expectations: Users struggle to interpret the reliability of AI outputs, requiring clear visual differentiation between confident and uncertain responses, along with non-disruptive handling of hallucinations.

  • Balancing Control vs. Automation: Users want both intelligent defaults and manual steering, creating tension between simplicity and the need for features like prompt tuning, rephrasing, and citation control.

  • Context & Memory Management: Managing what the model "remembers" can be opaque, with challenges around visualizing past interactions, editing persistent context, and keeping users oriented.

  • Transparency & Explainability: Users often don’t understand why a result was shown, especially in search or chat; surfacing reasoning or relevance without adding cognitive load is a key hurdle.

  • Information Density & Layout: LLM responses tend to be verbose, making it difficult to balance completeness with readability, especially when showing sources, follow-ups, or structured answers.


/Solution

  • Design for Trust and Transparency: Use progressive disclosure, visual cues, and fallback patterns to clearly communicate AI confidence levels and gracefully handle errors.

  • Empower Users with Guided Control: Provide structured inputs, follow-up editing, and mode toggles that give users flexibility without introducing cognitive friction.

  • Simplify Interaction with Contextual Memory: Introduce threaded conversations, editable memory views, and lightweight personalization to make context feel visible and manageable.

  • Reduce Cognitive Load with Visual Hierarchy: Use chunking, typographic structure, and expandable UI elements to make long or dense responses easier to navigate.

  • Ensure Cross-Modal UX Consistency: Align patterns, metaphors, and response formats across chat, search, and tools to create a seamless, unified experience.

  • Test, Learn, and Iterate in Context: Ground design decisions in real-world usage studies, telemetry insights, and explainability testing to ensure solutions meet user needs.
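The trust-and-transparency pattern above can be sketched in code. The following is a minimal, illustrative TypeScript sketch of how a confidence score might map to a UI treatment (badge cue, progressive disclosure of sources, and a fallback prompt). The thresholds, names, and `ResponseTreatment` shape are assumptions for illustration, not the shipped design:

```typescript
// Hypothetical sketch: map a model confidence score (0..1) to a UI treatment.
type ConfidenceTier = "high" | "medium" | "low";

interface ResponseTreatment {
  tier: ConfidenceTier;
  badgeLabel: string;              // visual cue shown beside the answer
  expandSourcesByDefault: boolean; // progressive disclosure of citations
  showFallbackPrompt: boolean;     // e.g. a "double-check this answer" nudge
}

function treatmentFor(confidence: number): ResponseTreatment {
  if (confidence >= 0.8) {
    // Confident answers stay clean; citations remain one click away.
    return {
      tier: "high",
      badgeLabel: "High confidence",
      expandSourcesByDefault: false,
      showFallbackPrompt: false,
    };
  }
  if (confidence >= 0.5) {
    // Middling confidence: surface sources up front so users can verify.
    return {
      tier: "medium",
      badgeLabel: "Verify key details",
      expandSourcesByDefault: true,
      showFallbackPrompt: false,
    };
  }
  // Low confidence: sources open, plus an explicit fallback prompt.
  return {
    tier: "low",
    badgeLabel: "Low confidence",
    expandSourcesByDefault: true,
    showFallbackPrompt: true,
  };
}
```

Keeping this mapping in one place is what makes the cues consistent across chat and search surfaces: both modes render different components, but consume the same treatment object.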


Research

User Engagement

Personalization

Navigation

Competitive Research

Design

Results

  • Increased User Trust and Confidence: Transparent cues like citations and confidence indicators led to fewer complaints and higher engagement with AI-generated responses.

  • Better User Control Without Overload: Guided input tools and smart toggles empowered both novice and advanced users without overwhelming them.

  • Improved Task Completion and Flow: Context retention and smooth transitions across features helped users complete complex tasks more efficiently.

  • Reduced Cognitive Load: Chunked outputs, clean layouts, and clear hierarchy made dense AI responses easier to scan and understand.

  • Greater Consistency Across Modes: Unified patterns and shared UI elements made switching between chat and search feel seamless.

  • Faster Iteration and More Targeted Improvements: Usage data and feedback loops enabled rapid, focused UX refinements based on real user behavior.