Reimagining AI Assistants: From Text-Only to Interactive Experiences
Artificial Intelligence (AI) assistants have rapidly evolved from simple text-based helpers to sophisticated interfaces capable of transforming how users interact with SaaS applications. Driven by advancements in large language models (LLMs) like Google Gemini, these assistants are increasingly integrated into core workflows, offering new possibilities for engagement and productivity. However, the challenge lies in moving beyond the limitations of traditional text interfaces to create rich, context-aware, and visually engaging interactions that truly differentiate products in a crowded marketplace.
The Dual Approaches to Building AI Assistants
Mathias Biilmann’s insightful analysis in “Introducing AX: Why Agent Experience Matters” highlights two primary strategies for deploying AI assistants:
- Closed Approach: Embedding conversational AI directly within a specific SaaS product. Examples include Zoom’s AI Companion, Salesforce’s Einstein, and Microsoft’s Copilot. These assistants are tightly integrated, optimizing workflows within a single ecosystem.
- Open Approach: Leveraging external conversational agents like ChatGPT, Claude, and Gemini via protocols such as the Model Context Protocol (MCP). This approach allows AI to connect seamlessly across multiple third-party SaaS products, broadening access but also raising concerns about user experience consistency.
While both approaches unlock significant capabilities—such as instant data retrieval and multi-tool automation—they also introduce a critical risk: commoditization. When interactions become purely text-based and detached from the product’s unique design system, differentiation diminishes. Instead of standing out through tailored experiences, products risk becoming interchangeable commodities that rely solely on textual prompts for interaction.
The Limitations of Text-Only Interfaces
Despite the impressive linguistic abilities of LLMs, humans process information far more readily when it is presented visually. Andrej Karpathy’s review of 2025’s LLM developments emphasizes that while text is optimal for machine understanding, humans favor visual cues. Similarly, Maximillian Piras highlights that complex interaction patterns, such as multi-step workflows, are often poorly served by chat interfaces alone.
This disconnect underscores a fundamental challenge: text-only interfaces constrain how users consume information and execute complex operations. They struggle to support large datasets, dynamic workflows, or multi-faceted decision-making processes that demand visual clarity and interactive feedback.
The Rise of Generative UI: Opportunities and Challenges
One promising trend is the development of generative user interface (UI) capabilities, where AI autonomously creates visual components based on user prompts. Imagine an AI that generates dashboards, forms, or flow diagrams dynamically—tailored specifically to the task at hand. While this innovation promises more fluid and personalized experiences, it also risks homogenizing design aesthetics when driven solely by training data from common design platforms.
Dorian Tireli notes that without deliberate curation, AI-generated UIs tend to reinforce mediocrity—products start looking similar because they originate from the same training datasets optimized for visual appeal rather than differentiation. This highlights the need for integrating unique design system knowledge into AI assistants to preserve brand identity and usability standards.
The Case for Design System Integration
To truly elevate AI-assisted interfaces, products must embed their unique design systems—components, patterns, and guidelines—within the AI’s knowledge base. Instead of defaulting to plain text responses or generic visuals, AI can render rich interfaces aligned perfectly with a company’s branding and usability principles. This transformation turns static chat boxes into dynamic viewports populated with interactive elements that adapt based on user prompts.
The latest advances in protocols like MCP facilitate this integration by enabling AI models to access and manipulate product-specific design assets programmatically. Using tools like Figma, designers can craft templates and components that the AI can instantiate in real-time—delivering personalized, contextually relevant interfaces at scale.
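To make this concrete, here is a minimal TypeScript sketch of what such an integration could look like: a tool handler, of the kind an MCP-style server might expose, that returns a design-system component specification instead of free text, which the host application then instantiates. Every name here (ComponentSpec, ContactCard, FlowBuilder, renderForIntent, the token values) is a hypothetical illustration, not a real protocol or product API.

```typescript
// Hypothetical design-system registry that an MCP-style tool server could
// expose to an assistant. Component names and props are illustrative only.
type ComponentSpec = {
  component: "ContactCard" | "DataTable" | "FlowBuilder";
  props: Record<string, unknown>;
};

// Design tokens the renderer resolves against the product's own theme.
const tokens = { primaryColor: "var(--brand-primary)", spacing: 8 };

// A tool the assistant can call: given a task intent, return a spec for a
// branded component rather than a plain-text answer.
function renderForIntent(intent: string, data: unknown): ComponentSpec {
  switch (intent) {
    case "compare-records":
      return { component: "ContactCard", props: { records: data, layout: "side-by-side", tokens } };
    case "build-workflow":
      return { component: "FlowBuilder", props: { steps: data, tokens } };
    default:
      return { component: "DataTable", props: { rows: data, tokens } };
  }
}

// The host app receives the spec over the protocol and instantiates the
// matching component from its design system.
console.log(renderForIntent("compare-records", [{ id: 1 }, { id: 2 }]));
```

The key design choice is that the assistant never emits raw markup: it emits a reference into the product’s component library, so branding and usability standards are enforced by the renderer rather than by the model.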
Three Modes for Enhanced AI Experiences
Mode 1: Rich Output – Visual Data Presentation
In complex SaaS workflows—such as data analysis or operational tasks—users benefit from visual summaries rather than lengthy text explanations. For instance, when asked to merge duplicate contacts, instead of a prompt asking “Which record should be primary?” the AI displays contact cards side-by-side with metadata and action buttons. This approach enhances scannability and speeds up decision-making.
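As a rough sketch of what that rich output might look like on the wire, consider the duplicate-merge example modeled as a structured payload. The type and field names below are assumptions for illustration, not a real product API:

```typescript
// Illustrative shape of a rich response for the duplicate-merge example.
interface ContactSummary {
  id: string;
  name: string;
  email: string;
  lastActivity: string; // ISO date of the most recent touchpoint
}

interface MergePrompt {
  kind: "merge-decision";
  candidates: [ContactSummary, ContactSummary]; // rendered side by side
  actions: { label: string; selects: string }[]; // buttons, not typed replies
}

// Instead of asking "Which record should be primary?" in prose, the
// assistant emits a payload the UI renders as contact cards with actions.
function buildMergePrompt(a: ContactSummary, b: ContactSummary): MergePrompt {
  return {
    kind: "merge-decision",
    candidates: [a, b],
    actions: [
      { label: `Keep ${a.name}`, selects: a.id },
      { label: `Keep ${b.name}`, selects: b.id },
    ],
  };
}
```

Because the user answers by clicking an action rather than typing, the response is unambiguous by construction.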
Mode 2: UI as Input – Structured Interaction Components
Effective interaction begins at input. Replacing traditional text prompts with structured UI components—like query builders or form selectors—reduces ambiguity and accelerates task completion. For example, instead of typing “Show me high-activity leads in California,” a user interacts with dropdowns or sliders that specify parameters visually. This shift minimizes errors and improves overall efficiency.
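A small sketch makes the contrast tangible: the form state collected from dropdowns and sliders serializes into an unambiguous query, so the assistant never has to guess what “high-activity” means. Field names, thresholds, and the schema are illustrative assumptions:

```typescript
// A structured filter the UI collects via dropdowns and sliders, replacing
// the free-text query "Show me high-activity leads in California".
interface LeadFilter {
  state: string;            // from a dropdown of US states
  minActivityScore: number; // from a slider, e.g. 0-100
  createdWithinDays?: number;
}

// Serializing the form state yields a precise query instead of prose.
function toQuery(filter: LeadFilter): string {
  const parts = [
    `state = '${filter.state}'`,
    `activity_score >= ${filter.minActivityScore}`,
  ];
  if (filter.createdWithinDays !== undefined) {
    parts.push(`created_at >= now() - interval '${filter.createdWithinDays} days'`);
  }
  return `SELECT * FROM leads WHERE ${parts.join(" AND ")}`;
}

console.log(toQuery({ state: "CA", minActivityScore: 70 }));
// SELECT * FROM leads WHERE state = 'CA' AND activity_score >= 70
```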
Mode 3: Co-Creation – Workflow as a Shared Workspace
Real-world SaaS scenarios often involve multi-step processes requiring ongoing refinement. Here, the AI transitions from mere responder to collaborative workspace facilitator. Consider designing a marketing automation workflow: the user initiates “Create campaign for unactivated trial users,” prompting the AI to render a flow builder interface. The user can then modify individual steps directly, dragging components and adjusting parameters, and the AI validates these adjustments instantaneously (a minimal validation sketch follows the list below). This shared workspace unlocks several capabilities:
- Fluid Modality Switching: Users shift seamlessly between direct manipulation and textual commands within the same workspace—for example, adjusting flow components visually or updating criteria via inline prompts.
- Proactive Cross-Tool Suggestions: The AI surfaces insights by integrating data from connected tools—such as analytics revealing mobile usage patterns—and presents recommendations within the workflow context without additional prompts.
- Delegation of Subtasks: Complex tasks are divided among human and AI collaborators—e.g., drafting email content or updating templates—allowing users to focus on strategic decisions while automating routine steps.
- Contextual Refinement: Inline prompts anchored to specific UI elements enable precise modifications—for instance, instructing the AI to add a survey step based on user hover actions in a flow diagram.
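As promised above, here is a minimal sketch of the shared workflow model and the instant validation that makes co-creation safe. The step kinds, event names, and rules are hypothetical, chosen only to show the shape of the idea:

```typescript
// Minimal sketch of a shared workflow model: the user edits steps via
// drag-and-drop, the assistant validates after every change.
type Step =
  | { kind: "trigger"; event: string }
  | { kind: "email"; templateId: string; delayHours: number }
  | { kind: "survey"; questionCount: number };

// Validation runs on each edit, whether it came from the user or the AI,
// returning problems the UI can anchor to the offending step.
function validate(steps: Step[]): string[] {
  const problems: string[] = [];
  if (steps.length === 0 || steps[0].kind !== "trigger") {
    problems.push("A workflow must start with a trigger step.");
  }
  steps.forEach((step, i) => {
    if (step.kind === "email" && step.delayHours < 0) {
      problems.push(`Step ${i + 1}: delay cannot be negative.`);
    }
  });
  return problems;
}

// "Create campaign for unactivated trial users" might first materialize as:
const draft: Step[] = [
  { kind: "trigger", event: "trial_started_no_activation" },
  { kind: "email", templateId: "welcome-back", delayHours: 24 },
];
console.log(validate(draft)); // [] => the flow builder shows no warnings
```

Because both human edits and AI suggestions pass through the same validator, the workspace stays consistent no matter which collaborator made the last change.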
The Role of Design Skills in an AI-Enhanced Future
Building these advanced interactions does not require reinventing skill sets; it leverages core design principles refined over decades. Deep user empathy remains paramount: understanding which tasks benefit from conversational versus visual interfaces ensures effective deployment. Systems thinking is increasingly vital as assistants span multiple tools and data sources.
Technical literacy is equally important: knowledge of APIs, data models, and backend logic enables designers to create adaptable templates that the AI can manipulate reliably. As automation becomes more prevalent in product design workflows—and as generative UI matures—the ability to craft flexible yet cohesive systems will distinguish successful products from their competitors.
The Path Forward: Differentiation Over Commoditization
The proliferation of AI assistants presents a strategic crossroads: will your product leverage these capabilities to deliver unique, visually compelling experiences or fall into the trap of generic text-based interactions? The answer hinges on your commitment to integrating your brand’s design system into the AI’s architecture—and on adopting multimodal interfaces that combine visual richness with conversational flexibility.
The evolution toward interactive AI-driven UI isn’t just an enhancement; it’s a necessity for differentiation in today’s competitive landscape. By moving beyond the text trap and embracing rich, context-aware interactions powered by design system integration, product teams can craft experiences that are not only smarter but also more intuitive and engaging for users worldwide.
In Closing
The future of AI-assisted interfaces lies in their ability to blend linguistic prowess with visual interactivity—creating seamless workflows that adapt dynamically to user needs. As protocols like MCP mature and design systems become integral parts of intelligent assistants’ knowledge bases, organizations have an unprecedented opportunity to redefine user engagement standards. Embrace these innovations now to ensure your product stands out—not as a commodity but as an exemplar of thoughtful digital craftsmanship.
