Mingxiao's Blog
Some men think more than they read. Others read more than they think. Those who practice both, grow wise. Those who follow neither, remain ignorant.
— John Leland

Random Thoughts
042
Human input to LLMs often describes goals/desires instead of intentions. Similar to the "people wanted a faster horse and I gave them a car" idea, LLMs can be more helpful by gently challenging a human's comfort zone and then learning from the feedback, rather than fully deferring to their commands/preferences.
041
Learned from a conversation with Zoom's design manager: sometimes asking the right question is worth more than offering a couple of different solutions.
040
Although it plays out differently across products/domains, there is a dilemma between humans' low input bandwidth and LLMs' appetite for structured, detailed input to produce better output.
039
"Auto-generated meeting notes," "AI article summary," etc. should be highly subjective, depending on who the audience is. This is where personalized AI/memory/rich context are needed.
038
Cars were controlled by a tiller before the steering wheel was invented, and the wheel soon became a much more practical way to precisely guide a car. If the chatbot is the tiller of AI, then as AI requires more and more control, what will the steering wheel look like?
037
From "human-in-the-loop" to "human-on-the-loop." AI product designers are shifting from designing a tool to designing a type of partnership. And such partnership has a lot in common with the one between humans.
036
Something stood out from a recent conversation with the Clubhouse crew: the way to balance speed and quality is by clearly defining priorities. This also ties into the whole "taste" idea people have been talking about lately: it's about investing heavily only in the things that are worth building.
035
Demoed a couple of my early AI-native app prototypes in an interview, and the fact was, 80% of them never made it to production. It's hard to come up with a fixed plan when building AI-native products. It's more like playing with possibilities until something eventually works out.
034
Startup teams are meant to be fast, but taste isn't the antithesis of speed. In fact, a clear sense of taste is a prerequisite for maintaining that speed, because it provides a consistent lens through which different teams can make decisions.
033
"Taste" in a product context means the product does more than just deliver features; it also conveys a unique philosophy, or "vibe." It's not a specific design style, but rather an emergent quality that results from cross-team collaboration.
032
Grammarly apparently secured another billion in funding at the end of May. Even though many people have been downplaying its prospects, thinking its grammar correction isn't on par with the latest models from ChatGPT and Claude, its ability to seamlessly integrate into the vast majority of users' existing writing workflows still gives it a huge advantage.
031
"Workbench" is a key concept in my design for Kimi's Research agent. It functions as an environment that enhances the utility of other tools, provides a structured space for the AI to operate, and allows for a "human-on-the-loop" approach, rather than requiring a "human-in-the-loop."
030
Given the uncertain boundaries of current model capabilities, the design process sometimes starts from existing technological strengths to explore potential application scenarios, rather than beginning with user needs and then working backward to devise a technical solution.
029
Reasoning models sometimes over-analyze and overthink simple prompts, which wastes tokens and user time. The future direction might not be about differentiating the underlying logic of reasoning vs. non-reasoning models, but rather having the model autonomously decide whether to engage in deep reasoning, and how deep to go, based on the specific prompt.
028
Often the long text reports delivered by Deep Research are nearly unreadable, and their marginal utility diminishes quickly. We need to consider how to improve the readability, or think even deeper about the format of such high-volume information deliverables. Should it be text with visuals? Data visualizations? Interactive web pages? Or a podcast, like Gemini is doing?
027
Deep Research has the ability to process info from websites that might be scattered, noisy, unstructured, or even incomplete to build a more exhaustive discussion. This both offloads tedious work and enhances people's capabilities.
026
Today's user researchers can delegate mid-range research tasks for user studies and product development to AI tools so that they can focus more on macro research on strategy and business, and micro research on technical usability and micro-interactions.
025
Meaning is the new interface.
024
Quote Anthropic here: "Consistently, the most successful implementations weren't using complex frameworks or specialized libraries. Instead, they were building with simple, composable patterns." If we look past the hype, workflows can often do the job just fine; full agents aren't always necessary.
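To make the "workflow, not agent" distinction concrete, here's a minimal sketch in plain Python. The `call_llm` function is a hypothetical stub standing in for any real model API; the point is that the steps are a fixed, composable chain of functions rather than an open-ended planning loop.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model API call (assumption, not a real library).
    return f"<response to: {prompt}>"

def summarize(text: str) -> str:
    return call_llm(f"Summarize: {text}")

def translate(text: str, lang: str) -> str:
    return call_llm(f"Translate to {lang}: {text}")

def workflow(document: str) -> str:
    # A deterministic prompt chain: each step's output feeds the next.
    # No agent loop, no tool-selection logic — just composition.
    summary = summarize(document)
    return translate(summary, "French")

print(workflow("Q3 sales report..."))
```

Because every step is an ordinary function, the chain is easy to test, log, and reorder, which is much of what "simple, composable patterns" buys you.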
023
I've always believed that replacing humans isn't the point of AI. That's why I'm much more interested in the idea of collaborative UX — where AI interacts with me like a teammate, similar to how group members would edit a Google Doc together on a university project.
022
Maybe one day LLMs will be incredibly powerful, but agents built on them will likely still seek guidance and help from users. So, something worth thinking about is how to seamlessly integrate such messages from agents into the system design, or even into the user's own workflow. Much like how a Thursday night design critique session becomes an integral part of my job.
021
I'll quote this line on LangChain's blog: "Building trust with background agents: moving from 'human-in-the-loop' to 'human-on-the-loop.'" If we break that down further, methods for building trust include: intuitively showing the steps and process, allowing users to intervene and make corrections, and enabling users to pause the agent and provide feedback.
020
One of my design goals for Kimi's browser agent was to enable users to multitask and do their own jobs while the agent runs in the background. This will truly boost productivity and give the agent more leeway time-wise. But most users in testing still tend to watch the agent execute — partly out of curiosity, and partly because they haven't yet built up enough trust in its capabilities.
019
Streaming conversations (chatbots) minimize the technical divide because users can interact directly and continuously with an LLM. They can observe its reasoning and actions in real-time, and provide timely guidance or intervention. This also enables the agent to proactively ask for help. Although this is hard to achieve with non-streaming conversations, the latter are often easier to integrate into existing, non-AI-native products.
018
My work at Moonshot focused on envisioning the UX for AI-native products. But for every product there's an opportunity: we can examine each step in existing user flows, identify those complex or frustrating moments where users might feel helpless, and then explore how AI or an agent could step in to lighten that burden.
017
Traditional UX primarily focuses on mapping out user flows and providing assistance at each step, with users being the ones actually doing the work. In the context of AI agent UX, the agent takes on actual tasks, shifting the focus of UX from creating a supportive environment to designing a true execution partner for the user.
016
Really like the point from Atlassian's design principle "Matching purpose and feel familiar." Quote: "Although there's a persistent visual and behavior similarity, they adapt to people's devices and contexts, rather than being consistent for the sake of consistency."
015
Consistency isn't the only benchmark for excellent design; it should be a means to enhance the experience, not an end in itself. Don't let the pursuit of consistency overshadow the goal of creating efficient, effective, and user-friendly interfaces.
014
Establishing a set of copywriting guidelines in a company's early stage is crucial. Beyond standardizing UI copywriting, it helps convey the company's vibe. A good rule of thumb for copywriting: if you wouldn't say it to a user face-to-face, it doesn't belong in the UI either.
013
Two perspectives of optimizing unsubscribe flow: 1. by reminding users of the value they're getting or what they'll miss out on; 2. for those who are set on leaving, by taking the opportunity to better understand their needs and preferences.
Always offer a graceful goodbye after they've unsubscribed or uninstalled. Using dark patterns or guilt-tripping users to make them stay is never the way to go.
012
How a product says goodbye to users should be just as important as how it welcomes them. This is especially true for products with low uninstall friction, like Kimi Browser Extension — in such cases, designing a dynamic unsubscribe or uninstall survey offers clear benefits.
011
Ultimately, the goal of designing an error handling system is to build user confidence, fostering a sense of trust, control, and satisfaction.
010
AI-native products often run into undefined errors. So it's vital to at least clearly communicate the error state and ensure users don't lose their progress.
009
Using chunking and progressive disclosure can significantly reduce cognitive load for users, helping them maintain clarity during multi-step tasks and thereby boosting efficiency and accuracy. However, you shouldn't break things down too granularly, and it's crucial to maintain a clear connection between related fields.
008
Kimi 1.0 used Messenger-style chat bubbles. This was essentially a metaphor to familiarize users with the chatbot format and to cultivate a "partner" or "assistant" vibe. But this form isn't sustainable in the long run because the most valuable answers or deliverables won't always be just a block of text.
007
The core of product design is managing complexity. Products inevitably get more complex, but LLMs will also keep evolving. Early on, model limitations might expose more complexity to users. But as these models get more powerful, we should shed those design "crutches" and even let probabilistic systems make some judgments for the user.
006
User trust in a product, much like trust between people, is built gradually. Features that depend on that trust shouldn't be dumped on the user during their first experience.
005
We often discussed "human touch" or "human-like feel" in the early stage of Kimi. I believe a good human-like feel should actually emphasize a clear sense of boundary.
Whether an AI is framed as an "intelligence assistant" or a "smart tool," different applications should introduce different boundaries. How those boundaries are perceived and defined is shaped by the duration of use and the specific context of interaction.
004
Maybe product design in the future won't just be about solving a problem; it can be about shaping culture. The richness, diversity, and meaning in the product people use will directly influence how rich, diverse, and meaningful their own lives can be.
003
Strive to be the one in the company who sets the bar for production-level quality.
002
Focus, quality, speed: Focus on one or two features, build them well, ship quickly, and then worry about the rest.
001
A strong product narrative is effective marketing. It should first articulate why the product needs to exist and the value it creates, and only then talk about what it is and what it does.

Good Reads
Making stateless machines recall the relevant bits as they go.
Making stateless machines understand goals and be curious.
Having conversations with computers is a new way to amplify our thoughts.
An unrated manifesto on interfacing with intelligence.
Nick Jones on creative partnership, experimental interfaces, and why we may still be in the silent film era of the web.
Design systems are becoming operating systems for design orgs.
A very practical book about presentation and public speaking skills, by Jay Sullivan.
This pattern allows developers to expose a single tool interface to an agent while encapsulating modular tool routing logic behind it.
Quality is a choice we can make everyday.
Even if they're aesthetically pleasing, you can't substitute important information with sparkles.
Where we are going, we don't need roads. — Doc Brown, Back to the Future
Consistently, the most successful implementations weren't using complex frameworks or specialized libraries. Instead, they were building with simple, composable patterns.
Learn about spreadsheet UX for batch agent workloads, Generative UI, and collaborative UX with agents.
I'd argue that in order for agentic systems to really reach their potential, this shift toward allowing AI to work in the background needs to happen.
I believe that in coming years, Human-Agent Interaction will also become a key area of research.
The future of digital experience is here — but it's being minced into microscopic use cases.
It is time to evolve AI tools beyond prompt-based interfaces and consider new mental models.
An ever-updating collection of new approaches to UI/UX in the era of AI.
When you think about consistency, you’re thinking about the product. When you’re thinking about current knowledge, you’re thinking about the user. They are two sides of the same coin.
"Your most unhappy customers are your greatest learning assets." — Bill Gates
An analysis of uncertainty avoidance and Amazon's website in various countries
Exploring how to design interfaces that feel intuitive and forgiving, even when users make mistakes.
The digital behavior traits of individualism and collectivism.

Meaningful Watches
Chris Pedregal, Founder & CEO, Granola
Karri Saarinen, Co-founder & CEO, Linear
Subham Agarwal, Director of Product, Ramp
Cheechee Lin, Product Designer, Instagram
Joel Lewenstein, Head of Product Design, Anthropic
Raphael Schaad, Head of Calendar, Notion
Google DeepMind
Andrew Ng, AI Fund
Henry Modisett, Head of Design, Perplexity.ai
Sam Whitmore, CEO & Co-founder, Jason Yuan, CDO & Co-founder, New Computer
Andy Allen, Founder & Software Designer, !Boring
Jiaona Zhang, Chief Product Officer, Linktree
Peter Yang, Product Lead, Roblox
Dare Obasanjo, Lead Product Manager, Meta
Dr. Fei-Fei Li, Co-Director, Stanford Human-Centered AI Institute