{"id":528,"date":"2025-08-18T13:00:00","date_gmt":"2025-08-18T13:00:00","guid":{"rendered":"http:\/\/www.guupon.com\/?p=528"},"modified":"2025-08-21T11:30:58","modified_gmt":"2025-08-21T11:30:58","slug":"beyond-the-hype-what-ai-can-really-do-for-product-design","status":"publish","type":"post","link":"http:\/\/www.guupon.com\/index.php\/2025\/08\/18\/beyond-the-hype-what-ai-can-really-do-for-product-design\/","title":{"rendered":"Beyond The Hype: What AI Can Really Do For Product Design"},"content":{"rendered":"

<article>\n<header>\n<h1>Beyond The Hype: What AI Can Really Do For Product Design<\/h1>\n<address>Nikita Samutin<\/address>\n<p> 2025-08-18T13:00:00+00:00<br \/>\n 2025-08-21T11:03:55+00:00<br \/>\n <\/header>\n<p>These days, it\u2019s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What\u2019s much harder to find is a clear view of how AI is <em>actually<\/em> integrated into the everyday workflow of a product designer — not for experimentation, but for real, meaningful outcomes.<\/p>\n<p>I\u2019ve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I\u2019ve built a simple, repeatable workflow that significantly boosts my productivity.<\/p>\n<p>In this article, I\u2019ll share what\u2019s already working and break down some of the most common objections I\u2019ve encountered — many of which I\u2019ve faced personally.<\/p>\n<h2 id=\"stage-1-idea-generation-without-the-clich\u00e9s\">Stage 1: Idea Generation Without The Clich\u00e9s<\/h2>\n<p><strong>Pushback<\/strong>: <em>\u201cWhenever I ask AI to suggest ideas, I just get a list of clich\u00e9s. It can\u2019t produce the kind of creative thinking expected from a product designer.\u201d<\/em><\/p>\n<p>That\u2019s a fair point. AI doesn\u2019t know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to \u201cfeed it\u201d all the documentation you have. But that\u2019s a common mistake, as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI\u2019s answers become vague and unfocused.<\/p>\n<p>Current-gen models can technically process hundreds of thousands of words, but <strong>the longer the input, the higher the risk of missing something important<\/strong>, especially content buried in the middle. This is known as the \u201c<a href=\"https:\/\/community.openai.com\/t\/validating-middle-of-context-in-gpt-4-128k\/498255\">lost in the middle<\/a>\u201d problem.<\/p>\n<p>To get meaningful results, AI doesn\u2019t just need more information — it needs the <em>right<\/em> information, delivered in the right way. That\u2019s where the RAG approach comes in.<\/p>\n<h3 id=\"how-rag-works\">How RAG Works<\/h3>\n<p>Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary — a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of \u201ccard catalog,\u201d called a vector database.<\/p>\n<p>When you ask a question, the assistant doesn\u2019t reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.<\/p>
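<p>To make the flow concrete, here is a minimal sketch of that loop in Python. It assumes the OpenAI SDK for embeddings and chat; the document snippets, helper names, and model choices are illustrative, not a recommendation:<\/p>\n<pre><code class=\"language-python\"># A minimal RAG sketch: embed chunks once, then answer questions
# from the closest matches. Assumes the OpenAI Python SDK and numpy.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    # Turn texts into vectors: the \"bookmarks\" in the card catalog
    response = client.embeddings.create(
        model=\"text-embedding-3-small\", input=texts
    )
    return np.array([item.embedding for item in response.data])

# A hypothetical knowledge base of short, topic-focused chunks
chunks = [
    \"Product overview: a shared-budget app for families...\",
    \"Target audience: parents aged 30-45 managing joint expenses...\",
    \"Research: most interviewed users abandon setup at step 3...\",
]
chunk_vectors = embed(chunks)

def retrieve(query, top_k=2):
    # Compare the query to every bookmark and keep the closest chunks
    q = embed([query])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    return [chunks[i] for i in np.argsort(scores)[::-1][:top_k]]

def answer(query):
    # Send only the retrieved chunks to the model, not the whole library
    context = \" \".join(retrieve(query))
    completion = client.chat.completions.create(
        model=\"gpt-4o\",
        messages=[
            {\"role\": \"system\", \"content\": \"Answer using this context: \" + context},
            {\"role\": \"user\", \"content\": query},
        ],
    )
    return completion.choices[0].message.content
<\/code><\/pre>\n<p>In production, the vectors would live in a vector database rather than an in-memory array, but the loop is the same: embed, retrieve, then generate.<\/p>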
<h3 id=\"how-is-this-different-from-just-dumping-a-doc-into-the-chat\">How Is This Different from Just Dumping a Doc into the Chat?<\/h3>\n<p>Let\u2019s break it down:<\/p>\n<p><strong>Typical chat interaction<\/strong><\/p>\n<p>It\u2019s like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is \u201cin front of them,\u201d but it\u2019s easy to miss something, especially if it\u2019s in the middle. This is exactly what the <em>\u201clost in the middle\u201d<\/em> issue refers to.<\/p>\n<p><strong>RAG approach<\/strong><\/p>\n<p>You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It\u2019s faster and more accurate, but it introduces a few new risks:<\/p>\n<ul>\n<li><strong>Ambiguous question<\/strong><br \/>\nYou ask, \u201cHow can we make the project safer?\u201d and the assistant brings you documents about cybersecurity, not finance.<\/li>\n<li><strong>Mixed chunks<\/strong><br \/>\nA single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning, so the assistant can\u2019t tell what the core topic is.<\/li>\n<li><strong>Semantic gap<\/strong><br \/>\nYou ask, <em>\u201cHow can we speed up the app?\u201d<\/em> but the document says, <em>\u201cOptimize API response time.\u201d<\/em> For a human, that\u2019s obviously related. For a machine, not always. The sketch after this list makes the gap measurable.<\/li>\n<\/ul>
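<p>You can check for the semantic gap directly by comparing embeddings. A small probe, again assuming the OpenAI SDK; the example phrases come from the list above:<\/p>\n<pre><code class=\"language-python\"># Phrases that mean the same thing to a human may or may not land
# close together in vector space. Cosine similarity makes that visible.
import numpy as np
from openai import OpenAI

client = OpenAI()

def similarity(a, b):
    response = client.embeddings.create(
        model=\"text-embedding-3-small\", input=[a, b]
    )
    va, vb = (np.array(d.embedding) for d in response.data)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity(\"How can we speed up the app?\",
                 \"Optimize API response time.\"))
print(similarity(\"How can we speed up the app?\",
                 \"Quarterly marketing budget review.\"))
# If the first score is not clearly higher than the second, retrieval
# may miss the relevant chunk; phrase queries in the domain's own terms.
<\/code><\/pre>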
<figure class=\"break-out article__image\">\n<a href=\"https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/1-rag-approach.png\"><img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"383\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/1-rag-approach.png\" alt=\"Diagram showing how RAG works: a user prompt triggers semantic search through a knowledge base. Relevant chunks are sent to a language model, which generates an answer based on retrieved content.\" \/><\/a>\n<figcaption class=\"op-vertical-bottom\">Instead of using the model\u2019s memory, it searches your documents and builds a response based on what it finds. (<a href=\"https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/1-rag-approach.png\">Large preview<\/a>)<\/figcaption>\n<\/figure>\n<p>These aren\u2019t reasons to avoid RAG or AI altogether. Most of these risks can be mitigated with better preparation of your knowledge base and more precise prompts. So, where do you start?<\/p>\n<h3 id=\"start-with-three-short-focused-documents\">Start With Three Short, Focused Documents<\/h3>\n<p>These three short documents will give your AI assistant just enough context to be genuinely helpful:<\/p>\n<ul>\n<li><strong>Product Overview & Scenarios<\/strong><br \/>\nA brief summary of what your product does and the core user scenarios.<\/li>\n<li><strong>Target Audience<\/strong><br \/>\nYour main user segments and their key needs or goals.<\/li>\n<li><strong>Research & Experiments<\/strong><br \/>\nKey insights from interviews, surveys, user testing, or product analytics.<\/li>\n<\/ul>\n<p>Each document should focus on a single topic and ideally stay within 300–500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.<\/p>
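<p>Mechanically, \u201csemantically clean\u201d can be as simple as one chunk per heading. A rough sketch, where the file names and the heading-based splitting rule are assumptions rather than a standard:<\/p>\n<pre><code class=\"language-python\"># Split each short document at its headings so every chunk covers one
# topic, and tag it with its source for later filtering.
from pathlib import Path

def split_into_chunks(path):
    chunks, topic, lines = [], \"intro\", []
    for line in path.read_text().splitlines():
        if line.startswith(\"#\"):  # a heading starts a new topic
            if lines:
                chunks.append({\"source\": path.name, \"topic\": topic,
                               \"text\": \" \".join(lines).strip()})
            topic, lines = line.lstrip(\"# \"), []
        else:
            lines.append(line)
    if lines:
        chunks.append({\"source\": path.name, \"topic\": topic,
                       \"text\": \" \".join(lines).strip()})
    return chunks

# One short, focused document per topic keeps chunks single-subject
knowledge_base = []
for doc in [\"product_overview.md\", \"target_audience.md\", \"research.md\"]:
    knowledge_base.extend(split_into_chunks(Path(doc)))
<\/code><\/pre>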
<h3 id=\"language-matters\">Language Matters<\/h3>\n<p>In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:<\/p>\n<ul>\n<li><strong>English prompt + English documents<\/strong>: Consistently accurate and relevant results.<\/li>\n<li><strong>Non-English prompt + English documents<\/strong>: Quality dropped sharply. The AI struggled to match the query with the right content.<\/li>\n<li><strong>Non-English prompt + non-English documents<\/strong>: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.<\/li>\n<\/ul>\n<p><strong>Takeaway<\/strong>: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups; for regular chat interactions, you\u2019re free to use other languages. This challenge is also highlighted in <a href=\"https:\/\/arxiv.org\/abs\/2408.12345\">this 2024 study on multilingual retrieval<\/a>.<\/p>\n<h3 id=\"from-outsider-to-teammate-giving-ai-the-context-it-needs\">From Outsider to Teammate: Giving AI the Context It Needs<\/h3>\n<p>Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas — the way a mid-level or senior designer would.<\/p>\n<p>Here\u2019s an example of a prompt that works well for me:<\/p>\n<blockquote><p>Your task is to perform a comparative analysis of two features: “Group gift contributions” (described in group_goals.txt) and “Personal savings goals” (described in personal_goals.txt).<\/p>\n<p>The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.<\/p>\n<p>Please include:<\/p>\n<ul>\n<li>Possible overlaps in user goals, actions, or scenarios;<\/li>\n<li>Potential confusion if both features are launched at the same time;<\/li>\n<li>Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);<\/li>\n<li>Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI\/UX techniques;<\/li>\n<li>Onboarding screens or explanatory elements that might help users understand both features.<\/li>\n<\/ul>\n<p>If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.<\/p><\/blockquote>
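<p>For readers who prefer the API route, here is roughly how that request could be wired up so the model sees only the two relevant documents. The file names come from the prompt above; the model choice and glue code are assumptions:<\/p>\n<pre><code class=\"language-python\"># Attach just the two feature docs the comparison needs,
# not the whole knowledge base, then make the ask.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

context = \" \".join(
    f\"[{name}] {Path(name).read_text()}\"
    for name in [\"group_goals.txt\", \"personal_goals.txt\"]
)

prompt = (
    \"Perform a comparative analysis of the two features described in \"
    \"the attached documents. Identify conflicts in logic, architecture, \"
    \"and user scenarios, and suggest ways to separate them in the UI.\"
)

completion = client.chat.completions.create(
    model=\"gpt-4o\",
    messages=[
        {\"role\": \"system\", \"content\": \"Project context: \" + context},
        {\"role\": \"user\", \"content\": prompt},
    ],
)
print(completion.choices[0].message.content)
<\/code><\/pre>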
<h3 id=\"ai-needs-context-not-just-prompts\">AI Needs Context, Not Just Prompts<\/h3>\n<blockquote><p>If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just <strong>more<\/strong> information, but <strong>better<\/strong>, more structured information.<\/p><\/blockquote>\n<p>Building a usable knowledge base isn\u2019t difficult. And you don\u2019t need a full-blown RAG system to get started. Many of these principles work even in a regular chat: <strong>well-organized content<\/strong> and a <strong>clear question<\/strong> can dramatically improve how helpful and relevant the AI\u2019s responses are. That\u2019s your first step in turning AI from a novelty into a practical tool in your product design workflow.<\/p>\n<h2 id=\"stage-2-prototyping-and-visual-experiments\">Stage 2: Prototyping And Visual Experiments<\/h2>\n<p><strong>Pushback<\/strong>: <em>\u201cAI only generates obvious solutions and can\u2019t even build a proper user flow. It\u2019s faster to do it manually.\u201d<\/em><\/p>\n<p>That\u2019s a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.<\/p>\n<p>For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they can \u201cflip\u201d to reveal a prize. I couldn\u2019t recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.<\/p>\n<p>At the prototyping stage, AI can be a strong creative partner in two areas:<\/p>\n<ul>\n<li><strong>UI element ideation<\/strong><br \/>\nIt can generate dozens of interactive patterns, including ones you might not think of yourself.<\/li>\n<li><strong>Micro-animation generation<\/strong><br \/>\nIt can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.<\/li>\n<\/ul>\n<p>AI can also be applied to multi-screen prototypes, but it\u2019s not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks — individual screens, elements, or animations — where it can kick off the thinking process and save hours of trial and error.<\/p>\n<p><em>A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.<\/em><\/p>\n<p>Here\u2019s another valuable way to use AI in design — as a <strong>stress-testing tool<\/strong>. Back in 2023, Google Research introduced <a href=\"https:\/\/arxiv.org\/abs\/2310.15435\">PromptInfuser<\/a>, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn\u2019t to generate new UI, but to check how well AI could operate <em>inside<\/em> existing layouts — placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.<\/p>\n<p>The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input — a clear gain in design accuracy, not just speed.<\/p>\n<p>That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.<\/p>\n<h2 id=\"stage-3-finalizing-the-interface-and-visual-style\">Stage 3: Finalizing The Interface And Visual Style<\/h2>\n<p><strong>Pushback<\/strong>: <em>\u201cAI can\u2019t match our visual style. It\u2019s easier to just do it by hand.\u201d<\/em><\/p>\n<p>This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don\u2019t feel like they belong in your product. They tend to be either overly decorative or overly simplified.<\/p>\n<p>And this is a real limitation. In my experience, today\u2019s models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:<\/p>\n<ul>\n<li><strong>Direct integration with a component library.<\/strong><br \/>\nI used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. 
<a href=\"https:\/\/forum.figma.com\/ask-the-community-7\/figma-make-library-support-42423?utm_source=chatgpt.com\">Other designers<\/a> have run into similar issues, noting that library support in Figma Make is still limited and often unstable.<\/li>\n<li><strong>Uploading styles as JSON.<\/strong><br \/>\nInstead of a full component library, I tried uploading only the exported styles — colors, fonts — in a JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.<\/li>\n<li><strong>Two-step approach: structure first, style second.<\/strong><br \/>\nWhat worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result \u2014 though still far from pixel-perfect.<\/li>\n<\/ul>\n<figure class=\"\n \n break-out article__image\n \n \n \"><\/p>\n<p> <a href=\"https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/3-ui-screens-claude-sonnet.png\"><\/p>\n<p> <img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"535\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/3-ui-screens-claude-sonnet.png\" alt=\"Three mobile UI screens showing how different design system setups affect visual output: with component library, with JSON styles, and without any styles \u2014 all generated by Claude Sonnet 4 from the same prompt.\" \/><\/p>\n<p> <\/a><figcaption class=\"op-vertical-bottom\">\n From left to right: prompt with attached library in Figma, prompt with styles in JSON, and raw prompt. All generated using Claude Sonnet 4 with the same input. (<a href=\"https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/3-ui-screens-claude-sonnet.png\">Large preview<\/a>)<br \/>\n <\/figcaption><\/figure>\n<p>So yes, AI still can\u2019t help you finalize your UI. It doesn\u2019t replace hand-crafted design work. But it\u2019s very useful in other ways:<\/p>\n<ul>\n<li>Quickly creating a <strong>visual concept<\/strong> for discussion.<\/li>\n<li>Generating <strong>\u201cwhat if\u201d alternatives<\/strong> to existing mockups.<\/li>\n<li>Exploring how your interface might look in a different style or direction.<\/li>\n<li>Acting as a <strong>second pair of eyes<\/strong> by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.<\/li>\n<\/ul>\n<blockquote class=\"pull-quote\">\n<p>\n <a class=\"pull-quote__link\" aria-label=\"Share on Twitter\" href=\"https:\/\/twitter.com\/share?text=%0aAI%20won%e2%80%99t%20save%20you%20five%20hours%20of%20high-fidelity%20design%20time,%20since%20you%e2%80%99ll%20probably%20spend%20that%20long%20fixing%20its%20output.%20But%20as%20a%20visual%20sparring%20partner,%20it%e2%80%99s%20already%20strong.%20If%20you%20treat%20it%20like%20a%20source%20of%20alternatives%20and%20fresh%20perspectives,%20it%20becomes%20a%20valuable%20creative%20collaborator.%0a&url=https:\/\/smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f\"><\/p>\n<p>AI won\u2019t save you five hours of high-fidelity design time, since you\u2019ll probably spend that long fixing its output. But as a visual sparring partner, it\u2019s already strong. 
If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.<\/p>\n<\/blockquote>\n<h2 id=\"stage-4-product-feedback-and-analytics-ai-as-a-thinking-exosuit\">Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit<\/h2>\n<p>Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we went deeper into UX, mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.<\/p>\n<p>As <a href=\"https:\/\/www.smashingmagazine.com\/2025\/03\/how-to-argue-against-ai-first-research\/\">Vitaly Friedman rightly pointed out in one of his columns<\/a>, trying to replace real UX interviews with AI can lead to false conclusions, as models tend to generate an average experience, not a real one. <strong>The strength of AI isn\u2019t in inventing data but in processing it at scale.<\/strong><\/p>\n<p>Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.<\/p>\n<p>Simply counting the percentages for each of the five predefined reasons wasn\u2019t enough. I wanted to know:<\/p>\n<ul>\n<li>Are there specific times of day when users churn more?<\/li>\n<li>Do the reasons differ by region?<\/li>\n<li>Is there a correlation between user exits and system load?<\/li>\n<\/ul>\n<p>The real challenge was… figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done \u201cfor me\u201d by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn\u2019t have been able to reach that level of insight on my own at all.<\/p>
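<p>For a sense of scale, the cuts I asked for boil down to a few lines of pandas. A rough sketch with assumed column names (in my case, Gemini generated the equivalent steps inside Google Sheets):<\/p>\n<pre><code class=\"language-python\"># Hypothetical export of the exit survey; column names are assumptions.
import pandas as pd

responses = pd.read_csv(\"exit_survey.csv\")
responses[\"hour\"] = pd.to_datetime(responses[\"submitted_at\"]).dt.hour

# Do users churn more at specific times of day?
by_hour = responses.groupby(\"hour\").size()

# Do the stated reasons differ by region?
by_region = (responses.groupby([\"region\", \"reason\"]).size()
             .unstack(fill_value=0))

# Share of each predefined reason per region, in percent
share = by_region.div(by_region.sum(axis=1), axis=0).mul(100).round(1)
print(share)
<\/code><\/pre>\n<p>The hard part, as noted above, isn\u2019t the code: it\u2019s deciding which cuts are worth looking at in the first place.<\/p>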
(<a href=\"https:\/\/files.smashing.media\/articles\/beyond-hype-what-ai-can-do-product-design\/4-gemini-google-sheets.png\">Large preview<\/a>)<br \/>\n <\/figcaption><\/figure>\n<blockquote class=\"pull-quote\">\n<p>\n <a class=\"pull-quote__link\" aria-label=\"Share on Twitter\" href=\"https:\/\/twitter.com\/share?text=%0aAI%20enables%20near%20real-time%20work%20with%20large%20data%20sets.%20But%20most%20importantly,%20it%20frees%20up%20your%20time%20and%20energy%20for%20what%e2%80%99s%20truly%20valuable:%20asking%20the%20right%20questions.%0a&url=https:\/\/smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f\"><\/p>\n<p>AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what\u2019s truly valuable: asking the right questions.<\/p>\n<p> <\/a>\n <\/p>\n<div class=\"pull-quote__quotation\">\n<div class=\"pull-quote__bg\">\n <span class=\"pull-quote__symbol\">\u201c<\/span><\/div>\n<\/p><\/div>\n<\/blockquote>\n<p><strong>A few practical notes<\/strong>: Working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.<\/p>\n<div class=\"partners__lead-place\"><\/div>\n<h2 id=\"ai-is-not-an-autopilot-but-a-co-pilot\">AI Is Not An Autopilot But A Co-Pilot<\/h2>\n<p>AI in design is only as good as the questions you ask it. It doesn\u2019t do the work for you. It doesn\u2019t replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it\u2019s still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer.<\/p>\n<p>But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don\u2019t wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.<\/p>\n<h2 id=\"let-s-summarize\">Let\u2019s Summarize<\/h2>\n<ul>\n<li>If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That\u2019s <strong>the \u201clost in the middle\u201d problem<\/strong>.<\/li>\n<li><strong>The RAG approach<\/strong> helps by pulling only the most relevant pieces from your documents. So responses are faster, more accurate, and grounded in real context.<\/li>\n<li><strong>Clear, focused prompts<\/strong> work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.<\/li>\n<li><strong>A well-structured knowledge bas<\/strong> makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.<\/li>\n<li><strong>Use English for both your prompts and your documents.<\/strong> Even multilingual models are most reliable when working in English, especially for retrieval.<\/li>\n<li>Most importantly: <strong>treat AI as a creative partner<\/strong>. 
It won\u2019t replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.<\/li>\n<\/ul>\n<h3 id=\"further-reading\">Further Reading<\/h3>\n<ul>\n<li>\u201c<a href=\"https:\/\/standardbeagle.com\/ai-assisted-design-workflows\/#what-ai-actually-does-in-ux-workflows\">AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality<\/a>\u201d, Cindy Brummer<br \/>\n<em>This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on \u2014 before you dive into RAG or idea generation.<\/em><\/li>\n<li>\u201c<a href=\"https:\/\/www.figma.com\/blog\/8-ways-to-build-with-figma-make\/\">8 essential tips for using Figma Make<\/a>\u201d, Alexia Danton<br \/>\n<em>While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you\u2019re experimenting with visual tools and structured prompting.<\/em><\/li>\n<li>\u201c<a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-retrieval-augmented-generation\/\">What Is Retrieval-Augmented Generation aka RAG<\/a>\u201d, Rick Merritt<br \/>\n<em>If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.<\/em><\/li>\n<\/ul>\n<div class=\"signature\">\n <img decoding=\"async\" src=\"https:\/\/www.smashingmagazine.com\/images\/logo\/logo--red.png\" alt=\"Smashing Editorial\" width=\"35\" height=\"46\" loading=\"lazy\" \/><br \/>\n <span>(yk)<\/span>\n<\/div>\n<\/article>\n","protected":false},"excerpt":{"rendered":"<p class=\"text-justify mb-2\" >Beyond The Hype: What AI Can Really Do For Product Design Nikita Samutin 2025-08-18T13:00:00+00:00 2025-08-21T11:03:55+00:00 These days, it\u2019s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless 
[…]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-528","post","type-post","status-publish","format-standard","hentry","category-accessibility"],"_links":{"self":[{"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/528","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/comments?post=528"}],"version-history":[{"count":1,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/528\/revisions"}],"predecessor-version":[{"id":529,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/528\/revisions\/529"}],"wp:attachment":[{"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/media?parent=528"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/categories?post=528"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.guupon.com\/index.php\/wp-json\/wp\/v2\/tags?post=528"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}