Define who the AI should be, and what it's being asked to deliver.</em></p>
<p>A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying "help me," tell it who it should act as and what's expected.</p>
<p><strong>Example</strong>: <em>"You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities."</em></p>
<h3>I: Input Context</h3>
<p><em>Provide background that frames the task.</em></p>
<p>Creative partners don't work in a vacuum. They need context: the audience, goals, product, competitive landscape, and what's been tried already. This is the "What you need to know before you start" section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt.</p>
<p><strong>Example</strong>: <em>"You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts."</em></p>
<h3>R: Rules & Constraints</h3>
<p><em>Clarify any limitations, boundaries, and exclusions.</em></p>
<p>Good creative briefs always include boundaries — what to avoid, what's off-brand, or what's non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints don't limit creativity — they focus it. AI needs the same constraints to avoid going off the rails.</p>
<p><strong>Example</strong>: <em>"Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language."</em></p>
<h3>E: Expected Output</h3>
<p><em>Spell out what the deliverable should look like.</em></p>
<p>This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isn't. Do you want bullet points or a story? A table or a headline? If you don't say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want: an effective way to help the AI know what you're expecting. If you're using GPT-5, you can also mix examples across formats (text, images, tables).</p>
<p><strong>Example</strong>: <em>"Return a structured list of themes. For each theme, include:</em></p>
<ul>
<li><strong><em>Theme Title</em></strong></li>
<li><strong><em>Summary of the Issue</em></strong></li>
<li><strong><em>Problem Statement</em></strong></li>
<li><strong><em>Opportunity</em></strong></li>
<li><strong><em>Representative Quotes (from data only)</em></strong></li>
<li><strong><em>Journey Stage(s)</em></strong></li>
<li><strong><em>Frequency (count from data)</em></strong></li>
<li><strong><em>Severity Score (1–5)</em></strong> <em>where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue</em></li>
<li><strong><em>Estimated Effort (Low / Medium / High)</em></strong><em>, where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes."</em></li>
</ul>
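<p>If you post-process AI output, a deliverable spec like the one above can also be mirrored in code for validation. Here is a minimal, hypothetical sketch — the <code>Theme</code> class and its checks are illustrative, not part of the framework:</p>

```python
# Hypothetical sketch: the "Expected Output" spec expressed as a data
# structure, so AI-generated themes can be validated before sharing.
from dataclasses import dataclass


@dataclass
class Theme:
    title: str
    summary: str
    problem_statement: str
    opportunity: str
    representative_quotes: list[str]  # from the uploaded data only
    journey_stages: list[str]
    frequency: int                    # count from data
    severity: int                     # 1 = minor; 3 = workaround exists; 5 = blocking
    effort: str                       # "Low" | "Medium" | "High"

    def __post_init__(self):
        # Enforce the scoring rules stated in the prompt.
        if not 1 <= self.severity <= 5:
            raise ValueError("Severity must be between 1 and 5")
        if self.effort not in ("Low", "Medium", "High"):
            raise ValueError("Effort must be Low, Medium, or High")
```

<p>A structure like this makes it easy to reject malformed output (for example, a severity of 7) instead of silently passing it along to stakeholders.</p>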
<p><strong>WIRE</strong> gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompt works like a brief, treating AI like a collaborator.</p>
<p>Once you've mastered this core structure, you can layer in additional fidelity — like tone, step-by-step flow, or iterative feedback — using the <strong>FRAME</strong> elements. These five elements add guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration.</p>
<h3>F: Flow of Tasks</h3>
<p><em>Break complex prompts into clear, ordered steps.</em></p>
<p>This is your project plan or creative workflow: it lays out the stages, dependencies, and sequence of execution. When the task has multiple parts, don't just throw it all into one sentence. You are doing the thinking and guiding the AI. Structure it like steps in a user journey or modules in a storyboard. In this example, it serves as the blueprint the AI uses to generate the table described in "E: Expected Output."</p>
<p><strong>Example</strong>: <em>"Recommended flow of tasks:<br>
Step 1: Parse the uploaded data and extract discrete pain points.<br>
Step 2: Group them into themes based on pattern similarity.<br>
Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.<br>
Step 4: Map each theme to the appropriate customer journey stage(s).<br>
Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data."</em></p>
<h3>R: Reference Voice or Style</h3>
<p><em>Name the desired tone, mood, or reference brand.</em></p>
<p>This is the brand voice section or style mood board — the reference points that shape the creative feel. Sometimes you want buttoned-up. Other times, you want conversational. Don't assume the AI knows your tone; spell it out.</p>
<p><strong>Example</strong>: <em>"Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads."</em></p>
<h3>A: Ask for Clarification</h3>
<p><em>Invite the AI to ask questions before generating, if anything is unclear.</em></p>
<p>This is your <em>"Any questions before we begin?"</em> moment — a key step in collaborative creative work. You wouldn't want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask the AI to reflect or clarify before jumping into output mode.</p>
<p><strong>Example</strong>: <em>"If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement."</em></p>
<h3>M: Memory (Within The Conversation)</h3>
<p><em>Reference earlier parts of the conversation and reuse what's working.</em></p>
<p>This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds the AI of the tone, audience, or structure already in play. GPT-5 handles memory better than earlier models, but this remains a useful element, especially if you switch topics or jump around.</p>
<p><strong>Example</strong>: <em>"Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each."</em></p>
</div>
<h3>E: Evaluate & Iterate</h3>
<p><em>Invite the AI to critique, improve, or generate variations.</em></p>
<p>This is your revision loop — your way of prompting for creative direction, exploration, and refinement. Just as creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes ChatGPT will even test two versions of a response on its own and ask for your preference.</p>
<p><strong>Example</strong>: <em>"After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort).</em></p>
<p><em>For that top-priority theme:</em></p>
<ul>
<li><em>Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?</em></li>
<li><em>Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).</em></li>
<li><em>Rewrite the theme entry with that improvement applied.</em></li>
<li><em>Briefly explain why the revision is stronger and more useful for product or design teams."</em></li>
</ul>
<p>Here's a quick recap of the WIRE+FRAME framework:</p>
<table>
<thead>
<tr>
<th>Framework Component</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>W: Who & What</strong></td>
<td>Define the AI persona and the core deliverable.</td>
</tr>
<tr>
<td><strong>I: Input Context</strong></td>
<td>Provide background or data scope to frame the task.</td>
</tr>
<tr>
<td><strong>R: Rules & Constraints</strong></td>
<td>Set boundaries, exclusions, and non-negotiables.</td>
</tr>
<tr>
<td><strong>E: Expected Output</strong></td>
<td>Spell out the format and fields of the deliverable.</td>
</tr>
<tr>
<td><strong>F: Flow of Tasks</strong></td>
<td>Break the work into explicit, ordered sub-tasks.</td>
</tr>
<tr>
<td><strong>R: Reference Voice/Style</strong></td>
<td>Name the tone, mood, or reference brand to ensure consistency.</td>
</tr>
<tr>
<td><strong>A: Ask for Clarification</strong></td>
<td>Invite AI to pause and ask questions if any instructions or data are unclear before proceeding.</td>
</tr>
<tr>
<td><strong>M: Memory</strong></td>
<td>Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them.</td>
</tr>
<tr>
<td><strong>E: Evaluate & Iterate</strong></td>
<td>After generation, have the AI self-critique the top outputs and refine them.</td>
</tr>
</tbody>
</table>
<p>And here's the full WIRE+FRAME prompt:</p>
<blockquote>
<p><strong>(W)</strong> You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.</p>
<p><strong>(I)</strong> You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.</p>
<p><strong>(R)</strong> Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.</p>
<p><strong>(E)</strong> Return a structured list of themes. For each theme, include:</p>
<ul>
<li><strong>Theme Title</strong></li>
<li><strong>Summary of the Issue</strong></li>
<li><strong>Problem Statement</strong></li>
<li><strong>Opportunity</strong></li>
<li><strong>Representative Quotes (from data only)</strong></li>
<li><strong>Journey Stage(s)</strong></li>
<li><strong>Frequency (count from data)</strong></li>
<li><strong>Severity Score (1–5)</strong> where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue</li>
<li><strong>Estimated Effort (Low / Medium / High)</strong>, where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes</li>
</ul>
<p><strong>(F)</strong> Recommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data.</p>
<p><strong>(R)</strong> Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.</p>
<p><strong>(A)</strong> If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.</p>
<p><strong>(M)</strong> Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.</p>
<p><strong>(E)</strong> After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme:</p>
<ul>
<li>Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?</li>
<li>Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).</li>
<li>Rewrite the theme entry with that improvement applied.</li>
<li>Briefly explain why the revision is stronger and more useful for product or design teams.</li>
</ul>
</blockquote>
<p>You could use "##" to label the sections (e.g., "##FLOW"), more for your own readability than for the AI. At over 400 words, this Insights Synthesis prompt is a detailed, structured example, but it isn't customized for you and your work. The intent wasn't to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish).</p>
<p>Keep in mind that prompt length isn't usually the problem; a lack of quality and structure is. As of this writing, AI models can easily process prompts that are thousands of words long.</p>
<p>Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME — the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional ones:</p>
<ul>
<li><strong>WIRE</strong> builds a clear, focused prompt with role, input, rules, and expected output.</li>
<li><strong>FRAME</strong> adds refinements such as tone, reusability, and iteration.</li>
</ul>
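<p>If you assemble prompts like this often, the WIRE-first, FRAME-optional structure lends itself to a small reusable helper. The sketch below is hypothetical — the function and section names are mine, not part of the framework — and uses the "##" section labels mentioned earlier:</p>

```python
# Illustrative sketch: assemble a WIRE+FRAME prompt from labeled sections.
# WIRE sections are required; FRAME sections are optional refinements.

WIRE_ORDER = ["WHO", "INPUT", "RULES", "EXPECTED"]
FRAME_ORDER = ["FLOW", "REFERENCE", "ASK", "MEMORY", "EVALUATE"]


def build_prompt(sections: dict[str, str]) -> str:
    """Join sections in WIRE-then-FRAME order, labeling each with a '##'
    header for human readability. Missing FRAME sections are skipped;
    missing WIRE sections raise an error."""
    parts = []
    for name in WIRE_ORDER:
        if name not in sections:
            raise ValueError(f"Missing required WIRE section: {name}")
        parts.append(f"## {name}\n{sections[name].strip()}")
    for name in FRAME_ORDER:
        if name in sections:
            parts.append(f"## {name}\n{sections[name].strip()}")
    return "\n\n".join(parts)


prompt = build_prompt({
    "WHO": "You are a senior UX researcher and customer insights analyst.",
    "INPUT": "You are analyzing customer feedback for Fintech Brand's app.",
    "RULES": "Only analyze the uploaded customer feedback data.",
    "EXPECTED": "Return a structured list of themes.",
    "FLOW": "Step 1: Parse the data. Step 2: Group into themes.",
})
```

<p>The design choice mirrors the advice above: the helper refuses to build a prompt without the four WIRE components, while any subset of FRAME can be layered in when the work calls for it.</p>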
<p>Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:</p>
<table>
<thead>
<tr>
<th>Scenarios</th>
<th>Description</th>
<th>Recommended</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Simple, One-Off Analyses</strong></td>
<td>Quick prompting with minimal setup and no need for detailed process transparency.</td>
<td>WIRE</td>
</tr>
<tr>
<td><strong>Tight Sprints or Hackathons</strong></td>
<td>Rapid turnarounds, and times you don't need embedded review and iteration loops.</td>
<td>WIRE</td>
</tr>
<tr>
<td><strong>Highly Iterative Exploratory Work</strong></td>
<td>You expect to tweak results constantly and prefer manual control over each step.</td>
<td>WIRE</td>
</tr>
<tr>
<td><strong>Complex Multi-Step Playbooks</strong></td>
<td>Detailed workflows that benefit from a standardized, repeatable, visible sequence.</td>
<td>WIRE+FRAME</td>
</tr>
<tr>
<td><strong>Shared or Hand-Off Projects</strong></td>
<td>When different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses.</td>
<td>WIRE+FRAME</td>
</tr>
<tr>
<td><strong>Built-In Quality Control</strong></td>
<td>You want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps.</td>
<td>WIRE+FRAME</td>
</tr>
</tbody>
</table>
<p>Prompting isn't about getting it right the first time. It's about designing the interaction, and redesigning it when needed. With WIRE+FRAME, you're going beyond basic prompting and designing the interaction between you and AI.</p>
<h3>From Gut Feel To Framework: A Prompt Makeover</h3>
<p>Let's compare the results of Kate's first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME framework, using the same data and focusing on the top results:</p>
<p>Original prompt: <em>Read this customer feedback and tell me how we can improve our app for Gen Z users.</em></p>
<p>Initial ChatGPT Results:</p>