Handling JavaScript Event Listeners With Parameters
By Amejimaobari Ollornwi (2025-07-21)
JavaScript event listeners are very important, as they exist in almost every web application that requires interactivity. As common as they are, it is essential that they be managed properly; improperly managed event listeners can lead to memory leaks and, in extreme cases, performance issues.
Here’s the real problem: JavaScript event listeners are often added but never removed. And while most listeners don’t need parameters, the rare cases that do are a little trickier to handle.
A common scenario where you may need parameters in an event handler is a dynamic list of tasks, where each task has a “Delete” button whose handler takes the task’s ID as a parameter to remove that task. In a situation like this, it is a good idea to remove the event listener once the task has been deleted so that the removed element can be cleaned up by the browser, a process known as garbage collection.
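To make the scenario concrete, here is a minimal sketch of such a task list. The markup hook (#taskList), the task IDs, and the function names are hypothetical, purely for illustration:

const taskList = document.querySelector("#taskList");

function deleteTask(taskId, listItem, handler) {
  // Detach the listener before removing the element so nothing keeps
  // a reference to it and the browser can garbage-collect it.
  listItem.querySelector("button").removeEventListener("click", handler);
  listItem.remove();
  console.log(`Task ${taskId} deleted`);
}

function addTask(taskId, label) {
  const listItem = document.createElement("li");
  const button = document.createElement("button");
  button.textContent = "Delete";
  listItem.append(label, button);
  taskList.append(listItem);

  // The handler closes over taskId, the parameter the delete action needs.
  const handler = () => deleteTask(taskId, listItem, handler);
  button.addEventListener("click", handler);
}

addTask(1, "Buy groceries");
addTask(2, "Write report");

Note how each button gets its own handler, created with the task’s ID baked in; this is exactly the pattern the rest of the article unpacks.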
A very common mistake when adding parameters to event handlers is calling the function with its parameters inside the addEventListener() method. This is what I mean:
button.addEventListener('click', myFunction(param1, param2));
The browser responds to this line by immediately calling the function, irrespective of whether or not the click event has happened. In other words, the function is invoked right away instead of being deferred, so it never fires when the click event actually occurs.
You may also receive the following console error in some cases:
Failed to execute 'addEventListener' on 'EventTarget': parameter 2 is not of type 'Object'
This error makes sense because the second parameter of the addEventListener() method only accepts a JavaScript function, an object with a handleEvent() method, or simply null. A quick and easy way to avoid this error is to change the second parameter of addEventListener() to an arrow or anonymous function.
button.addEventListener('click', (event) => {
myFunction(event, param1, param2); // Runs on click
});
The only hiccup with using arrow and anonymous functions is that they cannot be removed with the traditional removeEventListener() method; you will have to make use of AbortController, which may be overkill for simple cases. AbortController shines when you have multiple event listeners to remove at once.
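To make that concrete, here is a quick sketch of the multi-listener case, reusing the button and myFunction from earlier. The resize and keydown listeners are hypothetical additions, and the full AbortController walkthrough follows below:

const controller = new AbortController();
const { signal } = controller;

// Several listeners on different targets, all tied to the same signal.
button.addEventListener("click", (event) => {
  myFunction(event, param1, param2);
}, { signal });

window.addEventListener("resize", () => {
  console.log("Window resized");
}, { signal });

document.addEventListener("keydown", (event) => {
  console.log(`Key pressed: ${event.key}`);
}, { signal });

// One call detaches all three at once.
controller.abort();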
For simple cases where you have just one or two event listeners to remove, the removeEventListener() method still proves useful. However, in order to make use of it, you’ll need to store a reference to the listener function.
There are several ways to include parameters with event handlers. However, for the purpose of this demonstration, we are going to constrain our focus to the following two: arrow and anonymous functions, and closures.
Using arrow and anonymous functions is the fastest and easiest way to get the job done.
To add an event handler with parameters using arrow and anonymous functions, we’ll first need to call the function we’re going to create inside the arrow function attached to the event listener:
const button = document.querySelector("#myButton");
button.addEventListener("click", (event) => {
handleClick(event, "hello", "world");
});
After that, we can create the function with parameters:
function handleClick(event, param1, param2) {
console.log(param1, param2, event.type, event.target);
}
Note that with this method, removing the event listener requires AbortController. To remove the event listener, we create a new AbortController object and then retrieve the AbortSignal object from it:
const controller = new AbortController();
const { signal } = controller;
Next, we can pass the signal from the controller as an option in the addEventListener() method:
button.addEventListener("click", (event) => {
handleClick(event, "hello", "world");
}, { signal });
Now we can remove the event listener by calling AbortController.abort():
controller.abort()
Closures in JavaScript are another feature that can help us with event handlers. Remember the mistake that produced a type error? That mistake can also be corrected with closures. Specifically, with closures, a function can access variables from its outer scope.
In other words, we can access the parameters we need in the event handler from the outer function:
function createHandler(message, number) {
// Event handler
return function (event) {
console.log(`${message} ${number} - Clicked element:`, event.target);
};
}
const button = document.querySelector("#myButton");
button.addEventListener("click", createHandler("Hello, world!", 1));
This establishes a function that returns another function. The outer function is called as the second argument of the addEventListener() method, so the inner function it returns becomes the event handler. And with the power of closures, the parameters of the outer function are made available for use in the inner function.
Notice how the event object is made available to the inner function. This is because the inner function is what is actually attached as the event handler, and the browser passes the event object to the handler automatically.
To remove the event listener, we can use the AbortController like we did before. However, this time, let’s see how we can do that using the removeEventListener() method instead.
In order for the removeEventListener() method to work, a reference to the function returned by createHandler() needs to be stored and passed to the addEventListener() method:
function createHandler(message, number) {
return function (event) {
console.log(`${message} ${number} - Clicked element:`, event.target);
};
}
const handler = createHandler("Hello, world!", 1);
button.addEventListener("click", handler);
Now, the event listener can be removed like this:
button.removeEventListener("click", handler);
It is good practice to always remove event listeners whenever they are no longer needed to prevent memory leaks. Most of the time, event handlers do not require parameters; in the rare cases they do, JavaScript features like closures, AbortController, and removeEventListener() make handling parameters with event handlers both possible and well-supported.
Why Non-Native Content Designers Improve Global UX
By Oleksii Tkachenko (2025-07-18)
A few years ago, I was in a design review at a fintech company, polishing the expense management flows. It was a routine session where we reviewed the logic behind content and design decisions.
While looking over the statuses for submitted expenses, I noticed a label saying ‘In approval’. I paused, re-read it, and asked myself:
“Where is it? Are the results in? Where can I find them? Are they sending me to the app section called ‘Approval’?”
This tiny label made me question what was happening with my money, and this feeling of uncertainty was quite anxiety-inducing.
My team, all native English speakers, did not flinch, even for a second, and moved forward to discuss other parts of the flow. I was the only non-native speaker in the room, and while the label made perfect sense to them, it still felt off to me.
After a quick discussion, we landed on ‘Pending approval’, the simplest and most widely recognised option internationally. More importantly, this wording makes it clear that there’s an approval process and that it hasn’t taken place yet. There’s no need to go anywhere to do it.
Some might call it nitpicking, but that was exactly the moment I realised how invisible — yet powerful — the non-native speaker’s perspective can be.
In a reality where user testing budgets aren’t unlimited, designing with familiar language patterns from the start helps you prevent costly confusions in the user journey.
Those same confusions often lead to abandoned tasks, support tickets, and lost trust.
Global products are often designed with English as their primary language. This seems logical, but here’s the catch:
Roughly 75% of English-speaking users are not native speakers. That’s 3 out of every 4 users.
Native speakers often write on instinct, which works much like autopilot. This can often lead to overconfidence in content that, in reality, is too culturally specific, vague, or complex. And that content may not be understood by 3 in 4 people who read it.
If your team shares the same native language, content clarity remains assumed by default rather than proven through pressure testing.
The price for that is the accessibility of your product. A study published by the National Library of Medicine found that US adults who were proficient in English but did not use it as their primary language were significantly less likely to be insured, even when provided with the same level of service as everyone else.
In other words, they did not finish the process of securing a healthcare provider — a process that’s vital to their well-being, in part, due to unclear or inaccessible communication.
If people abandon the process of getting something as vital as healthcare insurance, it’s easy to imagine them dropping out during checkout, account setup, or app onboarding.
Non-native content designers, by contrast, do not write on autopilot. Because of their experience learning English, they’re much more likely to tune into nuances, complexity, and cultural exclusions that natives often overlook. That’s the key to designing for everyone rather than just the 1 in 4 who are native speakers.
When a non-native speaker has to pause, re-read something, or question the meaning of what’s written, they quickly identify it as a friction point in the user experience.
Why it’s important: Every extra second users have to spend understanding your content makes them more likely to abandon the task. This is a high price that companies pay for not prioritising clarity.
Cognitive load is not just about complex sentences but also about speed. There’s plenty of research confirming that non-native speakers read more slowly than native speakers. This is especially important when you work on the visibility of system status: time-sensitive content that the user needs to scan and understand quickly.
One example you can experience firsthand is an ATM displaying a series of updates and instructions. Even when the messages are quite similar, it is overwhelming to realise you missed one because you couldn’t finish reading it. Rapid-fire updates like these increase frustration and the chance of errors.
Non-native content designers also tend to review and rewrite things more often to find the easiest way to communicate the message. What a native speaker may consider clear enough might be dense or difficult for a non-native to understand.
Why it’s important: Simple content better scales across countries, languages, and cultures.
When things do not make sense, non-native speakers challenge them. Beyond idioms and other obvious traps, native speakers tend to assume that their life experience is shared by most English-speaking users.
Cultural differences might even exist within one globally shared language. Have you tried saying ‘soccer’ instead of ‘football’ in a conversation with someone from the UK? These details may not only cause confusion but also upset people.
Why it’s important: Keeping your product free from culture-specific references makes it more inclusive and safeguards you from alienating your users.
Being non-native speakers themselves, they have experience with products that do not speak clearly to them. They’ve been in the global user’s shoes and know how it impacts the experience.
Why it’s important: Empathy is a key driver towards design decisions that take into account the diverse cultural and linguistic background of the users.
Your product won’t become better overnight simply because you read an inspiring article telling you that you need to have a more diverse team. I get it. So here are concrete changes that you can make in your design workflows and hiring routines to make sure your content is accessible globally.
When you launch a new feature or product, it’s a standard practice to run QA sessions to review visuals and interactions. When your team does not include the non-native perspective, the content is usually overlooked and considered fine as long as it’s grammatically correct.
I know, having a dedicated localisation team to pressure-test your content for clarity is a privilege, but you can always start small.
At one of my previous companies, we established a ‘clarity heroes council’: a small team of non-native English speakers with diverse cultural and linguistic backgrounds. During our reviews, they often asked questions that surprised us and highlighted where clarity was missing. These questions flag potential problems and help you save both money and reputation by avoiding thousands of customer service tickets.
Even if your product does not have major releases regularly, it accumulates small changes over time. They’re often plugged in as fixes or small improvements, and can be easily overlooked from a QA perspective.
A good start is a regular look at the flows that are critical to your business metrics: onboarding, checkout, and so on. Set aside some time for your team quarterly, or even annually depending on your product size, to come together and check whether your key content pieces serve the global audience well.
Usually, a proper review is conducted by a team: a product designer, a content designer, an engineer, a product manager, and a researcher. The idea is to go over the flows, research insights, and customer feedback together. For that, having a non-native speaker on the audit task force will be essential.
If you’ve never done an audit before, try this template as it covers everything you need to start.
If you haven’t done it already, make sure your voice & tone documentation includes details about the level of English your company is catering to.
This might mean working with the brand team to find ways to make sure your brand voice comes through to all users without sacrificing clarity and comprehension. Use examples and showcase the difference between sounding smart or playful vs sounding clear.
Leaning too much towards brand personality is where cultural differences usually shine through. As a user, you might’ve seen it many times: a banking app that wanted to seem relaxed and relatable by introducing ‘Dang it’ as the only call-to-action on the screen. However, users with different linguistic backgrounds might not be familiar with this expression. Worse, they might read it as an action, leaving them unsure of what will actually happen after tapping it.
Considering how much content is generated with AI today, your guidelines have to account for both tone and clarity. This way, when you feed these requirements to the AI, you’ll see the output that will not just be grammatically correct but also easy to understand.
Basic heuristic principles are often documented as a part of overarching guidelines to help UX teams do a better job. The Nielsen Norman Group usability heuristics cover the essential ones, but it doesn’t mean you shouldn’t introduce your own. To complement this list, add this principle:
Aim for global understanding: Content and design should communicate clearly to any user regardless of cultural or language background.
You can suggest criteria to make it clear how to evaluate this principle.
This one is often overlooked, but collaboration between the research team and non-native speaking writers is super helpful. If your research involves a survey or interview, they can help you double-check whether there is complex or ambiguous language used in the questions unintentionally.
In a study published in the Journal of Usability Studies, 37% of non-native speakers did not manage to answer a question that included a word they did not recognise or could not recall the meaning of. The question was whether they found the system to be “cumbersome to use”, and unreliable data and measurements on a question like this would ultimately hurt the UX of your product.
Another study in the Journal of User Experience highlights how important clarity is in surveys. While most people in the study interpreted the question “How do you feel about…?” as “What’s your opinion on…?”, some took it literally and proceeded to describe their emotions instead.
This means that even familiar terms can be misinterpreted. To get precise research results, it’s worth defining key terms and concepts to ensure common understanding with participants.
At Klarna, we often ran into the challenge of inconsistent translation of key terms. A well-defined English term could end up with three to five different versions in Italian or German. Sometimes, even the same features or app sections were referred to differently depending on the market, which led to user confusion.
To address this, we introduced a shared term base: a controlled vocabulary of approved names for key features, app sections, and user actions.
Importantly, the term selection was dictated by user research, not by assumption or personal preferences of the team.
If you’re unsure where to begin, use this product content vocabulary template for Notion. Duplicate it for free and start adding your terms.
We used a similar setup. Our new glossary was shared internally across teams, from product to customer service. The results? Support tickets related to unclear language in the UI (or directions in the user journey) dropped by 18%. This included tasks like finding instructions on how to make a payment (especially with less popular payment methods like bank transfer), locating the late fee details, or finding out whether it’s possible to postpone a payment. And yes, all of these features were available, and the team believed they were quite easy to find.
A glossary like this can live as an add-on to your guidelines. This way, you can quickly get new joiners up to speed, keep product copy ready for localisation, and defend your decisions with stakeholders.
‘Looking for a native speaker’ still remains a part of the job listing for UX Writers and content designers. There’s no point in assuming it’s intentional discrimination. It’s just a misunderstanding that stems from not fully accepting that our job is more about building the user experience than writing texts that are grammatically correct.
Here are a few tips to make sure you hire the best talent and treat your applicants fairly. First, drop the ‘native speaker’ requirement from your listings. Instead, focus on the core part of our job: add ‘clear communicator’, ‘ability to simplify’, or ‘experience writing for a global audience’.
Over the years, plenty of studies have confirmed that accent bias is real: people with an unusual or foreign accent are considered less hirable. While some may argue that an accent can affect the efficiency of internal communications, that’s not enough to justify overlooking an applicant’s good work.
My personal experience is that accent mostly depends on the situation you’re in. When I’m in a friendly environment and do not feel anxious, my English flows much better because I do not overthink how I sound. Ironically, when I’m in a room with my team, full of British native speakers, I sometimes default to my Slavic accent. The question is: does it make my content design expertise or writing any worse? Not in the slightest.
Therefore, make sure you judge the portfolios, the ideas behind the interview answers, and the whiteboard challenge presentations, instead of focusing on whether a candidate’s accent implies that they might not be a good writer.
Non-native content designers do not have a negative impact on your team’s writing. They sharpen it by helping you look at your content through the lens of your real user base. In the globalised world, linguistic purity no longer benefits your product’s user experience.
Try these practical steps and leverage the non-native speaking lens of your content designers to design better international products.
Unmasking The Magic: The Wizard Of Oz Method For UX Research
By Victor Yocco (2025-07-10)
New technologies and innovative concepts frequently enter the product development lifecycle, promising to revolutionize user experiences. However, even the most ingenious ideas risk failure without a fundamental grasp of user interaction with these new experiences.
Consider the plight of the Nintendo Power Glove. Despite being a commercial success (selling over 1 million units), its release in late 1989 was followed by its discontinuation less than a full year later in 1990. The two games created solely for the Power Glove sold poorly, and there was little use for the Glove with Nintendo’s already popular traditional console games.
A large part of the failure was due to audience reaction once the product (which was allegedly developed in 8 weeks) proved cumbersome and unintuitive. Users found syncing the glove to the moves in specific games extremely frustrating, as it required coding the moves into the glove’s preset move buttons and then remembering which buttons would generate which move. With the later success of Nintendo’s Wii and other movement-based controller consoles and games, we can see the Power Glove was a concept ahead of its time.
If the Power Glove’s developers had wanted to conduct effective research prior to building it out, they would have needed to look beyond traditional methods, such as surveys and interviews, to understand how a user might truly interact with the Glove. How could this have been done without a functional prototype and without slowing down the overall development process?
Enter the Wizard of Oz method: one potential option, and a potent tool for bridging the chasm between abstract concepts and tangible user understanding. This technique simulates a fully functional system, yet a human operator (“the Wizard”) discreetly orchestrates the experience. This allows researchers to gather authentic user reactions and insights without the prerequisite of a fully built product.
The Wizard of Oz (WOZ) method is named in tribute to the similarly named book by L. Frank Baum. In the book, the Wizard is simply a man hidden behind a curtain, manipulating the reality of those who travel the land of Oz. Dorothy, the protagonist, exposes the Wizard for what he is: essentially an illusion, a con deceiving those who believe him to be omnipotent. Similarly, WOZ takes technologies that may or may not currently exist and emulates them convincingly enough that a research participant believes they are using an existing system or tool.
WOZ enables the exploration of user needs, validation of nascent concepts, and mitigation of development risks, particularly with complex or emerging technologies.
The product team in our example above might have used this method to have users simulate wearing the glove, programming moves into it, and playing games without needing a fully functional system. This could have uncovered the illogical expectation that laypeople code their own hardware for each game, revealed the frustration of recoding the device when swapping games, and exposed the cumbersome layout of the controls on the physical device (even if the team had used a cardboard glove with simulated controls drawn in crayon in the appropriate locations).
Jeff Kelley credits himself (PDF) with coining the term WOZ method in 1980 to describe the research method he employed in his dissertation. However, Paula Roe credits Don Norman and Allan Munro with using the method as early as 1973 to conduct testing on an airport automated travel assistant. Regardless of who originated the method, both parties agree that it gained prominence when IBM later used it to conduct studies on a speech-to-text tool known as The Listening Typewriter.
In this article, I’ll cover the core principles of the WOZ method, explore advanced applications taken from practical experience, and demonstrate its unique value through real-world examples, including its application to the field of agentic AI. UX practitioners can use the WOZ method as another tool to unlock user insights and craft human-centered products and experiences.
The WOZ method operates on the premise that users believe they are interacting with an autonomous system while a human wizard manages the system’s responses behind the scenes. This individual, often positioned remotely (or off-screen), interprets user inputs and generates outputs that mimic the anticipated functionality of the experience.
A successful WOZ study involves several key roles: a facilitator who guides the participant through the tasks, the wizard who simulates the system’s responses behind the scenes, and, of course, the participant.
Creating a convincing illusion is key to the success of a WOZ study. This necessitates careful planning of the research environment and the tasks users will undertake. Consider a study evaluating a new voice command system for smart home devices. The research setup might involve a physical mock-up of a smart speaker and predefined scenarios like “Play my favorite music” or “Dim the living room lights.” The wizard, listening remotely, would then trigger the appropriate responses (e.g., playing a song, verbally confirming the lights are dimmed).
Or perhaps it is a screen-based experience testing a new AI-powered chatbot. You have users entering commands into a text box, with another member of the product team providing responses simultaneously using a tool like Figma/Figjam, Miro, Mural, or other cloud-based software that allows multiple users to collaborate simultaneously (the author has no affiliation with any of the mentioned products).
Maintaining the illusion of a genuine system requires careful staging and consistent, plausible, and timely responses from the wizard.
Transparency is crucial, even in a method that involves a degree of deception. Participants should always be debriefed after the session, with a clear explanation of the Wizard of Oz technique and the reasons for its use. Data privacy must be maintained as with any study, and participants should feel comfortable and respected throughout the process.
The WOZ method occupies a unique space within the UX research toolkit.
This method proves particularly valuable when exploring truly novel interactions or complex systems where building a fully functional prototype is premature or resource-intensive. It allows researchers to answer fundamental questions about user needs and expectations before committing significant development efforts.
Let’s move beyond the foundational aspects of the WOZ method and explore some more advanced techniques and critical considerations that can elevate its effectiveness.
It’s a fair question to ask whether WOZ is truly a time-saver compared to even cruder prototyping methods like paper prototypes or static digital mockups.
While paper prototypes are incredibly fast to create and test for basic flow and layout, they fundamentally lack dynamic responsiveness. Static mockups offer visual fidelity but cannot simulate complex interactions or personalized outputs.
The true time-saving advantage of the WOZ emerges when testing novel, complex, or AI-driven concepts. It allows researchers to evaluate genuine user interactions and mental models in a seemingly live environment, collecting rich behavioral data that simpler prototypes cannot. This fidelity in simulating a dynamic experience, even with a human behind the curtain, often reveals critical usability or conceptual flaws far earlier and more comprehensively than purely static representations, ultimately preventing costly reworks down the development pipeline.
While the core principle of the WOZ method is straightforward, its true power lies in nuanced application and thoughtful execution. Seasoned practitioners may leverage several advanced techniques to extract richer insights and address more complex research questions.
The WOZ method isn’t necessarily a one-off endeavor. Employing it in iterative cycles can yield significant benefits. Initial rounds might focus on broad concept validation and identifying fundamental user reactions. Subsequent iterations can then refine the simulated functionality based on previous findings.
For instance, after an initial study reveals user confusion with a particular interaction flow, the simulation can be adjusted, and a follow-up study can assess the impact of those changes. This iterative approach allows for a more agile and user-centered exploration of complex experiences.
Simulating complex systems can be difficult for one wizard. Breaking complex interactions into smaller, manageable steps is crucial. Consider researching a multi-step onboarding process for a new software application. Instead of one person trying to simulate the entire flow, different aspects could be handled sequentially or even by multiple team members coordinating their responses.
Clear communication protocols and well-defined responsibilities are essential in such scenarios to maintain a seamless user experience.
While qualitative observation is a cornerstone of the WOZ method, defining clear metrics can add a layer of rigor to the findings. These metrics should match research goals. For example, if the goal is to assess the intuitiveness of a new navigation pattern, you might track the number of times users express confusion or the time it takes them to complete specific tasks.
Combining these quantitative measures with qualitative insights provides a more comprehensive understanding of the user experience.
The WOZ method isn’t an island. Its effectiveness can be amplified by integrating it with other research techniques. Preceding a WOZ study with user interviews can help establish a deeper understanding of user needs and mental models, informing the design of the simulated experience. Following a WOZ study, surveys can gather broader quantitative feedback on the concepts explored. For example, after observing users interact with a simulated AI-powered scheduling tool, a survey could gauge their overall trust and perceived usefulness of such a system.
WOZ, as with all methods, has limitations, and in some scenarios other methods will likely yield more reliable findings.
The wizard’s skill is critical to the method’s success, so training the individual(s) who will be simulating the system is essential.
All of this suggests the need for practice in advance of running the actual session. Don’t forget to hold a number of dry runs in which colleagues, or others willing to assist, not only participate but also think up responses that could stump the wizard or throw things off during a live session.
I suggest having a believable prepared error statement ready to go for when a user throws a curveball. A simple response from the wizard of “I’m sorry, I am unable to perform that task at this time” might be enough to move the session forward while also capturing a potentially unexpected situation your team can address in the final product design.
The debriefing session following the WOZ interaction is an additional opportunity to gather rich qualitative data. Beyond asking “What did you think?”, effective debriefing involves sharing the purpose of the study and the fact that the experience was simulated.
Researchers should then conduct psychological probing to understand the reasons behind user behavior and reactions. Asking open-ended questions like “Why did you try that?” or “What were you expecting to happen when you clicked that button?” can reveal valuable insights into user mental models and expectations.
Exploring moments of confusion, frustration, or delight in detail can uncover key areas for design improvement. Think about what the Power Glove’s development team could have uncovered if they’d asked participants what it was like to program the glove and to remember what they’d programmed into which set of keys.
The value of the WOZ method becomes apparent when examining its application in real-world research scenarios. Here is an in-depth review of one scenario and a quick summary of another study involving WOZ, where this technique proved invaluable in shaping user experiences.
A significant challenge in the realm of emerging technologies lies in user comprehension. This was particularly evident when our team began exploring the potential of Agentic AI for enterprise HR software.
Agentic AI refers to artificial intelligence systems that can autonomously pursue goals by making decisions, taking actions, and adapting to changing environments with minimal human intervention. Unlike generative AI that primarily responds to direct commands or generates content, Agentic AI is designed to understand user intent, independently plan and execute multi-step tasks, and learn from its interactions to improve performance over time. These systems often combine multiple AI models and can reason through complex problems. For designers, this signifies a shift towards creating experiences where AI acts more like a proactive collaborator or assistant, capable of anticipating needs and taking the initiative to help users achieve their objectives rather than solely relying on explicit user instructions for every step.
Preliminary research, including surveys and initial interviews, suggested that many HR professionals, while intrigued by the concept of AI assistance, struggled to grasp the potential functionality and practical implications of truly agentic systems — those capable of autonomous action and proactive decision-making. We saw they had no reference point for what agentic AI was, even after we attempted relevant analogies to current examples.
Building a fully functional agentic AI prototype at this exploratory stage was impractical. The underlying algorithms and integrations were complex and time-consuming to develop. Moreover, we risked building a solution based on potentially flawed assumptions about user needs and understanding. The WOZ method offered a solution.
We designed a scenario where HR employees interacted with what they believed was an intelligent AI assistant capable of autonomously handling certain tasks. The facilitator presented users with a web interface where they could request assistance with tasks like “draft a personalized onboarding plan for a new marketing hire” or “identify employees who might benefit from proactive well-being resources based on recent activity.”
Behind the scenes, a designer acted as the wizard. Based on the user’s request and the (simulated) available data, the designer would craft a response that mimicked the output of an agentic AI. For the onboarding plan, this involved assembling pre-written templates and personalizing them with details provided by the user. For the well-being resource identification, the wizard would select a plausible list of employees based on the general indicators discussed in the scenario.
Crucially, the facilitator encouraged users to interact naturally, asking follow-up questions and exploring the system’s perceived capabilities. For instance, a user might ask, “Can the system also schedule the initial team introductions?” The wizard, guided by pre-defined rules and the overall research goals, would respond accordingly, perhaps with a “Yes, I can automatically propose meeting times based on everyone’s calendars” (again, simulated).
As recommended, we debriefed participants following each session. We began with transparency, explaining the simulation and that we had another live human posting the responses to the queries based on what the participant was saying. Open-ended questions explored initial reactions and envisioned use. Task-specific probing, like “Why did you expect that?” revealed underlying assumptions. We specifically addressed trust and control (“How much trust…? What level of control…?”). To understand mental models, we asked how users thought the “AI” worked. We also solicited improvement suggestions (“What features…?”).
By focusing on the “why” behind user actions and expectations, these debriefings provided rich qualitative data that directly informed subsequent design decisions, particularly around transparency, human oversight, and prioritizing specific, high-value use cases. We also had a research participant who understood agentic AI and could provide additional insight based on that understanding.
This WOZ study yielded several crucial insights into user mental models of agentic AI in an HR context.
Based on these findings, we made several key design decisions.
In another project, we used the WOZ method to evaluate user interaction with a voice interface for controlling in-car functions. Our research question focused on the naturalness and efficiency of voice commands for tasks like adjusting climate control, navigating to points of interest, and managing media playback.
We set up a car cabin simulator with a microphone and speakers. The wizard, located in an adjacent room, listened to the user’s voice commands and triggered the corresponding actions (simulated through visual changes on a display and audio feedback). This allowed us to identify ambiguous commands, areas of user frustration with voice recognition (even though it was human-powered), and preferences for different phrasing and interaction styles before investing in complex speech recognition technology.
These examples illustrate the versatility and power of the method in addressing a wide range of UX research questions across diverse product types and technological complexities. By simulating functionality, we can gain invaluable insights into user behavior and expectations early in the design process, leading to more user-centered and ultimately more successful products.
The WOZ method, far from being a relic of simpler technological times, retains relevance as we navigate increasingly sophisticated and often opaque emerging technologies.
The WOZ method’s core strength, the ability to simulate complex functionality with human ingenuity, makes it uniquely suited for exploring user interactions with systems that are still in their nascent stages.
WOZ In The Age Of AI
Consider the burgeoning field of AI-powered experiences. Researching user interaction with generative AI, for instance, can be effectively done through WOZ. A wizard could curate and present AI-generated content (text, images, code) in response to user prompts, allowing researchers to assess user perceptions of quality, relevance, and trust without needing a fully trained and integrated AI model.
Similarly, for personalized recommendation systems, a human could simulate the recommendations based on a user’s stated preferences and observed behavior, gathering valuable feedback on the perceived accuracy and helpfulness of such suggestions before algorithmic development.
Even autonomous systems, seemingly the antithesis of human control, can benefit from WOZ studies. By simulating the autonomous behavior in specific scenarios, researchers can explore user comfort levels, identify needs for explainability, and understand how users might want to interact with or override such systems.
Virtual And Augmented Reality
Immersive environments like virtual and augmented reality present new frontiers for user experience research. WOZ can be particularly powerful here.
Imagine testing a novel gesture-based interaction in VR. A researcher tracking the user’s hand movements could trigger corresponding virtual events, allowing for rapid iteration on the intuitiveness and comfort of these interactions without the complexities of fully programmed VR controls. Similarly, in AR, a wizard could remotely trigger the appearance and behavior of virtual objects overlaid onto the real world, gathering user feedback on their placement, relevance, and integration with the physical environment.
The Human Factor Remains Central
Despite the rapid advancements in artificial intelligence and immersive technologies, the fundamental principles of human-centered design remain as relevant as ever. Technology should serve human needs and enhance human capabilities.
The WOZ method inherently focuses on understanding user reactions and behaviors and acts as a crucial anchor in ensuring that technological progress aligns with human values and expectations.
It allows us to inject the “human factor” into the design process of even the most advanced technologies. Doing this may help ensure these innovations are not only technically feasible but also truly usable, desirable, and beneficial.
The WOZ method stands as a powerful and versatile tool in the UX researcher’s toolkit. The WOZ method’s ability to bypass limitations of early-stage development and directly elicit user feedback on conceptual experiences offers invaluable advantages. We’ve explored its core mechanics and covered ways of maximizing its impact. We’ve also examined its practical application through real-world case studies, including its crucial role in understanding user interaction with nascent technologies like agentic AI.
The strategic implementation of the WOZ method provides a potent means of de-risking product development. By validating assumptions, uncovering unexpected user behaviors, and identifying potential usability challenges early on, teams can avoid costly rework and build products that truly resonate with their intended audience.
I encourage all UX practitioners, digital product managers, and those who collaborate with research teams to consider incorporating the WOZ method into their research toolkit. Experiment with its application in diverse scenarios, adapt its techniques to your specific needs, and don’t be afraid to have fun with it. Scarecrow costume optional.
Design Guidelines For Better Notifications UX
By Vitaly Friedman (2025-07-07)
In many products, setting notification channels on mute is a default rather than an exception. The reason is the sheer frequency of notifications, which creates disruptions and, eventually, notification fatigue, where any popping message gets dismissed instantly. In usability testing, frequency is the most common complaint, yet every app desperately tries to capture a glimpse of our attention by sending ever more notifications our way. Let’s see how we could make the notifications UX slightly better.
This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Jump to table of contents.
Notifications are distractions by nature; they bring a user’s attention to a (potentially) significant event they aren’t aware of or might want to be reminded of. As such, they can be very helpful and relevant, providing assistance and bringing structure and order to the daily routine. Until they are not.
Not every communication option is a notification. As Kim Salazar rightfully noted,
“Status communication often relies on validation, status indicators, and notifications. While they are often considered to be similar, they are actually quite different.”
In general, notifications can be either informational (calendar reminders, delay notifications, election night results) or encourage action (approve payment, install an update, confirm a friend request). They can stem from various sources and have varying impact.
But we don’t pay the same amount of attention to every notification. It can take users weeks to finally install a software update prompted by an OS notification, but just a few hours to confirm or decline a new LinkedIn request.
The level of attention users grant to notifications depends on their nature, or, more specifically, how and when notifications are triggered. People care more about new messages from close friends and relatives, bank transactions and important alerts, calendar notifications, and any actionable and awaited confirmations or releases.
People care less about news updates, social feed updates, announcements, new features, crash reports, promotional and automated messages in general. Most importantly, a message from another human being is always valued much higher than any automated notification.
As Sara Vilas suggests, we can break down notification design across three levels of severity: high, medium, and low attention. And then, notification types need to be further defined by specific attributes on those three levels, whether they are alerts, warnings, confirmations, errors, success messages, or status indicators.
Taking it one step further, we can map attention against the type of messaging we are providing, very similar to Zendesk’s tone mapping, which plots impact against the type of messaging and shows how the tone should adjust: becoming more humble, real, distilled, or charming.
So, notifications can be different, and different notifications are perceived differently; however, the more personal, relevant, and timely notifications are, the higher engagement we should expect.
It’s not uncommon to sign up, only to realize a few moments later that the inbox is filling up with all kinds of irrelevant messages. That’s exactly the wrong thing to do. A study by Facebook showed that sending fewer notifications improved user satisfaction and long-term usage of a product.
Initially, once the notification rate was reduced, there was indeed a loss of traffic, but it “gradually recovered over time”; after an extended period, it had fully recovered and even turned into a gain.
A good starting point is to set up a slow default notification frequency for different types of customers. As the customer keeps using the interface, we could ask them to decide on the kind of notifications they’d prefer and their frequency.
Send notifications slowly at first, and over time increase or decrease their number per type of customer. This might work much better for our retention rates.
Typically, users can opt in and opt out of every single type of notification in their settings. In general, it’s a good idea, but it can also be very overwhelming — and not necessarily clear how important each notification is. Alternatively, we could provide predefined recommended options, perhaps with a “calm mode” (low frequency), a “regular mode” (medium frequency), and a “power-user mode” (high frequency).
As time passes, the format of notifications might need adjustments as well. Rather than having notifications sent one by one as events occur, users could choose a “summary mode,” with all notifications grouped into a single standalone message delivered at a particular time each day or every week.
That’s one of the settings that Slack provides when it comes to notifications; in fact, the system adapts the frequency of notifications over time, too. Initially, as Slack channels can be quite silent, the system sends notifications for every posted message.
As activities become more frequent, Slack recommends reducing the notification level so the user will be notified only when they are actually mentioned.
We could also include frequency options in our onboarding design. A while back, Basecamp, for example, introduced “Always On” and “Work Can Wait” options as part of its onboarding, so new customers can choose whether they wish to receive notifications as they occur (at any time) or pick specific time ranges and days when notifications can be sent.
Or, the other way around, we could ask users when they don’t want to be disturbed, and suspend notifications at that time. Not every customer wants to receive work-related notifications outside of business hours or on the weekend, even if their colleagues might be working extra hours on Friday night on the other side of the planet.
A user’s context changes continuously. If you notice an unusual drop in engagement rate, or if you’re anticipating an unusually high volume of notifications (a birthday, a wedding anniversary, or election night, perhaps), consider providing an option to mute, snooze, or pause notifications, perhaps for the next 24 hours.
This might go very much against our intuition, as we might want to re-engage the customer if they’ve gone silent all of a sudden, or we might want to maximize their engagement when important events are happening. However, it’s easy to reach a point when a seemingly harmless notification will steer a customer away, long term.
Another option would be to suggest a change of medium used to consume notifications. Users tend to associate different levels of urgency with different channels of communication.
In-app notifications, push notifications, and text messages are considered to be much more intrusive than good ol’ email, so when frequency exceeds a certain threshold, you might want to nudge users towards a switch from push notifications to daily email summaries.
As always in design, timing matters, and so do timely notifications. Start slowly, and evolve your notification frequency depending on how exactly a user actually uses the product. For every type of user, set up notification profiles: frequent users, infrequent users, one-week-experience users, one-month-experience users, and so on.
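As a rough sketch, such profiles could be expressed as plain configuration data that the notification service consults before sending anything. The profile names, thresholds, and channels below are purely illustrative, not a prescribed schema:

// Hypothetical notification profiles: a frequency cap and a preferred
// channel per user type.
const notificationProfiles = {
  "one-week-experience":  { maxPushPerDay: 2,  channel: "push" },
  "one-month-experience": { maxPushPerDay: 5,  channel: "push" },
  "infrequent":           { maxPushPerDay: 1,  channel: "email" },
  "frequent":             { maxPushPerDay: 10, channel: "push" },
};

// When push frequency exceeds the profile's cap, nudge the user towards
// a daily email digest instead of sending yet another push.
function pickChannel(profileName, pushesSentToday) {
  const profile = notificationProfiles[profileName];
  if (profile.channel === "push" && pushesSentToday >= profile.maxPushPerDay) {
    return "email-digest";
  }
  return profile.channel;
}

console.log(pickChannel("frequent", 12));  // "email-digest"
console.log(pickChannel("infrequent", 0)); // "email"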
And whenever possible, allow your users to snooze and mute notifications for a while. Eventually, you might even want to suggest a change in the medium used to consume notifications. And when in doubt, postpone, rather than sending through.
You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.
Turning User Research Into Real Organizational Change
By Paul Boag (2025-07-01)
This article is sponsored by Lyssna
We’ve all been there: you pour your heart and soul into conducting meticulous user research. You gather insightful data, create detailed reports, and confidently deliver your findings. Yet, months later, little has changed. Your research sits idle on someone’s desk, gathering digital dust. It feels frustrating, like carefully preparing a fantastic meal, only to have it left uneaten.
There are so many useful tools (like Lyssna) to help us run incredible user research, and articles about how to get the most from them. However, there’s much less guidance about ensuring our user research gets adopted and brings about real change. So, in this post, I want to answer a simple question: how can you make sure your user research truly transforms your organization?
User research is only as valuable as the impact it has.
When research insights fail to make their way into decisions, teams miss out on opportunities to improve products, experiences, and ultimately, business results. In this post, we’ll look at the barriers that keep research from driving decisions and the practical ways to overcome them.
By covering each of these areas, you’ll have a clear roadmap for turning your hard-won research into genuine action.
If you’ve ever felt your research get stuck, it probably came down to one (or more) of these issues.
When findings aren’t tied to business objectives or ROI, they struggle to gain traction. Sharing a particular hurdle that users face will fall on deaf ears if stakeholders cannot see how that problem will impact their bottom line.
Research arriving too late is another hurdle. If you share insights after key decisions are made, stakeholders assume your input won’t change anything. Finally, research often competes with other priorities. Teams might have limited resources and focus on urgent deadlines rather than long-term user improvements.
Even brilliant research can get lost in translation if it’s buried in dense reports. I’ve seen stakeholders glaze over when handed 30-page documents full of jargon. When key takeaways aren’t crystal clear, decision-makers can’t quickly act on your findings.
Organizational silos can make communication worse. Marketing might have valuable insights that product managers never see, or designers may share findings that customer support doesn’t know how to use. Without a way to bridge those gaps, research lives in a vacuum.
Great insights require a champion. Without a clear owner, research often lives with the person who ran it, and no one else feels responsible. Stakeholder skepticism also plays a role. Some teams doubt the methods or worry the findings don’t apply to real customers.
Even if there is momentum, insufficient follow-up or progress tracking can stall things. I’ve heard teams say, “We started down that path but ran out of time.” Without regular check-ins, good ideas fade away.
Legal, compliance, or tech constraints can limit what you propose. I once suggested a redesign to comply with new accessibility standards, but the existing technical stack couldn’t support it. Resistance due to established culture is also common. If a company’s used to launching fast and iterating later, they might see research-driven change as slowing them down.
Now that we understand what stands in the way of effective research implementation, let’s explore practical solutions to overcome these challenges and drive real organizational change.
When research ties directly to business goals, it becomes impossible to ignore. Here’s how to do it.
Invite key decision-makers into the research planning phase. I like to host a kickoff session where we map research objectives to specific KPIs, like increasing conversions by 10% or reducing support tickets by 20%. When your stakeholders help shape those objectives, they’re more invested in the results.
While UX designers often focus on user metrics like satisfaction scores or task completion rates, it’s crucial to connect our research to business outcomes that matter to stakeholders. Start by identifying the key business metrics that will demonstrate the value of your research, such as conversion rates or support ticket volume.
When presenting user research to groups, it’s easy to fall into the trap of delivering a one-size-fits-all message that fails to truly resonate with anyone. Instead, we need to carefully consider how different stakeholders will receive and act on our findings.
The real power of user research emerges when we can connect our insights directly to what matters most for each specific audience.
Stakeholders want to see real numbers. Develop simple templates to estimate potential cost savings or revenue gains. For example, if you uncover a usability issue that’s causing a 5% drop-off in the signup flow, translate that into lost revenue per month: say 10,000 people enter the flow monthly and a completed signup is worth $30 on average, then that drop-off costs roughly 500 signups, or $15,000, every month.
I also recommend documenting success stories from similar projects within your own organization or from case studies. When a stakeholder sees that another company boosted revenue by 15% after addressing a UX flaw, they’re more likely to pay attention.
Integrate research tasks directly into your product roadmap. Schedule user interviews or usability tests just before major feature sprints. That way, findings land at the right moment — when teams are making critical decisions.
It’s essential to maintain consistent communication with strategic teams through regular research review meetings. These sessions provide a dedicated space to discuss new insights and findings. To keep everyone aligned, stakeholders should have access to a shared calendar that clearly marks key research milestones. Using collaborative tools like Trello boards or shared calendars ensures the entire team stays informed about the research plan and progress.
Research doesn’t have to be a massive, months-long effort each time. Build modular research plans that can scale. If you need quick, early feedback, run a five-user usability test rather than a full survey. For deeper analysis, you can add more participants later.
Making research understandable is almost as important as the research itself. Let’s explore how to share insights so they stick.
Condense key findings into a scannable one-pager. No more than a single sheet. Start with a brief summary of the problem, then highlight three to five top takeaways. Use bold headings and visual elements (charts, icons) to draw attention.
Avoid dumping all details at once. Start with a high-level executive summary that anyone can read in 30 seconds. Then, link to a more detailed section for folks who want the full methodology or raw data. This layered approach helps different stakeholders absorb information at their own pace.
Humans are wired to respond to stories. Transform data into a narrative by using journey maps, before/after scenarios, and user stories. For example, illustrate how a user feels at each step of a signup process, then show how proposed changes could improve their experience.
Keep the conversation going. Schedule brief weekly or biweekly “research highlights” emails or meetings. These should be no more than five minutes and focus on one or two new insights. When stakeholders hear snippets of progress regularly, research stays top of mind.
Take research readouts beyond slide decks. Host workshop-style sessions where stakeholders engage with findings hands-on. For instance, break them into small groups to discuss a specific persona and brainstorm solutions. When people physically interact with research (sticky notes, printed journey maps), they internalize it better.
Now that stakeholders understand and value your research, let’s make sure they turn insights into action.
Assign a dedicated owner for each major recommendation. Use a RACI matrix to clarify who’s Responsible, Accountable, Consulted, and Informed. I like to share a simple table listing each initiative, the person driving it, and key milestones.
When everyone knows who’s accountable, progress is more likely.
| Initiative | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|
| Redesign Signup Flow | UX Lead | Product Manager | Engineering, Legal | Marketing, Support |
| Create One-Pager Templates | UX Researcher | Design Director | Stakeholder Team | All Departments |
Break recommendations down into phases rather than delivering everything as one monolithic plan.
Each phase needs clear timelines, success metrics, and resources identified upfront.
Be transparent about your methods. Share your recruitment screeners, interview scripts, and a summary of analysis steps. Offer validation sessions where stakeholders can ask questions about how the data was collected and interpreted. When they understand the process, they trust the findings more.
Even when stakeholders agree, they need help executing. Establish mentorship or buddy programs where experienced researchers or designers guide implementation. Develop training materials, like short “how-to” guides on running usability tests or interpreting survey data. Set up feedback channels (Slack channels, shared docs) where teams can ask questions or share roadblocks.
Establish regular progress reviews weekly or biweekly. Use dashboards to track metrics such as A/B test performance, error rates, or user satisfaction scores. Even a more complicated dashboard can be built using no-code tools and AI, so you no longer need to rely on developer support.
Even the best strategic plans and communication tactics can stumble if policies and culture aren’t supportive. Here’s how to address systemic barriers.
First, audit existing policies for anything that blocks research-driven changes. Maybe your data security policy requires months of legal review before you can recruit participants. Document those barriers and work with legal or compliance teams to create flexible guidelines. Develop a process for policy exception requests — so if you need a faster path for a small study, you know how to get approval without massive delays.
Technology can be a silent killer of good ideas. Before proposing changes, work with IT to understand current limitations. Document technical requirements clearly so teams know what’s feasible. Propose a phased approach to any necessary infrastructure updates. Start with small changes that have an immediate impact, then plan for larger upgrades over time.
Culture shift doesn’t happen overnight. Share quick wins and success stories from early adopters in your organization. Recognize and reward change pioneers. Send a team-wide shout-out when someone successfully implements a research-driven improvement. Create a champions network across departments, so each area has at least one advocate who can spread best practices and encourage others.
Change management is about clear, consistent communication. Develop tailored communication plans for different stakeholder groups. For example, executives might get a one-page impact summary, while developers get technical documentation and staging environments to test new designs. Establish feedback channels so teams can voice concerns or suggestions. Finally, provide change management training for team leaders so they can guide their direct reports through transitions.
Culture can be hard to quantify, but simple pulse surveys go a long way. Ask employees how they feel about recent changes and whether they are more confident using data to make decisions. Track employee engagement metrics like survey participation or forum activity in research channels. Monitor resistance patterns (e.g., repeated delays or rejections) and address the root causes proactively.
Transforming user research into organizational change requires a holistic approach. What matters most is tying research to business goals, communicating it clearly, assigning ownership for action, and addressing the policies and culture that stand in the way.
When you bring all of these elements together, research stops being an isolated exercise and becomes a driving force for real, measurable improvements. Keep in mind:
This is an iterative, ongoing process. Each success builds trust and opens doors for more ambitious research efforts. Be patient, stay persistent, and keep adapting. When your organization sees research as a core driver of decisions, you’ll know you’ve truly succeeded.
Can Good UX Protect Older Users From Digital Scams? Can Good UX Protect Older Users From Digital Scams? Carrie Webster 2025-06-25T12:00:00+00:00 2025-06-25T15:04:30+00:00 A few years ago, my mum, who is in her 80s and not tech-savvy, almost got scammed. She received an email from what […]
Accessibility
2025-06-25T12:00:00+00:00
2025-06-25T15:04:30+00:00
A few years ago, my mum, who is in her 80s and not tech-savvy, almost got scammed. She received an email from what appeared to be her bank. It looked convincing, with a professional logo, clean formatting, and no obvious typos. The message said there was a suspicious charge on her account and presented a link asking her to “verify immediately.”
She wasn’t sure what to do. So she called me.
That hesitation saved her. The email was fake, and if she’d clicked on the link, she would’ve landed on a counterfeit login page, handing over her password details without knowing it.
That incident shook me. I design digital experiences for a living. And yet, someone I love almost got caught simply because a bad actor knew how to design well. That raised a question I haven’t stopped thinking about since: Can good UX protect people from online scams?
Quite apart from this incident, I see my mum struggle with most apps on her phone. For example, navigating around her WhatsApp and YouTube apps seems to be very awkward for her. She is not used to accessing the standard app navigation at the bottom of the screen. What’s “intuitive” for many users is simply not understood by older, non-tech users.
Online scams are becoming increasingly sophisticated, leveraging advanced technologies like artificial intelligence and deepfake videos to create more convincing yet fraudulent content. Scammers are also exploiting new digital platforms, including social media and messaging apps, to reach victims more directly and personally.
Phishing schemes have become more targeted, often using personal information taken from social media to craft customised attacks. Additionally, scammers are using crypto schemes and fake investment opportunities to lure those seeking quick financial gains, making online scams more convincing, diverse, and harder to detect.
In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.
In Australia, the ACCC’s 2023 “Targeting Scams” report revealed that Australians aged 65 and over were the only age group to experience an increase in scam losses compared to the previous year. Their losses rose by 13.3% to $120 million, often following contact with scammers on social media platforms.
In the UK, nearly three in five (61%) people aged over 65 have been the target of fraud or a scam. On average, older people who have been scammed have lost nearly £4,000 each.
According to global consumer protection agencies, people over 60 are more likely to lose money to online scams than any other group. That’s a glaring sign: we need to rethink how we’re designing experiences for them.
Older users are disproportionately targeted by scammers for several reasons:
Scammers exploit trust. They impersonate banks, government agencies, health providers, and even family members. The one that scares me the most is the ability to use AI to mimic a loved one’s voice — anyone can be tricked by this.
Imagine navigating a confusing mobile app after a long day. Now imagine you’re in your 70s or 80s; your eyesight isn’t as sharp, your finger tapping isn’t as accurate, and every new screen feels like a puzzle.
As people age, they may experience slower processing speeds, reduced working memory, and lower tolerance for complexity. That means longer flows, denser screens, and unfamiliar patterns all take more effort to process safely.
Decision fatigue hits harder, too. If a user has already made five choices on an app, they may click the 6th button without fully understanding what it does, especially if it seems to be part of the flow.
Scammers rely on these factors. Good UX, however, can help counter them.
There’s a big difference between someone who grew up with the internet and someone who started using it in their 60s. Older users often struggle with conventions that digital natives take for granted, like the standard navigation patterns baked into modern apps.
They may also be more likely to blame themselves when something goes wrong, leading to underreporting and repeat victimization.
Design can help to bridge some of that gap. But only if we build with their experience in mind.
As UX designers, we focus on making things easy, intuitive, and accessible. But we can also shape how people understand risk.
Every choice, from wording to layout to colour, can affect how users interpret safety cues. When we design for the right cues, we help users avoid mistakes. When we get them wrong or ignore them altogether, we leave people vulnerable.
The good news? We have tools. We have influence. And in a world where digital scams are rising, we can use both to design for protection, not just productivity.
Below are some UX design improvements that we can consider as designers.
Let’s be realistic: UX isn’t magic. We can’t stop phishing emails from landing in someone’s inbox. We can’t rewrite bad policies, and we can’t always prevent users from clicking on a well-disguised trap.
I personally think that even good UX may be limited in helping people like my mother, who will never be tech-savvy. To help those like her, we ultimately need additional elements: support contact numbers, face-to-face courses on how to stay safe on your phone, and, of course, help from family members as required. These are all human contact touch points, which can never be replaced by any kind of digital or AI support.
What we can do as designers is build systems that make hesitation feel natural. We can provide visual clarity, reduce ambiguity, and inject small moments of friction that nudge users to double-check before proceeding, especially in financial and banking apps and websites.
That hesitation might be the safeguard we need.
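For illustration, here is a minimal sketch of what one of those moments of friction could look like in code, assuming a hypothetical money transfer form with an id of transfer-form (the id and wording are examples, not from any real product):

// A minimal sketch, assuming a form with id="transfer-form" exists on the page
const transferForm = document.querySelector('#transfer-form');

transferForm.addEventListener('submit', (event) => {
  // One deliberate pause before the money moves
  const proceed = window.confirm(
    'You are about to send money to a payee you have not paid before. Continue?'
  );

  if (!proceed) {
    event.preventDefault(); // Stop the submission so the user can double-check
  }
});

A real product would likely use an accessible, custom-designed dialog rather than window.confirm, but the principle is the same: one small, intentional pause before an irreversible action.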
Scammers often pose as trusted entities like banks, government agencies, or tech support to trick individuals into revealing personal information. Avoid clicking on links or downloading attachments from unknown sources, and never share personal details like your Medicare number, passwords, or banking information unless you’ve verified the request independently.
Create complex passwords that combine letters, numbers, and symbols, and avoid reusing passwords across different accounts. Whenever possible, enable two-factor authentication (2FA) to add an extra layer of security to your online accounts.
Educate yourself on prevalent scams targeting seniors, such as phishing emails, romance scams, tech support fraud, and investment schemes. Regularly consult trusted resources like the NCOA and Age UK for updates on new scam tactics and prevention strategies.
If you receive a request for money or personal information, especially if it’s urgent, take a moment to verify its legitimacy. Contact the organization directly using official contact information, not the details provided in the suspicious message. Be particularly cautious with unexpected requests from supposed family members or friends.
If you believe you’ve encountered a scam, report it to the appropriate authorities. Reporting helps protect others and contributes to broader efforts to combat fraud.
For more comprehensive information and resources, consult trusted organizations like the NCOA and Age UK mentioned above.
I recall my mother not recognising a transaction in her banking app, and she thought that money was being taken from her account. It turns out that it was a legitimate transaction made in a local cafe, but the head office was located in a suburb she was not familiar with, which caused her to think it was fraudulent.
This kind of scenario could easily be addressed with a feature I have seen in the ING banking app (International Netherlands Group), where you tap on a transaction to view more information about it.
These interventions are not aimed at stopping users, but they can give them one last chance to rethink their transactions. That’s powerful.
Finally, here’s an example of clear UX cues that streamline the experience and guide users through their journey with greater confidence and clarity.
Added security features in banking apps, like the examples above, aren’t just about preventing fraud; they’re examples of thoughtful UX design. These features are built to feel natural, not burdensome, helping users stay safe without getting overwhelmed. As UX professionals, we have a responsibility to design with protection in mind, anticipating threats and creating experiences that guide users away from risky actions. Good UX in financial products isn’t just seamless; it’s about security by design.
And in a world where digital deception is on the rise, protection is usability. Designers have the power and the responsibility to make interfaces that support safer choices, especially for older users, whose lives and life savings may depend on a single click.
Let’s stop thinking of security as a backend concern or someone else’s job. Let’s design systems that are scam-resistant, age-inclusive, and intentionally clear. And don’t forget to reach out with the additional human touch to help your older family members.
When it comes down to it, good UX isn’t just helpful — it can be life-changing.
Meet Accessible UX Research, A Brand-New Smashing Book Meet Accessible UX Research, A Brand-New Smashing Book Vitaly Friedman 2025-06-20T16:00:00+00:00 2025-06-25T15:04:30+00:00 UX research can take so much of the guesswork out of the design process! But it’s easy to forget just how different people are and […]
Accessibility
2025-06-20T16:00:00+00:00
2025-06-25T15:04:30+00:00
UX research can take so much of the guesswork out of the design process! But it’s easy to forget just how different people are and how their needs and preferences can vary. We can’t predict the needs of every user, but we also shouldn’t expect different people to use the product in roughly the same way. That’s how we end up with an incomplete, inaccurate, or simply wrong picture of our customers.
There is no shortage of accessibility checklists and guidelines. But accessibility isn’t a checklist. It doesn’t happen by accident. It’s a dedicated effort to include, consider, and understand the different needs of different users to make sure everyone can use our products successfully. That’s why we’ve teamed up with Michele A. Williams on a shiny new book around just that.
Meet Accessible UX Research, your guide to making UX research more inclusive of participants with different needs — from planning and recruiting to facilitation, asking better questions, avoiding bias, and building trust. Pre-order the book.
The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.
Inside, you’ll learn how to make your research inclusive at every stage — from planning and recruiting to facilitation, asking better questions, avoiding bias, and building trust.
The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.
High-quality hardcover. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print shipping in August 2025. eBook available for download later this summer. Pre-order the book.
Whether you’re a UX professional who conducts research in industry or academia, or more broadly part of an engineering, product, or design function, you’ll want to read this book if…
Dr. Michele A. Williams is the owner of M.A.W. Consulting, LLC – Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.
“Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.
This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”
Eric Bailey, Accessibility Advocate
“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”
Devon Pershing, Author of The Accessibility Operations Guidebook
“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”
Manuel Matuzović, Author of the Web Accessibility Cookbook
“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”
Anna E. Cook, Accessibility and Inclusive Design Specialist
Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members as soon as it’s out. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! 😉
Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.
In the past few years, we’ve been very lucky to work with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Addy, Heather, and Steven are three of these people. Have you checked out their books already?
A deep dive into how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale.
Everything you need to know to put your users first and make a better web.
Learn how touchscreen devices really work — and how people really use them.
What I Wish Someone Told Me When I Was Getting Into ARIA What I Wish Someone Told Me When I Was Getting Into ARIA Eric Bailey 2025-06-16T13:00:00+00:00 2025-06-25T15:04:30+00:00 If you haven’t encountered ARIA before, great! It’s a chance to learn something new and exciting. If […]
Accessibility
2025-06-16T13:00:00+00:00
2025-06-25T15:04:30+00:00
If you haven’t encountered ARIA before, great! It’s a chance to learn something new and exciting. If you have heard of ARIA before, this might help you better understand it or maybe even teach you something new!
These are all things I wish someone had told me when I was getting started on my web accessibility journey. This post will explain what ARIA is, where it comes from, and what to watch out for when you use it.
It is my hope that in doing so, this post will help make an oft-overlooked yet vital corner of web design and development easier to approach.
This is not a recipe book for how to use ARIA to build accessible websites and web apps. It is also not a guide for how to remediate an inaccessible experience. A lot of accessibility work is highly contextual. I do not know the specific needs of your project or organization, so trying to give advice here could easily do more harm than good.
Instead, think of this post as a “know before you go” guide. I’m hoping to give you a good headspace to approach ARIA, as well as highlight things to watch out for when you undertake your journey. So, with that out of the way, let’s dive in!
ARIA is what you turn to if there is not a native HTML element or attribute that is better suited for the job of communicating interactivity, purpose, and state.
Think of it like a spice that you sprinkle into your markup to enhance things.
Adding ARIA to your HTML markup is a way of providing additional information to a website or web app for screen readers and voice control software.
Here is an illustration to help communicate what I mean by this:
A button element will instruct assistive technology to report it as a button, letting someone know that it can be activated to perform a predefined action. aria-pressed="true" means that someone or something has previously activated the button, and it is now in a “pushed in” state that sustains its action. This overall pattern will let people who use assistive technology know what the control is, what it does, and what state it is currently in.
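As a rough sketch, that pattern could look like this in markup (the “Mute” label is just an example):

<!-- Typically announced along the lines of “Mute, toggle button, pressed” -->
<button type="button" aria-pressed="true">
  Mute
</button>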
ARIA has been around for a long time, with the first version published on September 26th, 2006.
ARIA was created to provide a bridge between the limitations of HTML and the need for making interactive experiences understandable by assistive technology.
The latest version of ARIA is version 1.2, published on June 6th, 2023. Version 1.3 is slated to be released relatively soon, and you can read more about it in this excellent article by Craig Abbott.
You may also see it referred to as WAI-ARIA, where WAI stands for “Web Accessibility Initiative.” The WAI is part of the W3C, the organization that sets standards for the web. That said, most accessibility practitioners I know call it “ARIA” in written and verbal communication and leave out the “WAI-” part.
The reason this history matters is simple: the web was a lot less mature in 2006 than it is now. The most popular operating system at the time was Windows XP. The iPhone didn’t exist yet; it was released a year later.
From a very high level, ARIA is a snapshot of the operating system interaction paradigms of this time period. This is because ARIA recreates them.
Smartphones with features like tappable, swipeable, and draggable surfaces were far less commonplace. Single Page Application “web app” experiences were also rare, with Ajax-based approaches being the most popular. This means that we have to build the experiences of today using the technology of 2006. In a way, this is a good thing. It forces us to take new and novel experiences and interrogate them.
Interactions that cannot be broken down into smaller, more focused pieces that map to ARIA patterns are most likely inaccessible. This is because they won’t be able to be operated by assistive technology or function on older or less popular devices.
I may be biased, but I also think these sorts of novel interactions that can’t translate also serve as a warning that a general audience will find them to be confusing and, therefore, unusable. This belief is important to consider given that the internet serves:
Contemporary expectations for keyboard-based interaction for web content — checkboxes, radios, modals, accordions, and so on — are sourced from Windows XP and its predecessor operating systems. These interaction models are carried forward as muscle memory for older people who use assistive technology. Younger people who rely on assistive technology also learn these de facto standards, thus continuing the cycle.
What does this mean for you? Someone using a keyboard to interact with your website or web app will most likely try these Windows OS-based keyboard shortcuts first. This means things like pressing Tab to move focus between interactive elements, Enter to activate links and buttons, Space to press buttons and toggle checkboxes, and the arrow keys to move within composite widgets such as radio groups.
This is not to say that ARIA has stagnated. It is constantly being worked on with new additions, removals, and clarifications. Remember, it is now at version 1.2, with version 1.3 arriving soon.
In parallel, HTML as a language also reflects this evolution. Elements were originally created to support a document-oriented web and have been gradually evolving to support more dynamic, app-like experiences. The great bit here is that this is all conducted in the open and is something you can contribute to if you feel motivated to do so.
There are five rules included in ARIA’s documentation to help steer how you approach it:

1. If you can use a native HTML element or attribute with the semantics and behavior you need already built in, do so — for example, use an anchor element (<a>) for a link rather than a div with a click handler and a role of link.
2. Do not change native semantics, unless you really have to.
3. All interactive ARIA controls must be usable with the keyboard.
4. Do not use role="presentation" or aria-hidden="true" on a focusable element.
5. All interactive elements must have an accessible name — for example, the text inside a button element.

Observing these five rules will do a lot to help you out. The following is more context to provide even more support.
There is a structured grammar to ARIA, and it is centered around roles, as well as states and properties.
A Role is what assistive technology reads and then announces. A lot of people refer to this in shorthand as semantics. HTML elements have implied roles, which is why an anchor element will be announced as a link by screen readers with no additional work.
Implied roles are almost always better to use if the use case calls for them. Recall the first rule of ARIA here. This is usually what digital accessibility practitioners refer to when they say, “Just use semantic HTML.”
There are many reasons for favoring implied roles. The main consideration is better guarantees of support across an unknown number of operating systems, browsers, and assistive technology combinations.
Roles have categories, each with its own purpose. The Abstract role category is notable in that it is an organizing supercategory not intended to be used by authors:
Abstract roles are used for the ontology. Authors MUST NOT use abstract roles in content.
<!-- This won't work, don't do it -->
<h2 role="sectionhead">
Anatomy and physiology
</h2>
<!-- Do this instead -->
<section aria-labelledby="anatomy-and-physiology">
<h2 id="anatomy-and-physiology">
Anatomy and physiology
</h2>
</section>
Additionally, in the same way that you can only declare ARIA on certain things, you can only declare some ARIA as children of other ARIA declarations. An example of this is the listitem role, which requires a role of list to be present on its parent element.
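For example, here is a sketch of that requirement in practice:

<!-- role="listitem" only makes sense because an ancestor role="list" is present -->
<div role="list">
  <div role="listitem">First item</div>
  <div role="listitem">Second item</div>
</div>

Per the first rule of ARIA, a ul with li elements would give you all of this for free; the ARIA version is shown purely to illustrate the parent requirement.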
So, what’s the best way to determine if a role requires a parent declaration? The answer is to review the official definition.
States and properties are the other two main parts of ARIA’s overall taxonomy.
Implicit roles are provided by semantic HTML, and explicit roles are provided by ARIA. Both describe what an element is. States describe that element’s characteristics in a way that assistive technology can understand. This is done via property declarations and their companion values.
ARIA states can change quickly or slowly, both as a result of human interaction as well as application state. When the state is changed as a result of human interaction, it is considered an “unmanaged state.” Here, a developer must supply the underlying JavaScript logic to control the interaction.
When the state changes as a result of the application (e.g., operating system, web browser, and so on), this is considered “managed state.” Here, the application automatically supplies the underlying logic.
Think of ARIA as an extension of HTML attributes, a suite of name/value pairs. Some values are predefined, while others are author-supplied. For example, the polite value for aria-live is one of the three predefined values (off, polite, and assertive). For aria-label, “Save” is a text string manually supplied by the author.
You declare ARIA on HTML elements the same way you declare other attributes:
<!--
Applies an id value of
"carrot" to the div
-->
<div id="carrot"></div>
<!--
Hides the content of this paragraph
element from assistive technology
-->
<p aria-hidden="true">
Assistive technology can't read this
</p>
<!--
Provides an accessible name of "Stop",
and also communicates that the button
is currently pressed. A type property
with a value of "button" prevents
browser form submission.
-->
<button
aria-label="Stop"
aria-pressed="true"
type="button">
<!-- SVG icon -->
</button>
Other usage notes: ARIA declarations can be placed before or after other attributes, such as class or id. The order of declarations does not matter here, either.
. The order of declarations does not matter here, either.It might also be helpful to know that boolean attributes are treated a little differently in ARIA when compared to HTML. Hidde de Vries writes about this in his post, “Boolean attributes in HTML and ARIA: what’s the difference?”.
In this context, “hardcoding” means directly writing a static attribute or value declaration into your component, view, or page.
A lot of ARIA is designed to be applied or conditionally modified dynamically based on application state or as a response to someone’s action. An example of this is a show-and-hide disclosure pattern. Its aria-expanded attribute is toggled from false to true to communicate if the disclosure is in an expanded or collapsed state, and its hidden attribute is conditionally removed or added in tandem to show or hide the disclosure’s full content area.

<div class="disclosure-container">
<button
aria-expanded="false"
class="disclosure-toggle"
type="button">
How we protect your personal information
</button>
<div
hidden
class="disclosure-content">
<ul>
<li>Fast, accurate, thorough and non-stop protection from cyber attacks</li>
<li>Patching practices that address vulnerabilities that attackers try to exploit</li>
<li>Data loss prevention practices help to ensure data doesn't fall into the wrong hands</li>
<li>Supply risk management practices help ensure our suppliers adhere to our expectations</li>
</ul>
<p>
<a href="/security/">Learn more about our security best practices</a>.
</p>
</div>
</div>
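For illustration, here is a minimal sketch of the JavaScript that could power this markup, using the class names from the example above:

const toggle = document.querySelector('.disclosure-toggle');
const content = document.querySelector('.disclosure-content');

toggle.addEventListener('click', () => {
  // Read the current state from the attribute...
  const isExpanded = toggle.getAttribute('aria-expanded') === 'true';

  // ...then flip both the announcement and the visibility in tandem
  toggle.setAttribute('aria-expanded', String(!isExpanded));
  content.hidden = isExpanded;
});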
A common example of a hardcoded ARIA declaration you’ll encounter on the web is making an SVG icon inside a button decorative:
<button type="button">
<svg aria-hidden="true">
<!-- SVG code -->
</svg>
Save
</button>
Here, the string “Save” is what is required for someone to understand what the button will do when they activate it. The accompanying icon helps that understanding visually but is considered redundant and therefore decorative.
An implied role is all you need if you’re using semantic HTML. Explicitly declaring its role via ARIA does not confer any additional advantages.
<!--
You don't need to declare role="button" here.
Using the <button> element will make assistive
technology announce it as a button. The
role="button" declaration is redundant.
-->
<button role="button">
Save
</button>
You might occasionally run into these redundant declarations on HTML sectioning elements, such as <main role="main"> or <footer role="contentinfo">. This isn’t needed anymore, and you can just use the <main> or <footer> elements.
The reason for this is historic. These declarations were done for support reasons, in that it was a stop-gap technique for assistive technology that needed to be updated to support these new-at-the-time HTML elements.
Contemporary assistive technology does not need these redundant declarations. Think of it the same way that we don’t have to use vendor prefixes for the CSS border-radius property anymore.
Note: There is an exception to this guidance. There are circumstances where certain complex and complicated markup patterns don’t work as expected for assistive technology. In these cases, we want to hardcode the implicit role as explicit ARIA to ensure it works. This assistive technology support concern is covered in more detail later in this post.
Both implicit and explicit roles are announced by screen readers, so you don’t need to repeat the role’s name in things like the interactive element’s text string or an aria-label.
<!-- Don't do this -->
<button
aria-label="Save button"
type="button">
<!-- Icon SVG -->
</button>
<!-- Do this instead -->
<button
aria-label="Save"
type="button">
<!-- Icon SVG -->
</button>
Had we used the string value of “Save button” for our Save button, a screen reader would announce it along the lines of, “Save button, button.” That’s redundant and confusing.
We sometimes refer to website and web app navigation colloquially as menus, especially if it’s an e-commerce-style mega menu.
In ARIA, menus mean something very specific. Don’t think of global or in-page navigation or the like. Think of menus in this context as what appears when you click the Edit menu button on your application’s menubar.
Using a role improperly because its name seems like an appropriate fit at first glance creates confusion for people who do not have the context of the visual UI. Their expectations will be set with the announcement of the role, then subverted when it does not act the way it is supposed to.
Imagine if you click on a link, and instead of taking you to another webpage, it sends something completely unrelated to your printer instead. It’s sort of like that.
Declaring role="menu" is a common example of a misapplied role, but there are others. The best way to know what a role is used for? Go straight to the source and read up on it.
Some roles prohibit accessible names. These roles are caption, code, deletion, emphasis, generic, insertion, paragraph, presentation, strong, subscript, and superscript.
This means you can try to provide an accessible name for one of these elements — say, via aria-label — but it won’t work because it’s disallowed by the rules of ARIA’s grammar.
<!-- This won't work -->
<strong aria-label="A 35% discount!">
$39.95
</strong>
<!-- Neither will this -->
<code title="let JavaScript example">
let submitButton = document.querySelector('button[type="submit"]');
</code>
For these examples, recall that the role is implicit, sourced from the declared HTML element.
Note here that sometimes a browser will make an attempt regardless and overwrite the author-specified string value. This overriding is a confusing act for all involved, which led to the rule being established in the first place.
I’ve witnessed some developers guess-adding CSS classes, such as .background-red or .text-white, to their markup and being rewarded if the design visually updates correctly.
The reason this works is that someone previously added those classes to the project. With ARIA, the people who define the vocabulary we can use are the Accessible Rich Internet Applications Working Group. This means each new version of ARIA has a predefined set of properties and values. Assistive technology is then updated to parse those attributes and values, although this isn’t always a guarantee.
Declaring ARIA, which isn’t part of that predefined set, means assistive technology won’t know what it is and consequently won’t announce it.
<!--
There is no "selectpanel" role in ARIA.
Because of this, this code will be announced
as a button and not as a select panel.
-->
<button
role="selectpanel"
type="button">
Choose resources
</button>
This speaks to the previous section, where ARIA won’t understand words spoken to it that exist outside its limited vocabulary.
There are no console errors for malformed ARIA. There’s also no alert dialog, beeping sound, or flashing light for your operating system, browser, or assistive technology. This fact is yet another reason why it is so important to test with actual assistive technology.
You don’t have to be an expert here, either. There is a good chance your code needs updating if you set something to announce as a specific state and assistive technology in its default configuration does not announce that state.
Applying ARIA to something does not automatically “unlock” capabilities. It only sends a hint to assistive technology about how the interactive content should behave.
For assistive technology like screen readers, that hint could be for how to announce something. For assistive technology like refreshable Braille displays, it could be for how it raises and lowers its pins. For example, declaring role="button" on a div element does not automatically make it clickable. You will still need to, at a minimum, listen for click events on the div element in JavaScript. This all makes me wonder why you can’t save yourself some work and use a button element in the first place, but that is a different story for a different day.
on a div
element. However, attempting to declare the alt
or src
attributes on the div
won’t work. This is because alt
and src
are not supported attributes for div
.
This speaks to the previous section on ARIA only exposing something’s presence. Don’t forget that certain HTML elements have primary and secondary interactive capabilities built into them.
For example, an anchor element’s primary capability is navigating to whatever URL value is provided for its href
attribute. Secondary capabilities for an anchor element include copying the URL value, opening it in a new tab or incognito window, and so on.
These secondary capabilities are still preserved. However, it may not be apparent to someone that they can use them — or use them in the way that they’d expect — depending on what is announced.
The opposite is also true. When an element has no capabilities, having its role adjusted does not grant it any new abilities. Remember, ARIA only announces. This is why that div
with a role
of button
assigned to it won’t do anything when clicked if no companion JavaScript logic is also present.
A lot of the previous content may make it seem like ARIA is something you should avoid using altogether. This isn’t true. Know that this guidance is written to help steer you to situations where HTML does not offer the capability to describe an interaction out of the box. This space is where you want to use ARIA.
Knowing how to identify this area requires spending some time learning what HTML elements there are, as well as what they are and are not used for. I quite like HTML5 Doctor’s Element Index for upskilling on this.
This is analogous to how HTML has both global attributes and attributes that can only be used on a per-element basis. For example, aria-describedby can be used on any HTML element or role. However, aria-posinset can only be used with article, comment, listitem, menuitem, option, radio, row, and tab roles. Remember here that these roles can be provided by either HTML or ARIA.
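As a sketch of that pairing, here is aria-posinset (with its companion aria-setsize) declared on listitem roles — a situation that can come up when, say, only part of a long list is rendered at once:

<!-- Only two of fifty items are in the DOM, so position and set size are declared manually -->
<div role="list">
  <div role="listitem" aria-posinset="3" aria-setsize="50">Result 3</div>
  <div role="listitem" aria-posinset="4" aria-setsize="50">Result 4</div>
</div>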
Learning what states require which roles can be achieved by reading the official reference. Check for the “Used in Roles” portion of each entry’s characteristics:
The “Used in Roles” listing for aria-setsize.
Automated code scanners — like axe, WAVE, ARC Toolkit, Pa11y, equal-access, and so on — can catch this sort of thing if they are written in error. I’m a big fan of implementing these sorts of checks as part of a continuous integration strategy, as it makes it a code quality concern shared across the whole team.
Speaking of technology that listens, it is helpful to know that the ARIA you declare instructs the browser to speak to the operating system the browser is installed on. Assistive technology then listens to what the operating system reports. It then communicates that to the person using the computer, tablet, smartphone, and so on.
A person can then instruct assistive technology to request the operating system to take action on the web content displayed in the browser.
This interaction model is by design. It is done to make interaction from assistive technology indistinguishable from interaction performed without assistive technology.
There are a few reasons for this approach. The most important one is it helps preserve the privacy and autonomy of the people who rely on assistive technologies.
This support issue was touched on earlier and is a difficult fact to come to terms with.
Contemporary developers enjoy the hard-fought, hard-won benefits of the web standards movement. This means you can declare HTML and know that it will work with every major browser out there. ARIA does not have this. Each assistive technology vendor has its own interpretation of the ARIA specification. Oftentimes, these interpretations are convergent. Sometimes, they’re not.
Assistive technology vendors also have support roadmaps for their products, and each vendor makes its own decisions about what to support and when.
There is also the operating system layer to contend with, which I’ll cover in more detail in a little bit. Here, the mechanisms used to communicate with assistive technology are dusty, oft-neglected areas of software development.
With these layers comes a scenario where the assistive technology can support the ARIA declared, but the operating system itself cannot communicate the ARIA’s presence, or vice-versa. The reasons for this are varied but ultimately boil down to a historic lack of support, prioritization, and resources. However, I am optimistic that this is changing.
Additionally, there is no equivalent to Caniuse, Baseline, or Web Platform Status for assistive technology. The closest analog we have to support checking resources is a11ysupport.io, but know that it is the painstaking work of a single individual. Its content may not be up-to-date, as the work is both Herculean in its scale and Sisyphean in its scope. Because of this, I must re-stress the importance of manually testing with assistive technology to determine if the ARIA you use works as intended.
How To Determine ARIA Support
There are three main layers to determine if something is supported: the operating system, the assistive technology, and the browser.
Each operating system (e.g., Windows, macOS, Linux) has its own way of communicating what content is present to assistive technology. Each piece of assistive technology has to accommodate how to parse that communication.
Some assistive technology is incompatible with certain operating systems. An example of this is not being able to use VoiceOver with Windows, or JAWS with macOS. Furthermore, each version of each operating system has slight variations in what is reported and how. Sometimes, the operating system needs to be updated to “teach” it the updated ARIA vocabulary. Also, do not forget that things like bugs and regressions can occur.
There is no “one true way” to make assistive technology. Each one is built to address different access needs and wants and is done so in an opinionated way — think how different web browsers have different features and UI.
Each piece of assistive technology that consumes web content has its own way of communicating this information, and this is by design. It works with what the operating system reports, filtered through things like heuristics and preferences.
Like operating systems, assistive technology also has different versions with what each version is capable of supporting. They can also be susceptible to bugs and regressions.
Another two factors worth pointing out here are upgrade hesitancy and lack of financial resources. Some people who rely on assistive technology are hesitant to upgrade it. This is based on a very understandable fear of breaking an important mechanism they use to interact with the world. This, in turn, translates to scenarios like holding off on updates until absolutely necessary, as well as disabling auto-updating functionality altogether.
Lack of financial resources is sometimes referred to as the disability or crip tax. Employment rates tend to be lower for disabled populations, and with that comes less money to spend on acquiring new technology and updating it. This concern can and does apply to operating systems, browsers, and assistive technology.
Some assistive technology works better with one browser compared to another. This is due to the underlying mechanics of how the browser reports its content to assistive technology. Using Firefox with NVDA is an example of this.
Additionally, the support for this reporting sometimes only gets added for newer versions. Unfortunately, it also means support can sometimes accidentally regress, and people don’t notice before releasing the browser update — again, this is due to a historic lack of resources and prioritization.
Common ARIA declarations you’ll come across include, but are not limited to, aria-label, aria-labelledby, aria-describedby, aria-hidden, and aria-live. These are more common because they’re more supported. They are more supported because many of these declarations have been around for a while. Recall the previous section that discussed actual assistive technology support compared to what the ARIA specification supplies.
Newer, more esoteric ARIA, or historically deprioritized declarations, may not have that support yet, or may never get it. An example of how complicated this can get is aria-controls. aria-controls is a part of ARIA that has been around for a while. JAWS had support for aria-controls, but then removed it after user feedback. Meanwhile, every other screen reader I’m aware of never bothered to add support.
What does that mean for us? Determining support, or lack thereof, is best accomplished by manual testing with assistive technology.
This fact takes into consideration the complexities in preferences, different levels of support, bugs, regressions, and other concerns that come with ARIA’s usage.
Philosophically, it’s a lot like adding more interactive complexity to your website or web app via JavaScript. The larger the surface area your code covers, the bigger the chance something unintended happens.
Consider the amount of ARIA added to a component or discrete part of your experience. The more of it there is declared nested into the Document Object Model (DOM), the more it interacts with parent ARIA declarations. This is because assistive technology reads what the DOM exposes to help determine intent.
A lot of contemporary development efforts are isolated, feature-based work that focuses on one small portion of the overall experience. Because of this, they may not take this holistic nesting situation into account. This is another reason why — you guessed it — manual testing is so important.
Anecdotally, WebAIM’s annual Millions report — an accessibility evaluation of the top 1,000,000 websites — touches on this phenomenon:
Increased ARIA usage on pages was associated with higher detected errors. The more ARIA attributes that were present, the more detected accessibility errors could be expected. This does not necessarily mean that ARIA introduced these errors (these pages are more complex), but pages typically had significantly more errors when ARIA was present.
There is a chance that inaccurately authored ARIA will actually function as intended with assistive technology. While I do not recommend betting on this fact to do your work, I do think it is worth mentioning when it comes to things like debugging. This is because the people who author ARIA have a wide range of familiarity with it.
Some of the more mature assistive technology vendors try to accommodate the lower end of this familiarity. This is done in order to better enable the people who use their software to actually get what they need.
There isn’t an exhaustive list of what accommodations each piece of assistive technology has. Think of it like the forgiving nature of a browser’s HTML parser, where the ultimate goal is to render content for humans.
aria-label Is Tricky

aria-label is one of the most common ARIA declarations you’ll run across. It’s also one of the most misused. aria-label can’t be applied to non-interactive HTML elements, but oftentimes is. It can’t always be translated and is oftentimes overlooked for localization efforts. Additionally, it can make things frustrating to operate for people who use voice control software, where the visible label differs from what the underlying code uses.
Another problem is when it overrides an interactive element’s pre-existing accessible name. For example:
<!-- Don't do this -->
<a
aria-label="Our services"
href="/services/">
Services
</a>
This is a violation of WCAG Success Criterion 2.5.3: Label in Name, pure and simple. I have also seen it used as a way to provide a control hint. This is also a WCAG failure, in addition to being an antipattern:
<!-- Also don't do this -->
<a
aria-label="Click this link to learn more about our unique and valuable services"
href="/services/">
Services
</a>
These factors — along with other considerations — are why I consider aria-label a code smell.
aria-live Is Even Trickier

Live region announcements are powered by aria-live and are an important part of communicating updates to an experience to people who use screen readers. Believe me when I say that getting aria-live to work properly is tricky, even under the best of scenarios. I won’t belabor the specifics here. Instead, I’ll point you to “Why are my live regions not working?”, a fantastic and comprehensive article published by TetraLogical.
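Still, to show the basic shape of the pattern, here is a bare-bones sketch (the status id is an example): the live region needs to exist in the DOM before its content is updated.

<!-- Present in the DOM from page load, so assistive technology registers it -->
<div aria-live="polite" id="status"></div>

// Updating its text content later is what triggers the announcement
document.querySelector('#status').textContent = 'Your changes have been saved.';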
Also referred to as the APG, the ARIA Authoring Practices Guide should be treated with a decent amount of caution.
The guide was originally authored to help demonstrate ARIA’s capabilities. As a result, its code examples near-exclusively, overwhelmingly, and disproportionately favor ARIA.
Unfortunately, the APG’s latest redesign also makes it far more approachable-looking than its surrounding W3C documentation. This is coupled with demonstrating UI patterns in a way that signals it’s a self-serve resource whose code can be used out of the box.
These factors create a scenario where people assume everything can be used as presented. This is not true.
Recall that just because ARIA is listed in the spec does not necessarily guarantee it is supported. Adrian Roselli writes about this in detail in his post, “No, APG’s Support Charts Are Not ‘Can I Use’ for ARIA”.
Also, remember the first rule of ARIA and know that an ARIA-first approach is counter to the specification’s core philosophy of use.
In my experience, this has led to developers assuming they can copy-paste code examples, or reference how the examples are structured in their own efforts, and everything will just work. This leads to mass frustration.
This is to say nothing about things like timelines and resourcing, working relationships, reputation, and brand perception.
The APG’s main strength is highlighting what keyboard keypresses people will expect to work on each pattern.
Consider the listbox pattern. It details keypresses you may expect (arrow keys, Space, and Enter), as well as less-common ones (typeahead selection and making multiple selections). Here, we need to remember that ARIA is based on the Windows XP era. The keyboard-based interaction the APG suggests is built from the muscle memory established from the UI patterns used on this operating system.
While your tree view component may look visually different from the one on your operating system, people will expect it to be keyboard operable in the same way. Honoring this expectation will go a long way to ensuring your experiences are not only accessible but also intuitive and efficient to use.
Another strength of the APG is giving standardized, centralized names to UI patterns. Is it a dropdown? A listbox? A combobox? A select menu? Something else?
When it comes to digital accessibility, these terms all have specific meanings, as well as expectations that come with them. Having a common vocabulary when discussing how an experience should work goes a long way to ensuring everyone will be on the same page when it comes time to make and maintain things.
VoiceOver on macOS has been experiencing a lot of problems over the last few years. If I had to hazard a guess as to why, as an outsider, it is that Apple’s priorities are focused elsewhere.
The bulk of web development is conducted on macOS. This means that well-intentioned developers will reach for VoiceOver, as it comes bundled with macOS and is therefore more convenient. However, macOS VoiceOver represents a small minority of desktop and laptop screen reader usage: under 10%, with Windows-based JAWS and NVDA holding a combined 78.2% majority share.
The sad, sorry truth of the matter is that macOS VoiceOver, in its current state, has a lot of problems. It should only be used to confirm that it can operate the experience the way Windows-based screen readers can.
This means testing on Windows with NVDA or JAWS will create an experience that is far more accurate to what most people who use screen readers on a laptop or desktop will experience.
Because of this situation, I heavily encourage a workflow that involves testing with NVDA or JAWS on Windows first, then spot-checking the experience with macOS VoiceOver.
Most of the time, I find myself having to declare redundant ARIA on the semantic HTML I write in order to address missed expected announcements for macOS VoiceOver.
macOS VoiceOver testing is still important to do. It is not the fault of people who rely on macOS VoiceOver that it has problems, and we should ensure they still have access.
You can use apps like VirtualBox and Windows evaluation Virtual Machines to use Windows in your macOS development environment. Services like AssistivLabs also make on-demand, preconfigured testing easy.
What About iOS VoiceOver?
Despite sharing the same name, VoiceOver on iOS is a completely different animal. As software, it is separate from its desktop equivalent and also enjoys a whopping 70.6% usage share.
Knowing this, it’s also important to test the ARIA you write on mobile to make sure it works as intended.
ARIA attributes can be targeted via CSS the way other HTML attributes can. Consider this HTML markup for the main navigation portion of a small e-commerce site:
<nav aria-label="Main">
<ul>
  <li><a href="/home/">Home</a></li>
  <li><a href="/products/">Products</a></li>
  <li><a aria-current="true" href="/about-us/">About Us</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>
</nav>
The presence of aria-current="true" on the “About Us” link tells assistive technology to announce that it is the current part of the site someone is on as they navigate through the main site navigation.
We can also tie that indicator of being the current part of the site into something that is shown visually. Here’s how you can target the attribute in CSS:
nav[aria-label="Main"] [aria-current="true"] {
  border-bottom: 2px solid #ffffff;
}
This is an incredibly powerful way to tie application state to user-facing state. Combine it with modern CSS like :has() and view transitions, and you have the ability to create robust, sophisticated UI with less reliance on JavaScript.
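As a quick, hypothetical sketch of that idea, :has() lets us style the list item that contains the current link, something that previously required JavaScript (the background color here is an arbitrary stand-in):
/* Hypothetical: highlight the parent list item of the current link */
nav[aria-label="Main"] li:has([aria-current="true"]) {
  background-color: rgba(255, 255, 255, 0.1);
}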
Tests are great. They help guarantee that the code you work on will continue to do what you intended it to do.
A lot of web UI-based testing uses the presence of classes (e.g., .is-expanded) or data attributes (e.g., data-expanded) to verify a UI’s existence, position, and states. These kinds of selectors are also far more likely to change over time than semantic code and ARIA declarations.
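For contrast, here is a hypothetical version of that kind of brittle test; the class names are invented, and nothing about them guarantees the element is actually exposed as a button with an accessible name:
// Hypothetical brittle selector: breaks the moment a class is renamed,
// and passes even if the element isn't accessible
const editMenuButton = page.locator('.toolbar .btn.menu-trigger');
await editMenuButton.click();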
This is something my coworker Cam McHenry touches on in his great post, “How I write accessible Playwright tests”. Consider this piece of Playwright code, which checks for the presence of a button that toggles open an edit menu:
// Selects an element with a role of `button`
// that has an accessible name of "Edit"
const editMenuButton = page.getByRole('button', { name: "Edit" });
// Requires the edit button to have a property
// of `aria-haspopup` with a value of `true`
await expect(editMenuButton).toHaveAttribute('aria-haspopup', 'true');
The test selects UI based on outcome rather than appearance. That’s a far more reliable way to target things in the long-term.
This all helps to create a virtuous feedback cycle. It enshrines semantic HTML and ARIA’s presence in your front-end UI code, which helps to guarantee accessible experiences don’t regress. Combine this with styling, and you have a powerful, self-contained system for building robust, accessible experiences.
Web accessibility can be about enabling important things like scheduling medical appointments. It is also about fun things like chatting with your friends. And it’s there for every web experience that lives in between.
Using semantic HTML, supplemented with a judicious application of ARIA, helps you enable these experiences. To sum things up, ARIA comes with real caveats, including the trickiness of aria-label, the ARIA Authoring Practices Guide, and macOS VoiceOver support.
Viewed one way, ARIA is arcane, full of misconceptions, and fraught with potential missteps. Viewed another, ARIA is a beautiful and elegant way to programmatically communicate the interactivity and state of a user interface.
I choose the second view. At the end of the day, using ARIA helps to ensure that disabled people can use a web experience the same way everyone else can.
Thank you to Adrian Roselli and Jan Maarten for their feedback.
Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS
Blake Lundquist
2025-06-11
I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.
In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect
method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.
Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active.
See the Pen [Moving Highlight Navbar Starting Markup [forked]](https://codepen.io/smashingmag/pen/EajQyBW) by Blake Lundquist.
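If you would rather not open the Pen, the starting markup looks roughly like this (abridged; the base styles live in the Pen):
<nav>
  <div id="highlight"></div>
  <a href="#" class="active">Home</a>
  <a href="#services">Services</a>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
</nav>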
For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element happen gradually.
#highlight {
  z-index: 0;
  position: absolute;
  height: 100%;
  width: 100px;
  left: -200px; /* start hidden off-screen */
  border: 2px solid green;
  box-sizing: border-box;
  transition: all 0.2s ease; /* animate position and size changes */
}
We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class.
Initially, we can call console.log to ensure the handler fires only when expected:
const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  console.log('click');
});
Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.
Now that we know our event handler is working on the correct elements, let’s add code to move the .active class to the navigation item that was clicked. We can use the event object passed into the handler to find the element that triggered the event and give that element the .active class after removing it from the previously active item.
const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

- console.log('click');
+ document.querySelector('nav a.active').classList.remove('active');
+ event.target.classList.add('active');
});
Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.
Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.
// handler for moving the highlight
const moveHighlight = () => {
  const activeNavItem = document.querySelector('a.active');
  const highlighterElement = document.querySelector('#highlight');

  const width = activeNavItem.offsetWidth;

  const itemPos = activeNavItem.getBoundingClientRect();
  const navbarPos = navbar.getBoundingClientRect();
  const relativePosX = itemPos.left - navbarPos.left;

  const styles = {
    left: `${relativePosX}px`,
    width: `${width}px`,
  };

  Object.assign(highlighterElement.style, styles);
}
Let’s call our new function when the click event fires:
navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  document.querySelector('nav a.active').classList.remove('active');
  event.target.classList.add('active');

+ moveHighlight();
});
Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:
// handler for moving the highlight
const moveHighlight = () => {
// ...
}
// display the highlight when the page loads
moveHighlight();
Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.
See the Pen [Moving Highlight Navbar [forked]](https://codepen.io/smashingmag/pen/WbvMxqV) by Blake Lundquist.
That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events. In the next section, we will explore refactoring this feature using the View Transition API.
The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.
For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.
We’ll start by getting rid of the #highlight element and its associated CSS, replacing it with styles for the nav a::after pseudo-selector:
<nav>
- <div id="highlight"></div>
<a href="#" class="active">Home</a>
<a href="#services">Services</a>
<a href="#about">About</a>
<a href="#contact">Contact</a>
</nav>
- #highlight {
- z-index: 0;
- position: absolute;
- height: 100%;
- width: 0;
- left: 0;
- box-sizing: border-box;
- transition: all 0.2s ease;
- }
+ nav a::after {
+ content: " ";
+ position: absolute;
+ left: 0;
+ top: 0;
+ width: 100%;
+ height: 100%;
+ border: none;
+ box-sizing: border-box;
+ }
For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name highlight, but we could theoretically give it any name.
nav a.active::after {
  border: 2px solid green;
  view-transition-name: highlight;
}
Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.
const navbar = document.querySelector('nav');

// Change the active nav item on click
navbar.addEventListener('click', function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  document.startViewTransition(() => {
    document.querySelector('nav a.active').classList.remove('active');
    event.target.classList.add('active');
  });
});
Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function that changes which item has the .active class!
At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible.
This sizing inconsistency is caused by aspect ratio changes during the course of the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors, which represent static snapshots of the old and new views, respectively.
::view-transition-old(highlight) {
  height: 100%;
}

::view-transition-new(highlight) {
  height: 100%;
}
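Since both pseudo-elements receive the same declaration, the two rules can also be combined into one:
/* Equivalent shorthand: share the declaration across both snapshots */
::view-transition-old(highlight),
::view-transition-new(highlight) {
  height: 100%;
}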
Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:
const navbar = document.querySelector('nav');

// change the item that has the .active class applied
const setActiveElement = (elem) => {
  document.querySelector('nav a.active').classList.remove('active');
  elem.classList.add('active');
}

// Start view transition and pass in a callback on click
navbar.addEventListener('click', function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  // Fallback for browsers that don't support View Transitions:
  if (!document.startViewTransition) {
    setActiveElement(event.target);
    return;
  }

  document.startViewTransition(() => setActiveElement(event.target));
});
Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.
See the Pen [Moving Highlight Navbar with View Transition [forked]](https://codepen.io/smashingmag/pen/ogXELKE) by Blake Lundquist.
Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect() method, and the View Transition API.
getBoundingClientRect() method documentation
Collaboration: The Most Underrated UX Skill No One Talks About
Carrie Webster
2025-06-05
When people talk about UX, it’s usually about the things they can see and interact with, like wireframes and prototypes, smart interactions, and design tools like Figma, Miro, or Maze. Some of the outputs are even glamorized, like design systems, research reports, and pixel-perfect UI designs. But here’s the truth I’ve seen again and again in over two decades of working in UX: none of that moves the needle if there is no collaboration.
Great UX doesn’t happen in isolation. It happens through conversations with engineers, product managers, customer-facing teams, and the customer support teams who manage support tickets. Amazing UX ideas come alive in messy Miro sessions, cross-functional workshops, and those online chats (e.g., Slack or Teams) where people align, adapt, and co-create.
Some of the most impactful moments in my career weren’t when I was “designing” in the traditional sense. They came from gaining incredible insights while discussing problems with teammates who have varied experiences, brainstorming, and arriving at ideas that I never could have come up with on my own. As I always say, ten minds in a room will come up with ten times as many ideas as one mind. Often, that sheer volume of ideas is the most useful outcome.
There have been times when a team has helped to reframe a problem in a workshop, taken vague and conflicting feedback, and clarified a path forward, or I’ve sat with a sales rep and heard the same user complaint show up in multiple conversations. This is when design becomes a team sport, and when your ability to capture the outcomes multiplies the UX impact.
The reason collaboration feels so urgent now is that the way we work has changed since COVID, according to a study published by the US Department of Labor. Teams are more cross-functional, often remote, and increasingly complex. Silos are easier to fall into, due to distance or lack of face-to-face contact, and yet alignment has never been more important. We can’t afford to see collaboration as a “nice to have” anymore. It’s a core skill, especially in UX, where our work touches so many parts of an organisation.
Let’s break down what collaboration in UX really means, and why it deserves way more attention than it gets.
Let’s start by clearing up a misconception. Collaboration is not the same as cooperation.
Collaboration, as defined in the book Communication Concepts, published by Deakin University, involves working with others to produce outputs and/or achieve shared goals. The outcome of collaboration is typically a tangible product or a measurable achievement, such as solving a problem or making a decision. Here’s an example from a recent project:
Recently, I worked on a fraud alert platform for a fintech business. It was a six-month project, and we had zero access to users, as the product had not yet hit the market. Also, the users were highly specialised in the B2B finance space and were difficult to find. Additionally, the team members I needed to collaborate with were based in Malaysia and Melbourne, while I am located in Sydney.
Instead of treating that as a dead end, we turned inward: collaborating with subject matter experts, professional services consultants, compliance specialists, and customer support team members who had deep knowledge of fraud patterns and customer pain points. Through bi-weekly workshops using a Miro board, iterative feedback loops, and sketching sessions, we worked on design solution options. I even asked them to present their own design version as part of the process.
After months of iterating on the fraud investigation platform through these collaboration sessions, I ended up with two different design frameworks for the investigator’s dashboard. Instead of just presenting the “best one” and hoping for buy-in, I ran a voting exercise with PMs, engineers, SMEs, and customer support. Everyone had a voice. The winning design was created and validated with the input of the team, resulting in an outcome that solved many problems for the end user and was owned by the entire team. That’s collaboration!
It is definitely one of the most satisfying projects of my career.
On the other hand, I recently caught up with an old colleague who now serves as a product owner. Her story was a cautionary tale: the design team had gone ahead with a major redesign of an app without looping her in until late in the game. Not surprisingly, the new design missed several key product constraints and business goals. It had to be scrapped and redone, with her now at the table. That experience reinforced what we all know deep down: your best work rarely happens in isolation.
As illustrated in my experience, true collaboration can span many roles. It’s not just between designers and PMs. It can also include QA testers who identify real-world issues, content strategists who ensure our language is clear and inclusive, sales representatives who interact with customers on a daily basis, marketers who understand the brand’s voice, and, of course, customer support agents who are often the first to hear when something goes wrong. The best outcomes arrive when we’re open to different perspectives and inputs.
If collaboration is so powerful, why don’t we talk about it more?
In my experience, one reason is the myth of the “lone UX hero”. Many of us entered the field inspired by stories of design geniuses revolutionising products on their own. Our portfolios often reflect that as well. We showcase our solo work, our processes, and our wins. Job descriptions often reinforce the idea of the solo UX designer, listing tool proficiency and deliverables more than soft skills and team dynamics.
And then there’s the team culture within many organisations of “just get the work done”, which often leads to fewer meetings and tighter deadlines. In that environment, collaboration comes to be seen as inefficient and wasteful. I have also experienced working with designers where perfectionism and territoriality creep in (“This is my design”), which kills the open, communal spirit that collaboration needs.
In an ideal world, we’d always have direct access to users. But let’s be real. Sometimes that just doesn’t happen. Whether it’s due to budget constraints, time limitations, or layers of bureaucracy, talking to end users isn’t always possible. That’s where collaboration with team members becomes even more crucial.
The next best thing to talking to users? Talking to the people who talk to users. Sales teams, customer success reps, tech support, and field engineers. They’re all user researchers in disguise!
On another B2C project, the end users were having trouble completing the key task. My role was to redesign the onboarding experience for an online identity capture tool for end users. I was unable to schedule interviews with end users due to budget and time constraints, so I turned to the sales and tech support teams.
I conducted multiple mini-workshops to identify the most common onboarding issues they had heard directly from our customers. This led to a huge “aha” moment: most users dropped off before the document capture process. They may have been struggling with a lack of instruction, unsure how long the process would take, or unclear about the steps involved in completing onboarding.
That insight reframed my approach, and we ultimately redesigned the flow to prioritize orientation and clear instructions before proceeding to the setup steps. Below is an example of one of the screen designs, including some of the instructions we added.
This kind of collaboration is user research. It’s not a substitute for talking to users directly, but it’s a powerful proxy when you have limited options.
Glad you asked! Even AI tools, which are increasingly being used for idea generation, pattern recognition, or rapid prototyping, don’t replace collaboration; they just change the shape of it.
AI can help you explore design patterns, draft user flows, or generate multiple variations of a layout in seconds. It’s fantastic for getting past creative blocks or pressure-testing your assumptions. But let’s be clear: these tools are accelerators, not oracles. As an innovation and strategy consultant Nathan Waterhouse points out, AI can point you in a direction, but it can’t tell you which direction is the right one in your specific context. That still requires human judgment, empathy, and an understanding of the messy realities of users and business goals.
You still need people, especially those closest to your users, to validate, challenge, and evolve any AI-generated idea. For instance, you might use ChatGPT to brainstorm onboarding flows for a SaaS tool, but if you’re not involving customer support reps who regularly hear “I didn’t know where to start” or “I couldn’t even log in,” you’re just working with assumptions. The same applies to engineers who know what is technically feasible or PMs who understand where the business is headed.
AI can generate ideas, but only collaboration turns those ideas into something usable, valuable, and real. Think of it as a powerful ingredient, but not the whole recipe.
If collaboration doesn’t come naturally or hasn’t been a focus, that’s okay. Like any skill, it can be practiced and improved.
Great design doesn’t emerge from a vacuum. It comes from open dialogue, cross-functional understanding, and a shared commitment to solving real problems for real people.
If there’s one thing I wish every early-career designer knew, it’s this:
Collaboration is not a side skill. It’s the engine behind every meaningful design outcome. And for seasoned professionals, it’s the superpower that turns good teams into great ones.
So next time you’re tempted to go heads-down and just “crank out a design,” pause to reflect. Ask who else should be in the room. And invite them in, not just to review your work, but to help create it.
Because in the end, the best UX isn’t just what you make. It’s what you make together.