12 Months of Designing for AI: March 2024
March 2024
Tools
SXSW 2024 happened in March. A fair few tech enthusiasts and pioneers shared their experiences. In the mix were 🎧 Ioana Teleanu ↗, 🎧 Marco Barbosa and Hjörtur Hilmarsson ↗, all pioneering designers in AiUX. Their talks focused on do’s and don’ts, effective mindsets to adopt when designing for AI, critical thinking and questioning skills, and so much more.
Several thought-provoking opinion pieces are beginning to emerge on the topic of AiUX. One such piece by Nurkhon Akhmedov ↗ details a framework for gauging when to use AI. Among the many aspects to consider, the need for quality data from the start can safely be regarded as a foundational element of successful AiUX. You can read more about Google’s guidelines here.
Thoughts
Through day-to-day work designing AI-based features, some necessities are beginning to emerge:
Customer Journeys
Detailed customer journeys document the high and low points of an application’s ability to meet user needs. Creating customer journeys at a variety of fidelities keeps proving necessary for sound decision-making, for several reasons.
They help highlight the areas where AI can be most impactful. For example, a customer journey may show first-time user onboarding as a point of distress, confusion, and missing guidance. A deeper dive can reveal exactly where the existing onboarding experience falls short, and which key points a redesigned onboarding experience needs to cover.
A user or click flow maps the steps a user takes to reach the value the application is trying to provide. This can surface ways to shorten time to value, and even to enhance the value the application delivers.
Both user journey and customer journey maps point to sensible, informed placements for AI-based features and insights, increasing the perceived value of both the application and the AI-powered tooling. They also help build trust in an application’s AI capabilities, as the sketch below tries to illustrate.
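To make this concrete, here is a minimal Python sketch of a journey map as a ranked list of stages. The stage names, scoring fields, and the friction-times-impact heuristic are my own illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

# Hypothetical sketch: model a customer journey as stages scored for friction
# and for how directly each stage ties to the application's core value.
# The scoring heuristic below is an assumption for illustration only.

@dataclass
class JourneyStage:
    name: str
    friction: int        # 1 (smooth) to 5 (high distress/confusion)
    value_impact: int    # 1 (peripheral) to 5 (core to time-to-value)
    notes: str = ""

def rank_ai_candidates(stages: list[JourneyStage]) -> list[JourneyStage]:
    """Order stages where AI assistance is likely most impactful:
    high friction on the steps that matter most to delivered value."""
    return sorted(stages, key=lambda s: s.friction * s.value_impact, reverse=True)

journey = [
    JourneyStage("First-time onboarding", friction=5, value_impact=4,
                 notes="Users report confusion and lack of guidance"),
    JourneyStage("Report generation", friction=2, value_impact=5),
    JourneyStage("Account settings", friction=3, value_impact=1),
]

for stage in rank_ai_candidates(journey):
    print(stage.name, stage.friction * stage.value_impact)
```

Ranking this way simply makes the journey map’s high and low points comparable at a glance; the real decision still rests on the qualitative detail behind each stage.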
Visual/Interface Design
On my team, we have been discussing a style guide and interface treatment that differentiates the AI-powered aspects from the rest of the application. The logic driving this discussion is this: if users know what is AI-based, they will perceive more value from the up-charge for these capabilities.
I find myself disagreeing with the assumption that differentiation through interface and visual design adequately highlights the value of AI features and insights in an application. Here’s why:
Depending on the soundness of the AI-generated insights, differentiating them visually also draws attention to their flaws.
As AI awareness grows, parity with how existing AI applications represent themselves should take precedence over differentiation. The reason is simple: humans like patterns. Patterns are easier to recognize, learn, and anticipate the consequences of. If every application introduced a different interface paradigm, dependable, repeatable patterns that maintain usability and goal achievement would become harder to find, creating a steep learning curve from application to application.
I believe that delivering value when and where the user needs it, to delight and engage them, is a better experience than introducing additional paradigms to commit to memory. Reducing friction and effort in adoption has always been a key tenet of User Experience, and it applies just as well to AiUX.
Tinkerings
In an attempt to understand viable injection points for AI in the application I am currently working on, I began adopting a framework that’s a hybrid between a research model created by a talented co-worker, Carrie Taylor ↗, and Vincent Koc’s recently published framework for AiUX ↗.
Carrie’s research model provides a fool-proof method of going broad and then narrowing down on capabilities, while Vincent’s framework enables proper vetting, critical thought, and analysis of the solutions to implement. He has documented his framework and rationale in depth, so I’ll focus briefly on Carrie’s research model.
The research model lays out a progression of research areas, with the takeaways from each step informing the one that follows (a rough sketch of the progression appears after the list):
Competitive Analysis: Document competitors, key features, pain points, advantages and disadvantages of the home application, pricing, and so on, to get a high-level picture of the landscape you’re designing for.
Qualitative Research: Reviewing call recordings can highlight mentions of the competition noted above, the features the home application does well versus those with room to improve, and clear areas of opportunity that may benefit from capabilities powered by AI or otherwise.
Quantitative Research: Analyzing usage by segment and mapping in-app behavior builds understanding of the impact of the pain points discovered in the previous step.
Participant Selection: Finally, using the above information about the segments that experience pain points most frequently, a panel of participants is selected for interviews and for testing features as they are designed.
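Here is the promised sketch of the progression, written in Python as a pipeline in which each step enriches a shared set of findings. All class, function, and field names are hypothetical placeholders of my own, not part of Carrie’s or Vincent’s frameworks.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the research progression: each step reads the
# findings accumulated so far and adds its own takeaways for the next step.

@dataclass
class ResearchFindings:
    competitors: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    high_impact_segments: list[str] = field(default_factory=list)
    participants: list[str] = field(default_factory=list)

def competitive_analysis(findings: ResearchFindings) -> ResearchFindings:
    # Document competitors, key features, pricing, advantages/disadvantages.
    findings.competitors += ["Competitor A", "Competitor B"]
    return findings

def qualitative_research(findings: ResearchFindings) -> ResearchFindings:
    # Review call recordings for competitor mentions and opportunity areas.
    findings.pain_points += ["Onboarding lacks guidance"]
    return findings

def quantitative_research(findings: ResearchFindings) -> ResearchFindings:
    # Segment in-app behavior to gauge the impact of the discovered pain points.
    findings.high_impact_segments += ["Mid-market admins"]
    return findings

def participant_selection(findings: ResearchFindings) -> ResearchFindings:
    # Recruit a panel from the segments hit hardest by the pain points above.
    findings.participants += [f"Panelist from {s}" for s in findings.high_impact_segments]
    return findings

findings = ResearchFindings()
for step in (competitive_analysis, qualitative_research,
             quantitative_research, participant_selection):
    findings = step(findings)

print(findings)
```

The point of the sketch is simply that the steps are ordered and cumulative: participant selection only makes sense once the earlier steps have surfaced which segments feel which pain points most acutely.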
About This Series: I began exploring the application of AI in B2B SaaS products in January 2024. It isn’t lost on me that there are few precedents for designing for AI, or even for researching and iterating on it. In this series, I share tools, terms, thoughts, and tinkerings, along with some lived experience of how cross-functional collaboration evolves, and more, as I grow into a UX Designer for AI.