12 Months of Designing for AI: January 2024

January 2024

2024 has been a banner year so far: new tech, new job, new things to learn, all coming together in pure serendipity.

Consumer AI has become mainstream in the last year, and tech companies are in a race to adopt it. It’s gaining use as a tool for analyzing and predicting user behavior, and it’s showing up in assistant/sidekick form in many applications. Data is its main currency, which immediately brings trust, safety, and privacy to the forefront.

The best design practices rely heavily on collaborating with engineers and product management. Learning strategic approaches, business plans and basic technical concepts has helped me be a better collaborator, feedback seeker, and outcome definer. As AI becomes more mainstream, I will continue to learn about and share the AI-specific aspects of business, strategy, design and technology.

And that’s why you, Designer, should care about AI. It’s powerful when built correctly and implemented to benefit users ethically, and that is best achieved through close collaboration with your product and engineering counterparts.

Feedback, comments, questions and coffee (jk) are always welcome ☻

Terms

  • LLMs: Large Language Models
    After a stretch of passive familiarity, I took a deep dive into LLMs. LLMs are the core technology that makes today’s AI powerful: they learn from what’s accessible on the internet, or from custom inputs, and generate insights, answers, articles, plans, and more. What sets LLMs apart from earlier neural networks is, one, that they’re built on transformers, which process an entire sequence of words in parallel rather than one word at a time, making them faster to train and more flexible. And two, that flexibility lets a single model read, analyze, and generate in one pass, without a chain of separate prompts. (A toy sketch of that parallel-attention idea follows this list of terms.) Want a concise deep dive? Check out this article by Amazon AWS.

  • Hallucinations
    Diving into how LLMs function went hand in hand with learning how their responses are measured, including a look at LMSys Chatbot Arena. Many variables go into scoring the accuracy, relevancy, and credibility of AI-generated responses. The one that stood out to me most is “hallucinations”. It was fascinating to learn that AI can be imaginative. One of the most mainstream examples of AI confidently getting it wrong is chihuahuas vs. muffins, or even golden doodles vs. fried chicken 😂 At the end of the day, AI is a self-learner, which ironically makes it human in its missteps. (A simplified sketch of the pairwise scoring behind leaderboards like Chatbot Arena also follows this list.)
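
To make the “processes the whole sequence at once” idea concrete, here’s a minimal sketch of scaled dot-product self-attention in Python with NumPy. It’s a toy under made-up assumptions, not any particular model’s implementation: the token embeddings and projection weights are random stand-ins, and real LLMs stack many attention heads and layers on top of this.

```python
import numpy as np

# Toy self-attention: every token attends to every other token in one shot,
# which is what lets transformers process a sequence in parallel instead of
# word by word like older recurrent networks.
rng = np.random.default_rng(0)

seq_len, d_model = 5, 8                     # 5 tokens, 8-dim embeddings (made-up sizes)
X = rng.normal(size=(seq_len, d_model))     # stand-in token embeddings

# Random stand-ins for the learned query/key/value projections.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# One matrix multiply scores every token against every other token at once.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
output = weights @ V

print(weights.round(2))   # each row: how much one token "attends" to the others
print(output.shape)       # (5, 8): a context-aware vector for every token
```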

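And since Chatbot Arena’s scoring came up, here’s a toy Elo-style update showing how pairwise “which answer was better?” votes can turn into a ranking. This is the general flavor of scoring the Arena popularized, not its actual pipeline; the ratings, K-factor, and model labels are made up for illustration.

```python
# Toy Elo-style rating update for head-to-head response votes.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return new (rating_a, rating_b) after one vote."""
    ea = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# Two hypothetical chatbots start level; a user prefers A's answer.
model_a, model_b = 1000.0, 1000.0
model_a, model_b = update(model_a, model_b, a_won=True)
print(round(model_a), round(model_b))  # 1016 984: A gains exactly what B loses
```
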
Tools

  • Learn: Elements of AI
    Free course covering AI foundations and building for AI.

Thoughts

  • How should one approach user research for AI applications?

  • Between archetypes and personas, which is the better lens for capturing mental models in a way that’s appropriate for AI implementation? What even is appropriate in this context? I’m leaning towards archetypes… Medium post coming soon!

  • Changes in Strategy: As I dive into my new role, I can feel myself making slight changes to how I’d traditionally approach collecting, analyzing, and solving problems. One of the most significant points of adjustment is how problem solving is structured. I find myself nudging designers on my team to be more aware of mental models, sometimes even instead of app flows. My hunch is that it’s critical to put the “feels” and “thinks” quadrants into sharp focus to design solutions that feel intuitive and trustworthy.

  • How does one design for trust?

  • One of the causes of AI hallucinations is incomplete or inadequate information. This led me down a rabbit hole of trying to understand how much data is needed to make hallucinations negligible. I didn’t find the answer, but I did stumble onto how much energy AI can consume. I wrote about information consumption habits a few years ago, and along the way learned the basics of server farms, their structures, and their energy consumption. Because LLMs are built to keep learning and growing, it’s a safe assumption that the energy required to sustain that growth will grow in proportion. Maybe AI will give us the answer on how to keep it alive and growing… without running out of resources.

Tinkerings

  • Arc Search (iOS Only, for now)
    Arc Search is cutting my learning time in half. It’s smart, helpful, and currently my favorite web search tool. The core functionality I’m hooked on is that it “browses for you”, which in practice translates to “scrolling 359762057 internet pages for me” and generating results curated to my interests.


About This Series:  I began exploring AI applications in B2B SaaS products in January 2024. It isn’t lost on me that there are few precedents for designing for AI, or even for researching and iterating on it. In this series, I share tools, terms, thoughts, and tinkerings, along with some lived experience of how cross-functional collaboration evolves, and more, as I grow into a UX Designer for AI.
