
ChatGPT & Beyond: A Next-Gen Wave in Personalized UX/UI Design

nikel
8 min read · Jan 18, 2024

Illustration by Joana Ricardo

In the realm of digital design, generative AI is emerging as a transformative force, poised to revolutionize the way UX/UI designers craft exceptional user experiences. With its capacity for autonomous content generation, it promises to automate repetitive tasks, foster creative co-creation, and unlock personalized, adaptive, dynamically evolving user interfaces. As generative AI continues to evolve, its impact on UX/UI design will transcend boundaries, shaping the future of digital interactions in profound ways.

AI’s Influence on UI Paradigms

In the ever-evolving landscape of UX/UI design, the arrival of artificial intelligence (AI) brings forth a multitude of challenges and opportunities. This discourse isn’t about AI replacing design tools; it’s an exploration of AI’s substantial impact on UX/UI design. Our task as designers is to uncover effective strategies for harnessing AI’s potential to craft engaging, user-centric digital experiences.

Jakob Nielsen foresees a future where AI significantly shapes user interfaces. This impending transformation prompts a comprehensive re-evaluation of the UI paradigms that have underpinned design for decades.

Understanding the evolution of these paradigms offers invaluable insight, so let’s revisit the historical progression. Nielsen underscores the pivotal shift: “User interfaces will be imbued with more AI than ever before.” This change inherently alters the roles of UX professionals; to comprehend it holistically, it’s crucial to trace the trajectory of UI paradigms.

The first paradigm traces back to the era of batch processing. Specifications were handed to computers, and outcomes were produced after extensive processing periods.

Image via effectrode

The second paradigm, which reigns supreme today, revolves around command-based interaction. Users initiate actions, manipulate outputs, and iteratively steer towards desired outcomes.

Photo by freestocks on Unsplash

However, the advent of AI introduces a paradigm that transcends command-based interaction. Enter the third paradigm: intent- and outcome-based interaction. It is exemplified by contemporary AI-driven tools, most visibly in image generation. Consider the DALL-E-style image creation process, which pivots from a command-driven structure to a more conversational interface, with a prompt at the core of the input. Herein lies the transformation: a shift in the user’s role.

Users no longer instruct the computer step by step; instead, they articulate the desired outcome. This marks the emergence of intent-based outcome specification: the third UI paradigm.
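
To make the contrast concrete, here is a loose sketch in TypeScript; every type and function below is invented for illustration, not drawn from any real tool. The second paradigm reads as a sequence of commands, while the third compresses the same intent into a single outcome description:

```ts
// Stand-in types, invented purely for illustration.
interface ImageEditor {
  newCanvas(width: number, height: number): void;
  fill(color: string): void;
  applyFilter(name: string): void;
}

interface GenerativeModel {
  generate(prompt: string): Promise<string>;
}

// Second paradigm: the user steers the tool step by step.
function commandBased(editor: ImageEditor): void {
  editor.newCanvas(1024, 768);
  editor.fill("#1a1a2e");
  editor.applyFilter("film-grain");
}

// Third paradigm: the user states the outcome; the system chooses the steps.
async function intentBased(model: GenerativeModel): Promise<string> {
  return model.generate("A moody retro poster with a grainy film look");
}
```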

Exploring the third paradigm

To delve into the third paradigm in our current landscape, let’s examine a prime example: ChatGPT. With the public release of ChatGPT, a new era of content generation and interaction has emerged. While the concept of chatbots and digital assistants isn’t novel, what sets ChatGPT apart is its ability to deliver more personalized and higher-quality responses to user queries. This remarkable capability has spurred numerous platforms to seize the opportunity and integrate generative AI features.

However, a prevalent issue among these applications lies in their flawed input methods. As discussed above, the third paradigm emphasizes intent and expected outcomes, yet current applications predominantly funnel input to the AI through a chat box. Consider Finn, the AI assistant Bunq recently introduced into its banking app. While Finn claims to assist with recognizing spending patterns and budgeting, it appears constrained by the chat box interaction method.

Finn promotional material from Bunq.

For instance, the feature could enhance user experience by integrating financial projections directly into the app, rather than relying solely on user queries. The chatbot design, by confining itself to a specific interaction method, limits the potential of generative AI models in elevating individual aspects of an application.

Another illustrative example involves the use of generative AI in media. AI models have been leveraged to create images and videos, revealing friction between user intent and generated results. Experimenting with these models highlights how much we rely on contextual clues and shared knowledge in daily life.

For instance, when I asked Microsoft Bing to generate images of a Star Wars-inspired café, the initial results fell short of my expectations.

Initial results from the first prompt

With additional iterations and contextual information, the AI eventually produced results that aligned more closely with my preferences. Below are some examples that come closer to what I wanted from the initial prompt.

Final results gathered from multiple prompt iterations

This scenario, in my opinion, is a glaring example of the constraints of the chat-box style of interaction, and of how the design space for generative AI currently struggles with the third paradigm. The first set of images is a faithful response to the prompt I gave, but it fails to account for my personal knowledge set and preferences.

As a user, I have no insight into which toolsets the AI will use or how well it understands my current preferences and needs. I might want a retro vibe today, but in a year’s time my preference may change and become my new standard. My knowledge set and preferences evolve while my initial query stays the same, even though I expect different results from it. Since the input method creates an expectation of receiving exactly the result the user envisioned, the interaction can ultimately end in disappointment.

One way to mitigate the AI’s limited grasp of a user’s knowledge set is to let users supply custom data that the model can adapt to. OpenAI recently added custom instructions to ChatGPT so it can build a baseline understanding from user-provided context, but the depth of that understanding is still limited. It also raises the issue of bias in the data sets the AI was trained on, which create preferences for certain kinds of output. In the next section, we will discuss some approaches that can help challenge the paradigm.
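
As a rough sketch of how such custom context can reach a model, and not a claim about how ChatGPT implements custom instructions internally, user preferences can ride along as a standing system message in an API call. The preference text and model name below are placeholder assumptions:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical preferences a user registers once instead of
// restating them in every prompt.
const userPreferences =
  "The user favors retro, 1970s-inspired aesthetics and warm color palettes.";

async function generateWithPreferences(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [
      // The system message gives the model a baseline understanding
      // of this user before the actual request arrives.
      { role: "system", content: userPreferences },
      { role: "user", content: prompt },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

generateWithPreferences("Describe a Star Wars-inspired café interior.").then(
  console.log,
);
```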

Rethinking AI Interaction

Adobe has been exploring various ways to help designers generate content within its Creative Cloud applications. Its approach in Photoshop adds a layer of interaction before the text input box: users select a specific area and instruct the AI to generate content within those boundaries, taking the existing image content into account and adapting to the prompt provided.

Adobe Photoshop showcasing the Generative Fill feature. Image from Adobe

This selection menu serves two purposes: it focuses the AI’s attention on a particular section of the image, providing more context, and it guides users towards achieving more specific results.
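
Adobe’s internal API isn’t public, but the same selection-as-context pattern appears in OpenAI’s image edit endpoint, where a mask confines generation to a chosen region. A minimal sketch, with the file names as placeholder assumptions:

```ts
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

// The mask's transparent pixels mark the only region the model may
// repaint, mirroring Photoshop's selection. File names are placeholders.
async function generativeFill(prompt: string): Promise<void> {
  const result = await client.images.edit({
    image: fs.createReadStream("cafe.png"), // the existing image
    mask: fs.createReadStream("cafe-mask.png"), // the selected area
    prompt, // what to generate inside the selection
  });
  console.log(result.data[0].url); // URL of the edited image
}

generativeFill("a neon cantina sign on the back wall");
```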

These efforts echo similar AI integration challenges we’ve encountered in the past. Google Lens, an AI-powered tool that identifies objects and text in the real world, comes to mind. Adrian Zumbrunnen, in a compelling presentation on context-based interfaces and AI, addressed the difficulty in keeping users engaged while the camera scans and processes the surrounding environment. He explained that a lack of visual feedback can make the process feel unresponsive and confusing.

Google Lens showing the white dots around objects being scanned. Image from Google

Zumbrunnen’s solution was to introduce an animation that visualizes the AI’s object-identification process, giving users a clear understanding of what’s happening behind the scenes. This visual feedback fosters engagement and encourages users to interact more actively with the AI tool. We can bridge part of the gap by applying these lessons to bring more context to AI interfaces. The exercise highlights the importance of considering input methods and user intent when designing them.

We can draw inspiration from Spark AI, which enables users to prompt the AI to assist in composing emails. While the implementation still encourages an iterative approach and introduces some ambiguity, it partially addresses this by providing options for proofreading, response length, and tone. These defined context points help users anticipate the outcome with greater clarity.

Spark AI feature list. Image via Spark

Google Duet AI, which recently debuted, offers similar features for word processing applications. However, it still faces the challenge of understanding contextual nuances. For instance, when selecting the option “make text sound more casual,” individual users may have varying interpretations of “casual.” This lack of shared understanding can limit the usefulness of these features. One potential solution is to provide more explicit descriptions for each option, clarifying their intended effect and the range of potential outcomes.
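
One hypothetical shape for that solution, with every name below invented for illustration: pair each option with an explicit description and a small before/after preview, so the menu itself sets a shared expectation of the outcome:

```ts
// Hypothetical tone options for a rewrite feature. Pairing each choice
// with a description and a before/after preview clarifies its effect.
interface ToneOption {
  id: string;
  label: string;
  description: string; // helper text shown in the menu
  preview: string; // a concrete glimpse of the transformation
}

const toneOptions: ToneOption[] = [
  {
    id: "casual",
    label: "More casual",
    description: "Contractions, shorter sentences, friendlier wording; no slang.",
    preview: "“We are unable to attend” becomes “We can’t make it”",
  },
  {
    id: "formal",
    label: "More formal",
    description: "No contractions, complete sentences, neutral vocabulary.",
    preview: "“Can’t make it” becomes “We are unable to attend”",
  },
];
```

A menu rendering these options up front means “casual” refers to the same thing for the user and the model, narrowing the range of surprises.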

A more ambitious approach involves introducing visual representations of the AI’s decision-making process. Seeing the AI “work” (represented visually) can enhance understanding and provide users with a greater sense of control over the AI’s actions. Animations showcasing image generation or text creation after a prompt is provided can effectively convey the AI’s workings and foster trust in its capabilities.

As Zumbrunnen’s research suggests, without proper animations, automation and AI can be perceived as less responsive. His studies indicate that users generally prefer animations over instant results, as they provide a glimpse into the underlying processes.
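
Token streaming is one readily available way to provide that glimpse for text generation: rather than a blank wait followed by a finished block of text, the interface renders the response as it forms. A minimal sketch using OpenAI’s streaming chat API, with the model name as a placeholder:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Render the response token by token, so the user watches the text
// form instead of staring at a spinner until it arrives all at once.
async function streamResponse(prompt: string): Promise<void> {
  const stream = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  for await (const chunk of stream) {
    // Each chunk carries a small delta of the text being generated.
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

streamResponse("Draft a casual reply declining a meeting invitation.");
```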

Current AI implementations often fall short in this regard, as the AI’s decision-making typically involves multiple steps. When users lack insight into these decisions and how they can influence them, the interaction becomes a guessing game. This lack of transparency can lead to disappointment and uncertainty in the UX.

Apple’s Visual Lookup feature showing contextual information from the images. Image via Apple

Apple’s approach to AI integration offers valuable lessons. Its Photos app has been regularly updated with useful features such as plant identification and laundry-care instructions. Apple calls this feature “Visual Lookup,” and it seamlessly identifies specific objects in photos, such as animals, plants, and landmarks, providing relevant contextual information.

In contrast to current AI interactions that rely on chat boxes, Visual Lookup is integrated directly into the Photos app, without the need for explicit user input. The AI’s findings are presented within the app’s “Info” section, accompanied by clear explanations of the identified objects and their significance. This approach makes it easy for users to understand how the AI arrived at its conclusions and fosters trust in its capabilities.
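
A loose sketch of that ambient pattern follows; every type and function name is invented, not Apple’s API. Analysis is triggered by viewing the photo, and the findings are filed into the Info panel rather than waiting for a prompt:

```ts
// Invented types sketching Visual Lookup's ambient pattern: the AI runs
// without an explicit prompt and files its findings in the Info panel.
interface Finding {
  label: string; // e.g. a plant species or landmark name
  explanation: string; // why the model reached this conclusion
  confidence: number; // 0..1
}

interface PhotoInfoPanel {
  addSection(title: string, findings: Finding[]): void;
}

async function onPhotoOpened(
  photo: Blob,
  classify: (photo: Blob) => Promise<Finding[]>,
  panel: PhotoInfoPanel,
): Promise<void> {
  // No chat box: analysis is triggered simply by viewing the photo.
  const findings = await classify(photo);
  // Surface only results the model can justify with high confidence,
  // each paired with its explanation so the conclusion is traceable.
  panel.addSection(
    "Detected in this photo",
    findings.filter((f) => f.confidence > 0.8),
  );
}
```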

Conclusion

Bridging the gap between users and AI requires a multi-pronged approach that combines clear input methods, visual representations of AI processes, and seamless integration into existing workflows. By adopting these strategies, we can create AI interfaces that are more intuitive, engaging, and responsive, ultimately enhancing the overall user experience.


Published in Muzli - Design Inspiration


Written by nikel

I am a self-taught UX designer with a passion for Japanese media, games, and movies.
