UX Brighton’s “UX and AI” conference (1st November 2024) covered a broad spectrum of perspectives on how AI is transforming user experience, design, and human relationships. A few common trends emerged from the discussions, revealing both the opportunities AI opens up and the serious challenges it poses to UX design.
Speakers
Thank you to each of the speakers; I hope I’ve captured the essence of your talks below.
Will Taylor: All the Things AI Can Do
Key Takeaways:
Will Taylor opened with a sweeping overview of AI’s current state and potential, specifically highlighting the tools relevant to UX professionals. He discussed the rapid pace of AI advancement, noting that new tools and capabilities emerge almost weekly. Taylor’s message focused on the necessity for designers and UX professionals to stay informed about AI innovations while remaining grounded in the core principles of user-centred design. He offered strategies to identify and evaluate AI tools that align with specific user needs, and stressed the importance of UX as a guide to keep AI development focused on enhancing real-world user experiences rather than on technology for technology’s sake.
Glenn Jones: Human Conversations with Grids of Numbers
Key Takeaways:
Glenn Jones explored the intricacies of conversational AI and the ways it diverges from human conversation. He described two distinct design approaches for AI-driven interfaces: pre-determined (structured) and volatile (dynamic) systems. Pre-determined systems are built around static structures and predictable user flows, ideal for interactions requiring consistency and clarity. These interfaces follow a defined script or pattern, ensuring users experience reliability and familiarity, which is crucial in sectors like healthcare or finance, where stability and user trust are paramount.
In contrast, volatile systems, which Jones described as more “dynamic” or responsive, adjust in real-time based on user input, making them suitable for creative or exploratory tasks where flexibility is essential. These systems, often seen in open-ended chatbots or creative design tools, are adept at handling a broader range of user requests but require careful UX design to ensure users aren’t left feeling disoriented or overwhelmed.
Jones argued that the challenge for UX designers is to determine which approach – structured or dynamic – aligns best with the context and user needs. He stressed that designers must consider both the strengths and limitations of conversational AI, recognising when a structured grid-based interface might better serve the user experience than a volatile conversational model. This balance, Jones proposed, is key to developing conversational interfaces that are not only functional but also trustworthy and user-friendly.
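To make the distinction concrete, here is a minimal TypeScript sketch of how a hybrid conversational interface might route between the two approaches. All names here (the `Intent` type, `scriptedFlows`, `callLLM`) are hypothetical illustrations, not anything Jones presented:

```typescript
// Pre-determined vs volatile routing, per Jones's two design approaches.
// Everything here is illustrative; callLLM stands in for any model API.

type Intent = "check_balance" | "dispute_charge" | "open_ended";

// Pre-determined: a fixed script keyed by intent, so wording and flow are
// predictable and auditable (important in finance or healthcare).
const scriptedFlows: Record<Exclude<Intent, "open_ended">, string> = {
  check_balance: "Please confirm the last four digits of your account.",
  dispute_charge: "Which transaction would you like to dispute?",
};

// Volatile: delegate to a generative model (placeholder, not a real API).
declare function callLLM(userMessage: string): Promise<string>;

async function respond(intent: Intent, userMessage: string): Promise<string> {
  if (intent !== "open_ended") {
    // Structured path: reliable and familiar, at the cost of flexibility.
    return scriptedFlows[intent];
  }
  // Dynamic path: handles a broader range of requests, but needs UX
  // guardrails so users aren't left disoriented or overwhelmed.
  return callLLM(userMessage);
}
```

The design question Jones raises lives in that `if` statement: deciding which intents deserve the predictable path and which genuinely benefit from the dynamic one.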
Hannah Beresford: People Won’t Use Your Shiny New AI Product
Key Takeaways:
Hannah Beresford’s talk was grounded in user research, particularly regarding users’ interactions with new AI features. She discussed how users often engage with AI tools in ways designers don’t anticipate, which can undermine the intended experience. Beresford emphasised the importance of rigorous user testing and outlined methods for understanding these varied user behaviours. She shared practical examples of user resistance to certain AI-driven features and provided strategies to make AI more accessible and less intimidating. Beresford’s conclusion: for AI to gain widespread acceptance, designers must anticipate user reluctance and bridge the gap with intuitive, transparent interfaces.
Manú Bartlett & Ewa Koc: Calibrating User Trust in AI Products
Key Takeaways:
Manú Bartlett and Ewa Koc addressed the critical role of trust in AI-powered products. They outlined a framework for “trust calibration,” where the aim is to match user expectations with AI’s actual capabilities, rather than overpromising. They shared case studies illustrating techniques to enhance trust, such as providing explanations for AI-driven decisions, incorporating feedback loops that give users a sense of control, and personalising AI interactions to feel more relevant and human-like. Their talk underscored that successful AI implementation hinges on users feeling secure and informed, without relying on “blind trust.”
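As one illustration of what trust calibration might look like in an interface contract, here is a hedged TypeScript sketch. The shape and field names are my own assumptions, not taken from Bartlett and Koc’s talk:

```typescript
// A speculative payload shape for surfacing trust-calibration signals in a UI.

interface CalibratedResult<T> {
  value: T;               // the AI's actual output
  confidence: number;     // 0 to 1, so the UI can soften low-confidence claims
  explanation: string;    // a plain-language "why", matched to the user's level
  sources?: string[];     // optional evidence for users who want to dig deeper
}

// A feedback loop: letting users correct the system gives them a sense of
// control and keeps trust calibrated rather than blind.
function recordFeedback<T>(result: CalibratedResult<T>, helpful: boolean): void {
  // In a real product this would feed analytics or future model tuning.
  console.log(`helpful=${helpful} confidence=${result.confidence}`);
}
```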
Pablo Stanley: Designing for AI-Driven Creativity
Key Takeaways:
Pablo Stanley discussed how AI is shaping creative processes within UX design, exploring its potential as a co-creative partner rather than a replacement for human ingenuity. He showcased tools that help designers generate new ideas and streamline workflows, arguing that AI can reduce repetitive work and free designers to focus on more complex, imaginative tasks. Stanley emphasised that, while AI-driven tools are growing increasingly sophisticated, they still lack the intuitive understanding and cultural sensitivity that human designers bring. His key advice was to use AI to empower creativity but to recognise and respect its limits.
Dr Philip Bonhard: Ethical Considerations in AI-Enhanced UX
Key Takeaways:
Dr Philip Bonhard focused on the ethical dimensions of AI in UX, touching on transparency, accountability, and the importance of user consent. He reviewed historical milestones in AI, such as the Turing Test and Eliza, to demonstrate how our understanding of AI has evolved and cautioned against blindly trusting AI-generated outputs. Bonhard stressed the importance of transparency about AI’s role in decision-making processes and encouraged designers to remain vigilant against ethical risks, such as overreliance on AI in areas where human oversight is essential. His core message was that AI should augment human capabilities, not compromise them.
Kwame Ferreira: AI and the Future of User-Centred Design
Key Takeaways:
Kwame Ferreira’s talk revolved around the evolution of user-centred design in an AI-driven world. He argued that traditional design methodologies, while still valuable, must adapt to accommodate the unique capabilities and constraints of AI. Ferreira pointed out that AI often makes “objective” decisions that can seem out of sync with human values, advocating for a more nuanced approach where AI complements, rather than overrides, human intuition. His call to action was for designers to rethink their approach to problem-solving in a way that embraces both the logic of AI and the empathy of human-centred design.
Maggie Appleton: Navigating the AI-Generated Content Landscape
https://maggieappleton.com/forest-talk
Key Takeaways:
Maggie Appleton’s presentation tackled the flood of AI-generated content and its implications for trust, authenticity, and human relationships online. She introduced the concept of “AI slop” – low-quality, algorithmically generated content that degrades the web’s quality. Appleton highlighted the challenges of distinguishing human-created content from AI-generated material and warned of a potential “dark forest” effect, where authentic human interaction retreats to private, gated communities. She urged designers to think critically about the long-term impact of AI on digital trust and to consider how they can design for authenticity in an increasingly synthetic online environment.
Key Trends and Insights
- AI Trust and Transparency
Many of the speakers addressed the critical importance of fostering trust in AI-powered products. AI has achieved incredible fluency and sophistication in communication, but users still struggle with how much to trust its outputs. Manú Bartlett and Ewa Koc’s talk explored “trust calibration,” noting that users need to understand why an AI makes decisions, not just how it produces them. This helps create a more balanced relationship, where users are empowered to question AI’s reasoning rather than view it as an infallible authority.
- The Rise of AI “Slop”
Maggie Appleton introduced the concept of “AI slop” – a deluge of low-quality, AI-generated content flooding the web. Her insights emphasised a growing need to distinguish between human-authored and AI-generated content, as well as the retreat of users into private spaces (the “cozy web”) to escape content saturation. As AI tools proliferate, Appleton highlighted how important it is for creators to uphold quality and authenticity, especially as automated content generation scales.
- Continuous User Insight and Synthetic Research
Kwame Ferreira’s presentation on synthetic research explored how AI can deliver continuous, scalable insights that traditional user research often struggles to provide. He argued that AI-driven simulations of user behaviour can help organisations quickly obtain insights that would otherwise take weeks or months to gather (see the sketch after this list). This has great potential to democratise access to research for smaller businesses, but it raises ethical questions around accuracy and the biases in synthetic data.
- Human-Centred Design and Ethical AI
Several speakers, including Philip Bonhard, noted that AI’s ability to “understand” humans is inherently limited, given it operates within rigid, pattern-based frameworks. Discussions highlighted the need for human oversight to mitigate the risks of over-reliance on AI and prevent a drift away from user needs. As Bonhard illustrated with examples from AI history, the field’s ongoing struggle with bias and context limitations suggests that designers must remain vigilant, employing AI only where it genuinely augments human capability.
- Augmentation, Not Replacement
Across the board, speakers emphasised that AI should augment, not replace, human abilities. Pablo Stanley stressed how important it is to use AI for tasks humans find repetitive or overly complex, while allowing humans to focus on creative, strategic, and empathy-driven work. The consensus was that we must resist the temptation to automate everything – particularly areas requiring genuine human judgement and empathy.
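Since synthetic research came up in Ferreira’s trend above, here is a speculative TypeScript sketch of the basic mechanic: prompting a model to role-play a user persona and answer interview questions. The `callLLM` function is a placeholder rather than a real API, and the approach inherits all the bias caveats raised above:

```typescript
// Synthetic research sketch: an LLM role-plays a persona for quick,
// cheap "interviews". Outputs are hypotheses to verify with real users.

interface Persona {
  name: string;
  context: string; // e.g. "first-time user with low technical confidence"
}

// Placeholder for whatever model API you use; not a real library call.
declare function callLLM(prompt: string): Promise<string>;

async function syntheticInterview(
  persona: Persona,
  questions: string[],
): Promise<string[]> {
  const answers: string[] = [];
  for (const question of questions) {
    // Re-ground the model in the persona before each question. Biases in
    // the underlying training data will still leak into the answers.
    answers.push(
      await callLLM(`You are ${persona.name}, ${persona.context}. ${question}`),
    );
  }
  return answers;
}
```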
Conclusions and Implications for Startups
The conference offered a unified message: while AI brings transformative capabilities to UX, it must be harnessed carefully and ethically. As we integrate AI into our products, a few principles are clear:
- Prioritise Transparency
Transparency is crucial for users to trust our AI solutions. Users need to understand AI decisions in ways that align with their expertise level. Showing why an AI reached a certain conclusion, rather than explaining every technical detail, can cultivate this trust.
- Human-Centred and Ethical Design
A human-centred approach remains essential. AI is inherently limited by the data it’s trained on and may fail to capture nuances of human culture, emotion, or creativity. Startups should ensure that their AI tools are assistive rather than authoritative, enhancing user capability without overstepping into domains where empathy and judgement are required.
- Mitigate AI Slop and Uphold Content Quality
AI-generated content, if not carefully managed, risks reducing overall content quality and authenticity. As startups, we should design AI systems that maintain high-quality standards and avoid contributing to the “AI slop” phenomenon. Appleton’s discussion reminds us that it’s not enough for AI to simply produce content – it must produce value.
- Use AI to Enhance Continuous Learning and Insight
For startups, AI offers a unique ability to provide continuous user insights. AI-driven synthetic research can deliver timely feedback on user behaviours and preferences, making it possible to iterate rapidly. However, this requires careful consideration of biases, as synthetic users may not fully reflect reality.
Final Thoughts
UX Brighton’s conference made it clear that AI’s role in user experience is not to act as a standalone solution but as a complementary force. As leaders, our challenge is to embed AI thoughtfully, supporting our users’ needs without detracting from the human qualities that drive engagement, trust, and satisfaction. The future of UX in the age of AI lies in augmenting human potential and maintaining a balanced, ethical approach that values transparency, empathy, and genuine user connection.