UX Sofia is an international conference on user experience (UX), with a focus on design, usability, and user research. It is organized by Lucrat.
Automation and Sense-making
In 2025, the discipline of user experience entered a period of disruption driven by artificial intelligence (AI). Large language models converse with people, advise them, and act on their behalf. On the technological level, interest and expectations remain high, and investment in the field continues. On the consumer level, we are beginning to notice negative effects when users interact with AI, such as fragmented workflows, cognitive passivity, and over-trust or, conversely, erosion of trust.
Five Expectations
The editors of MIT Technology Review, in “5 AI Predictions for 2026,” share the following expectations for the field of AI in 2026:
- AI becomes invisible: It will fade into the background, embedding itself into everyday life. “AI will simply become software… The sign of success will be that it somehow just disappears, and people continue with their tasks.”
- A decisive year for agents: Agentive AI is the biggest promise for the industry. The opportunity is enormous. Reliability remains a key question: How long and how well can agents work without human involvement?
- The rise of vibe coding: Natural language interfaces lower the barrier to creating software, allowing non-developers to create tools by describing what they want. This will fundamentally change who creates software and how fast.
- Scientific breakthroughs: Large language models move beyond text, code, and images toward generating hypotheses and scientific research. They already help enable new scientific discoveries and will play an increasingly important role.
- Reasoning AI: Artificial intelligence evolves from fast pattern recognition toward more deliberate reasoning and planning. It is expected to unlock more reliable decision support across industries.
No matter how we interpret these expectations, it is clear that the role of UX is growing:
- For AI to become invisible, someone has to design the flows and interactions.
- Autonomous agents must communicate what they are doing and how. Someone must design how this will happen.
- If you have spoken with AI and expected it to get work done for you, you know how much design is still needed for it to start behaving in acceptable and reliable ways. This applies strongly to vibe coding as well.
- The role of UX in enabling scientific discoveries is the hardest to see—perhaps mainly in dealing with the massive amount of content that AI produces.
- To trust AI’s reasoning for decision-making—but also not to over-trust it—we need design that preserves and strengthens human rationality.
This growing role concerns:
- Users who need to preserve their human rationality.
- Organizations that evolve from “sticky-notes” to integrated problem-solving, where design is part of the internal culture, not an external ritual.
- Professionals who delegate work but preserve their authenticity and expertise.
Program Framework – Three Pillars
1. Cognitive Sovereignty
Design for the mind, not against it. In an age of hyper-automation, “cognitive atrophy” is a major risk: when the machine thinks and acts for us, the human brain stops exercising its critical neural pathways. Protecting human attention and intellectual independence is a worthy cause.
- Neuro-ergonomics of the future: Designing interactions that complement, not replace, human thinking. We seek the fine line between healthy cognitive offloading of routine tasks and dangerous cognitive passivity.
- Educational UX: Interfaces as pedagogical environments. We seek ways for interfaces not only to automate tasks but also to guide and teach users to become better at what they do.
- Designing “desirable difficulty”: Strategically introducing beneficial friction into interactions. We seek how to make the user stop, think, and take control instead of acting on autopilot. And are we even obligated to do this?
2. Systemic Orchestration
Beyond screens: designing seamless ecosystems. Users are exhausted by fragmentation. The lack of integration between software products has been a major industry problem for years; the human expert often acts as a “courier” of information between the tools they use. AI could be the connector between tools, but we do not yet see such a trend. It is time to start designing integrated “shared canvases.”
- Building bridges: Designing contextual continuity. We seek how to design not just pages and screens but flows of data, states, and places where data is shared and synchronized between applications and agents in the workflow.
- Continuous discovery: Blurring the boundaries between research, design, and development. We seek how the product trio (product manager, designer, engineer) becomes a quartet by including AI in a real environment with a constant stream of data, and whether it is time to redefine the UX and PM roles.
- Internalizing design: The end of design thinking as a separate (and often external) event. We seek how to build internal design culture and processes (DesignOps) so that everyone in the company thinks about users as part of daily operations.
3. Authentic Expertise
Preserving craft and expertise in a noisy world. As AI “slop” proliferates, authenticity becomes a valuable currency. When everything is done “easily” by AI, the difference between expert and amateur blurs. When everyone can do everything and there is an abundance of everything, our norms for quality and expertise will change.
- Fighting the “slop”: Strategies for visual and semantic differentiation. We seek whether, in this new reality of easily generated content, we should create “islands of trust.” Should we fight the slop, or accept it as the new status quo and find ways to work with it?
- Domain expertise: Deep domain knowledge and understanding remain valuable. We seek to understand whether subject-matter knowledge remains a differentiator for quality, unlike tool-handling skills, which we see becoming a commodity. Why does the expert who guides and corrects AI achieve more than the novice who simply prompts it?
- Truth as interface: Design for transparency, trust, control, and salvation from the curse of automation. We seek, on the one hand, how to preserve the control and trust of the user for whom we automate, and on the other, how we, as experts in human interaction and design, participate in and influence the automation of tasks.
