On March 17–20, 2025, the AI Policy Lab (AIPL), the Responsible AI group, and the Research Group for Socially Aware AI held a 3.5-day retreat in Lövånger with 15 participants from both units, joined by Francien Dechesne from Leiden University, the Netherlands.
Participants discussed responsible AI from multiple perspectives – technological, ethical, and social. Below we share the key messages from the retreat. We hope to inspire future multidisciplinary discussions, workshops and projects related to responsible AI research, literacy, and practical solutions for diverse stakeholders in Sweden and internationally.

Question Zero & Human Responsibility
When we hear about new AI ventures, we need to ask question zero: “Is the adoption of an AI tool the best solution for our current problem?” Answering it requires assessing the environmental costs of running the AI system and understanding which specific aspects of operations it would improve. It also means critically examining who participates, what the provenance and management of the data are, what the bases for modelling and design choices are, and how results and impact will be evaluated. AI does not happen to us: we, humans, design, adopt, and use AI systems.
- Responsible AI development should follow a compositional approach, where verified datasets and models with clear principles can be combined to create new systems. This framework emphasises the need to balance ethical considerations and accuracy while accounting for differences across sectors, industries, and global value systems. The approach prioritises sustainability, transparency, control, regulation, participation, inclusiveness, trust, and fairness, with a focus on measuring societal implications and ensuring equal representation and identification.
- True AI fairness goes beyond mathematical equality: it must consider historical disparities, contextual factors, and the diversity of fairness definitions, since a mathematically fair model can still produce discriminatory outcomes in complex social settings.
- Participation, where people meaningfully contribute to (design) decisions, strengthens democracy and supports responsible design, development, and use of AI. However, participation alone does not guarantee better outcomes; thoughtful design is needed to prevent manipulation and address blind spots by incorporating diverse perspectives.
- Technological change is ecological, not additive: it is transformative and systemic rather than incremental, fundamentally reshaping existing environments and society. Drawing on Neil Postman’s (1995) framework, technological innovations transform existing systems rather than simply adding new capabilities, creating trade-offs, winners and losers, and often reframing how we perceive the world.
- Structural challenges. Several “traps” hinder responsible AI research: the framing trap (failure to model entire systems), portability trap (ignoring context sensitivity), formalism trap (oversimplifying social concepts), ripple effect trap (missing ecological impacts), and solutionism trap (overreliance on technological fixes).
- AI Art. The concept of “AI as a mirror” raises important questions about artistic expression, with significant differences between the inner experience of human artistic creation versus AI-assisted art generation, challenging traditional perspectives on IP rights, creativity (human only, genAI only, human-AI hybrid), intention, and artistic value.
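The fairness point above can be made concrete with a small numeric sketch. The numbers below are hypothetical, chosen only to illustrate one well-known effect: a screening model can satisfy demographic parity (equal acceptance rates across two groups) while still rejecting qualified applicants from one group far more often than from the other.

```python
# Illustrative sketch with hypothetical numbers: equal acceptance rates
# (demographic parity) do not rule out unequal error rates across groups.

def rates(qualified, accepted_qualified, unqualified, accepted_unqualified):
    """Return (acceptance rate, false-negative rate) for one group."""
    total = qualified + unqualified
    accepted = accepted_qualified + accepted_unqualified
    acceptance_rate = accepted / total
    # False negatives: qualified applicants the model rejected.
    false_negative_rate = (qualified - accepted_qualified) / qualified
    return acceptance_rate, false_negative_rate

# Group A: 80 of 100 applicants are qualified; the model accepts 50 of them.
a_acc, a_fnr = rates(qualified=80, accepted_qualified=50,
                     unqualified=20, accepted_unqualified=0)

# Group B: 20 of 100 applicants are qualified; all 20 are accepted,
# plus 30 unqualified applicants.
b_acc, b_fnr = rates(qualified=20, accepted_qualified=20,
                     unqualified=80, accepted_unqualified=30)

print(a_acc, b_acc)  # 0.5 vs 0.5 -> demographic parity holds
print(a_fnr, b_fnr)  # 0.375 vs 0.0 -> qualified Group-A applicants
                     # are wrongly rejected far more often
```

This is exactly the gap between “mathematical fairness” and fairness in context: which metric matters (acceptance rates, error rates, or something else) depends on the historical and social setting, and the metrics can be impossible to satisfy simultaneously.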
If the ideas above resonate with you, we encourage you to check out the AI Policy Lab LinkedIn page for cooperation opportunities.

