Responsible AI Retreat
Our 2026 retreat will explore how AI and AI researchers can responsibly engage with situated and indigenous knowledge systems, and through this, construct AI systems that work for both new and old knowledge, as well as how to determine where AI-based approaches are inappropriate or irresponsible. We will place particular focus on the concept of situatedness, both for knowledge and for AI systems themselves, as well as the concrete impacts and interactions of AI. Key themes include how to integrate lived experience, oral traditions and non-written knowledge into data and models; how to understand equitable benefit distribution and translate it into policy; how to address misrepresentation and cultural erasure in generative AI models; and which metrics might be relevant for inclusive and just AI.
The Retreat Topics:
a) Situated knowledge
b) AI and new and old forms of knowledge
c) Indigenous knowledge(s)
d) Situatedness and AI
Invited speakers and participants will present challenges on these themes, followed by group discussions (which may depart from these starting points). The goal of the retreat is not primarily to resolve these issues but to explore together what resolving them might entail, what such approaches might look like, and even what it would mean to solve them.
Registration and participation are limited to the Responsible AI Group and invited speakers.
Organising team
Petter Ericsson
Mattias Brännström
Andreas Brännström
Viktoria Movchan