AI in HR: Responsible AI Practices for Human Resources at Umeå Energi


In November-December 2025, the AI Policy Lab at Umeå University, together with the TAIGA research group, conducted a commissioned engagement with Umeå Energi focused on the responsible use of artificial intelligence in human resources management.

The project culminated in an interactive online workshop on 4 December 2025, bringing together HR professionals and organisational leaders to explore both the opportunities and risks of AI adoption in HR contexts.

Scope and Focus

The engagement addressed a range of interconnected themes: from foundational questions about when AI should be used in HR at all, to practical challenges around bias, explainability, and staff capability. A central thread throughout was the principle that AI is not a neutral tool — it embodies human choices and can either advance or undermine fairness and dignity in the workplace.

Topics covered included:

  • The rise of AI-generated job applications and what this means for assessing genuine candidate competence
  • AI in talent acquisition and screening, with emphasis on keeping humans responsible for decisions
  • How AI systems can reinforce stereotypes and structural inequalities — and how HR can actively counteract this
  • Explainability and transparency: what candidates and staff are entitled to know about how AI-assisted decisions are made
  • Governance structures, shadow AI, and the organisational capabilities HR teams need to use AI responsibly

Approach

Each theme was paired with structured group reflection, encouraging participants to connect concepts directly to their own organisational practices — mapping existing AI use, identifying governance gaps, and considering what safeguards would need to be in place before trusting AI tools in sensitive HR decisions.

The workshop drew on frameworks including the EU Ethics Guidelines for Trustworthy AI, the AI Policy Lab’s own Question Zero assessment tool, and research on fairness in recruitment contexts.

Key Takeaway

Responsible AI in HR is not primarily a technical problem — it is a governance, culture, and leadership challenge. HR professionals are often on the front line where the risks of algorithmic discrimination first become visible, making their capacity to question, override, and document AI outputs essential for both legal compliance and public trust.


This project was carried out by Virginia Dignum, Tatjana Titareva, Adam Dahlgren Lindström and Frank Dignum (AI Policy Lab / TAIGA, Umeå University).
