Read, comment and share the latest contributions to AIPEX – the AI Policy Exchange Forum!
ARTICLES
- ‘AI First’ to ‘Purpose First’: Rethinking Europe’s AI Strategy, by Virginia Dignum, Rachele Carli, Petter Ericson, and Tatjana Titareva (AI Policy Lab, Umeå University, Sweden) and Jason Tucker (Institute for Futures Studies, Sweden). Published 5 November 2025.
- Policy Gap: AI and the Determinants of Public Health, by Siri Helle (psychologist, author and speaker). Published 30 September 2025.
- The Ecological and Ethical Cost of Scaling AI, by Irum Younis Khan (Dept. of Management Sciences, COMSATS University Islamabad). Published 11 August 2025.
- Tracing labour, power, and information in Artificial Intelligence Systems, by Petter Ericson (AI Policy Lab, Umeå University). Published 24 June 2025.
- Time Out of Joint: Historical reflections on AI, by Somya Joshi (Stockholm Environment Institute) and Remi Paccou (Schneider Electric). Published 27 May 2025.
-
‘AI First’ to ‘Purpose First’: Rethinking Europe’s AI Strategy
Virginia Dignum, Rachele Carli, Petter Ericson, and Tatjana Titareva (AI Policy Lab, Department of Computing Science, Umeå University, Sweden), and Jason Tucker (Institute for Futures Studies, Sweden, and AI Policy Lab, Department of Computing Science, Umeå University, Sweden)
Published on 5 November 2025
Abstract
This paper examines the European Commission’s “AI First” strategy, arguing that it places acceleration and economic competitiveness above democratic values, societal benefit, and human-centric innovation. While substantial investment in AI is welcome when it promotes sustainable, equitable, and responsible innovation, the authors warn that policy is shifting from governance to unchecked deployment, risking fragmentation, dependency, and misaligned priorities. Rather than asking how AI can be applied, the paper urges policymakers to ask why, advocating a “People First” approach grounded in societal needs, digital sovereignty, and responsible innovation. The authors argue that Europe’s AI leadership should be shaped not by speed, but by principled direction, inclusivity, and a commitment to long-term public value.
AI First
AI is increasingly framed as a strategic imperative for economic growth, competitiveness, and innovation. Yet this framing is often at odds with a more fundamental question: what should the purpose of AI be, and under which conditions does it genuinely add value to society? Following the recent launch of the European Commission’s (2025a,b) Apply AI Strategy and the ambitious InvestAI Programme aimed at building pan-European AI “gigafactories” (European Commission 2025c), heralded by the Commission’s President as a cornerstone of Europe’s AI competitiveness, the policy discourse has shifted from governance to acceleration. This rhetoric of Europe becoming the “Continent of AI”, however, may signal a worrying departure from Europe’s longstanding commitment to human-centric and responsible innovation.
The timing and framing of the European Commission’s (2025a) “AI First” narrative appear closely aligned with the recommendations of the Draghi Report (European Commission 2025d), which emphasises digital investment and competitiveness as central to Europe’s economic renewal. While the substantial funding and incentives for AI research and innovation are welcome, the framing of “AI First” ignores a deeper set of concerns, including the limited evidence, if any, of substantial productivity and societal gains from AI use (Estrada 2025; Wearden 2025). As such, the shift to “AI First” threatens to erode the foundations of Europe’s long-standing commitment to human-centric and rights-based innovation, leaving citizens, both in Europe and beyond, as the ultimate losers.
Full Steam Ahead, But What’s The Heading?
Despite Europe’s foundational focus on trustworthy and human-centric AI, recent Commission announcements, and public statements from its leadership, suggest a radical shift away from precaution, governance, and shared responsibility on AI, towards acceleration and competitiveness. AI is seen as a means to bolster economic growth through a highly ambitious industrial policy. What this view overlooks, both in Europe and globally, is a clear “people first” perspective: recognition that technology must serve human and societal goals, not the other way around. The “AI First” approach glosses over this vital point. While lip service is paid to assessing the benefits and risks of the technology, these assessments are framed as checks and balances, and fail to ask, for example, whether a non-AI solution may be better or safer.
This acceleration approach is also in direct contravention of the explicit instructions of the EU Parliament (2024), which called for stronger precautionary measures, transparency, and accountability in the design and deployment of digital technologies to safeguard human rights, democratic oversight, and consumer protection within the EU single market. At the same time, the EU has recently been on the receiving end of considerable criticism from key industry actors in Europe and beyond, who claim overregulation is killing competition, supposedly leaving industry vulnerable and driving skilled professionals to Silicon Valley (Haeck 2025). Meanwhile, concerns about a potential generative AI bubble burst have been raised by industry leaders and governments (Makortoff 2025), sowing fear of an economic collapse. The AI First policy can thus be understood as a response to mounting pressure to increase investment, reduce regulatory constraints, and accelerate AI deployment across society. In doing so, the European Commission has effectively adopted a full-steam-ahead approach to AI, yet without the coherence, governance frameworks, and people-centric orientation necessary to ensure that such acceleration aligns with Europe’s foundational values and long-term public interests.
The European Commission must also recognise that framing AI development as a global race is both misguided and counterproductive, because such a narrative reduces a complex societal transformation to a contest of speed rather than a question of direction, purpose, and public value. Moreover, Europe will not win any AI race. The US is too dominant in the currently popular massive, centralised approaches to AI, and the EU is too dependent on the US for the tech stack that allows the most pervasive forms of AI to function. AI leadership and digital sovereignty will not come from a fragmented approach in which Europeans are told to see if and where AI can be wedged into sectors and society at large. Strategic leadership, a focus on key areas of innovation, and clarity about how Europe’s limited resources can be used to maximise both economic growth and social good are key. An exploratory and human rights-driven alternative is more suitable, and better aligned with Europe’s values and aims, than trying to keep pace with the US at any cost. An AI First policy will only deepen fragmentation, increase inefficiencies, undermine the EU’s competitive advantage, and increase its dependency on non-European actors.
This is a pivotal moment to reflect not only on how we govern AI in the EU, but on why we are developing and deploying it in the first place. Too often, we see technology placed before purpose, and innovation before inclusion. So, if not AI First, what is the right question? And how can poorly resourced actors, such as SMEs, civil society, universities, small EU countries, and those in the Global South with limited AI literacy, make this assessment?
Not AI First, But AI Where It Is The Best Solution
Rather than presuming that AI, as claimed by the Commission’s President Ursula von der Leyen, will inevitably deliver “smarter, faster, and more affordable solutions” (von der Leyen 2025), Europe must first determine where, and whether, AI genuinely serves societal needs.
That is, we must start with Question Zero: Why AI? (Lindström 2025). What problem are we trying to solve? Is AI truly the right or only solution for each case where it is being applied or promoted? Who benefits, and who bears the costs? By asking these simple questions, we quickly realise that AI is sometimes, but not always, the answer. This approach offers a quick, low-cost way to assess AI’s relevance, especially useful for poorly resourced actors who are often expected to adopt AI without sufficient AI literacy, resources, or support.
Europe As An AI Leader
AI is not inevitable, nor is its current trajectory predetermined. The EU has real choices to make. As such, it needs to focus its efforts on actively navigating the right path forward, rather than assuming that the choices have been made for it and that the only thing it can do is try to catch up. This ability to make choices is what digital sovereignty really means: having the ability to decide over our own futures. While the Commission’s suggestion of AI First may miss the mark, the EU retains the power to define when and how AI should be used, and, vitally, when it should not. By doing so, the EU can lead not through speed, but through purpose, setting a global example of responsible innovation that strengthens independence, upholds democratic values, and turns digital sovereignty into a shared regional strength.
References
Lindström, Anna Dahlgren, Virginia Dignum, Petter Ericson, Tatjana Titareva, and Jason Tucker. 2025. Responsible AI Self-assessment Workshop: Start with Question Zero. AI Policy Lab, published September 5, 2025. https://aipolicylab.se/2025/09/05/responsible-ai-self-assessment-workshop-start-with-question-zero/ Accessed October 9, 2025.
European Commission. 2025a. Apply AI Strategy. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/apply-ai Accessed October 9, 2025.
European Commission. 2025b. Communication from the commission to the European Parliament and the Council. Apply AI Strategy (Brussels, 8.10.2025, COM(2025) 723 final). https://ec.europa.eu/newsroom/dae/redirection/document/120429
European Commission. 2025c. EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence. https://digital-strategy.ec.europa.eu/en/news/eu-launches-investai-initiative-mobilise-eu200-billion-investment-artificial-intelligence Last update: 12 February 2025; Accessed October 9, 2025.
European Commission. 2025d. The future of European competitiveness. Part A: A competitiveness strategy for Europe. https://commission.europa.eu/document/download/97e481fd-2dc3-412d-be4c-f152a8232961_en?filename=ThefutureofEuropeancompetitiveness_AcompetitivenessstrategyforEurope.pdf
European Parliament. 2024. Addictive design of online services and consumer protection in the EU single market (P9 TA(2023)0459). Official Journal of the European Union C/2024/4164 (2024). https://eur-lex.europa.eu/eli/C/2024/4164/oj/eng Accessed October 9, 2025.
Wearden, Graeme. 2025. Entry-level workers face AI ‘job-pocalypse’; US probes Tesla’s self-driving system – as it happened. The Guardian (2025). https://www.theguardian.com/business/live/2025/oct/09/water-customers-bill-hike-winter-blackouts-risk-falls-stock-markets-pound-ftse-business-live-news Accessed October 9, 2025.
Makortoff, Kalyeena. 2025. Bank of England warns of growing risk that AI bubble could burst. The Guardian (2025). https://www.theguardian.com/business/2025/oct/08/bank-of-england-warns-of-growing-risk-that-ai-bubble-could-burst Accessed October 2025.
Haeck, Pieter. 2025. Dutch chips giant ASML executive Roger Dassen slams EU AI overregulation. https://www.politico.eu/article/dutch-chips-giant-asml-executive-roger-dassen-slams-eu-ai-overregulation/ Accessed October 9, 2025.
Estrada, Sergio. 2025. MIT Report: 95% of Generative AI Pilots at Companies Are Failing. Fortune (2025). https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/ Accessed October 9, 2025.
Von der Leyen, Ursula. 2025. From now on it’s “AI First” — today the European Commission launches its new approach to artificial intelligence. LinkedIn post. Available at: https://www.linkedin.com/posts/ursula-von-der-leyen_from-now-on-its-ai-first-today-the-activity-7381720419516452864-2aUp.
Keywords (comma separated):
European Commission, European Union, Invest AI, Apply AI, AI First Policy, Question Zero, Responsible AI
How to cite this article:
Dignum, V., Carli, R., Ericson, P., Titareva, T., & Tucker, J. (2025). ‘AI First’ to ‘Purpose First’: Rethinking Europe’s AI Strategy. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/LPOU6506
-
Policy Gap: AI and the Determinants of Public Health
Siri Helle (Psychologist, author and speaker)
Published on 30 September 2025
There is growing interest in how artificial intelligence (AI) can be applied in public health – from individual-level interventions such as diagnosis, treatment, and patient follow-up in healthcare, to broader public health applications like health data analysis or pandemic response. Ongoing debates about regulation have already led to guidelines, including those from the WHO (2021, 2024).
An important but neglected policy area concerns the secondary effects of AI technologies on public health, where a clear regulatory gap exists. Previous technological revolutions such as electricity and the internet reshaped society and lifestyles, with downstream public health consequences – such as rising sedentary behavior and cardiovascular disease. Already today, we can identify several potential risks and opportunities linked to AI development that must be addressed if we are to safeguard population health in the future.
Key determinants of health likely to be affected include work, relationships, cognition, physical activity, and psychosocial stress. Below are some examples and potential policy responses.
Work
The labor market impacts of AI remain uncertain, but some groups – such as translators and illustrators – are already reporting falling demand due to generative AI (Society of Authors, 2024). Even with opportunities for retraining, job insecurity and layoffs are often perceived as personal crises, with heightened risks of substance use disorders, depression, cardiovascular disease, and suicide (Kim & von dem Knesebeck, 2015; Zellers et al., 2025). Policymakers must be prepared from a public health perspective, for example through preventive health communication and scalable stepped-care interventions that can be expanded as needs increase.
Relationships
Strong social relationships are among the most important protective factors for health and wellbeing (World Happiness Report, 2024). Their effect on mortality risk is comparable to that of well-known risk factors such as smoking and binge drinking (Holt-Lunstad et al., 2010).
While AI services may help alleviate loneliness or coach users toward better social skills, there is also a risk that they replace human relationships due to their convenience. Researchers such as Mahari and Pataranutaporn (2024) have called for regulation in this area. One proposal is to mandate that non-humanized chatbots be the default in vulnerable settings such as health and wellness apps, to reduce the risk of users anthropomorphizing and misusing the technology (De Freitas & Cohen, 2025).
Cognition
AI tools may enhance cognition by supporting personalized learning or compensating for bias. At the same time, emerging evidence suggests they might impair higher-order functions over time. Just as books and calculators shaped cognition through “cognitive offloading,” AI tools may lead to declines in problem-solving, planning, and decision-making – especially among younger generations growing up with them. Although research is still limited, small-scale studies point in this direction (Gerlich, 2025).
Such changes could have broad societal implications, including dependence on AI, loss of critical thinking, and increased vulnerability to manipulation. They also carry direct health consequences: cognitive functioning is closely linked to outcomes such as emotion regulation, longevity, and resilience against neurological diseases like Alzheimer’s (Lövdén et al., 2020).
Sedentary Behavior
AI-driven tools for both work and leisure risk reinforcing already high levels of sedentary time by shifting more tasks to screen-based, automated, and remote interactions. Prolonged sedentary behavior is associated with higher risks of all-cause mortality, cardiovascular disease, type 2 diabetes, and certain cancers, even after adjusting for leisure-time physical activity (Biswas et al., 2015).
Psychosocial Stress
Rapid social change, including AI adoption, can heighten uncertainty, worry, and job insecurity – all well-established psychosocial stressors linked to poor health outcomes, including cardiovascular disease, mental illness, and elevated mortality (Guidi et al., 2021). Strengthening digital self-efficacy can help buffer these effects (Zhao & Wu, 2025), highlighting the need to monitor and address psychosocial consequences alongside technical and clinical AI governance.
Catastrophic Risks
Alongside gradual effects, there is also a class of extreme health risks from AI systems, including catastrophic accidents or loss of human control. Though their probability is debated, the potential scale – up to and including human survival – makes them relevant to a comprehensive public health framework. As with rare but devastating hazards like nuclear accidents or novel pandemics, AI warrants systematic assessment and planning.
POLICY RECOMMENDATIONS
To ensure AI development produces the best possible outcomes for public health, it is not enough to regulate AI applications within healthcare alone. Public health must be integrated into all AI policies, alongside other overarching sustainability perspectives such as climate, equity, and human rights. Here are three proposals:
1. Integrate public health into AI regulation
Frameworks governing AI development and deployment should explicitly include public health provisions. For example, the EU AI Act (Article 5) prohibits AI systems designed to manipulate user behavior in ways that cause significant harm to self or others. The EU Digital Services Act (Article 34) requires very large online platforms to assess and mitigate systemic risks, including those affecting public health and mental wellbeing. Digital services with underage users must be safe and free from harmful content, regardless of their size.
As with food safety standards, technologies should meet minimum health requirements. Consumers have a right not to be exposed to foreseeable risks such as disrupted sleep, distorted body image, or social dysfunction, where such harms can be anticipated and prevented.
2. Build AI capacity within public health institutions
Knowledge of AI remains limited among many public health professionals and officials. Capacity must be strengthened through education, recruitment, and expert networks so that AI-related challenges can be managed at local, regional, national, and international levels.
Global advisory bodies such as the WHO could support governments in integrating public health perspectives into national AI strategies, beyond the medical applications currently emphasized.
3. Stimulate research on AI and public health
Research on the public health effects of AI remains scarce. Neither the International AI Safety Report (2025) nor the MIT AI Risk Repository currently lists health risks as a category. Most existing studies focus narrowly on healthcare applications rather than upstream determinants of health.
We need systematic investigation into emerging effects as well as foresight analyses to anticipate future impacts. By mitigating risks and promoting health benefits, AI can be developed in ways that support rather than undermine public health.
This is an initial attempt to articulate the secondary public health dimensions of AI as a societal challenge. I welcome comments, suggestions, and ideas.
References
Bengio, Y., Mindermann, S., Privitera, D., et al. (2025). International AI Safety Report (Research Series No. DSIT 2025/001). UK Department for Science, Innovation & Technology. https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf
Biswas, A., Oh, P. I., Faulkner, G. E. J., et al. (2015). Sedentary time and its association with risk for disease incidence, mortality, and hospitalization in adults: A systematic review and meta‐analysis. Annals of Internal Medicine, 162(2), 123–132. https://doi.org/10.7326/M14-1651
De Freitas, J., & Cohen, A. (2025). Unregulated emotional risks of AI wellness apps. Nature Machine Intelligence, 7(6), 813–815. https://doi.org/10.1038/s42256-025-01051-5
Helliwell, J. F., Layard, R., Sachs, J. D., De Neve, J.-E., Aknin, L. B., & Wang, S. (Eds.). (2024). World Happiness Report 2024. Wellbeing Research Centre, University of Oxford.
Holt-Lunstad, J., Smith, T. B., & Layton, J. B. (2010). Social relationships and mortality risk: A meta‐analytic review. PLoS Medicine, 7(7), e1000316. https://doi.org/10.1371/journal.pmed.1000316
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Guidi, J., Lucente, M., Sonino, N., & Fava, G. A. (2021). Allostatic load and its impact on health: A systematic review. Psychotherapy and Psychosomatics, 90(1), 11–27. https://doi.org/10.1159/000510696
Kim, T. J., & von dem Knesebeck, O. (2015). Is an insecure job better for health than having no job at all? A systematic review of studies investigating the health-related risks of both job insecurity and unemployment. BMC Public Health, 15, 985. https://doi.org/10.1186/s12889-015-2313-1
Lövdén, M., et al. (2020). Education and cognitive functioning across the life span. Psychological Science in the Public Interest, 21(1), 6–41. https://doi.org/10.1177/1529100620920576
Mahari, R., & Pataranutaporn, P. (2024, August 5). We need to prepare for ‘addictive intelligence’. MIT Technology Review. https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
Slattery, P., Saeri, A. K., Grundy, E. A. C., et al. (2024). The AI Risk Repository: A comprehensive meta‐review, database, and taxonomy of risks from artificial intelligence. arXiv. https://arxiv.org/abs/2408.12622
Society of Authors. (2024, April 11). SOA survey reveals a third of translators and quarter of illustrators losing work to AI. Society of Authors. https://www.societyofauthors.org/2024/04/11/soa-survey-reveals-a-third-of-translators-and-quarter-of-illustrators-losing-work-to-ai/
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). (2022, October 27). Official Journal of the European Union, L 277, 1–102. http://data.europa.eu/eli/reg/2022/2065/oj
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). (2024, July 12). Official Journal of the European Union, L 2024/1689. http://data.europa.eu/eli/reg/2024/1689/oj
World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. https://www.who.int/publications/i/item/9789240029200
World Health Organization. (2024, January 18). WHO releases AI ethics and governance guidance for large multi‐modal models in health care and medical research. https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models
Zellers, S., Azzi, E., Latvala, A., Kaprio, J., & Maczulskij, T. (2025). Causally-informative analyses of the effect of job displacement on all-cause and specific-cause mortality from the 1990s Finnish recession until 2020: A population registry study of private sector employees. Social Science & Medicine, 370, 117867. https://doi.org/10.1016/j.socscimed.2025.117867
Zhao, X., & Wu, Y. (2025). Artificial intelligence job substitution risks, digital self‐efficacy, and mental health among employees. Journal of Occupational and Environmental Medicine, 67(5), e302–e310. https://doi.org/10.1097/JOM.0000000000003335
How to cite this article:
Helle, S. (2025). Policy Gap: AI and the Determinants of Public Health. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/HNTR5780
-
The Ecological and Ethical Cost of Scaling AI
Irum Younis Khan (Dept. of Management Sciences, COMSATS University Islamabad (CUI))
Published on 11 August 2025
1. The Material Demands of AI
Artificial intelligence has been presented to the world as a technology driving economic and social transformation: efficient, and with minimal environmental impact. Yet the reality of the high energy and natural resource consumption needed to keep its data centers operational often remains invisible in mainstream narratives. This continuous demand for energy and freshwater positions AI as a material actor within the planet’s ecosystem, with considerable ecological costs, marking a major shift in the Anthropocene, where human technologies shape planetary systems (Creutzig et al., 2022; Wu et al., 2022). Although the CO2 emissions from data centers have gained much attention, water consumption remains opaque due to a significant lack of independent third-party auditing and assessment (US Department of Energy, 2024); the only available information comes from the tech giants that own these data centers. As AI adoption accelerates, the expansion and scaling of hyperscale and edge data centers requires massive quantities of fresh, potable water – both directly and indirectly – draining the local sources that communities rely on for their livelihoods. Data centers, though used for diverse digital services, are increasingly being scaled to host AI workloads. The lack of transparency and clear policies around design choices and the selection of geographical locations, together with inequities in stakeholder inclusion, emerges as a critical issue.
The increased deployment of AI-based systems raises demand for its infrastructure, prompting urgent concerns about planetary boundaries. Water, a core utility, is a finite and unevenly distributed natural resource, already under stress in many regions (UNESCO, 2021). AI, with its environmental cost, cannot be treated as immaterial. Overlooking AI’s water consumption in sustainability assessments is no longer an option. There is a pressing need for transparency in sourcing, usage, and reporting frameworks across the AI value chain.
Data centers rely heavily on fresh water, with each query costing a measurable amount, as Sam Altman, CEO of OpenAI, recently noted. Fresh water is used directly for cooling and for maintaining optimal operating conditions (McKinsey & Company, 2024). A single hyperscale data center, for example, can use up to 550,000 gallons of water per day, totaling roughly 200 million gallons (760 million liters) annually. That is enough to meet the basic needs of approximately 8,000 five-person households, based on WHO’s per capita daily water requirement (WHO, 2020).
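To make the scale concrete, the arithmetic behind these equivalences can be checked in a few lines of Python. This is a back-of-envelope sketch, assuming 3.785 liters per gallon and a WHO basic-need level of roughly 50 liters per person per day; the inputs are the estimates quoted above, not measured data.

```python
# Back-of-envelope check of the figures in the text above.
# All inputs are the article's stated estimates, not measured data.
GALLONS_PER_DAY = 550_000            # upper-end hyperscale data center use
LITERS_PER_GALLON = 3.785
WHO_L_PER_PERSON_PER_DAY = 50        # approx. WHO basic-access service level
HOUSEHOLD_SIZE = 5

annual_gallons = GALLONS_PER_DAY * 365               # ~200 million gallons
annual_liters = annual_gallons * LITERS_PER_GALLON   # ~760 million liters

persons = GALLONS_PER_DAY * LITERS_PER_GALLON / WHO_L_PER_PERSON_PER_DAY
households = persons / HOUSEHOLD_SIZE

print(f"Annual use: {annual_gallons/1e6:.0f}M gal ≈ {annual_liters/1e6:.0f}M L")
print(f"Basic needs covered: ~{persons:,.0f} people, ~{households:,.0f} households")
```

Running this reproduces the roughly 200 million gallons (760 million liters) per year and the approximately 8,000 five-person households cited above.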
The situation is further complicated by the water-energy nexus. AI’s vast computational demands require electricity, and globally most electricity is produced via thermal or nuclear power plants, which place additional strain on freshwater reserves. Analyzing the nexus reveals that the indirect water consumption needed for electricity generation can match or even exceed the water needed for cooling, compounding AI’s overall water footprint. Despite this scale, companies do not report Water Usage Effectiveness (WUE) consistently or transparently (IEA, 2024). Claims of replenishing consumed water, such as ‘water-positive’ pledges, remain vague, often lacking critical information about when, where, and how freshwater is drawn and what ecological benefit is actually gained (Microsoft, 2025). Such data is vital: offsets frequently occur outside the watershed where extraction happens, and therefore fail to provide meaningful accountability.
2. Water-Stressed Geographies and the Data Center Boom
Freshwater scarcity has affected billions of people in recent years. Yet the growing mismatch between data center placement and water availability reflects negligence of this escalating crisis. A closer look at the spatial convergence of water-risk locations and the number of data centers they host reveals digital futures being built on fragile water foundations (World Resources Institute, 2023; Data Center Map, 2024). Countries like Belgium, Spain, Chile, and India host large numbers of data centers despite high or extremely high baseline (2023) and projected (2030) water stress levels, as reported by the Aqueduct Water Risk Atlas. India, for instance, hosts 265 data centers while facing extreme water stress, and major tech firms like Google plan additional hyperscale data centers in the country. Spain hosts 161 data centers, with a presence of Microsoft, Google, Meta, and Amazon Web Services, and Belgium has 48 data centers supporting significant and long-standing hyperscale operations such as Google’s.
Caught between depleting water resources and economic benefits, these countries embody the paradox of digital infrastructure growth amid water scarcity. Local ecosystems and communities increasingly compete for the same resources, without any mechanism to govern or mitigate this competition (Lehuedé, 2024; Vinuesa et al., 2020). Meanwhile, developing countries position themselves in the AI revolution by actively welcoming the foreign investment that data centers bring as part of their digital transformation goals. Nations like South Africa, Egypt, Angola, and Pakistan, despite ongoing water scarcity challenges, court data center investment by offering land and rebates, seemingly without considering the ecological trade-offs that accompany economic prosperity. Egypt, for example, faces extremely high projected water stress; combined with its rising number of AI data centers, it may gain a short-term economic uplift while deepening long-term water scarcity.
3. AI Growth and Ecological Fragility
As global AI models scale and localization increases, demand for data centers will intensify. This expansion risks triggering a rebound effect (Hertwich, 2005): efficiency gains in AI models lead to increases in overall resource consumption and a need for more data centers. As AI capabilities become more distributed and embedded in daily systems, energy and water consumption will accelerate too. So, despite any improvements in model efficiency or cooling innovations, the need for accountability and transparency around AI’s environmental impact cannot be denied. To keep up with demand, data center operators pursue locations with low operational costs and greater water accessibility; such sites are often situated in resource-constrained regions, while developing countries stand ready to seize the opportunity for digital transformation and foreign investment. However, no international standard yet guides the integration of water risk into AI infrastructure policy, leaving decisions to fragmented national strategies or public-private discretion.
The absence of regulatory safeguards, combined with these environmental impacts, may intensify conflicts between data center operators, agricultural users, diverse industrial consumers, and the local populations who share freshwater sources. Such conflicts may lead to disrupted food systems, local protests, and inequitable water allocations, and in some cases corporate interests overriding public welfare. Such politicization may further destabilize already vulnerable communities.
The expansion of AI calls for accountability for its environmental footprint in sustainability and governance debates, requiring independent audits, inclusive planning, and enforceable global standards. In the Anthropocene, responsible AI must align with planetary limits and social equity, or risk heavy ecological strain and systemic injustice.
References
Creutzig, F., Acemoglu, D., Bai, X., Edwards, P., Hintz, M., Kaack, L., . . . Rejeski, D. (2022). Digitalization and the Anthropocene. Annual Review of Environment and Resources, 47(1), 479-509.
Data Center Map. (2024). Search data centers by country. Retrieved from Data Center Map: https://www.datacentermap.com/
Hertwich, E. G. (2005). Consumption and the rebound effect: An industrial ecology perspective. Journal of Industrial Ecology, 9(1‐2), 85-98.
IEA. (2024). Data Centres and Data Transmission Networks. Retrieved from International Energy Agency (IEA): https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks
Lehuedé, S. (2024). An elemental ethics for artificial intelligence: water as resistance within AI’s value chain. AI & SOCIETY, 1-14.
McKinsey & Company. (2024, September 17). How data centers and the energy sector can sate AI’s hunger for power. Retrieved from https://www.mckinsey.com/industries/private-capital/our-insights/how-data-centers-and-the-energy-sector-can-sate-ais-hunger-for-power
Microsoft. (2025). Environmental Sustainability Report 2025. Retrieved from https://www.microsoft.com/en-us/corporate-responsibility/sustainability/report/
US Department of Energy. (2024). Recommendations on Powering Artificial Intelligence and Data Center Infrastructure. Retrieved from US Department of Energy: https://www.energy.gov/sites/default/files/2024-08/PoweringAIandDataCenterInfrastructureRecommendationsJuly2024.pdf
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., . . . Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233.
WHO. (2020). Domestic water quantity, service level and health (2nd Ed.). Retrieved from World Health Organization: https://iris.who.int/handle/10665/338044
World Resources Institute. (2023). World Resources Institute. Retrieved from Aqueduct Water Risk Atlas: https://www.wri.org/applications/aqueduct/water-risk-atlas
Wu, C., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., . . . Gschwind, M. (2022). Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795-813.
Keywords (comma separated):
AI governance, water-energy nexus, Anthropocene, environmental sustainability, ecological costs of AI, data center impact
How to cite this article:
Khan, I. Y. (2025). The Ecological and Ethical Cost of Scaling AI. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/LGGN1494
-
Tracing labour, power, and information in Artificial Intelligence Systems
Petter Ericson (AI Policy Lab, Department of Computing Science, Umeå University)
Published on 24 June 2025
1. Introduction
It is common for technology to be used to obscure the role of humans, and artificial intelligence (AI) is a field where this is even more true than for many others. From ghost workers and data scraping to algorithmic management and automated decision making, AI technologies are used to displace, appropriate, and hide human labour in various ways. Decisions are hidden inside technical systems, externalising them from individuals and organisations who could be held meaningfully accountable, which can make agency and power flow in new and often poorly understood ways. Because automated systems imitate humans, teleworkers can be seamlessly swapped in and out as needed, with users none the wiser, making the systems appear significantly more capable than they actually are.
A useful abstraction for studying and exposing the workings of these systems is to consider how and where information is flowing through them. However, tools for such analyses tend to either be highly abstracted and focused on the broader sociotechnical systems where AI components are situated, or highly technical and focused on the details of specific software and hardware architectures, or on idealized and abstract models thereof. The proposed work will attempt to bridge this gap. On the one hand, it will take a more rigorous approach grounded in information theory to qualify and quantify the information that the humans involved share both through some specific technical system and through outside channels. On the other hand, it will take a wider view on the concrete workings of those technical systems, incorporating sociotechnical metadata into analyses of digital information flows.
In sum, we aim to use tools and methods from information theory and sociotechnical system modelling, together with formal graph models and complexity theory, to investigate and explain how diverse human labour and knowledge are discretized, datafied, and expressed throughout the development and deployment of different types and architectures of AI systems.
A complementary aim of this work is to build on and further develop existing research connecting information, computation, labour, and value, and how these concepts interact, specifically in the context of AI systems, making concrete contributions to interdisciplinary studies on AI and data work. A second major aim is to yield new insights into how to quantify and qualify information flows, through a focused study of sociotechnical systems involving AI components, where information and its flows in the digital realm can be directly studied, and comparisons can be made to both models and empirical realities of the social realm surrounding them.
Ultimately, we aim to investigate the following research questions:
* RQ1 What types of information flows can be identified within different AI system architectures, and how can these be formally categorized?
* RQ2 How do human actors contribute to and engage with information flows in and around AI systems, and how can these social interactions be systematically modeled?
* RQ3 How can we develop and validate models of information flows in AI-based sociotechnical systems that integrate both technical and human components?
* RQ4 How do modeled information flows reflect or reinforce particular organizational or institutional power structures?
2. Related work
Though, to the best of the author’s knowledge, there is very little research on precisely the present topic, its interdisciplinary nature means that there are a number of intersecting areas of active research. In particular, works that cover the intersection between AI and information theory, between information theory and labour, between labour and AI, and between any of these three areas and sociotechnical modelling are all relevant.
For the first intersection, Jeon and Roy [1] have recently investigated the connections between Bayesian machine learning and Shannon information theory, drawing an equivalence between the cumulative errors during the learning process of an optimal machine learning algorithm and the amount of information contained in the data. From a different angle, works such as Tseng et al. [2] and Yin et al. [3] have looked specifically at large language models (LLMs), drawing on compression and entropy calculations to study the use and training, respectively, of LLMs in relation to natural language texts.
For the second, Dantas [4] has drawn direct connections between information and both labor and value in an explicitly Marxist framework, distinguishing not only use and exchange value but also semiotic value, and deriving a specific notion of information work that will be of direct use in the proposed work. Dantas also draws a distinction between random and redundant information work, which is similar to the distinction between semantic and syntactic information work in [5], further nuanced into an explicit spectrum in [6].
The third intersection is itself a broad area, with many different aspects of relevance. Nguyen and Mateescu [7] give a good overview of the current landscape in relation to Generative AI specifically, while Davis [8] provides a broader review of relevant issues, making an explicit (and useful) distinction between cases where AI use impacts labour demand (through automation) and those relating more to worker power (through surveillance, algorithmic management, and the like). Further, works such as Crawford [9], Miceli and Posada [10], Gray and Suri [11], Merchant [12], Sadowski [13], and Mejias and Couldry [14] are all relevant for further developing this work. The Data Workers Inquiry (https://data-workers.org) will be another important source of alternate perspectives on AI and labour.
In terms of studying AI sociotechnical systems, once again several active areas are of interest. In particular, Wu et al. [15] have developed a framework for integrating various types of models of sociotechnical systems (STS) into a single meta-model. Several modelling languages for sociotechnical systems exist, including STS-ml [16], which was developed for cybersecurity applications, and the host of standards and notations related to Business Process Model and Notation (BPMN), such as Decision Model and Notation (DMN), which is particularly relevant for models integrating AI decision support systems. However, all of these abstractions and models tend to integrate assumptions that are not always helpful for the purposes of this work. A relevant example of how existing modelling frameworks can be extended to cover new areas is [17], which adds properties and functionality to STS-ml in order to check sociotechnical systems for compliance with the EU General Data Protection Regulation (GDPR).
A relevant parallel effort, though not directly related to the work we propose here, is that of Gutierrez Lopez and Halford [18], who aim towards an extension of explainable AI (XAI) principles that includes the sociotechnical environment of the machine learning system.
3. Aims
The main contribution of this work will be to integrate previous work on sociotechnical systems modelling with several notions of information and labour, specifically in the context of artificial intelligence. An additional benefit of this work will be to lay a basis for further analysis of agency and accountability: by studying the information flows and potential inputs and decisions from humans involved in an AI sociotechnical system, together with an analysis of the power relations among them, accountability and responsibility can be transparently and meaningfully assigned.
We hope to make meaningful contributions to the practical use of information theory and information flow, as well as yield actionable and concrete directions for further exploration of new AI sociotechnical architectures. As part of this, a major component will consist of qualitatively and quantitatively analysing the information flows into and out of AI and ML systems, which will also give new and useful insights into the design of Hybrid AI systems in particular.
By creating concrete tools and methods for tracing information flows through both technical and social layers of AI systems, this work will attempt to offer not just theoretical insight, but practical value for those developing, regulating, or critically analyzing such systems. In a time when the societal consequences of AI are increasingly opaque yet consequential, this research will provide actionable models that can inform transparency standards, system audits, and future AI governance efforts.
4. Preliminary results
4.1. Categories of information
A foundational topic of this work is clarifying and classifying different types of information. In particular, though at extremely small scales reality can occasionally appear to be digital, for most practical purposes it is continuous. In contrast, digital information, and computer and AI systems, while implemented on physical hardware, are conceptually and practically discrete. As such, while abstractions of the human and physical sections and relations of a sociotechnical system are inevitably going to be lossy, for the digital parts it is in principle possible to be both precise and concrete. This, then, must be our first distinction between fundamentally different types of information: abstracted notions and models of the real world, and concrete digital bits and bytes.
In terms of different theoretical notions of information, we further contrast the more mathematical definitions of Shannon [19] (‘minimal code’) and Kolmogorov [20] (‘minimal program’) with the Batesonian concept of ‘a difference which makes a difference’ [21]. A fourth relevant concept is Corning’s ‘control information’ [22], which, rather than connecting Shannon entropy/negentropy directly to the physical thermodynamic concepts of the same name, instead quantifies the amount of information contained in some signal or phenomenon by the amount of physical change that it can effect. An example taken directly from Corning [22] is that of a car approaching a stoplight. If the driver does not notice or understand the traffic light, no control information is being transferred by whatever light is shown. Only if the driver sees the light, understands it, and is prepared to change the future trajectory of the car is there any control information being sent out by the light switching to red. Broadly, we can thus consider two very different types of information flows: the almost entirely discrete and abstract digital information exchanges between and inside of software components, and the messy, socially situated, and necessarily contingent and abstracted information flows that can be modelled to exist between humans, technological artefacts, and their surrounding physical context. The main interest of this work lies precisely where these information flows intersect and interact.
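To make the contrast between the first two notions tangible, the sketch below estimates the empirical Shannon entropy of a byte string and compares it with its zlib-compressed length as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable). This is an illustrative aside, not part of the proposed framework; zlib is an assumed proxy, not a measure of minimal program length.

```python
# Illustrative only: contrast Shannon's 'minimal code' view with a crude
# Kolmogorov-style proxy (compressed length). zlib is a stand-in; true
# Kolmogorov complexity is uncomputable.
import math
import zlib
from collections import Counter

def shannon_bits(data: bytes) -> float:
    """Empirical symbol-by-symbol Shannon entropy (bits/byte) times length:
    a lower bound for any memoryless code over this byte distribution."""
    counts, n = Counter(data), len(data)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h * n

def compressed_bits(data: bytes) -> int:
    """Bit length of the zlib-compressed data: a rough, computable upper
    bound in the 'minimal program' spirit."""
    return 8 * len(zlib.compress(data, level=9))

samples = {
    "repetitive": b"ab" * 500,
    "english":    b"the quick brown fox jumps over the lazy dog " * 23,
    "uniform":    bytes(range(256)) * 4,
}
for name, data in samples.items():
    print(f"{name:10s} raw={8*len(data):6d}b "
          f"shannon~{shannon_bits(data):7.0f}b zlib={compressed_bits(data):6d}b")
```

On the repetitive sample, the symbol-wise Shannon estimate stays near one bit per byte while compression collapses the string to a handful of bytes, showing how sequential structure separates the ‘minimal code’ and ‘minimal program’ views.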
4.2. Human-computer information interactions
With a minimal distinction of information flows as above, consider the interactions between a human and a computer system: the transfer of information from human to computer will necessarily abstract some concrete intention of the human into a concrete digital signal, but likewise a (digital) computer output will take on a specific meaning to the human which depends on their prior knowledge and the context in which the output is given. We can depict these shifts as in figure 1.
4.3. Analysis example
As an illustrative example of the type of analysis that we aim to make more concrete, detailed, and empirically grounded, consider the case of an article being written about a sports event. It is, at this point, plausible that such an article could be written by a large language model (LLM) given an appropriate prompt, including some sort of summary of the ‘relevant facts’ of the event in question (e.g. the final tally of points, who made them when, and any injuries or other specific incidents, all accessible from some sort of API). The situation would look something like figure 2. We can complicate this picture, however, by adding more context. The article will not reach publication without an editor, and the hidden labour that has resulted in the LLM is entirely absent from our initial figure, as is the work to set up the “sports API” and the later work to feed it with the ‘relevant facts’ from observations of the event. A more realistic picture emerges, as in figure 3.
Compare this to a situation where a human writer is the author of the same article. Though the plain (abstract) facts of the event in question may be the same, the human will also have access to an infinitely larger context as part of their writing process, both through direct experience and memory, and through communications with other humans, computer systems, and physical objects such as books and recordings, never mind the sights, sounds, and smells of the event itself if the writer was present. In this case, the situation will look more like figure 4. This too can be made more complex, particularly if we imagine the writer making use of an LLM for writing assistance of some sort, yielding a situation as in figure 5.
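As a concrete aside, one way scenarios like those in figures 2 to 5 could be encoded for machine-assisted analysis is as directed graphs whose edges are annotated with the kind of flow and the labour behind it. The sketch below encodes a figure-3-like version of the LLM case; all node names, attributes, and the traversal are hypothetical illustrations, not the modelling framework this paper proposes.

```python
# Minimal sketch: a figure-3-like scenario as an annotated directed graph,
# with a traversal that surfaces hidden upstream labour. All names and
# categories are illustrative assumptions, not the paper's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str
    target: str
    kind: str    # 'digital' (discrete) vs 'social' (abstracted, contingent)
    labour: str  # whose work produces or carries this flow

flows = [
    Flow("event", "observers", "social", "spectating/recording"),
    Flow("observers", "sports_api", "digital", "data entry (often invisible)"),
    Flow("annotators", "llm", "digital", "training-data labour (hidden)"),
    Flow("sports_api", "llm", "digital", "automated retrieval"),
    Flow("llm", "draft_article", "digital", "model inference"),
    Flow("draft_article", "editor", "social", "editorial judgement"),
    Flow("editor", "published_article", "digital", "publication"),
]

def upstream_labour(flows, node):
    """Trace back every human contribution reachable upstream of `node`."""
    found, visited, frontier = set(), set(), {node}
    while frontier:
        visited |= frontier
        incoming = [f for f in flows if f.target in frontier]
        frontier = {f.source for f in incoming} - visited
        found.update(f.labour for f in incoming)
    return found

print(upstream_labour(flows, "published_article"))
```

Even this toy traversal surfaces the annotation and data-entry labour that the simpler figure-2 picture leaves invisible.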
5. The path forward
Though primarily based in computing science, the nature of the problems addressed by this work calls for an interdisciplinary approach. Notably, by building on existing work in Science and Technology Studies, as well as critical Marxist literature, it is possible to better situate and analyse information flows and AI sociotechnical systems within existing societal power structures and socioeconomic realities. In addition to the various theories of information mentioned in Section 4.1, we will also distinguish between different types of information work, as outlined in Section 2. The distinctions between data, information, and knowledge have been explored e.g. in [23, 24], and these perspectives will also be considered.
We will primarily be building on existing frameworks for the analysis of program structure and information flow through software. Notably, the theory and practice of Quantitative Information Flow (QIF) analysis in computer security, though focused on detecting and plugging information leaks between public and private variables under static source code analysis conditions, offers a range of useful tools for modelling intentional information flows as well. From a software engineering lens, constructing program flow graphs and clearly and consistently delineating components in a software system is a well-established practice, with a host of frameworks available for use. An example of an abstract framework for describing program and information flows that has been specifically developed for describing Hybrid AI systems is the boxology of van Harmelen and ten Teije [25].
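For a flavour of the kind of quantity QIF works with, the toy example below models a deterministic program as a channel from a secret input to an observable output and computes the Shannon leakage I(S;O) = H(S) − H(S|O). It is a textbook-style illustration under an assumed uniform prior, not an analysis method proposed in this paper.

```python
# Toy quantitative-information-flow (QIF) calculation: a program viewed as
# a channel from secret input S to observable output O, with Shannon
# leakage I(S;O) = H(S) - H(S|O). Textbook-style illustration only.
import math
from collections import defaultdict

def program(secret: int) -> int:
    return secret % 4  # the observable reveals only the two low-order bits

secrets = range(16)                       # assumed uniform prior, 16 secrets
prior = {s: 1 / 16 for s in secrets}

# Joint distribution p(s, o) induced by the deterministic program.
joint = defaultdict(float)
for s, p in prior.items():
    joint[(s, program(s))] += p

p_out = defaultdict(float)                # marginal p(o)
for (s, o), p in joint.items():
    p_out[o] += p

H_S = -sum(p * math.log2(p) for p in prior.values())
H_S_given_O = -sum(p * math.log2(p / p_out[o]) for (s, o), p in joint.items())
print(f"H(S) = {H_S:.1f} bits, leakage I(S;O) = {H_S - H_S_given_O:.1f} bits")
```

Of the four bits of secret, exactly the two low-order bits leak: the kind of precise accounting that this work aims to extend from security variables to intentional, labour-carrying flows.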
For situating a software system in an organisational context, tools from business modelling are available as well, with well-established frameworks such as BPMN, STS-ml, and various derivatives having seen extensive use in analysing information flows and decision processes in business contexts. Concretely, the near-term parts of this work will consist of a phase of conceptual and theoretical grounding, studying and comparing existing frameworks for information flow analysis, to arrive at a rigorous and flexible framework for modelling information flows in sociotechnical systems, incorporating the above distinctions and specificities, and giving specific attention to questions of agency and valuation. This work will aim at identifying connections and distinctions in how different frameworks frame decisions, and how labour is considered within them.
In the course of this development, a metadata schema for information and information flows will be developed that can describe and categorise information in terms of its qualities and its different information contents, as well as its role at a specific point in a described sociotechnical process. Tracing the changes of these properties as attached to a particular piece of information will be an important complement to the analyses of the flows themselves, and of the various transformations imposed on and driven by the information.
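Purely as a thought experiment, such an annotation might look something like the following; every field name and category here is an assumption for illustration, not the schema to be developed.

```python
# Hypothetical sketch of what the proposed metadata schema might record for
# one piece of information at one point in a sociotechnical process.
# All field names and categories are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Realm(Enum):
    DIGITAL = "digital"   # discrete bits and bytes
    SOCIAL = "social"     # abstracted, contingent, socially situated

class Notion(Enum):
    SHANNON = "minimal code"
    KOLMOGOROV = "minimal program"
    BATESON = "difference that makes a difference"
    CORNING = "control information"

@dataclass
class InfoAnnotation:
    identifier: str
    realm: Realm
    applicable_notions: list[Notion]
    producing_labour: str                  # who produced this, and how
    role_in_process: str                   # e.g. input, decision basis, output
    transformations: list[str] = field(default_factory=list)

score_feed = InfoAnnotation(
    identifier="match_scores_v1",
    realm=Realm.DIGITAL,
    applicable_notions=[Notion.SHANNON, Notion.CORNING],
    producing_labour="court-side data entry by subcontracted staff",
    role_in_process="input to article-generating LLM",
    transformations=["observation -> structured record", "record -> prompt"],
)
print(score_feed)
```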
During and after these developments, the framework and schema will be empirically applied to real-world cases, both existing ones from the literature, and new and comparable studies of previously understudied sociotechnical contexts. Modelling these flows will be accomplished through direct study of technical artefacts and their documentation, as well as organisational policies and descriptions of their surrounding sociotechnical contexts. These will be supplemented by interviews and surveys of involved stakeholders to elicit new and undocumented perspectives not previously represented even in internal documents.
Through comparative analysis across multiple cases (to be selected to reflect diversity in AI architecture and deployment and across different domains, e.g. public-sector automation, language models, decision-support tools), the framework will be further refined to capture how different system configurations mediate flows of information, labour, and power. Ultimately, we aim for a formal, extensible modelling framework for analyzing information flows in sociotechnical systems involving AI, as well as a richly annotated library of concrete case studies. Additionally, we aim to make both conceptual and methodological contributions to the study of accountability, power, and labour in AI, as well as help drive further developments in related fields.
References
[1] H. J. Jeon, B. V. Roy, Information-Theoretic Foundations for Machine Learning, 2024. URL: http://arxiv.org/abs/2407.12288. doi:10.48550/arXiv.2407.12288, arXiv:2407.12288 [stat].
[2] Y.-H. Tseng, P.-E. Chen, D.-C. Lian, S.-K. Hsieh, The semantic relations in LLMs: An information-theoretic compression approach, in: T. Dong, E. Hinrichs, Z. Han, K. Liu, Y. Song, Y. Cao, C. F. Hempelmann, R. Sifa (Eds.), Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024, ELRA and ICCL, Torino, Italia, 2024, pp. 8–21. URL: https://aclanthology.org/2024.neusymbridge-1.2/.
[3] M. Yin, C. Wu, Y. Wang, H. Wang, W. Guo, Y. Wang, Y. Liu, R. Tang, D. Lian, E. Chen, Entropy Law: The Story Behind Data Compression and LLM Performance, 2024. URL: http://arxiv.org/abs/2407.06645. doi:10.48550/arXiv.2407.06645, arXiv:2407.06645 [cs].
[4] M. Dantas, Information as Work and as Value, tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 15 (2017) 816–847. URL: https://www.triple-c.at/index.php/tripleC/article/view/885. doi:10.31269/triplec.v15i2.885.
[5] J. Warner, Labor in information systems, Annual Review of Information Science and Technology 39 (2005) 551–573. URL: https://asistdl.onlinelibrary.wiley.com/doi/10.1002/aris.1440390120. doi:10.1002/aris.1440390120.
[6] J. Warner, The spectrum of semantic and syntactic labour, Journal of Documentation 80 (2024) 649–664. URL: https://www.emerald.com/insight/content/doi/10.1108/JD-03-2023-0057/full/html. doi:10.1108/JD-03-2023-0057.
[7] A. Nguyen, A. Mateescu, Generative AI and Labor: Power, Hype, and Value at Work, Technical Report, Data & Society Research Institute, 2024. URL: https://datasociety.net/library/generative-ai-and-labor. doi:10.69985/gksj7804.
[8] O. F. Davis, Artificial Intelligence and Worker Power (2024).
[9] K. Crawford, Atlas of AI: power, politics, and the planetary costs of artificial intelligence, Yale University Press, New Haven London, 2021.
[10] M. Miceli, J. Posada, The Data-Production Dispositif, Proceedings of the ACM on Human-Computer Interaction 6 (2022) 1–37. Publisher: ACM New York, NY, USA.
[11] M. L. Gray, S. Suri, Ghost work: how to stop Silicon Valley from building a new global underclass, Houghton Mifflin Harcourt, Boston, 2019.
[12] B. Merchant, Blood in the machine: the origins of the rebellion against big tech, first edition ed., Little, Brown and Company, New York, 2023. OCLC: on1389775757.
[13] J. Sadowski, The mechanic and the luddite: a ruthless criticism of technology and capitalism, University of California Press, Oakland, California, 2025. doi:10.1525/9780520398085.
[14] U. A. Mejias, N. Couldry, Data grab: the new colonialism of big tech and how to fight back, WHAllen, London, 2024.
[15] P. P.-Y. Wu, C. Fookes, J. Pitchforth, K. Mengersen, A framework for model integration and holistic modelling of socio-technical systems, Decision Support Systems 71 (2015) 14–27. URL: https://www.sciencedirect.com/science/article/pii/S016792361500007X. doi:10.1016/j.dss.2015.01.006.
[16] E. Paja, F. Dalpiaz, P. Giorgini, Modelling and reasoning about security requirements in socio-technical systems, Data & Knowledge Engineering 98 (2015) 123–143. URL: https://www.sciencedirect.com/science/article/pii/S0169023X1500052X. doi:10.1016/j.datak.2015.07.007.
[17] C. Negri-Ribalta, R. Noel, N. Herbaut, O. Pastor, C. Salinesi, Socio-Technical Modelling for GDPR Principles: an Extension for the STS-ml, in: 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW), 2022, pp. 238–243. URL: https://ieeexplore.ieee.org/document/9920163/?arnumber=9920163. doi:10.1109/REW56159.2022.00052, iSSN: 2770-6834.
[18] M. Gutierrez Lopez, S. Halford, Explaining machine learning practice: findings from an engaged science and technology studies project, Information, Communication & Society 28 (2025) 616–632. URL:
https://www.tandfonline.com/doi/full/10.1080/1369118X.2024.2400130. doi:10.1080/1369118X.2024.2400130.
[19] C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal 27(1948) 379–423. URL: https://ieeexplore.ieee.org/document/6773024. doi:10.1002/j.1538-7305.1948.tb01338.x, conference Name: The Bell System Technical Journal.
[20] A. N. Kolmogorov, On Tables of Random Numbers, Sankhyā: The Indian Journal of Statistics, Series A (1961-2002) 25 (1963) 369–376. URL: http://www.jstor.org/stable/25049284, publisher: Springer.
[21] G. Bateson, Form, substance and difference, Essential readings in biosemiotics 501 (1970). Publisher: Springer.
[22] P. A. Corning, Control information theory: the ‘missing link’ in the science of cybernetics, Systems Research and Behavioral Science 24 (2007) 297–311. URL:https://onlinelibrary.wiley.com/doi/abs/10.1002/sres.808. doi:10.1002/sres.808, _eprint:https://onlinelibrary.wiley.com/doi/pdf/10.1002/sres.808.
[23] L. Businska, I. Supulniece, M. Kirikova, On Data, Information, and Knowledge Representation in Business Process Models, in: R. Pooley, J. Coady, C. Schneider, H. Linger, C. Barry, M. Lang (Eds.),Information Systems Development, Springer, New York, NY, 2013, pp. 613–627. doi:10.1007/978-1-4614-4951-5_49.
[24] L. Businska, I. Supulniece, Towards Systematic Reflection of Data, Information, and Knowledge, Scientific Journal of Riga Technical University. Computer Sciences 43 (2011). URL: https://content.sciendo.com/doi/10.2478/v10143-011-0002-9. doi:10.2478/v10143-011-0002-9.
[25] F. v. Harmelen, A. t. Teije, A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems, Journal of Web Engineering 18 (2019) 97–124. URL: http://arxiv.org/abs/1905.12389.doi:10.13052/jwe1540-9589.18133, arXiv:1905.12389 [cs].
Keywords:
information theory, information flow, socio-technical system modelling
Related URL:
https://people.cs.umu.se/~pettter/tracing_information_figures.pdf
How to cite this article:
Ericson P. (2025). Tracing labour, power, and information in Artificial Intelligence Systems. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/AUHD8541
-
Time Out of Joint: Historical reflections on AI
Somya Joshi (Stockholm Environment Institute) & Remi Paccou (Schneider Electric)
Published on 27 May 2025
Artificial Intelligence. The very term conjures images of futuristic robots and sentient machines for some, and images of climate collapse and existential risk for others. This AI hype represents a disjoint in time, carrying both risks and promises. It signals a paradigm shift marked by unprecedented capabilities in information processing, autonomous reasoning, and pattern recognition, challenging traditional notions of progress and sustainability while demanding a nuanced approach to harness its potential responsibly and ethically.
The Three Technological Paradigms: From water wheels to apps
Human technological evolution can be understood through three major paradigms. The first focused on the transformation of materials, spanning from the Stone Age through the Bronze and Iron Ages(1), as humans developed increasingly sophisticated ways to manipulate their physical environment. The second paradigm, the Industrial Revolution, centered on the transformation of energy. The first industrial revolution (1770–1850), as identified by Schumpeter(2), was driven by water-powered mechanization, including mills and irrigation systems. The following long wave (1850–1900) was enabled by steam-powered technology, which revolutionized transportation with trains and transformed industrial machinery. Around 1900, the Third Kondratieff Cycle began(3), marked by the electrification of society and production from 1900 to 1940. Each revolution introduced new tools and industries, and fundamentally reshaped lifestyles.
Today, we stand at the cusp of an era defined by the transformation of information. Late 20th-century digital electronics fueled ICT digitalization, leading to AI disruption. But what does this disjoint in time truly entail? History reveals three fundamental mechanisms that have been central to major technological transitions: transmission, storage, and processing. These mechanisms have propelled every major technological shift: from the wheel and rope of transport to smoke signals and the internet for transmission; from containers and reservoirs to photography and magnetic media for storage; and from fire-making to electronic computation for processing(4).
In 1990, less than 0.05% of the global population used the internet. By 2020, over 59% of humanity was connected(10). Networks now move exabytes monthly, enabling unprecedented global information flows. Storage has mirrored this progression: from physical media like books, we have advanced to digital systems that store humanity's collective knowledge in infinitesimal footprints – a leap from 1% digital in the late 1980s to 99% by 2012. Processing power completes the picture with the most striking leap of all: today's supercomputers operate at exaflop speeds, solving in seconds problems that would take humans decades. Together, these leaps in transmission, storage, and processing form the infrastructural bedrock of AI. However, these accelerations come at a cost – both to human societies and to the planet.
Continuity and Discontinuity in AI Development
Unlike past technologies that built upon human abilities, AI promises autonomous reasoning, planning, and pattern detection beyond human limits. This shift, especially with the rise of agentic AI systems, challenges traditional augmentation concepts, introducing self-referential mechanisms that redefine intelligence, creativity, and technological agency.
This transformation can be framed through the concept of autopoiesis(5), where technological systems evolve to create themselves, or sympoiesis(6), where AI is built upon human knowledge to enable novel futures. These theoretical lenses help us understand not only the abstract nature of AI's development but also its tangible manifestations in the evolution of computational hardware. While hardware has advanced through incremental efficiency gains and growing capabilities, AI's generative capacity introduces a new dimension: it challenges human cognitive boundaries, increases technological opacity, and marks a fundamental break from previous technological trajectories. AI is not merely an extension of human capabilities but a transformative force capable of generating insights and futures untethered from human precedent.
Untethered from Planetary Health: Rebound Effects and Sustainability Challenges
Current research warns of potential “rebound effects”, where gains in efficiency paradoxically lead to higher overall consumption – an abundance without limits that could undermine sustainability goals by constraining decarbonization efforts or generating waste through unrestricted growth in AI development(7). For instance, if a model becomes twice as energy-efficient but is deployed three times as widely, total consumption still grows. Addressing this requires policy interventions and investment in sustainable infrastructure that prioritise accuracy, frugality, sound impact assessments of electricity demand growth, and circular economy practices for both hardware and software. To align AI development with planetary (and, by extension, human) resilience, guardrails need to be designed into the architecture of AI technologies from the start, rather than bolted on as an afterthought. This entails a shift away from a focus on efficiency and optimisation alone, towards a more integrated perspective that considers the entire value chain of AI(8). Furthermore, the environmental impact of AI, including the energy and water consumption of large language models and the resource depletion caused by hardware production, must be addressed via caps and transparent, open architectures for data sharing.
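To make the arithmetic behind such rebound effects concrete, here is a minimal sketch in Python. The numbers are invented, purely for illustration, and do not come from the cited research:

```python
# Illustrative rebound-effect arithmetic with invented numbers.
# Total consumption = energy per task * number of tasks.

energy_per_task_before = 1.0   # arbitrary units
tasks_before = 1_000

energy_per_task_after = 0.5    # efficiency doubles (energy per task halves)
tasks_after = 3_000            # but deployment triples

total_before = energy_per_task_before * tasks_before
total_after = energy_per_task_after * tasks_after

print(f"Before: {total_before:.0f} units; after: {total_after:.0f} units")
# Consumption rises whenever usage growth outpaces the efficiency gain, i.e. whenever
# (tasks_after / tasks_before) > (energy_per_task_before / energy_per_task_after).
```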
From Extraction to Global Commons: Resetting AI Development
Another critical discontinuity stems from historical notions, dating back to the early industrial revolutions, that “human progress” exists outside of nature, a view that reduces our environment to a resource for extraction. Today's dominant discourse around scaling ever-larger AI models risks perpetuating this extractive mindset despite rising environmental costs, such as the energy and water crises caused by mismatched demands on infrastructure, and resource depletion.
We call for a “Global Commons” approach (drawing on the seminal work of Elinor Ostrom(9)), which would mean sharing benefits across borders while challenging protectionist development paradigms through sustainable practices. This includes optimizing software, improving models, evaluating environmental impacts, and promoting circular economies. In parallel, we must build global governance, set AI standards, and boost digital literacy through international collaboration. The fundamental question remains: when should we not use AI? In other words, we must dare to imagine futures both with and without AI, rather than accept it as a fait accompli.
To leverage AI responsibly, we must center its design and direction on nature-aligned principles, address potential risks and harms head-on, and foster global collaboration in an increasingly polarized world. Sustainable strategies require long-term vision, while short-term profits shackle us to false promises of shared progress that history reveals to be mere mirages.
References
- Roos, R. A. (2019). The Stone Age, Bronze Age and Iron Age Revisited. Journal of Terrestrial Electrostatics.
- Schumpeter, J. A. (1939). Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. McGraw-Hill.
- Korotayev, A. V., & Tsirel, S. V. (2010). A Spectral Analysis of World GDP Dynamics: Kondratieff Waves, Kuznets Swings, Juglar and Kitchin Cycles in Global Economic Development, and the 2008–2009 Economic Crisis. Structure and Dynamics, 4.
- Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon and Schuster.
- Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel Publishing Company.
- Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.
- Paccou, R., & Wijnoven, F. (2024). Artificial intelligence and electricity: A system dynamics approach. ResearchGate.
- Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
- Ostrom, E., Burger, J., Field, C. B., Norgaard, R. B., & Policansky, D. (1999). Revisiting the commons: local lessons, global challenges. Science, 284(5412), 278–282.
- Ritchie, H., Mathieu, E., Roser, M., & Ortiz-Ospina, E. (2023). “Internet”, Data Page: Share of the population using the Internet. Data adapted from the International Telecommunication Union (via World Bank). Retrieved from https://ourworldindata.org/grapher/share-of-individuals-using-the-internet [online resource]
Keywords:
Artificial Intelligence, Sustainability, Geopolitics, Environment, Automation, Equity
How to cite this article:
Joshi S. & Paccou R. (2025). Time Out of Joint: Historical reflections on AI. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/DNPK4001
-
Will university teachers become obsolete in times of AI?
Elin Kvist (Department of Sociology, Umeå University)
Published on 22 May 2025
We live in times of endless crisis alarms and a general state of uncertainty, combined with an orchestrated and intentional undermining of established academic institutions. Science and academic research are being questioned around the world, and universities' position as a legitimate source of knowledge and critical thinking is under attack. Adding to this sense of instability and uncertainty, there is an ongoing digital and technological development that some argue poses a major risk of technological unemployment – foretelling the “end of work” (Brynjolfsson & McAfee, 2014; Danaher, 2017).
Automation, history and current threats
Historically, we have seen how new technology has changed the way we work and our everyday work practices. Machines have revolutionised and increased efficiency in agriculture and industrial production, and reduced the number of workers needed in the process. In light of today's ever-growing improvements in computing power, artificial intelligence, and robotics, the gloomiest forecasters are once again convinced that we are moving towards a jobless future (Brynjolfsson & McAfee, 2014). This time, technology is substituting for more cognitively advanced and emotionally demanding jobs – ones previously performed by professionals in technical and managerial occupations, including university teachers (Autor, 2015).
However, the past two centuries of automation and technological change have not made human labour obsolete. Even though unemployment rates have fluctuated cyclically, there has been no long-run increase in unemployment (Autor, 2015). Through governmental programs of re-education and reorientation, most displaced workers have been able to move into other forms of labour, and new areas of work have opened up in the wake of technological transformation. In previous waves of automation, the focus has been on replacing the most dangerous, physically harmful, and repetitive tasks, thereby contributing to a more humane labour market.
The digital transformation of academic work
University teachers have seen their work tasks and everyday work practices change dramatically with digitalisation. Through digital aids and tools, they book rooms, coordinate and organise lectures and seminars, examine and grade students digitally, do research, apply for research funding and ethical approval, publish in academic outlets, manage conference bookings, calculate budgets for research proposals, and develop data management plans, to name just a few examples. Their everyday work practices involve a significant amount of digital administration across a large number of different digital platforms. In the name of efficiency, an increasing number of tasks previously assigned to administrative staff have gradually been reassigned to university teachers.
However, when trying to understand the consequences of technological change for the everyday work situation of university employees, it is important to consider that, over the same period, these organisations have also been subject to New Public Management (Thomas & Davies, 2002). This has entailed growing numbers of students with diverse needs, less preparation time for teaching, and continuous monitoring of performance through audits and evaluations. As a result, each university teacher has less time for research, owing to increased demands and shrinking resources. This, in turn, spurs even more administrative work, as researchers must constantly apply for grants in highly competitive, complex, and time-consuming funding processes. All of this is important to bear in mind when assessing the implications of genAI for the future of university teachers, and it illustrates the importance of moving beyond a techno-deterministic understanding (Lindberg et al., 2022). Automation and digitalisation are often presented as neutral, as a consequence of technological progress, and as something inevitable, while their ideological and material consequences remain hidden (Lindgren, 2024).
AI’s role and data dependency
We have to understand what distinguishes AI technology from previous types of technology and, in doing so, what consequences it will have for the everyday practices of university teachers. First, how can they use AI in their everyday work? What tasks might be suitable for genAI tools? Examples include compiling large amounts of text, getting an overview of a new research field for teaching or research, writing summaries, polishing research applications, compiling CVs and creating concise bios, conducting text analysis of ethnographic materials, supporting peer-review and expert assessment processes, assessing exams and essays, and supporting the development of lectures and conference and seminar presentations. The possibilities are endless. In the modern university that New Public Management has constructed, with its endless rounds of evaluations, constant applications and assessments, genAI tools can support and facilitate everyday administrative work, making work practices more manageable. However, it is also important to keep in mind that AI needs large amounts of data to learn from the environment in which it operates. To be able to help teachers in their professional practices, genAI needs access to information and data, and employees must assist in training the systems. Algorithmic systems depend on humans performing a certain kind of digital work, such as data labelling and moderation, breaking work down into smaller components suitable for autonomous decisions (Lindgren, 2024). This work is not always visible, or even seen as actual work (Moore & Woodcock, 2021).
Hidden labour and digital capitalism
In digital capitalism, we are all involved in generating this type of data. When we move around in digital environments, we perform a great deal of unpaid work that generates profits for the system, often without being aware of it. As users, we contribute to training AI systems. Those who create content online leave behind data traces, and it is these traces that the large digital media giants (Meta, X, etc.) exploit and capitalise on. The work that people do in the borderland between AI and society is often hidden (Lindgren, 2024; Moore & Woodcock, 2021; Taylor, 2018). For example, when you order a pizza for home delivery via an app, you might perceive it as a digital process, yet the actual physical work behind it is invisible: someone is making the pizza, and someone else is delivering it to your home. Foodora's and Deliveroo's apps are part of the complex socio-technical ecosystem of digital society. The couriers use their own bicycles to deliver the pizza; Foodora does not own the bicycles, and the company is therefore not responsible for them. The companies claim to offer “flexible and free work”: the couriers can work whenever they want. The work is clearly fragmented, the workers are interchangeable, and the work schedule is individualised. The workers must deal on their own with all the challenges, including icy roads, angry customers, and unclear directions. The digital platforms pay for the result, not for the time in between. In many ways, these working conditions resemble those at the beginning of industrialisation, before union mobilisation, labour protection, sick pay, and the right to vacation (Ilsøe & Söderqvist, 2023). What is presented as high-tech and new is, in fact, a regression in labour law. Historically, every technological leap has favoured the emergence of armies of marginalised workers who take on jobs that are no longer considered jobs. In this respect, automation processes are often much less impressive than the big tech companies and large digital platforms want us to believe. Some tasks may disappear and wages may be reduced, but people continue working alongside the machines, for lower pay or sometimes no pay at all (Taylor, 2018).
To understand work under digital capitalism, we need to return to the basic question formulated within the socialist feminist tradition: “What is work?” (Ferguson, 2020). Digital capitalism has not only survived but prospered while certain types of work have remained hidden and unpaid (Fraser, 2016). The unrecognised work most of us perform on the borderline between genAI and society mirrors capitalism's historical and current approach to reproductive work (Jarrett, 2018): the indispensable affective and material labour, mostly cast as “women's work”, often performed without recognition or pay. In other words, work that is not regarded as work and is not considered to have any social or economic value. In practice this work is extremely important, while ideologically it is treated as completely unimportant. From this reasoning, we can conclude that the capitalist system has an inherent tendency to devalue and hide socially important work. As participants in digital capitalism, we often ignore the work that takes place behind the applications and buy into the myth of “success”. In this way, we give automation more credence than it deserves, ignore the work behind the shiny facades of digitalisation, and make the machines appear smarter than they are (Taylor, 2018).
If the discussion about technology continues to focus only on the narrative that technology drives humanity's development forward and that humans have to keep up, there is an imminent risk of missing the social contexts in which these technical devices are created. When considering the consequences of genAI for university teachers and their daily work, it is important to understand that it is not the technology itself that will make teachers obsolete. Technology is developed within a specific economic and social system, in which certain resourceful organisations and individuals invest in developing technology that serves their own interests, including control, power, and immense financial returns. The technology is designed to replace human labour to some extent, but it is developed within a digital capitalism that thrives on making people feel constantly replaceable and vulnerable.
Conclusion and final reflections
To conclude, will genAI make university teachers obsolete? There is a need to acknowledge both the advantages and disadvantages of these tools. In the current university climate orchestrated by New Public Management, with its constant demands for auditing, evaluation, counting, and compiling information, the daily tasks of university teachers might become more manageable with the help of genAI. In a better world, genAI tools might be used to feed the insatiable New Public Management system, freeing up time for research, critical thinking, and teaching. On the other hand, AI needs data and other resources to learn from the environment in which it operates. When university teachers participate in training the algorithmic systems, digital capitalism thrives. This work is often not recognised as work. Digital capitalism wants us to believe that technological development is unstoppable and that we must accept that our work is exploited. As educators and citizens, we need to be aware that there is an inherent mechanism in the system that actively benefits from hiding work tasks and treating them as non-work.
References
Autor, D. H. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives, 29(3), 3–30. https://doi.org/10.1257/jep.29.3.3
Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
Danaher, J. (2017). Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life. Science and Engineering Ethics, 23(1), 41–64. https://doi.org/10.1007/s11948-016-9770-5
Ferguson, S. J. (2020). Women and work: Feminism, labour, and social reproduction. Between the lines.
Fraser, N. (2016). Contradictions of Capital and Care. New Left Review, 100, 99–117.
Ilsøe, A., & Söderqvist, C. F. (2023). Will there be a Nordic model in the platform economy? Evasive and integrative platform strategies in Denmark and Sweden. Regulation & Governance, 17(3), 608–626. https://doi.org/10.1111/rego.12465
Jarrett, K. (2018). Laundering women’s history: A feminist critique of the social factory. First Monday. https://doi.org/10.5210/fm.v23i3.8280
Lindberg, J., Kvist, E., & Lindgren, S. (2022). The Ongoing and Collective Character of Digital Care for Older People: Moving Beyond Techno-Determinism in Government Policy. Journal of Technology in Human Services, 40(4), 357–378. https://doi.org/10.1080/15228835.2022.2144588
Lindgren, S. (2024). AI – ett kritiskt perspektiv [AI – a critical perspective] (1st ed.). Studentlitteratur.
Moore, P. V., & Woodcock, J. (2021). Augmented Exploitation: Artificial Intelligence, Automation, and Work. For Work / Against Work. Pluto Press. https://onwork.edu.au/bibitem/2021-Moore,Phoebe+V-Woodcock,Jamie-Augmented+Exploitation+Artificial+Intelligence,Automation,and+Work/
Taylor, A. (2018). The Automation Charade. Logic(s) Magazine. https://logicmag.io/failure/the-automation-charade/
Thomas, R., & Davies, A. (2002). Gender and New Public Management: Reconstituting Academic Subjectivities. Gender, Work & Organization, 9(4), 372–397. https://doi.org/10.1111/1468-0432.00165
Keywords:
Technological unemployment, genAI, work, gender, digital capitalism, university teachers, reproductive work
How to cite this article:
Kvist E. (2025). Will university teachers become obsolete in times of AI? AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/CBGU8049
