Policy Gap: AI and the Determinants of Public Health

Siri Helle (Psychologist, author and speaker)


Published on 30 September 2025

There is growing interest in how artificial intelligence (AI) can be applied in public health – from individual-level interventions such as diagnosis, treatment, and patient follow-up in healthcare, to broader public health applications like health data analysis or pandemic response. Ongoing debates about regulation have already led to guidelines, including those from the WHO (2021, 2024).

An important but neglected policy area concerns the secondary effects of AI technologies on public health – a clear regulatory gap. Previous technological revolutions such as electricity and the internet reshaped society and lifestyles, with downstream public health consequences such as rising sedentary behavior and cardiovascular disease. Already today, we can identify several potential risks and opportunities linked to AI development that must be addressed if we are to safeguard population health in the future.

Key determinants of health likely to be affected include work, relationships, cognition, physical activity, and psychosocial stress. Below are some examples and potential policy responses.

Work

The labor market impacts of AI remain uncertain, but some groups – such as translators and illustrators – are already reporting falling demand due to generative AI (Society of Authors, 2024). Even with opportunities for retraining, job insecurity and layoffs are often perceived as personal crises, with heightened risks of substance use disorders, depression, cardiovascular disease, and suicide (Kim & von dem Knesebeck, 2015; Zellers et al., 2025). Policymakers must be prepared from a public health perspective, for example through preventive health communication and scalable stepped-care interventions that can be expanded as needs increase.

Relationships

Strong social relationships are among the most important protective factors for health and wellbeing (World Happiness Report, 2024). Their effect on mortality risk is comparable to that of well-known risk factors such as smoking and binge drinking (Holt-Lunstad et al., 2010).

While AI services may help alleviate loneliness or coach users toward better social skills, there is also a risk that they replace human relationships due to their convenience. Researchers such as Mahari and Pataranutaporn (2024) have called for regulation in this area. One proposal is to mandate that non-humanized chatbots be the default in vulnerable settings such as health and wellness apps, to reduce the risk of users anthropomorphizing and misusing the technology (De Freitas & Cohen, 2025).

Cognition

AI tools may enhance cognition by supporting personalized learning or compensating for bias. At the same time, emerging evidence suggests they might impair higher-order functions over time. Just as books and calculators shaped cognition through “cognitive offloading,” AI tools may lead to declines in problem-solving, planning, and decision-making – especially among younger generations growing up with them. Although research is still limited, small-scale studies point in this direction (Gerlich, 2025).

Such changes could have broad societal implications, including dependence on AI, loss of critical thinking, and increased vulnerability to manipulation. They also carry direct health consequences: cognitive functioning is closely linked to outcomes such as emotion regulation, longevity, and resilience against neurological diseases like Alzheimer’s (Lövdén et al., 2020).

Sedentary Behavior

AI-driven tools for both work and leisure risk reinforcing already high levels of sedentary time by shifting more tasks to screen-based, automated, and remote interactions. Prolonged sedentary behavior is associated with higher risks of all-cause mortality, cardiovascular disease, type 2 diabetes, and certain cancers, even after adjusting for leisure-time physical activity (Biswas et al., 2015).

Psychosocial Stress

Rapid social change, including AI adoption, can heighten uncertainty, worry, and job insecurity – all well-established psychosocial stressors linked to poor health outcomes, including cardiovascular disease, mental illness, and elevated mortality (Guidi et al., 2021). Strengthening digital self-efficacy can help buffer these effects (Zhao & Wu, 2025), highlighting the need to monitor and address psychosocial consequences alongside technical and clinical AI governance.

Catastrophic Risks

Alongside gradual effects, there is also a class of extreme health risks from AI systems, including catastrophic accidents or loss of human control. Though their probability is debated, the potential scale – up to and including threats to human survival – makes them relevant to a comprehensive public health framework. As with rare but devastating hazards such as nuclear accidents or novel pandemics, AI warrants systematic assessment and planning.

POLICY RECOMMENDATIONS

To ensure AI development produces the best possible outcomes for public health, it is not enough to regulate AI applications within healthcare alone. Public health must be integrated into all AI policies, alongside other overarching sustainability perspectives such as climate, equity, and human rights. Here are three proposals:

1. Integrate public health into AI regulation

Frameworks governing AI development and deployment should explicitly include public health provisions. For example, the EU AI Act (Article 5) prohibits AI systems designed to manipulate user behavior in ways that cause significant harm to self or others. The EU Digital Services Act (Article 34) requires very large online platforms to assess and mitigate systemic risks, including those affecting public health and mental wellbeing. Digital services with underage users must be safe and free from harmful content, regardless of their size.

As with food safety standards, technologies should meet minimum health requirements. Consumers have a right not to be exposed to foreseeable risks such as disrupted sleep, distorted body image, or social dysfunction, where such harms can be anticipated and prevented.

2. Build AI capacity within public health institutions

Knowledge of AI remains limited among many public health professionals and officials. Capacity must be strengthened through education, recruitment, and expert networks so that AI-related challenges can be managed at local, regional, national, and international levels.

Global advisory bodies such as the WHO could support governments in integrating public health perspectives into national AI strategies, beyond the medical applications currently emphasized.

3. Stimulate research on AI and public health

Research on the public health effects of AI remains scarce. Neither the International AI Safety Report (2025) nor the MIT AI Risk Repository currently lists health risks as a category. Most existing studies focus narrowly on healthcare applications rather than upstream determinants of health.

We need systematic investigation into emerging effects as well as foresight analyses to anticipate future impacts. By mitigating risks and promoting health benefits, AI can be developed in ways that support rather than undermine public health.

This is an initial attempt to articulate the secondary public health dimensions of AI as a societal challenge. I welcome comments, suggestions, and ideas.

References

Bengio, Y., Mindermann, S., Privitera, D., et al. (2025). International AI Safety Report (Research Series No. DSIT 2025/001). UK Department for Science, Innovation & Technology. https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf

Biswas, A., Oh, P. I., Faulkner, G. E. J., et al. (2015). Sedentary time and its association with risk for disease incidence, mortality, and hospitalization in adults: A systematic review and meta‐analysis. Annals of Internal Medicine, 162(2), 123–132. https://doi.org/10.7326/M14-1651

De Freitas, J., & Cohen, A. (2025). Unregulated emotional risks of AI wellness apps. Nature Machine Intelligence, 7(6), 813–815. https://doi.org/10.1038/s42256-025-01051-5

Helliwell, J. F., Layard, R., Sachs, J. D., De Neve, J.-E., Aknin, L. B., & Wang, S. (Eds.). (2024). World Happiness Report 2024. Wellbeing Research Centre, University of Oxford.

Holt-Lunstad, J., Smith, T. B., & Layton, J. B. (2010). Social relationships and mortality risk: A meta‐analytic review. PLoS Medicine, 7(7), e1000316. https://doi.org/10.1371/journal.pmed.1000316

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Guidi, J., Lucente, M., Sonino, N., & Fava, G. A. (2021). Allostatic load and its impact on health: A systematic review. Psychotherapy and Psychosomatics, 90(1), 11–27. https://doi.org/10.1159/000510696

Kim, T. J., & von dem Knesebeck, O. (2015). Is an insecure job better for health than having no job at all? A systematic review of studies investigating the health-related risks of both job insecurity and unemployment. BMC Public Health, 15, 985. https://doi.org/10.1186/s12889-015-2313-1

Lövdén, M., et al. (2020). Education and cognitive functioning across the life span. Psychological Science in the Public Interest, 21(1), 6–41. https://doi.org/10.1177/1529100620920576

Mahari, R., & Pataranutaporn, P. (2024, August 5). We need to prepare for ‘addictive intelligence’. MIT Technology Review. https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/

Slattery, P., Saeri, A. K., Grundy, E. A. C., et al. (2024). The AI Risk Repository: A comprehensive meta‐review, database, and taxonomy of risks from artificial intelligence. arXiv. https://arxiv.org/abs/2408.12622

Society of Authors. (2024, April 11). SOA survey reveals a third of translators and quarter of illustrators losing work to AI. Society of Authors. https://www.societyofauthors.org/2024/04/11/soa-survey-reveals-a-third-of-translators-and-quarter-of-illustrators-losing-work-to-ai/

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). (2022, October 27). Official Journal of the European Union, L 277, 1–102. http://data.europa.eu/eli/reg/2022/2065/oj

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). (2024, July 12). Official Journal of the European Union, L 2024/1689. http://data.europa.eu/eli/reg/2024/1689/oj

World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. https://www.who.int/publications/i/item/9789240029200

World Health Organization. (2024, January 18). WHO releases AI ethics and governance guidance for large multimodal models in health care and medical research. https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models

Zellers, S., Azzi, E., Latvala, A., Kaprio, J., & Maczulskij, T. (2025). Causally-informative analyses of the effect of job displacement on all-cause and specific-cause mortality from the 1990s Finnish recession until 2020: A population registry study of private sector employees. Social Science & Medicine, 370, 117867. https://doi.org/10.1016/j.socscimed.2025.117867

Zhao, X., & Wu, Y. (2025). Artificial intelligence job substitution risks, digital self‐efficacy, and mental health among employees. Journal of Occupational and Environmental Medicine, 67(5), e302–e310. https://doi.org/10.1097/JOM.0000000000003335

How to cite this article:

Helle S. (2025). Policy Gap: AI and the Determinants of Public Health. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/HNTR5780

AI Policy Lab is a multidisciplinary research hub at Umeå University.
