Towards Successful Industrial Policy on AI in Healthcare: Establishing the Conditions for Future Public Benefit

Erez Maggor (Assistant Professor, Ben-Gurion University and the Institute for Futures Studies). 
Jason Tucker (Researcher, Institute for Futures Studies, Adjunct Associate Professor, AI Policy Lab @Umeå University and Visiting Research Fellow, AI & Society, LTH, Lund University).

Published on 18 March 2025

Abstract

This paper explores how proactive government policies could promote artificial intelligence (AI) in healthcare for the public good. Building on insights from the literature on industrial policy, we argue that without clear conditions and guardrails to ensure future public benefit, state assistance and subsidies will amount to little more than corporate welfare with unpredictable, if any, societal benefit. We provide a few concrete examples of this and then conclude by reflecting on how industrial policy can serve as a useful conceptual lens to challenge techno-solutionism, increase accountability and situate AI healthcare policies in the broader political economy.

Keywords: Industrial Policy, Corporate Welfare, Artificial Intelligence, Healthcare, Public Interest, Conditionality, Futures.

Introduction

States and regions are increasingly turning to interventionist policies to try to realise the benefits of the development and deployment of artificial intelligence (AI). This is highlighted by the growing number of national and regional AI strategies, which set out bold visions of how AI will solve a plethora of societal problems (OECD, 2025). For example, the EU recently announced InvestAI, a 200 billion Euro regional initiative and the largest AI investment in history, intended “to make Europe the continent of AI” (European Commission, 2025). This optimism is possible because AI is a useful empty signifier for various visions of technological innovation (Kak, 2024). These visions are often built upon previous AI policies and initiatives, thus reflecting a continuation of socio-technical imaginaries of AI (Bareis & Katzenbach, 2022).

Healthcare has featured prominently in these strategies (Hoff, 2023). States are promising that the technology can solve a range of societal issues related to health, fix crumbling and underfunded healthcare systems, and improve our individual health (Tucker, 2023). Existing policies are often market-based, requiring states to collaborate with the private sector. To get private firms on board, governments provide various incentives, including increased public funding of R&D and procurement, access to public health infrastructure and data, and the cutting of so-called red tape that allegedly “stifles” innovation. With respect to AI in healthcare, the recent rise in interest in industrial policies coincides with fluctuations in private sector investment in the medical and health sector, which rose sharply between 2019 and 2021 and then plummeted (North, 2025). This sharp post-2021 decline means that relying on the private sector to address healthcare challenges without incentives is unlikely to achieve states' promised future visions.

Such policies represent a gamble for states: these technoscientific futures rest on the notoriously unpredictable development trajectory of AI, and they also depend on the alignment of private and public interests if they are to be realised. This raises the question of how we can improve the odds that these policies succeed. To answer this question, we draw lessons from the literature on industrial policy. One of the central insights of this body of work is that when government subsidies are provided without proper conditionalities, they are unlikely to produce the desired socially beneficial results (Bulfone et al., 2023; Bulfone et al., 2024; Mazzucato & Rodrik, 2023). We argue, therefore, that a critical element in the success of AI policies is placing the public good at the centre of the conditions attached to AI's future benefits. Without this in place, these policies will end up as nothing more than corporate welfare in the guise of public interest innovation.

Healthcare is arguably one of the sectors with the highest risks and rewards in this regard. We have seen an “AI turn” in global and national health discourses (Strange & Tucker, 2024). AI is posited as the best, and often the only, means to address a broad range of individual, societal and systemic healthcare issues. Health is also intricately intertwined with social stability and democracy (Johnson & Longmore, 2023). In addition, the potential benefits of AI in the healthcare sector are often used to justify broader political agendas on AI, as we saw in the case of the EU's InvestAI. As such, the success or failure of industrial policy on AI in healthcare has ramifications well beyond the sector.

Industrial Policies versus Corporate Welfare

The historical record of industrial policy has been mixed. States like Japan, South Korea, Taiwan, Israel, France, and, most recently, China, have had remarkable success using industrial policies to upgrade their industries and ‘catch up’ with more economically developed nations (Amsden, 1989; Johnson, 1982; Zysman, 1984; Wade, 2004; Maggor, 2021; Ang, 2018). However, in India, Turkey, and across Latin America, industrial policies are considered to have been a relative failure, leading to waste and corruption and resulting in these countries failing to advance economically or, at best, becoming stuck in the ‘middle-income trap’ (Doner & Schneider, 2016). The main lesson emerging from this mixed comparative-historical experience has been that rather than representing a dangerous or wrongheaded economic policy – as many on the ideological right often argue – industrial policies are, first and foremost, a daunting political challenge.

While profit-maximizing firms will gladly accept the various government subsidies and assistance that often accompany industrial policy efforts, they will always prefer that this assistance be provided with limited strings and conditions. Policymakers, on the other hand, understand that subsidies without conditionalities are a form of corporate welfare, i.e., a gift from the public to the private sector. As a result, the main political challenge for policymakers is designing industrial policies that incorporate institutional mechanisms dictating clear conditions on the future benefit of such public-private collaborations, ensuring the public benefit is protected, and equipping the state with the capacity to ‘discipline’ uncooperative firms (Amsden, 1989; Chibber, 2003; Maggor, 2021). Crucially, this needs to be implemented from the outset, as this is the stage at which policymakers have the greatest leverage over the private sector. To ground and contextualise this, we can turn to two recent cases: the development of the COVID-19 vaccine in the USA and the NHS Google-DeepMind health data scandal in the UK.

Developing the COVID-19 Vaccine in the USA 

One useful example demonstrating this logic is the development of vaccines for COVID-19. In 2020, in response to the pandemic, the Trump White House approved “Operation Warp Speed” to develop an mRNA vaccine. In addition to federal support for R&D, the program also provided government subsidies for scaling up manufacturing (in the case of Moderna), offered strategic government procurement (in the case of Pfizer), and delivered various forms of government assistance to industry across multiple sectors and regions (Adler, 2021). On the one hand, this experience represented an effective and comprehensive industrial policy program that saw the government partner with the private sector to produce a significant social good: a safe and effective vaccine that helped end a global health emergency. On the other hand, it showed that when state-supported innovation is not governed for the common good via strict terms and conditionalities, many people remain excluded from its benefits.

For example, even though Moderna used public investments and research to develop its vaccine, it refused to share intellectual property and know-how with less-developed countries (Mazzucato, 2023). The experience also demonstrates that, without strict limitations on profit-taking, private firms that succeed thanks to public assistance are likely to retain “astronomical and unconscionable profits”, in this case due to their monopolies over mRNA COVID vaccines, with profit margins upwards of 69% in the case of Moderna and BioNTech (Wilson, 2021; Emergency USA, 2021).

The NHS Google-DeepMind Health Data Scandal

There are also examples of industrial policies on AI and health more specifically, such as the UK National Health Service (NHS) Google-DeepMind case. In 2015, the Royal Free London NHS Foundation Trust shared the personal health records of 1.6 million patients with Google's AI firm DeepMind to support the development of a system that could potentially improve the diagnosis of acute kidney injury. Successive UK governments have long courted Google to try to attract more of the firm's investment in the UK market, so this public-private partnership came as no surprise.

However, while the diagnostic application showed early signs of success in detecting kidney injury, the partnership hit the headlines in 2017 due to a significant public backlash. There were serious concerns about data privacy and about the lack of transparency in how the decision to share publicly funded health data with DeepMind was made. It was also unclear what DeepMind was using the data for, and the public benefit of the data transfer was uncertain (Dickens, 2021). The Information Commissioner's Office eventually ruled that the NHS had failed to comply with the Data Protection Act by transferring the data (Information Commissioner's Office, n.d.), though a subsequent class action lawsuit against DeepMind itself failed in the UK courts. With the introduction of the UK's AI Opportunities Action Plan in January 2025, the case has reared its head once again (Milmo & Stacey, 2025). The legacy of the NHS DeepMind scandal thus lives on, affecting public trust in the next generation of industrial policies on AI.

Establishing the Conditions for Future Public Benefit

The unpredictable nature of AI development poses challenges for establishing the conditions of future benefit in industrial policies. However, these challenges are not insurmountable and, as argued above, need to be overcome if the social benefit of technological investment is to be realised. We should remember that conditions do not always need to be very specific (such as the quick development of a vaccine), or indeed directly related to healthcare. For example, supporting a flourishing MedTech sector that creates high-skilled jobs and pays corporation tax into the state coffers could be seen as an acceptable outcome. Yet there need to be clear guardrails on how the public is to benefit from any technological innovation that an industrial policy has facilitated. Assuming that private actors will act in the public interest if a discovery is made has proven to be wishful thinking time and time again.

The economist Mariana Mazzucato and her colleagues have outlined several conditionalities that could ensure the public shares in the returns of public investment in health. These include collecting royalties from companies that profit from technologies developed with public funding, with the funds earmarked to finance future innovation. Another strategy could be for states to retain a “golden share” of patents developed with public assistance, while incorporating weak and narrow (rather than strong and broad) intellectual property protections to ensure greater access for marginalised members of the community as well as for developing nations. Finally, rather than paying exorbitant prices, public health systems should pay prices that reflect their contribution to the development of new therapeutic or health technologies (Mazzucato & Li, 2019; Mazzucato & Roy, 2019).

One should also remember that industrial policies do not only support the growth of certain sectors; they can also be used to reduce redundant or harmful ones. Indeed, this insight has been highlighted in the context of the green transition, as policymakers seek to promote green technologies while, at the same time, phasing out the carbon economy (Ergen & Schmitz, 2023). Industrial policies on AI in health can be considered in a similar fashion by situating them in relation to the broader healthcare sector. Healthcare actors, patient groups, unions, and other civil society actors should play a key role here in deciding where to expand and where to reduce different public health infrastructures and services. This raises the issue of defining the “public interest” and classifying the “public(s)”, which is a process of “power, politics and truth seeking” in AI (Sieber et al., 2024, p. 634). How this affects the establishment of future public interest in AI industrial policy requires context-specific interrogation.

Finally, with this piece we hope to encourage the further use of industrial policy as a conceptual lens for analysing the increase in interventionist policies around AI and healthcare. While this was only an initial foray into the potential of doing so, industrial policy proved to be a useful approach for several reasons. First, the requirement to establish conditionality around future public benefit pushes back against techno-solutionist narratives. Second, it facilitates a broader understanding of the purpose of these policies within the wider political economy. We are thus better able to wrestle with the reality that policies about AI in healthcare are not just about AI, or indeed healthcare. Rather, they are about prioritising certain health futures, closing down others and advancing broader national and regional AI visions.

References

Adler, D. (2021). Inside Operation Warp Speed: A new model for industrial policy. American Affairs, 5(2), 3-32.

Amsden, A. H. (1989). Asia’s next giant: South Korea and late industrialization. Oxford University Press.

Ang, Y. Y. (2018). How China escaped the poverty trap. Cornell University Press.

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855-881. https://doi.org/10.1177/01622439211030007

Bulfone, F., Ergen, T., & Kalaitzake, M. (2023). No strings attached: Corporate welfare, state intervention, and the issue of conditionality. Competition & Change, 27(2), 253-276.

Bulfone, F., Ergen, T., & Maggor, E. (2024). The political economy of conditionality and the new industrial policy (MPIfG Discussion Paper No. 24/6).

Chibber, V. (2003). Locked in Place: State-building and Late Industrialization in India. Princeton University Press.

Dickens, A. D. (2021). The right to health implications of data-driven health research partnerships [PhD thesis, University of Essex].

Doner, R. F., & Schneider, B. R. (2016). The middle-income trap: More politics than economics. World Politics, 68(4), 608-644.

Emergency USA. (2021, September). Pharmaceutical companies reaping immoral profits from COVID vaccines, yet paying low tax rates. Emergency USA. https://www.emergencyusa.org/2021/09/pharmaceutical-companies-reaping-immoral-profits-from-covid-vaccines-yet-paying-low-tax-rates/

Ergen, T., & Schmitz, L. (2023). The sunshine problem: Climate change and managed decline in the European Union (MPIfG Discussion Paper No. 23/6).

European Commission. (2025, February 18). EU 200 billion Euros for AI, largest AI investment in history. European Commission. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_467

Hoff, J. L. (2023). Unavoidable futures? How governments articulate sociotechnical imaginaries of AI and healthcare services. Futures, 148, 103131. https://doi.org/10.1016/j.futures.2023.103131

Information Commissioner’s Office. (n.d.). Google DeepMind and class action lawsuit. Information Commissioner’s Office. https://ico.org.uk/for-the-public/ico-40/google-deepmind-and-class-action-lawsuit/

Johnson, C. (1982). MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925–1975. Stanford University Press.

Johnson, D., & Longmore, S. (2023). How Healthy Is Democracy: The Role of Healthcare and Social Equity Considerations in the Governance of People. In Rethinking Democracy and Governance (pp. 345-369). Routledge.

Kak, A. (2024). What does “AI for the public good” really mean? AI Now Institute. https://ainowinstitute.org/general/ai-nationalisms-executive-summary#h-what-does-ai-for-the-public-good-really-mean 

Maggor, E. (2021). Sources of state discipline: lessons from Israel’s developmental state, 1948–1973. Socio-economic review, 19(2), 553-581.

Maggor, E. (2021). The politics of innovation policy: Building Israel’s “neo-developmental” state. Politics & society, 49(4), 451-487.

Mazzucato, M. (2023). Health for All: Transforming economies to deliver what matters. BMJ, 381.

Mazzucato, M., & Li, H. L. (2019). Health innovation re-imagined to deliver public value.

Mazzucato, M., & Rodrik, D. (2023). Industrial policy with conditionalities: a taxonomy and sample cases.

Mazzucato, M., & Roy, V. (2019). Rethinking value in health innovation: from mystifications towards prescriptions. Journal of Economic Policy Reform, 22(2), 101-119.

Milmo, D., & Stacey, K. (2025, January 13). Labour AI action plan for NHS patient data: Why it's causing concern. The Guardian. https://www.theguardian.com/politics/2025/jan/13/labour-ai-action-plan-nhs-patient-data-why-causing-concern

North, M. (2025, January 22). 5 ways AI is transforming healthcare. World Economic Forum. https://www.weforum.org/stories/2025/01/ai-transforming-global-health/.

OECD. (2025). National strategies, agendas, and plans. OECD AI Policy Observatory. https://oecd.ai/en/dashboards/policy-instruments/National_strategies_agendas_and_plans

Sieber, R., Brandusescu, A., Adu-Daako, A., & Sangiambut, S. (2024). Who are the publics engaging in AI? Public Understanding of Science, 33(5), 634–653. https://doi.org/10.1177/09636625231219853

Strange, M., & Tucker, J. (2024). Global governance and the normalization of artificial intelligence as ‘good’ for human health. AI & Society, 39, 2667–2676. https://doi.org/10.1007/s00146-023-01774-2

Tucker, J. (2023). The future vision(s) of AI health in the Nordics: Comparing the national AI strategies. Futures, 149, 103154. https://doi.org/10.1016/j.futures.2023.103154

Wade, R. (2004). Governing the Market: Economic Theory and the Role of Government in East Asian Industrialization. Princeton University Press.

Wilson, M. R. (2021, November 4). Vaccine manufacturers are profiteering. History shows how to stop them. Politico. https://www.politico.com/news/magazine/2021/11/04/vaccine-manufacturers-are-profiteering-history-shows-how-to-stop-them-519504

Zysman, J. (1984). Governments, Markets, and Growth: Financial Systems and the Politics of Industrial Change. Cornell University Press.

Author Contributions

The authors contributed equally to this work.

Funding

JT’s contribution was made possible with the support of the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.

How to cite this article:

Maggor, E., & Tucker, J. (2025), Towards Successful Industrial Policy on AI in Healthcare: Establishing the Conditions for Future Public Benefit, AI Policy Exchange Forum (AIPEX), https://doi.org/10.63439/PFRX3762

