Artificial intelligence (AI) is rapidly transforming the pharmaceutical industry, but companies are still navigating how best to align innovation with growing regulatory expectations around transparency, traceability, and control. According to Honeywell’s insights, approximately 99% of organizations are already experimenting with some form of AI. However, the scale and depth of adoption vary significantly across the sector.
Shawn Opatka, representing Honeywell, noted that pharma companies are leveraging AI primarily to address persistent workforce gaps, skill shortages, and inefficiencies in knowledge management. Many organizations are using the technology to bridge operational gaps and enhance regulatory or quality systems, though the industry remains far from full-scale integration. AI’s appeal lies in its ability to automate complex tasks, improve compliance, and strengthen product traceability: key requirements as global oversight becomes more stringent.
Adding to the momentum is the White House’s recent AI Action Plan, designed to establish the United States as a global leader in artificial intelligence. Linda Malek highlighted that the plan emphasizes not just acceleration but also smarter regulation. It proposes measures such as regulatory sandboxes (controlled environments where companies can safely test AI technologies) as well as streamlined approval processes to reduce bureaucratic hurdles for AI-driven medical devices and software.
The plan also calls for investment in critical infrastructure, including large-scale data centers and greater access to high-quality datasets, enabling more robust model training and algorithm development.
Together, these shifts signal a pivotal moment for pharma and healthcare organizations: as AI becomes integral to quality, safety, and efficiency, balancing innovation with regulatory confidence will define the next era of digital transformation in life sciences.
A transcript of their conversation with PC can be found below.
PC: How are evolving regulatory expectations around transparency, traceability, and control shaping the way pharma companies approach AI adoption?
Opatka: I think companies are, from what I’m seeing, really jumping into AI, and we see different levels of engagement, but our data actually shows that about 99% of organizations are doing something with AI. What are they doing? I think it varies widely. If I think about what’s particular to the businesses we serve at Honeywell, how they’re doing it in quality and regulatory areas, as you mentioned, and in traceability, that really varies, but we do see a lot of adoption.
I think what we see in it is a willingness to try new things with AI and to fill gaps that companies are struggling to fill today. I think we all know the reasons why AI has been adopted, whether it’s skill gaps, worker shortages, or just the wealth of knowledge to manage. I think that’s where a lot of companies are experimenting, but I think we’re a long way from full-scale adoption.
PC: Meanwhile, how might the White House AI Action Plan influence the pace and scope of AI adoption across hospitals, health systems, and health-tech companies?
Malek: The White House AI Action Plan that came out recently was interesting in that it really focuses on trying to make the United States a leader as it relates to AI. There’s a lot of discussion throughout the document around trying to accelerate development of AI in the United States, trying to keep pace with and get ahead of our competitor countries, and ways to do that.
In terms of how that plan will influence the scope of AI adoption across hospitals, health systems, and health tech companies, there are a number of different mechanisms whereby the Action Plan tries to ease the regulatory burden, like through regulatory sandboxes, which we can talk about in more detail later, and by trying to streamline approval processes for AI adoption, cutting what they term as red tape to ease approval pathways for AI medical devices and software.
The plan also calls for investment in infrastructure, government infrastructure in particular, calling upon agencies like NIST and the FDA and others to try to build out data centers and to increase access to large datasets, which are essential for really training AI models and algorithms.