10 Areas Where AI Presents Risks in Regulated Life Sciences



Written on behalf of Vistatec by Kit Brown-Hoekstra, Founder and Principal at Comgenesis.

This article is the second in a series about AI’s impact on regulated industries. The previous article focused on the Ethics of AI.

When generative artificial intelligence (AI) burst onto the scene, people seemed to either love or loathe it. For some, AI represented a shiny new toy that could be taught to do anything and solve all our problems. This sentiment was fueled by AI vendors excited about their discoveries and perhaps a bit overzealous in their hype of AI’s capabilities. For others, AI brought dystopian images of robot overlords and worries of job losses to mind. The reality is somewhere in the middle, warranting cautious optimism balanced by a healthy dose of skepticism. 

Despite the hype and enthusiasm with which organizations are implementing AI, it is still in its infancy and, as such, requires supervision and training. Particularly in regulated industries, we must be cautious and thoughtful when implementing AI.

Here are ten areas where you should not currently depend on AI:

  • Critical decision-making 
  • Interpretation of data without human oversight
  • Single source of truth
  • Stand-in for creativity
  • Arbiter of equity
  • Proxy for authority
  • Self-governance
  • Substitute for human interaction and connection
  • Access to real-time information 
  • Lack of clear business case

1. Critical decision-making 

While well-trained AI systems are invaluable for decision support, they require human supervision and oversight to ensure safety and minimize bias, particularly in complex environments such as medical diagnosis, legal advice, or hiring. At its current maturity level, an AI system is only as good as the data used to train it, the governance policies used to manage it, and the prompts used to query it. AI systems are also prone to mistakes and hallucinations that lead to unexpected outcomes. These risks are especially important for companies managing their own AI and large language model (LLM) deployments.

Managing AI is a multidisciplinary endeavor involving subject matter experts, developers, content and localization professionals, and quality assurance teams. Content and localization professionals provide critical oversight on the front end when curating the datasets, training the AI, and establishing policies and governance, and on the back end when developing prompts and analyzing output.
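One lightweight pattern for keeping humans in critical decisions is a review gate: the AI only ever produces suggestions, and routing rules decide what kind of human attention each suggestion receives. A minimal, illustrative sketch in Python (the names, labels, and threshold are invented, not a prescribed implementation):

```python
from dataclasses import dataclass

# Hypothetical AI output: a suggested label plus the model's confidence score.
@dataclass
class Suggestion:
    label: str
    confidence: float

def route_decision(suggestion: Suggestion, threshold: float = 0.95) -> str:
    """Never act on the AI alone: high-confidence suggestions still go to a
    human as pre-filled recommendations; low-confidence ones are escalated."""
    if suggestion.confidence >= threshold:
        return "human-review"      # a human confirms the AI's recommendation
    return "human-escalation"      # a human decides; AI output is context only

# A borderline diagnostic suggestion is escalated, never auto-applied.
print(route_decision(Suggestion("benign", 0.72)))   # human-escalation
print(route_decision(Suggestion("benign", 0.97)))   # human-review
```

The key design point is that neither branch applies the AI's answer automatically; the threshold only changes how much human effort the suggestion receives.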

2. Interpretation of data without human oversight

AI’s ability to interpret data depends on the quality of the datasets it is trained on, the quality of the data it analyzes, and how well-contextualized and structured this data is. 

Megan Gilhooly (formerly of Reltio, now with OneTrust), in an interview with Sarah O’Keefe (Scriptorium), said, “…context that makes sense to humans might not make sense to AI, and so Simplified Technical English and Plain Language becomes more important. The main purpose of AI should be to solve very specific problems.”

Currently, well-trained AI systems excel as assistants for specific tasks, particularly when they are restricted to using only the trusted, curated, and structured content you provide.
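As an illustration of what "restricted to curated content" can look like in practice, here is a deliberately simple retrieval sketch that answers only from an approved snippet set and fails closed when nothing matches. The snippets and the keyword-overlap matching rule are invented for illustration; production systems typically use embedding-based search:

```python
from typing import Optional

# Approved, curated snippets keyed by a document ID (invented examples).
CURATED = {
    "dosage-form": "Store tablets below 25 °C in the original container.",
    "adverse-events": "Report suspected adverse events to pharmacovigilance.",
}

def retrieve(question: str) -> Optional[str]:
    """Return the approved snippet with the most words in common, if any."""
    q_words = set(question.lower().split())
    best_id, best_overlap = None, 0
    for doc_id, text in CURATED.items():
        overlap = len(q_words & set(text.lower().split()))
        if overlap > best_overlap:
            best_id, best_overlap = doc_id, overlap
    return CURATED[best_id] if best_id else None

def answer(question: str) -> str:
    snippet = retrieve(question)
    if snippet is None:
        return "No approved source covers this question."  # fail closed
    return f"According to approved content: {snippet}"

print(answer("How should I store the tablets?"))
```

Failing closed, rather than letting the model improvise, is what keeps the assistant inside the curated boundary.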

Vistatec’s CTO, Phil Ritchie, said, “At this stage, AI can be used to remove repetitive, mundane tasks so that people can focus on the business value that only humans can provide to their clients. Even then, it requires some quality assurance checks and balances.”

At present, then, we cannot rely solely on AI to interpret and analyze data, particularly in highly regulated industries. Human oversight remains invaluable.

3. Single source of truth

While AI can save time with research and mundane tasks, its tendency to miss context clues, hallucinate, or provide false or out-of-date information means you must fact-check what it tells you. The wacky responses are usually easy to spot. More concerning are the almost-but-not-quite-right answers. In a medical environment, such mistakes can have life-altering consequences. 

Enforcing structure and terminology guidelines and regularly curating your content helps the AI provide more accurate and effective outputs.  
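Terminology guidelines of this kind can be partly enforced automatically during review of AI output. The sketch below flags disallowed terms against a small termbase; the entries are invented examples, and a real termbase would be far larger and locale-aware:

```python
# Map of disallowed terms to the approved equivalents (invented examples).
TERMBASE = {
    "side effect": "adverse event",
    "doctor": "healthcare professional",
}

def check_terminology(text: str) -> list[str]:
    """Return one warning per disallowed term found in the text."""
    lowered = text.lower()
    return [
        f"Replace '{bad}' with '{good}'."
        for bad, good in TERMBASE.items()
        if bad in lowered
    ]

warnings = check_terminology("Ask your doctor about any side effect.")
print(warnings)  # two warnings, one per disallowed term
```

A check like this does not replace human review, but it catches mechanical terminology drift before a reviewer ever sees the text.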

4. Stand-in for creativity

Incorporating generative AI into our digital environment signifies a move towards a more vibrant, inventive collaboration with technology. It prompts us to reconsider our positions and workflows, envisioning a future where humans and AI work together in ways we’ve never seen before.

However, our previous post on AI ethics mentioned that AI systems have been accused of plagiarism and copyright violations, particularly when their datasets contain copyrighted or trademarked materials without attribution or compensation to the creators. As AI proliferates, this will continue to pose challenges. 

5. Arbiter of equity

AI systems can be simultaneously logically consistent and morally abhorrent in their outputs, depending on how well the system was trained, what policies and guardrails are in place, the quality of the prompt engineering, and the quality of the training dataset.

We cannot solely depend on AI for decisions impacting equity and access to healthcare and other essential services. Human beings need to provide oversight from initial AI training onward so that we can ensure equity and fairness in our client interactions.
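Human oversight of equity can begin with simple measurements. One common first probe is comparing outcome rates across groups, as in this toy sketch; the data and groups are invented, and a gap is a prompt for human review, not proof of bias on its own:

```python
from collections import defaultdict

# Invented illustration data: (group, was the request approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Approval rate per group across a batch of AI-assisted decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # A: 0.75, B: 0.25, so the gap of 0.5 flags a disparity
```

Metrics like this are cheap to run continuously, which makes them a natural trigger for escalating batches of decisions to human reviewers.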

6. Proxy for authority

Several studies have shown that when clinicians misunderstand an AI system’s limitations or over-rely on it, the result can be misdiagnosis or improper treatment. This can also occur when the datasets used to train the AI are biased or incomplete. When misused, AI can cause serious harm to patients and increase liability for both the clinician and the AI vendor.

In addition to ensuring that clinical data used to train AI is appropriately curated, debiased, and peer-reviewed, we need to train clinicians and other professionals on AI’s capabilities and limitations so that they can use it safely and effectively. When used well, AI has enormous potential to reduce false positives and negatives, thus improving patient outcomes.

7. Self-governance

Some of the issues related to AI self-governance include:

  • Lack of accountability when something goes wrong
  • Bias perpetuation and amplification, which can lead to unfair or discriminatory outcomes
  • Lack of transparency in how the AI is making decisions and what data was used, which also makes it harder to determine how to fix problems in the dataset
  • Conflicting objectives, policies, or values that cause the AI to provide unethical output
  • Unintended consequences to individuals and society that are difficult to mitigate

AI is not currently capable of resolving these problems without human intervention. As Lance Cummings said, “We need human judges to support AI.”
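A small, concrete mitigation for the accountability and transparency gaps above is to log every AI output with enough context to reconstruct how it was produced and who signed off on it. A sketch, with illustrative field names only:

```python
import json
from datetime import datetime, timezone

def audit_record(model_version, dataset_snapshot, prompt, output, reviewer):
    """Serialize one AI interaction as an auditable JSON record."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which model produced this
        "dataset_snapshot": dataset_snapshot,  # which curated data it saw
        "prompt": prompt,
        "output": output,
        "human_reviewer": reviewer,            # the accountable person
    })

entry = audit_record("clin-assist-1.4", "2024-06-snapshot",
                     "Summarize trial results", "(model output)", "j.doe")
print(entry)
```

Records like this do not solve governance by themselves, but they make it possible to answer "who decided, using which model and which data" after the fact.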

8. Substitute for human interaction and connection

Dr. Brené Brown describes ‘connection’ as a fundamental human need in her TED Talks and books. Indeed, studies show that loneliness negatively impacts mental and physical health and contributes to cognitive decline.

While AI has shown promise with companion robots like Pepper, particularly with dementia patients, it cannot fully substitute for the physical touch and emotional empathy of a genuine human connection. 

The need for empathy and human connection in our content is another reason for human oversight of AI. As Minouche Shafik, president of Columbia University, said in a New York Times article, “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.” 

9. Access to real-time information

ChatGPT 3.5 told us its training dataset was last updated in April 2023, shortly before it was superseded by ChatGPT 4. Other AI systems are updated more frequently as new datasets are curated and the models retrained. However, according to DOMO, a data science company, approximately 2.7 quintillion bytes (roughly 2.7 exabytes) of data is created every day. This means most AI systems lag behind the data actually available and are of limited use when real-time information is required.
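A back-of-envelope calculation shows how quickly that lag compounds. Using the ~2.7 exabytes/day figure above (the cutoff and comparison dates are examples, not real model data):

```python
from datetime import date

EXABYTES_PER_DAY = 2.7  # the article's approximate daily data-creation figure

def data_missed(cutoff: date, today: date) -> float:
    """Exabytes of data created between a model's training cutoff and today."""
    return (today - cutoff).days * EXABYTES_PER_DAY

lag = data_missed(date(2023, 4, 1), date(2024, 4, 1))
print(f"{lag:.0f} exabytes")  # 366 days at 2.7 EB/day, about 988 EB
```

Even a one-year-old training cutoff leaves nearly a zettabyte of data the model has never seen, which is why freshness-critical use cases need live data sources rather than a frozen model.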

For most use cases, this lag is not a problem: the higher confidence that data has been validated and curated before the AI receives it outweighs the risk of missing something. In disaster and other emergency situations, however, a well-trained AI that helps emergency personnel quickly digest new data and act on the resulting analysis could save lives. Activating such a system would first require a rigorous cost-risk-benefit analysis and well-structured inputs to mitigate liability and risk.

10. Lack of a clear business case

Implementing a custom AI is a costly undertaking. You need large, well-curated, debiased datasets and the computational resources to store, run, and maintain the system. You also need a cross-functional team to develop and maintain the AI, create and translate content, and train the model.

While the barrier to entry is getting lower, and many of the tools we use regularly are starting to add AI functionality, we must be aware of the unique regulatory and privacy requirements of regulated industries. These requirements demand greater care and stronger data security than other applications.

Without a clear business case for implementing AI, waiting until the technology is more mature may be the best option.


Artificial intelligence has immense potential for transforming regulated life sciences applications, offering innovative data analysis, automation, and decision support solutions. However, given its current limitations and evolving nature, its implementation must be approached with caution and a clear understanding of the risks involved.

To navigate the complexities of AI, companies in regulated life sciences must focus on critical areas such as human oversight, data quality, and ethical considerations. AI should not replace human judgment, particularly in critical decision-making scenarios like medical diagnostics, legal processes, or patient care. Instead, it should complement human expertise, providing support without overshadowing the need for human evaluation and empathy.

Furthermore, issues like equity, bias, and data privacy require stringent scrutiny. To address these challenges, companies must invest in training, debiasing, and transparency in AI development and implementation.

Ultimately, while AI’s capabilities continue to grow, organizations must ensure its use aligns with ethical principles and legal requirements. This approach involves ongoing monitoring, clear business cases, and a multidisciplinary team to oversee AI’s role in regulated life sciences. By doing so, we can harness AI’s power responsibly, ensuring it enhances rather than undermines safety, equity, and human connection in this vital sector.

Contact Vistatec Life Sciences to learn how we can help.