×
AWS publishes guidelines for responsible AI in healthcare systems
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

AWS has published comprehensive guidelines for building responsible AI systems in healthcare and life sciences, addressing critical risks like confabulation and bias that could mislead patients or clinicians. The framework emphasizes establishing governance mechanisms, transparency artifacts, and security measures to ensure AI applications maintain the safety, privacy, and trust that healthcare users expect.

What you should know: The guidelines focus on the design phase of healthcare generative AI applications, providing system-level policies to determine appropriate inputs and outputs.

  • Each component’s input and output should be aligned with clinical priorities to maintain alignment and promote controllability.
  • Safeguards, such as guardrails, must be implemented to enhance the safety and reliability of AI systems.
  • Comprehensive AI red-teaming and evaluations should be applied to assess safety and privacy-impacting inputs and outputs.

Key risks addressed: Two primary risks require careful mitigation in healthcare AI applications.

  • Confabulation — The model generates confident but erroneous outputs, sometimes referred to as hallucinations, which could mislead patients or clinicians.
  • Bias — The risk of exacerbating historical societal biases among different subgroups, which can result from non-representative training data.

Governance framework: AWS recommends establishing specific content policies tailored to intended use cases.

  • A generative AI application designed for clinical documentation should have a policy that prohibits it from diagnosing diseases or offering personalized treatment plans.
  • Organizations should define acceptable use policies for generative AI interfaces, including criteria for queries that applications should refuse to respond to.
  • Policies should address continual improvement processes for generative AI risk measurement with regular assessments and updates.

Transparency requirements: Healthcare AI systems must provide clear documentation and communication about their capabilities and limitations.

  • AI developers should be transparent about evidence and reasons behind all outputs by providing clear documentation of underlying data sources and design decisions.
  • When building AI features with experimental models, it’s essential to highlight the possibility of unexpected model behavior so healthcare professionals can accurately assess whether to use the AI system.
  • AWS recommends publishing artifacts such as Amazon SageMaker model cards or system cards that detail intended use cases, limitations, and responsible AI design choices.

Security measures: The framework emphasizes implementing security best practices at each layer of the application.

  • Generative AI systems might be vulnerable to adversarial attacks such as prompt injection, which exploits LLM vulnerabilities by manipulating inputs.
  • Operating models should safeguard patient privacy and data security by implementing personally identifiable information (PII) detection and configuring guardrails that check for prompt attacks.
  • Organizations should continually assess benefits and risks of all generative AI features and regularly monitor performance through tools like Amazon CloudWatch.

Available resources: AWS provides several tools and resources to support responsible AI development in healthcare.

  • Amazon Bedrock Guardrails helps implement safeguards for generative AI applications based on specific use cases and responsible AI policies.
  • The AWS responsible AI whitepaper serves as a resource for healthcare professionals developing AI applications in critical care environments where errors could have life-threatening consequences.
  • AWS AI Service Cards explain intended use cases, how machine learning is used by services, and key considerations in responsible design and use.
Responsible AI design in healthcare and life sciences

Recent News

Law firm pays $55K after AI created fake legal citations

The lawyer initially denied using AI before withdrawing the fabricated filing.

AI experts predict human-level artificial intelligence by 2047

Half of experts fear extinction-level risks despite overall optimism about AI's future.

OpenAI acquires Sky to bring Mac control to ChatGPT

Natural language commands could replace clicks and taps across Mac applications entirely.