Pennsylvania hospitals adopt AI for lung scans as lawmakers draft regulations

Pennsylvania health systems are rapidly integrating artificial intelligence into clinical and administrative operations, from analyzing CT scans to reviewing insurance coverage decisions. State lawmakers are now moving to regulate this technology through bipartisan legislation that would establish guidelines for AI use in healthcare while preserving human oversight and patient rights.

What you should know: AI tools are already being deployed across Pennsylvania’s healthcare infrastructure with measurable benefits for patient care.

  • At Temple University Hospital, pulmonologist Dr. Gerard Criner uses AI software called Coreline to create detailed 3D lung models from CT scans, enabling more precise treatment planning for emphysema patients.
  • “It’s more objective than the human eye to interpret things and you can quantitate things a little bit better,” Criner said, noting the technology helps catch early signs of lung cancer that might otherwise be missed.
  • Health systems are also using AI for electronic health record updates, medical documentation, and staffing assessments to reduce time-consuming administrative tasks.

The regulatory response: State Representative Arvind Venkat, an emergency physician, has introduced legislation to create a framework for AI use in healthcare without stifling innovation.

  • The proposed bill would require patients to be informed when AI is involved in their care and mandate human oversight for all final treatment and insurance coverage decisions.
  • “Right now, it is the Wild West when it comes to artificial intelligence,” Venkat said, emphasizing the need for state-level action given the absence of federal legislation.
  • The bill also calls for AI tools to prevent bias and discrimination in healthcare settings rather than reinforce existing inequities.

Why this matters: Pennsylvania’s approach could serve as a model for other states grappling with AI regulation in healthcare, balancing innovation with patient safety.

  • Current healthcare laws covering ethics, privacy, and consumer protections don’t specifically address AI deployment, creating regulatory gaps.
  • “That’s where I think we need to take a step back and where in my role as a legislator is to say, what is our responsibility to the people of Pennsylvania in a technology that is autonomous,” Venkat explained.

Safety monitoring shows promise: Early data from the Pennsylvania Patient Safety Authority indicates AI is having predominantly positive effects on patient care.

  • Director Becky Jones noted that while AI-related safety reports are currently small in number, she expects them to grow as healthcare workers become more aware of AI’s role in care delivery.
  • “We don’t want to only focus on the negative. We want to see where it is performing well for patient safety, as well,” Jones said.
  • The authority is beginning to track cases where AI helps identify adverse events sooner than traditional methods.

The human element remains central: Healthcare providers emphasize that AI tools supplement rather than replace clinical expertise and patient interaction.

  • “It doesn’t replace other things that you do as a clinician,” Criner said. “To be able to talk to the patient, find out what’s important, do things in shared decision-making fashion with the patients.”
  • The proposed legislation would codify this principle by requiring human responsibility for all final medical and coverage decisions.
