Thursday, August 11, 2022

How to Find the Clinical AI in Your Facility and Govern It

 News Article 

Author: 

  • Erin Sparnon

E. Sparnon is a senior engineering manager for device evaluation at ECRI in Plymouth Meeting, PA. Email: esparnon@ecri.org




A few Saturdays ago, I woke up with the sun and started the day off right with a lecture at AAMI Exchange 2022 in San Antonio, TX. Working uphill against a team who sorely overestimated how late they could responsibly stay up networking, I delivered a simple message: The call is coming from inside the house.

Artificial intelligence (AI)-enabled technologies are not at your door. They are in your care areas and your noncare areas. They are not just a gleam in your caregiver’s eyes; they are running live on the cloud software that’s analyzing sonic waveforms from the pocket-sized wireless stethoscope your emergency department (ED) doc is Bluetoothing to her personal phone.

As Pat Baird stated at the beginning of this article series, AI’s promise lies in its capability to augment or free up humans to perform routine tasks better or faster. By organizing and presenting key information, identifying subtle or hard-to-spot patterns in large pools of data, or performing routine tasks (e.g., sizing and rotating images to match provider preferences), these systems can pop up in all sorts of clinical and nonclinical applications. On the clinical side, the vast majority of cleared systems are in radiology, including imaging display and segmentation, diagnosis of specific pathology, or guidance on the position or placement of patients or imaging devices (Figures 1 and 2, below).

Figure 2. Examples of artificial intelligence (AI)-enabled systems cleared by the Food and Drug Administration (FDA) by panel. Last updated Sept. 22, 2021. Source: FDA. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Abbreviations used: ECG, electrocardiogram; EEG, electroencephalography; EHR, electronic health record; HTM, healthcare technology management.

How do you establish the oversight and governance necessary to keep your patients and staff safe? And why is that so important?

AI-enabled medical technologies are both different and not different from all the other technologies you’re used to developing, buying, and managing.

How AI Is Different

First, AI is kind of sneaky in the ways it can enter your facility. It might enter as a software upgrade to radiology equipment or the software you use to view or annotate images. It might come in as an accessory or feature of point-of-care devices (e.g., wireless stethoscopes, point-of-care ultrasound scanners). It might come in as an electronic health record–based alerting system launched as a patient safety initiative to identify patients at risk of decline. A lot of these instances might bypass your centralized equipment purchase and management activities, meaning you’ll probably need to go hunting for the systems in use.

Second, AI can add new challenges when it comes to troubleshooting and incident response. Clinicians, including doctors, nurses, physician assistants and extenders, technicians, clinical engineers—literally everyone in healthcare—grew up in a world where they knew (or could at least find out) what a device or system would do given a known input and what it had done in the past.

If an infusion pump comes in with a Post-it note that reads “broke,” healthcare technology management (HTM) staff can at least try turning it on and reviewing the event and alarm log. But what do you do if an AI system used to detect wrist fractures in X-ray images failed to find a fracture? Will you even detect the failure if the radiologist also misses the fracture and your patient just hits an urgent care when their wrist still hurts next week?

And how does the risk management equation change if we’re talking about an image-management software app that missed a small early-stage breast lesion? With an AI-enabled system, there are going to be some things we can and can’t find out about what happened and why.

Things we can find out:

  • How often a system should deliver a certain set of results based on an expected input range.
  • How the system will react to input that is similar to its training data.
  • How well the system performs over large groups of people or data, in aggregate, during validation.

Things we can’t find out:

  • How a model’s development and decision factors lead to certain results in particular cases.
  • What will happen if unexpected input is entered.
  • How well the system will perform on this one specific patient in front of the clinician.
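The distinction between aggregate and per-patient performance is worth making concrete. As a purely illustrative sketch (the numbers and the fracture-detection scenario are hypothetical, not drawn from any real system), aggregate validation of a binary AI detector boils down to counting hits and misses over a pool of cases:

```python
# Hypothetical sketch: aggregate validation of a binary AI detector
# (e.g., a wrist-fracture finder). All names and values are illustrative.

def aggregate_metrics(predictions, ground_truth):
    """Compute sensitivity and specificity over a validation set.

    predictions and ground_truth are parallel lists of booleans
    (True = fracture flagged / fracture actually present).
    """
    pairs = list(zip(predictions, ground_truth))
    tp = sum(1 for p, g in pairs if p and g)          # true positives
    tn = sum(1 for p, g in pairs if not p and not g)  # true negatives
    fp = sum(1 for p, g in pairs if p and not g)      # false alarms
    fn = sum(1 for p, g in pairs if not p and g)      # missed fractures
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "missed_cases": fn,
    }

# Aggregate numbers like these describe performance over a pool of
# cases -- they cannot tell you whether the one patient in front of
# the clinician will be among the misses.
preds = [True, True, False, False, True, False]
truth = [True, True, True, False, False, False]
print(aggregate_metrics(preds, truth))
```

The point of the sketch: "missed_cases" is a population-level count. Nothing in the validation data tells you which future patient will land in that bucket, which is exactly why the last bullet above is unanswerable.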


Figure 1. Types of artificial intelligence (AI)-enabled systems cleared by the Food and Drug Administration (FDA) by panel. Last updated Sept. 22, 2021. Source: FDA. Artificial intelligence and machine learning (AI/ML)-enabled medical devices.

How AI Is Not Different

The good news is, after you’ve found the AI-enabled technologies in your facility, you should be in a good place to manage them safely through mechanisms you’re familiar with, assuming you have the right people in the room. (Who doesn’t love a multidisciplinary committee?)

Compared with any networked medical device or software, AI doesn’t raise new types of questions when it comes to safety and effectiveness: You’ll still need to address issues such as security, information transfer, and protected health information. You can even use the same risk management framework to develop, document, and disseminate a mitigation plan for any potential failures and risks.

Although the types of questions around safety and effectiveness aren’t new, you will need to spend some extra time ensuring AI-enabled technologies are being used in a manner consistent with their design and training data. The following questions are a good place to start the discussions among your governance team, users, and vendors:

  • What are you trying to accomplish with the AI? That is, what clinical or nonclinical performance are you expecting? Does this performance expectation match the system’s stated indications and previous clinical implementations?
  • Does your clinical population match the training data set for the AI system closely enough that you feel it will produce good results with its models?
  • When should the system not be used? Are there certain populations or certain care settings in which the patients are too different from the training data or intended use?
  • What is the worst that could happen if the system provides an inappropriate recommendation? What if the system provides an appropriate recommendation but it is interpreted incorrectly or the clinician fails to act on it?
  • What impact do you expect this system to have on your clinical workflow, both positive and negative? Is there a risk of overreliance, and if so, how can you mitigate it with simulations, drills, or planned downtime?
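One practical way to make these discussions stick is to record the answers per system in a simple, consistent structure. The sketch below is one hypothetical shape such a record could take; the field names and example values are assumptions for illustration, not a standard schema:

```python
# Hypothetical sketch: recording a governance team's answers to the
# questions above for one AI-enabled system. Fields and example values
# are illustrative assumptions, not an established standard.
from dataclasses import dataclass, field

@dataclass
class AIGovernanceRecord:
    system_name: str
    clinical_goal: str                  # what performance is expected
    matches_training_population: bool   # does our population fit the training data?
    excluded_settings: list = field(default_factory=list)  # where NOT to use it
    worst_case_failure: str = ""        # impact of a wrong or misread output
    overreliance_mitigation: str = ""   # drills, simulations, planned downtime

record = AIGovernanceRecord(
    system_name="ED pocket ultrasound guidance",
    clinical_goal="assist with view acquisition; no diagnostic claims",
    matches_training_population=True,
    excluded_settings=["pediatric patients"],
    worst_case_failure="clinician acts on a misleading guidance overlay",
    overreliance_mitigation="quarterly unaided-scanning drills",
)
print(record.system_name)
```

Whatever form the record takes, the value is in forcing an explicit answer to each question before the system goes live, and in having something concrete to revisit when the vendor ships an update.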

Parting Thoughts

AI-enabled technologies are popping up in many clinical and nonclinical applications. They regularly evade your facility’s traditional capital equipment selection or value analysis processes, and in some cases, the users who buy them may not be aware of AI or cloud technology features and their implications. But HTM has a unique role here as a champion of good governance and patient safety. You can bring the tools and techniques you already use every day, along with a few more questions about system design and validation.

Now, go looking for the AI-enabled technologies in your facility. Check in with radiology and imaging to see what image display or image enhancement technologies are in use. Take a tour through the ED and see if anyone’s using pocket ultrasound or wireless stethoscopes with analysis or guidance capabilities. Finding and governing AI in use isn’t so tough once you know where to look.


