Artificial Intelligence in Clinical Practice


Summary: Dive into the practical use of AI in the clinic.

Across industry sectors, the term "artificial intelligence" (AI) has become a mainstay for marketing the technological innovations of today. Why? Because it certainly sells, and it evokes a wonderfully terrifying image of The Terminator. But AI is no longer a buzzword or a concept. It has become a practical tool in our daily lives. AI is in your phone, your TV, your microwave, your toothbrush (OK, maybe not your toothbrush, at least not yet); it's everywhere and then some. And that's great for your personal life, and maybe even your business life. But what if your business is healthcare? What does artificial intelligence really mean to the healthcare professional? And why should we care about it or want it?

It seems odd to ask that question, but think about it for a second. Healthcare professionals (HCPs) are among the most educated, motivated, and selfless people. They want only the best for their patients: to improve quality and duration of life and to relieve suffering. It should be obvious that they'd want the latest and greatest in technology, and that they'd want it the quickest. Yet HCPs, and our healthcare system in general, are the slowest adopters of new technology.* Why is that? Don't they get it? It should be intuitive! To explore that a bit, I'd like to lay out for you what AI in healthcare is, what it has to offer, where it can help, and where it can hurt.

First, I'm going to say straight out that I will completely ignore the bureaucratic impediments, the lack of standardization in electronic records, and the money issues. I want to look at this purely from the point of contact between an HCP and the technology, where there are still plenty of barriers, but a place where we can exert some control and where we can help design and improve these tools. And a good place to start is with a definition.

The Merriam-Webster dictionary defines "AI" as "a branch of computer science dealing with the simulation of intelligent behavior in computers" and "the capability of a machine to imitate intelligent human behavior." That's a pretty big umbrella. From the HCP perspective, this means AI could be anything from a rule-based algorithm that flags abnormal lab values, to an automated mammogram screening program that calculates the likelihood of cancer, to a completely self-contained, closed-loop, adaptive learning diabetes management device that takes in continuous blood glucose readings and adjusts its subcutaneous insulin dosing. If we stop and think about this for a second, AI has actually been used in healthcare for some time, and we are comfortable with it — at least in some forms. Yet we are still loath to adopt it outside of a few examples.
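
To make the first of those examples concrete: a rule-based lab flagger is nothing more exotic than a lookup against reference ranges. Here is a minimal sketch in Python; the analytes and ranges are illustrative placeholders, not any vendor's actual logic.

```python
# Minimal sketch of a rule-based lab flagger -- the simplest thing
# that still fits the dictionary definition of AI quoted above.
# Reference ranges are illustrative placeholders, not clinical guidance.

REFERENCE_RANGES = {
    "potassium_mmol_l": (3.5, 5.0),
    "glucose_mg_dl": (70.0, 99.0),
    "hemoglobin_g_dl": (12.0, 17.5),
}

def flag_abnormal(results: dict) -> list:
    """Return (analyte, value, 'LOW'/'HIGH') for out-of-range results."""
    flags = []
    for analyte, value in results.items():
        bounds = REFERENCE_RANGES.get(analyte)
        if bounds is None:
            continue  # unknown analyte: no rule, no flag
        low, high = bounds
        if value < low:
            flags.append((analyte, value, "LOW"))
        elif value > high:
            flags.append((analyte, value, "HIGH"))
    return flags

print(flag_abnormal({"potassium_mmol_l": 5.9, "glucose_mg_dl": 84.0}))
# [('potassium_mmol_l', 5.9, 'HIGH')]
```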

And so the real question is: With what forms will we be comfortable? OK, nobody wants The Terminator, but perhaps we can be a little more scientific in our decision making. Can we take all of the varied forms of AI and divide them neatly into different buckets for consideration? Regulatory science is struggling to keep up with developments in AI so that the continued safety and effectiveness of these new devices can be assured. Currently, the most popular approach is the risk-based framework for software as a medical device (SaMD) proposed by the International Medical Device Regulators Forum (IMDRF) (Table).**

And AI is indeed SaMD. In this approach, the clinical condition is considered on one axis and the output of the software on the other. Four risk zones are defined, with SaMD in zone I having low impact and SaMD in zone IV having very high impact. It makes intuitive sense, and has held true so far, that zone I AI devices make us feel the most comfortable and have been the most widely adopted.

SaMD IMDRF Risk Table

State of healthcare situation | Treat or diagnose | Drive clinical management | Inform clinical management
Critical | IV | III | II
Serious | III | II | I
Non-serious | II | I | I
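
In code, the categorization reduces to a two-key lookup: the state of the healthcare situation and the significance of the information the software provides. A minimal sketch of that lookup follows (an illustration of the IMDRF framework, not a regulatory tool).

```python
# IMDRF SaMD risk categorization expressed as a lookup table.
# Keys: (state of healthcare situation, significance of the information
# the SaMD provides). Sketch for illustration only.

IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"):  "I",
    ("non_serious", "inform_management"): "I",
}

# Software that merely informs management stays in category I unless
# the situation is critical, where it reaches only category II.
print(IMDRF_CATEGORY[("serious", "inform_management")])   # I
print(IMDRF_CATEGORY[("critical", "inform_management")])  # II
```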

We can think of lots of examples of zone I SaMD, and some have been used for years. Yet despite the use of low-risk AI, has healthcare really gotten more efficient? Cheaper? Easier? Has the quality of life in the U.S. improved? Do we live longer? Do we have better access to higher levels of care? I cannot answer in the affirmative. Medicine today is far more complex than ever, and growing more so. Our technology is getting more and more expensive and sophisticated. Specialists concentrate more and more in urban areas. The U.S. leads the world in medical technology, but we are No. 97 in access to quality health care!***

We as HCPs need to become more sophisticated in how we attack the problems before us: caring for an older, sicker nation, digesting ever more medical data, learning and commanding more sophisticated care, and reducing our expenditures. How can we do that? Well, what if we all had "intelligent assistants" — clones of ourselves, and perhaps even of our specialist colleagues — that could do some of our work for us? Those assistants could sift through mountains of reports and lab results to pull out only what needs our attention. Would that help us minimize the number of unnecessary, expensive tests we order? Would that help us detect diseases more quickly and earlier in their course? Of course it would. And that is the purpose of AI in healthcare — not to replace the HCP, but to assist. Replacement will never happen and should never happen. The vision is to improve the HCP's efficiency, accuracy, and effectiveness, and, for some, even their examination skills and senses. These are the benefits of healthcare AI, and they are appreciable.

"But what about the risks?" you ask. I think this is the real reason why HCPs are slow to adopt AI. It's partly a fear of "The New," "The Unknown," but it's also the thought that imperfect AI could create new problems. Problems such as medicolegal exposure, or missed diagnoses, or unnecessary testing. The important thing here is to remember that every decision we make as HCPs involves a benefit-risk assessment. And for AI we've already discussed the benefits, which are significant. For risks, let's consider Eko's AI analysis software in terms of the IMDRF Table.

Eko's AI Analysis software ("EAS"), FDA-cleared in January 2020, is SaMD that can recognize from a single-lead ECG and phonocardiogram whether a patient has atrial fibrillation, tachycardia, or bradycardia. It can calculate QRS duration and electromechanical activation time. And we know from clinical testing that EAS can do this with 88-100% sensitivity and 80-97% specificity, depending upon the specific output.***** In other words, EAS is a test just like any other laboratory test; it will have some false positive results and some false negative results, but EAS has demonstrated very high performance.
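
To see what those numbers mean at the bedside, we can run them through Bayes' rule. The sketch below assumes performance figures within the reported ranges and assumes a 10% disease prevalence for a hypothetical elderly screening population; the inputs are illustrative, not taken from the EAS clinical study.

```python
# Predictive values from sensitivity/specificity via Bayes' rule.
# Performance numbers are assumed within the reported EAS ranges;
# prevalence is an assumption for a hypothetical screening population.

def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)  # P(disease | positive test)
    npv = tn / (tn + fn)  # P(no disease | negative test)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.95, specificity=0.90,
                             prevalence=0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.51, NPV = 0.99
```

Even at this level of performance, roughly half of the positives in this hypothetical population would be false alarms, while a negative result is highly reassuring. That asymmetry is exactly why a screening test like this informs, rather than drives, clinical management.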

As a test, EAS informs clinical management; it doesn't drive management, make a diagnosis, or treat anyone. So, according to the IMDRF table, EAS is solidly in risk class I; even if EAS were used in the most critical of situations, it just barely enters IMDRF risk class II. Our clinical benefit-risk evaluation therefore comes out as "high benefit + low risk = fit for use."

At Eko, we live by this equation, and we're going hard in that direction to further increase the benefit while keeping the risk low. EAS augments the functionality of Eko's CORE and DUO advanced stethoscopes to assist the HCP. By detecting atrial fibrillation, which affects more than 10% of the elderly and is thought to cause 15% of strokes, we can alert HCPs to a problem without performing anything more than a chest auscultation. By flagging other pathology, such as murmurs, prolonged QRS duration, tachycardia, and bradycardia, Eko AI helps HCPs screen for cardiovascular disease and get their patients on the right road to definitive diagnosis and treatment much more quickly than ever before.

But that is only the beginning. For example, our low ejection fraction detection algorithm, currently under FDA review, has the potential to identify millions of patients who have heart damage but no symptoms, also known as "Stage B" heart failure.**** Rather than waiting for decompensation, which heralds "Stage C" and "Stage D" heart failure, HCPs can get their patients into heart failure treatment right away. We know that will prolong their lives. And Eko is working hard on other AI applications aimed at the top killers: cardiac, pulmonary, and vascular diseases.

But to do all of this the right way, we need to understand what HCPs are up against, the problems they need to solve, and how that intelligent assistant can help. For example, let's say I have a patient with advanced emphysema whom I need to manage. I've already done the spirometry, the high-resolution chest CT scan, the bronchoscopy. Now I'm asking: What are my goals with treatment? Without a cure, my aim is to maximize my patient's quality of life, keep her out of the hospital as much as possible, and slow disease progression.

In short, I need help figuring out how to predict disease exacerbations and head them off. How can I train my intelligent assistant to do that? I need it to think like me! So, as a clinician, what are my inputs? The chest exam, listening for changes that portend trouble. Vital sign trends, looking for worrisome deviations. Spirometry, showing decreased peak flows. And once I have those inputs and put them into my own personal gray-matter computer, what is my output? An alarm — I either need to attend to this patient right away or I don't. Send her to the ED? Have her come to the office? Have a televisit with her? Just keep monitoring? After ingesting huge amounts of clinical data, and thinking about it just as I would, my AI assistant would display that alarm on my patient dashboard and put me in the right place much more quickly than I ever could have on my own. The alarm could be as simple as a red light or flag, but it would have alerted me with high confidence that I need to attend to that patient right away, the one who's about to have an exacerbation, so I can intervene to keep her on track and out of the hospital.
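
A first-generation version of that assistant could be a hand-written rule set over exactly those inputs. The sketch below is purely illustrative: the observation fields, thresholds, and weights are placeholders for what a trained model, validated against clinical outcomes, would supply.

```python
# Toy COPD-exacerbation alarm over the inputs named above: chest exam,
# vital sign trends, and spirometry. Thresholds and weights are
# illustrative placeholders, not validated clinical criteria.

from dataclasses import dataclass

@dataclass
class DailyObservation:
    new_adventitious_sounds: bool  # change heard on chest auscultation
    resp_rate_increase: float      # breaths/min above patient baseline
    spo2_drop: float               # percentage points below baseline
    peak_flow_drop_pct: float      # % decline from personal best

def exacerbation_alarm(obs: DailyObservation) -> str:
    """Score today's observations and return a dashboard alarm level."""
    score = 0
    score += 2 if obs.new_adventitious_sounds else 0
    score += 1 if obs.resp_rate_increase >= 4 else 0
    score += 2 if obs.spo2_drop >= 3 else 0
    score += 1 if obs.peak_flow_drop_pct >= 20 else 0
    if score >= 4:
        return "RED: contact patient today"
    if score >= 2:
        return "YELLOW: schedule a televisit"
    return "GREEN: keep monitoring"

today = DailyObservation(True, 5.0, 3.5, 25.0)
print(exacerbation_alarm(today))  # RED: contact patient today
```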

By understanding the HCP's point of view — how they think about their patients, the relevant clinical data, their options for intervention, and how AI can fit into their workflow — Eko will continue to build valuable and impactful medical devices that can improve the quality and efficiency of healthcare for cardiopulmonary diseases.

Of course, there are challenges. Some say this is too burdensome, or that it takes too long, or that it is too complicated, or that it exposes us to medicolegal risk. Well, that's what we said about the Focused Assessment with Sonography for Trauma. And about bedside bladder scanning for urinary retention. And robotic surgery. There are numerous examples of a new medical technology being scorned at first, only to become part of our standard of care a few short years later.

And so I am quite confident when I say that AI, serving as a healthcare provider's intelligent assistant, can harness the power of auscultation, electrical signal recording, and myriad other biosignals to change the way we practice medicine. It will become that Personal Digital Healthcare Assistant of whom we will all say, "How did we ever live without it?"


References

* Kandel I. Crossing the Healthcare Technology Chasm. TigerConnect. Published January 14, 2016. Accessed September 13, 2020. https://tigerconnect.com/blog/crossing-healthcare-technology-chasm/

** IMDRF Software as a Medical Device (SaMD) Working Group. "Software as a Medical Device": Possible Framework for Risk Categorization and Corresponding Considerations. Published online 2014.

*** Kristof N. "We're No. 28! And Dropping!" The New York Times. Published September 9, 2020. Accessed September 13, 2020. https://www.nytimes.com/2020/09/09/opinion/united-states-social-progress.html

**** Yancy CW, Jessup M, Bozkurt B, et al. 2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. J Am Coll Cardiol. 2017;70(6):776-803. doi:10.1016/j.jacc.2017.04.025

Yancy CW, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA Guideline for the Management of Heart Failure. J Am Coll Cardiol. 2013;62(16):e147-e239. doi:10.1016/j.jacc.2013.05.019

***** Eko AI White Paper: Cardiac Screening AI. Eko Health. Accessed October 12, 2020. https://www.ekohealth.com/ai-documentation