An algorithm that detects sepsis reduces mortality by nearly 20 percent

Hospital patients are at risk of a number of life-threatening complications, particularly sepsis, a condition that can kill within hours and contributes to one in three hospital deaths in the U.S. Overworked doctors and nurses often have little time to spend with each patient, and the condition can go unnoticed until it is too late.

Academics and electronic health record companies have developed automated systems that send out reminders to screen patients for sepsis, but the sheer volume of alerts can cause health care providers to ignore or turn off those messages. Researchers are trying to use machine learning to fine-tune such programs and reduce the number of alerts they generate. Now, one algorithm has proven its effectiveness in real hospitals, helping doctors and nurses treat sepsis cases nearly two hours earlier on average — and reducing the death rate from the disease by 18 percent.

Sepsis, which occurs when the body’s response to an infection spirals out of control, can lead to organ failure, loss of limbs and death. About 1.7 million adults in the U.S. develop sepsis each year, and about 270,000 of them die, according to the Centers for Disease Control and Prevention. Although most cases begin outside the hospital, the condition is the leading cause of death among hospitalized patients. Identifying the problem as soon as possible is critical to preventing the worst outcomes. “Sepsis develops extremely quickly—like in a matter of hours if you don’t get timely treatment,” says Suchi Saria, CEO and founder of Bayesian Health, a company that develops machine learning algorithms for medical use. “I lost my nephew to sepsis. And in his case, for example, sepsis was not suspected or detected until he was already in the late stage of so-called septic shock… where it is much more difficult to recover.”

But in a busy hospital, diagnosing sepsis quickly can be difficult. According to the current standard of care, Saria explains, a health care provider should pay attention if a patient shows any two of four warning signs of sepsis, such as fever and confusion. Some existing warning systems alert doctors when this happens, but many patients show at least two of the four criteria during a typical hospital stay, Saria says, which can give warning programs a high false-positive rate. “Many of these other programs have such a high false-alert rate that providers turn off those alerts without even validating them,” says Karin Molander, an emergency room physician and chair of the nonprofit Sepsis Alliance, which was not involved in developing the new sepsis detection algorithm. Because the warning signs are so common, doctors must also consider factors such as the person’s age, medical history and recent lab results. But gathering all the relevant information takes time, and time is something patients with sepsis do not have.
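To see why a simple “two of four warning signs” rule fires so often, consider this minimal sketch. It is purely illustrative: the thresholds, the function name and the example values are invented here and do not represent TREWS or any real screening tool.

```python
def naive_sepsis_screen(temp_c, heart_rate, resp_rate, wbc_k):
    """Hypothetical screen: True if at least two of four generic
    warning signs are present, regardless of the underlying cause."""
    signs = [
        temp_c > 38.0 or temp_c < 36.0,   # abnormal temperature
        heart_rate > 90,                  # elevated heart rate
        resp_rate > 20,                   # rapid breathing
        wbc_k > 12.0 or wbc_k < 4.0,      # abnormal white-cell count
    ]
    return sum(signs) >= 2

# A post-surgical patient with a mild fever and an elevated heart
# rate trips the alert even if no infection is present:
print(naive_sepsis_screen(38.3, 95, 16, 8.0))  # True
```

Because many hospitalized patients meet two such criteria at some point, a rule like this generates the flood of false alarms the article describes.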

In a well-connected electronic record system, known risk factors for sepsis are available but can be time-consuming to track down. This is where machine learning algorithms come into play. Several academic and industry groups are teaching these programs to recognize risk factors for sepsis and other complications and to alert health care providers about which patients are at particular risk. Saria and her colleagues at Johns Hopkins University, where she directs the Machine Learning and Healthcare Laboratory, began work on one such algorithm in 2015. The program scanned patients’ electronic health records for factors that increase the risk of sepsis and combined that information with current vital signs and lab tests to produce a score indicating which patients are likely to develop septic shock. A few years later Saria founded Bayesian Health, where her team used machine learning to improve the sensitivity, accuracy and speed of the program, called the Targeted Real-Time Early Warning System (TREWS).
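The general idea of combining baseline risk factors from the record with current vitals and labs into a single score can be sketched as follows. The features, weights and baseline here are invented for illustration; they do not reflect how TREWS itself is built or calibrated.

```python
import math

def sepsis_risk_score(risk_factors, vitals_and_labs):
    """Toy risk score: a weighted sum of binary features from the
    health record and current measurements, squashed to a 0-1 scale
    with a logistic function. All weights are made up for this sketch."""
    weights = {
        "immunocompromised": 1.2,    # from the health record
        "recent_surgery": 0.8,       # from the health record
        "high_lactate": 1.5,         # from current labs
        "low_blood_pressure": 1.4,   # from current vitals
    }
    z = -3.0  # baseline: most patients are low risk
    for name, present in {**risk_factors, **vitals_and_labs}.items():
        if present:
            z += weights.get(name, 0.0)
    return 1.0 / (1.0 + math.exp(-z))

score = sepsis_risk_score(
    {"immunocompromised": True, "recent_surgery": False},
    {"high_lactate": True, "low_blood_pressure": True},
)
print(round(score, 2))
```

A real system would learn its weights from thousands of patient records rather than hand-pick them, but the shape of the output is the same: one number that lets providers rank patients by risk instead of reviewing every chart.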

More recently, Saria and a team of researchers evaluated the performance of TREWS in the real world. Over two years the program was integrated into the workflow of approximately 2,000 health care providers at five sites affiliated with the Johns Hopkins Medicine system, spanning both well-resourced academic institutions and community hospitals. Doctors and nurses used the program in more than 760,000 patient encounters, including more than 17,000 involving patients who developed sepsis. Results from this trial, suggesting that TREWS led to earlier diagnosis of sepsis and reduced mortality, were described in three papers published in npj Digital Medicine and Nature Medicine at the end of last month.

“I think this machine learning model could prove to be as important to the treatment of sepsis as the EKG [electrocardiogram] machine has proven to the diagnosis of heart attacks,” says Molander. “This will allow the clinician to step away from the computer … trying to analyze 15 years of information, to go back to the bedside and reassess the patient more quickly – that’s where we need to be.”

TREWS is not the first program to demonstrate its value in such trials. Mark Sendak, a physician and population health and data science lead at the Duke Institute for Health Innovation, works on a similar program developed by Duke researchers called Sepsis Watch. He notes that other health care–focused machine learning systems, not necessarily designed to detect sepsis specifically, have already undergone large-scale tests. One trial of an innovative artificial-intelligence-based system for diagnosing diabetes complications was developed with input from the U.S. Food and Drug Administration. Other programs have also been tested in various hospital systems, he notes.

“These tools play an important role in improving the way we take care of patients,” says Sendak, adding that the new system “is another example of that.” He hopes to see even more research, ideally standardized trials with research support and leadership from outside partners, such as the FDA, that have no vested interest in the results. That is a challenge because designing medical trials of machine learning systems, including the new TREWS studies, is very difficult. “Taking an algorithm, putting it into practice and studying how it’s used and the impact it has, that’s phenomenal,” he says. “And to do that in the peer-reviewed literature is a huge accomplishment.”

As an emergency physician, Molander was impressed that the artificial intelligence does not make sepsis decisions on behalf of health care providers. Instead it flags the patient’s electronic health record so that, when doctors or nurses check it, they see a note that the patient is at risk for sepsis, along with a list of reasons. Unlike some programs, the TREWS alert system does not prevent the “clinician from doing any other work [on the computer] without acknowledging the warning,” Molander explains. “They have a little reminder, in the corner of the system, that says, ‘Look, this person is at higher risk of decompensation [organ failure] due to sepsis, and these are the reasons we think you should be concerned.’” This helps busy doctors and nurses prioritize which patients to see first without taking away their ability to make their own decisions. “They may not agree, because we don’t want to take the autonomy away from the provider,” Saria says. “This is a means of assistance. It’s not a tool that tells them what to do.”

The trial also collected data on whether doctors and nurses were willing to use an alert system such as TREWS. Notably, 89 percent of its alerts were actually evaluated rather than automatically dismissed, the fate Molander describes befalling some other systems. Providers’ willingness to engage with the program may stem from the fact that TREWS cut the high rate of sepsis false alarms tenfold, according to a Bayesian Health press release, reducing the barrage of alerts and making it easier to recognize which patients were in real danger. “It’s mind-blowing,” Molander says. “It’s really important because it allows providers to build trust in machine learning.”

Building trust is important, but so is gathering evidence. Healthcare institutions are unlikely to adopt machine learning systems without evidence that they work well. “In technology, people are much more receptive to new ideas if they believe in the thought process. But in medicine, you really need rigorous data and prospective studies to back up the claims of scalable adoption,” Saria says.

“In a sense, we’re building products while building the evidence base and standards for how the work should be done and how potential users should carefully study the tools we’re building,” says Sendak. Achieving widespread adoption of any algorithmic alert system is a challenge because different hospitals may use different electronic record software or may already have competing systems in place. Many hospitals also have limited resources, making it difficult for them to evaluate the effectiveness of an algorithmic alert tool or to access technical support when such systems inevitably need repairs, upgrades or troubleshooting.

However, Saria hopes to use the new trial data to expand the use of TREWS. She says she is partnering with various electronic records companies so she can incorporate the algorithm into more hospital systems. She also wants to explore whether machine learning algorithms can warn of other complications that people might encounter in hospitals. For example, some patients need to be monitored for cardiac arrest, heavy bleeding, and pressure ulcers, which can affect health during their hospital stay and recovery afterward.

“We’ve learned a lot about what ‘AI done right’ looks like, and we’ve published a lot about it. But what it’s showing right now is that AI, done right, is actually getting provider acceptance,” Saria says. By incorporating an AI program into existing record systems, where it can become part of a health care provider’s workflow, “you can suddenly start breaking through all of these preventable harms to improve outcomes—which benefits the system, benefits the patient and benefits clinicians.”
