50% of Patient Notes Duplicated, According to University Study


Clinical documentation carries a high prevalence of duplicated text, according to a recent health informatics study, and the authors argue that this systemic hazard requires systemic interventions. Earlier this year, a team led by academics from the University of Pennsylvania Perelman School of Medicine in Philadelphia conducted an artificial intelligence-based analysis of all inpatient and outpatient notes created within the University of Pennsylvania Health System from January 1, 2015, through December 31, 2020.

To determine how much repetition is present in the electronic health record, and why, the researchers measured text copied from the same author versus text copied from a different author. Across notes for more than 1.96 million distinct patients, 50.1% of all words were duplicated from previous notes about the same patient.

The incidence of duplication climbed year over year, from 33% for notes written in 2015 to 54.2% for notes written in 2020, according to the researchers' findings. Of the total duplicate material discovered in the study, 54.1% was copied from notes by the same author and 45.9% from notes by a different author. A record's duplication rate also rose with its note count, reaching around 60%.

Duplicate text raises questions about the trustworthiness of all information in the health record, making it challenging to locate and verify data in day-to-day clinical work, the Penn Medicine researchers wrote in the abstract of "Prevalence and Sources of Duplicate Information in the Electronic Health Record," published in JAMA Network Open on September 26.

On the cross-sectional analysis, they collaborated with TrekIT Health, Inc., now operating as CareAlign, a Philadelphia start-up whose medical workflow platform interfaces with any EHR, and with River Records, an automated information processing company based in Jamaica Plain, Massachusetts.

River Records, founded and run by four medical residents, aims to use deep learning and natural language processing to rethink long-unchallenged assumptions about how patient records are organised. According to the company's website, its AI model streamlines the gathering and processing of data into a few simple steps, and its software offers a user-friendly interface for interacting with the results.

The AI analysis employed a moving window of 10 neighbouring words to find sections of copied text. Because the algorithm could not detect duplicate information that had been summarised or paraphrased, the study, if anything, understates the amount of duplication. The researchers also proposed a second measure, scatter, to better analyse the note model of documentation: a single patient's information may be spread in varying amounts across hundreds of different notes.
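The paper describes the detection method only at a high level, so the following Python sketch is an illustration of the general idea rather than the study's actual code (the names shingles and duplication_rate are assumptions): every run of 10 adjacent words in a note is checked against the set of 10-word runs seen in the patient's earlier notes, and any word inside a matching window counts as duplicated.

```python
from typing import Iterable, List, Tuple

WINDOW = 10  # the study's moving window of 10 adjacent words


def shingles(words: List[str], n: int = WINDOW) -> Iterable[Tuple[str, ...]]:
    """Yield every run of n adjacent words, in order."""
    for i in range(len(words) - n + 1):
        yield tuple(words[i:i + n])


def duplication_rate(prior_notes: List[str], new_note: str) -> float:
    """Fraction of words in new_note that sit inside a 10-word window
    that also appears verbatim in one of the patient's earlier notes."""
    seen = set()
    for note in prior_notes:
        seen.update(shingles(note.lower().split()))

    words = new_note.lower().split()
    duplicated = [False] * len(words)
    for i, window in enumerate(shingles(words)):
        if window in seen:
            # every word inside a matching window counts as duplicated
            for j in range(i, i + WINDOW):
                duplicated[j] = True
    return sum(duplicated) / len(words) if words else 0.0
```

Note that a paraphrased or summarised sentence produces no matching window under this scheme, which is why the study's figures are, if anything, an undercount.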

The researchers quantified scatter as the number of words per note, which they then plotted against duplication values. They found that operative notes had low scatter and low duplication, whereas progress and assessment notes showed very high duplication.

For example, telephone encounter notes average 42 words of novel text per note, implying that a physician trying to read the record might have to open roughly a dozen different notes to gather 500 words of novel text: an extremely disorganised set of notations requiring many clicks to navigate, the researchers said.
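That estimate is simple arithmetic: the novel text a reader needs divided by the novel text each note supplies. A quick check (notes_needed is an illustrative helper, not from the study):

```python
import math


def notes_needed(target_novel_words: int, mean_novel_words_per_note: float) -> int:
    """Average number of notes a reader must open to accumulate
    a given amount of novel text."""
    return math.ceil(target_novel_words / mean_novel_words_per_note)


# Telephone encounter notes average about 42 novel words each:
print(notes_needed(500, 42))  # -> 12
```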

Overall, the researchers found that the difficulty of locating clinical information that lives in many places results in wasted time retrieving data or, worse, missed information, since clinicians do not have the time to explore the EHR thoroughly.

Moreover, clinical note writers routinely copy earlier notes forward and add to them rather than updating a single document, a habit reinforced by contemporary EHRs. The researchers cautioned that any unilateral ban on copying and pasting information in EHRs would simply increase scatter. The study concludes that organising notes by time and author, as modern EHRs do, makes duplication more likely.

Duplicate records may contain missing or obsolete information, and when physicians make decisions about patient care without access to updated data, such as recent lab results or new medications, the quality of care can suffer. To combat note bloat, Sanford Health, the Sioux Falls, South Dakota-based system that operates more than 46 hospitals, developed a standardised note template for use in its Epic electronic health record system. According to Dr. Roxana Lupu, chief medical officer of Sanford Health, the template encourages clinicians to write everything they need to and nothing they don't.

It was crucial to remember that providers weren't writing the note only for themselves; they were writing it for other readers. Because that was the purpose of reviewing notes, they wanted the assessment and plan to take centre stage, she said. Using natural language processing and supervised learning to derive clinical insights from free-text notes could make data easier to access, enhance patient care, boost community health, and even lessen physician burnout.

According to Gregg Church, president of 4medica, using AI to resolve duplicate patient records may improve real-time interoperability. He noted that duplicate patient record rates run as high as 30% among the provider groups he has worked with.

Using the COVID-19 pandemic as an example, Church said earlier this year that the rapid rise in clinical lab volume was compounded by paper requisitions, which accounted for as much as 50% of orders at some labs. As a result, a single person could end up with three or four records, increasing clinical risk and hindering billing.

However, machine learning prediction capabilities can standardise and reconcile data while systems remain live, for instance by identifying and resolving duplicate records. Church said the Idaho Health Data Exchange now uses such an AI model to integrate data automatically, cutting its proportion of duplicate patient records from 30% to 1%.
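4medica has not published how its model works. As a deliberately simplified sketch of the general idea behind probabilistic record matching, candidate record pairs can be scored on demographic fields and merged above a tuned threshold (match_score and the 0.9 cutoff below are assumptions for illustration, not 4medica's method):

```python
from difflib import SequenceMatcher


def match_score(rec_a: dict, rec_b: dict) -> float:
    """Crude similarity between two patient records: fuzzy name
    similarity averaged with exact matches on DOB and sex."""
    name = SequenceMatcher(None, rec_a["name"].lower(), rec_b["name"].lower()).ratio()
    dob = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    sex = 1.0 if rec_a["sex"] == rec_b["sex"] else 0.0
    return (name + dob + sex) / 3


a = {"name": "Jon Smith", "dob": "1980-03-14", "sex": "M"}
b = {"name": "John Smith", "dob": "1980-03-14", "sex": "M"}
if match_score(a, b) > 0.9:  # threshold would be tuned on labelled pairs
    print("likely the same patient; flag the records for merging")
```

A production system would train a classifier over many more fields and validate candidate merges before committing them.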

The researchers concluded that the note paradigm of documentation should be explored further as a primary cause of duplication and scatter, and that alternative paradigms should be evaluated.