Balancing Privacy Protection with Patient Care

Healthcare provider organizations have a dual duty to their patients: to provide the highest quality, safest care possible, and to safeguard their privacy. The latter duty is generally addressed by laws regulating the use and disclosure of individually identifiable health information (IIHI).

Scrutiny of performance on both duties is increasing: governmental and private quality scorecards and pay-for-performance schemes are proliferating on the one hand, while privacy laws and regulations grow stricter on the other, with consequences ranging from fines and public shaming to loss of professional licensure and imprisonment.

Clearly, a balanced approach is necessary to reduce the net aggregate risk to patients. Such an approach would ideally allow access to all information needed to guide clinical decisions and business processes, but no more. Many organizations with Electronic Health Records (EHRs) try to achieve this ideal state by imposing access controls. The challenge with this approach is that the source of truth about whether any particular IIHI access is legitimate does not reside within the EHR. It can only be found within the mind of the person choosing to access the information. Even patients whose information has been accessed may be unsure whether the access was legitimate, as with accesses by billers, coders, radiologists, clerks and many others who need IIHI but are unknown to patients. These twin responsibilities cannot be managed independently of one another, since their risks interact.

To illustrate, an organization focused only on quality and safety could, at an extreme, provide access to all IIHI of all patients online without requiring a login. While full access to all information needed to make any medical decision could be guaranteed with such an approach, the privacy risks would be intolerable, as anyone could access anyone else’s information without accountability. An approach taking the other extreme, making all IIHI inaccessible to anyone, could guarantee that patient privacy was fully protected, but the risks to quality and safety would be intolerable, since no clinician would have any necessary information on any patient.

Access controls in EHRs can only be constructed from imperfect proxy information, such as a system user’s role, work location and prior relationships with patients. Nurses may be unable to access information on patients not currently in beds on their hospital unit; specialist physicians may be restricted to patients for whom they have been formally asked to consult; medical assistants may be limited to patients with appointments that day at the clinic in which they work.
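
To make the limits of proxy-based rules concrete, here is a minimal sketch in Python. The record structures and field names (current_unit, consults_requested and so on) are invented for illustration; no real EHR exposes this interface.

```python
from datetime import date

def may_access(user, patient, today=None):
    """Proxy-based access check using role, work location and prior
    patient relationships. All field names are hypothetical."""
    today = today or date.today()
    role = user["role"]
    if role == "nurse":
        # Nurses: only patients currently in beds on their unit.
        return patient.get("current_unit") == user.get("unit")
    if role == "specialist":
        # Specialists: only patients with a formal consult request.
        return user["id"] in patient.get("consults_requested", ())
    if role == "medical_assistant":
        # Medical assistants: only patients with an appointment today
        # at the clinic where the assistant works.
        return any(appt["clinic"] == user.get("clinic") and appt["date"] == today
                   for appt in patient.get("appointments", ()))
    # Default deny for any role without an explicit rule.
    return False
```

Every branch tests a proxy (unit, consult request, appointment) rather than the user’s actual reason for opening the chart, which is why such rules inevitably block some legitimate accesses while permitting some illegitimate ones.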

The problem with such access restrictions is that they fail to account for the often unpredictable movement of patients and personnel. Patients are transferred from one venue to another, such as from clinic to Emergency Department to Intensive Care Unit to Medical Unit to home. Clinicians and staff cover for one another, are moved to another location and take call, often with little notice. The result is that no one can predict with certainty the identity of the next patient for whom they will be responsible.

Access restrictions, by blocking the flow of information, can increase the risk of harm to patients; the sicker the patient, the greater the risk. Patients who are gravely ill or injured are often rapidly moved to higher levels of care, and immediately need the attention of clinicians who had no prior relationship with them. Even patients who are not gravely ill will still have their care negatively impacted by access restrictions, through delays, errors and poorer quality medical analysis and decisions.

If access restrictions are not the answer, then what should be done to protect privacy? The answer lies within the minds of EHR users. Since they know when their intended access to IIHI is illegitimate, they can be deterred from acting on such impulses through mechanisms that increase the likelihood that they will be caught and held accountable. In most cases, people tempted to snoop on the IIHI of others have a functioning conscience, and are otherwise good, skilled individuals who are important to the organization. They get into trouble by constructing a rationalization to explain to themselves why it is OK to snoop: “Cathy isn’t looking well; I wonder if there is anything I can do to help?” In some cases, however, the perpetrator is acting maliciously, and knows exactly what they are doing. In either case, fear of getting caught will deter the snooping.

Accountability is managed through a combination of forensic post hoc audit data mining (known as “system activity review” in the HIPAA Security Rule), to find privacy violators and hold them accountable, and selective use of “Break the Glass” privacy alerts, to deter snooping on high risk privacy targets.

EHRs, unlike paper records, retain records of all information accesses, but such audit databases can quickly grow to vast size, with the number of entries representing privacy breaches tiny in comparison. The secret to finding those “needles” in the proverbial haystack is to focus on likely privacy risks. Fundamentally, the only people at risk of having their IIHI inappropriately accessed are those who are known to system users, either through direct acquaintance, or celebrity or notoriety.

The first step in data mining, therefore, is to identify users who have accessed the records of family members, neighbors, coworkers, organizational and community leaders, celebrities and people in the news. Many such accesses will be legitimate, and so will clutter reports with false positives that waste the time of, and demoralize, investigators and investigated alike. The next step, therefore, is to filter out low risk user-patient pairings, such as when the user is the patient’s Primary Care Provider, has an upcoming appointment or recent encounter with the patient, or is a member of the patient’s hospital care team, in order to reduce the percentage of false positives. This can take multiple iterations of testing and report refinement.
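
As an illustration of this flag-then-filter approach, the sketch below uses invented field names and a simplified audit log; a real implementation would run against the EHR’s own audit schema and far richer relationship sources.

```python
def flag_risky_accesses(audit_log, known_relationships, vip_patients):
    """Step 1: flag accesses to records of people likely known to the
    user (family, neighbors, coworkers) or prominent (celebrities,
    leaders, people in the news)."""
    return [entry for entry in audit_log
            if (entry["user_id"], entry["patient_id"]) in known_relationships
            or entry["patient_id"] in vip_patients]

def drop_low_risk_pairs(flagged, clinical_context):
    """Step 2: filter out user-patient pairings with a plausible
    clinical reason, to cut the false positives that waste the time of
    investigators and investigated alike."""
    report = []
    for entry in flagged:
        ctx = clinical_context.get((entry["user_id"], entry["patient_id"]), {})
        if (ctx.get("is_pcp") or ctx.get("recent_encounter")
                or ctx.get("upcoming_appointment") or ctx.get("on_care_team")):
            continue  # likely legitimate access; suppress from the report
        report.append(entry)
    return report
```

Each filter added in step 2 trades a little sensitivity for a large gain in precision, which is why tuning typically takes several report iterations.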

Since the intent of such accountability reports is deterrence of privacy breaches, not workforce reduction, the next step is a concerted communication and education campaign to alert all system users that “the sheriff is coming to town”, and that enhanced privacy reports will start being run on a date certain in the near future. As the reports start getting run, the results investigated, and the guilty sanctioned, privacy breaches – and “hits” to the privacy reports – will drop dramatically. Continued reporting and investigating will keep the rate low, as people deter themselves from giving in to temporary temptations and question their own rationalizations.

The temptation to access a patient’s information is sometimes especially high, as when the patient is a prominent person such as a celebrity or leader. In such cases, additional deterrence may be needed, and can be provided by a “Break the Glass” (BTG) alert. These alerts require users to enter a reason for opening the chart and to re-enter their password, thereby puncturing rationalizations for snooping and foreclosing the “it wasn’t me” defense. BTG alerts can reduce privacy violations by as much as 100% in my experience, but at the cost of a systematic delay in accessing information. This delay cost can be mitigated by suppressing the firing of BTG in low risk situations and for a defined time period after each “glass breakage”.
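
A schematic of that firing logic might look like the following; the suppression window and all names here are assumptions, not any vendor’s actual BTG implementation.

```python
from datetime import datetime, timedelta

# Assumed suppression window after each "glass breakage"; tune locally.
SUPPRESSION_WINDOW = timedelta(hours=24)

def requires_break_the_glass(user_id, patient_id, is_low_risk, last_breakage,
                             now=None):
    """Return True if the user must record a reason and re-enter their
    password before the chart opens; False if the alert is suppressed."""
    now = now or datetime.now()
    if is_low_risk(user_id, patient_id):
        # e.g. a treating clinician with a current, documented relationship
        return False
    last = last_breakage.get((user_id, patient_id))
    if last is not None and now - last < SUPPRESSION_WINDOW:
        # Recently broke the glass for this patient; don't delay care again.
        return False
    return True
```

Both suppression paths exist purely to limit the delay cost; the deterrent value comes from the recorded reason and re-authentication, which together make the “it wasn’t me” defense impossible.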

The combination of forensic audit data mining, selective use of Break the Glass alerts, and access restrictions limited to situations where access to certain patients or information types is never needed can greatly reduce privacy risk without impairing the quality or safety of care delivery, and so can optimally reduce the net aggregate risk to patients.

Author: Eric Liederman

Dr. Eric Liederman serves as Director of Medical Informatics for Kaiser Permanente’s Northern California region. He publishes and speaks internationally on topics including knowledge management, patient e-connectivity, collaboration with IT, and privacy and security.

Dr. Liederman previously served as Medical Director of Clinical Information Systems at the University of California Davis Health System. He received his Bachelor’s degree from Dartmouth College, his MD from Tufts University, and his MPH from the University of Massachusetts, Amherst.
