Tackling the Challenge of Defining Algorithmic Equity in Healthcare


Use cases for AI in healthcare continue to expand as the technology advances, but AI's potential to improve chronic disease management, clinical decision support, and population health efforts has been tempered by concerns over pitfalls such as model bias and fairness.

As health systems across the US increasingly pursue equity, ensuring that AI is fair is key to advancing patient outcomes. For researchers, however, this highlights a problem: defining fairness is not as straightforward as it looks.

That problem is the crux of a 2023 opinion article by a group of researchers from Emory and Stanford Universities, in which the authors posit that defining an algorithm's fairness from an American viewpoint is ethically and politically fraught for several reasons.

The article maintains that a one-size-fits-all view of algorithmic fairness remains elusive, even as major healthcare organizations stress that fairness is fundamental to AI models. Approaches to improving fairness do exist, but they are limited in numerous ways.
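
To make that concrete, one common family of fairness checks compares model behavior across patient groups. Below is a minimal, illustrative Python sketch (not from the article, and with entirely hypothetical data) of one such check, demographic parity, which compares the rate of positive predictions between two groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary group membership labels (0/1)
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions for eight patients split across two groups
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> 0.50
```

The limitation the authors anticipate is visible even in this toy check: a model can satisfy demographic parity while failing other common criteria, such as equal error rates across groups, and in general these criteria cannot all be satisfied at once, so no single metric settles what "fair" means in a given clinical context.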

The researchers argue that the lack of a universal definition makes regulating healthcare AI challenging and requires that algorithmic fairness be understood in specific use contexts.

To achieve fairness in such context-specific models, the authors suggest that partnerships among patients, model developers, and providers be fostered and maintained from the beginning of model development.

John Banja, one of the article's corresponding authors, and his colleagues conclude that understandings and applications of fairness reflect a vast range of use contexts, requiring those affected by a model's use to come together to determine how fairness should be conceptualized and operationalized throughout algorithm development.

Such collaborations would not only increase transparency in model development but also ensure that patient and provider perspectives are incorporated to enhance fairness.

Some have expressed apprehension about the role such groups will play as companies across industries seek to capitalize on the AI boom, but Banja indicated that these groups can bring stakeholders together to debate issues surrounding transparency, justice, and fairness.

Even those looking to profit from AI innovations are likely to be forced to consider the ethical and moral implications of the technology's use in healthcare.

Banja added that nobody wants to roll out a model that is unfair or discriminatory, because the resulting reputational damage is something everybody cares about.

Burnout remains a persistent challenge for the healthcare sector, and efforts to ease clinical workloads have had mixed success. Although EHRs showed immense promise to revolutionize clinical documentation, documentation burden is now one of the major drivers of burnout.

Clinical documentation improvement is one area where proponents say AI can help optimize clinical workflows.

Taking elements such as ethical concerns and fairness into account early in an AI rollout helps set the stage for a smooth transition. Banja says stakeholders should think about how providers will use such tools and what the consequences will be.

One consideration that has sparked debate is liability. Policies and regulations governing when providers will be held liable for negative outcomes are necessary to ensure that clinicians uphold the standard of care expected of them while also safeguarding patient safety.

The risks and rewards of using these tools are significant, which has led major medical organizations to argue that liability must be reconceptualized in terms of AI use.

Banja and his team argued that transparency is critical to the pursuit of model fairness.

This raises questions that healthcare stakeholders have posed throughout the AI debate. The FSMB, in its recent recommendations on AI governance and clinical liability, says that black-box models should not necessarily be avoided entirely, but providers who use them should be expected to offer a reasonable interpretation of how the AI arrived at a particular output, and of why following it, or declining to follow it, meets the standard of care.

Transparent models let users see the inner workings and understand how a decision was reached. However, some argue that even when one can look inside a model, the tool may be so intricate that humans cannot extract the insights they need for decision-making.
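
As a toy illustration of that point (our own sketch, not an example from the article), a model can be fully inspectable yet practically opaque. A random forest exposes every one of its decision trees, but reading hundreds of them rarely yields a usable explanation. The sketch below assumes scikit-learn is installed and uses synthetic data standing in for clinical features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular data standing in for clinical features
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Every tree is available for inspection -- "transparent" in a literal sense...
n_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"{len(model.estimators_)} trees, {n_nodes} decision nodes in total")
# ...but tens of thousands of decision nodes is far more detail than a
# clinician could translate into a reasonable interpretation of one prediction.
```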

Indeed, AI may well become robust enough to evaluate data and detect patterns that are too complex for humans to see at all.

As AI becomes more prevalent in the healthcare sector, the relationships among fairness, ethics, liability, transparency, and regulation will likely grow more intricate, and stakeholders will need to work together to navigate each challenge and ensure that models help advance patient outcomes.