Australia's peak medical association has called for robust rules and transparency governing the use of artificial intelligence in healthcare, after doctors at Perth hospitals were warned not to use ChatGPT to write medical notes.
In a submission responding to the federal government's discussion paper on safe and responsible AI, the Australian Medical Association (AMA) said the country lags comparable nations in regulating AI, and that stronger rules are urgently needed to protect patients and healthcare professionals and to build trust.
In May, staff at five hospitals in Perth's South Metropolitan Health Service were instructed not to use ChatGPT to write patients' medical records, after it emerged that some staff had been using the language model in their practice.
The ABC reported that in an email to staff, the service's CEO, Paul Forden, said there was no assurance of patient confidentiality when using such systems, and that the practice must stop.
AI safeguards must ensure that clinicians make the final decisions, and that patients give informed consent for any treatment or diagnosis undertaken with the support of artificial intelligence.
The AMA also said patient data must be protected, and that appropriate ethical oversight is needed to ensure AI systems do not deepen health inequalities.
The AMA said Australia should consider the proposed EU Artificial Intelligence Act, which would classify AI technologies by risk and establish an oversight board. Canada's requirement for human intervention in AI-assisted decision-making should also be taken into account, it said.
According to the submission, future regulation must ensure that clinical decisions influenced by artificial intelligence involve human intervention points throughout the decision-making process.
The final decision should always be made by a human, not a machine, and it must be a meaningful decision rather than a tick-box exercise.
Regulation should also make clear that ultimate responsibility for patient-care decisions rests with a human, usually a medical practitioner.
AMA president Professor Steve Robson said artificial intelligence is a rapidly evolving field that is understood to widely varying degrees. AI regulation in Australia needs scrutiny, he said, especially in healthcare, where system errors and systemic bias embedded in algorithms can harm patients, and where risks to patient privacy are heightened.
Dr. Karen DeSalvo, Google's chief health officer, said artificial intelligence will ultimately improve patients' health outcomes, but stressed the importance of getting the systems right. There are many things to work out, she said, to ensure that models are constrained as they should be, remain factual and consistent, and follow equitable and ethical approaches; even so, she said she is super excited about the technology's potential.
A Google research study published in July found that the company's own medical large language model generated answers on par with those given by clinicians 92.9% of the time when asked the most commonly searched medical questions online.