Italian DPA Applies GDPR Principles to AI in National Health Services


The use of artificial intelligence systems across the pharmaceutical and healthcare industries is growing rapidly. These systems are expected to yield substantial benefits for individuals and for society. However, the significant opportunities associated with AI must be weighed against the relevant risks, particularly in sensitive sectors such as pharma and healthcare.

For this reason, the Italian Data Protection Authority (IDPA) has released a document known as the Decalogue for the delivery of national health services through artificial intelligence systems. Its purpose is to highlight the key privacy concerns that must be taken into account when AI systems are used in national healthcare services.

In line with the proposed EU AI Act, the Decalogue notes that AI systems affecting health, such as those that affect the right to receive treatment, access to healthcare services, medical care, and patient-selection systems for emergency care, are categorised as high-risk systems. The Decalogue sets out the main privacy issues and responsibilities that arise when AI systems are used in the healthcare industry.

While the Decalogue does not introduce new ideas or principles, it provides clear guidance on the steps that healthcare-sector stakeholders should take to ensure that AI is applied correctly and in line with the GDPR. The principles it outlines apply to all high-risk AI systems, irrespective of where in the healthcare sector they are used.

Let’s look at the principles laid out by the IDPA.

  1. Identifying the legal basis

The IDPA points out that Article 9(2)(g) of the GDPR is an appropriate legal basis for using AI systems in national healthcare services. Under Italian law, the processing of personal data must be necessary for reasons of substantial public interest. The law must indicate the categories of data to be processed, the operations that may be performed, the reasons of substantial public interest, and the appropriate measures needed to safeguard the rights and freedoms of the data subjects.

  2. The principles of accountability and privacy by design and by default

The IDPA emphasises the importance of the accountability, privacy-by-design, and privacy-by-default principles. The use of AI systems within national healthcare services should align with these norms. Stakeholders must ensure that data processing corresponds to the public interests being pursued, and they should build data safeguards in from the design phase and maintain them across the entire lifecycle of the AI technology.

  3. Privacy roles

Controller and processor are functional concepts intended to distribute responsibilities according to the stakeholders’ actual roles. Privacy roles should be assigned based on facts, not on formal designations. To ensure GDPR compliance when using AI systems, it is crucial to assign privacy roles correctly to the relevant stakeholders, allocating rights, duties, and responsibilities accordingly.

When it comes to national healthcare services, it is essential for stakeholders to clearly define their respective roles in terms of privacy. This is especially crucial in relation to the national AI system in the healthcare sector, which will be accessible to various entities for different reasons. Hence, it is essential to have an overall understanding of the data governance framework.

  4. The principles of knowability, non-exclusivity, and algorithmic non-discrimination

These tenets represent the three pillars guiding the use of AI systems to carry out important tasks that serve the public interest:

  • The principle of knowability requires informing the data subject of the existence of a decision-making procedure based on automated operations and of the logic behind those operations.
  • The principle of non-exclusivity requires human intervention to ensure control over automated decisions.
  • The principle of algorithmic non-discrimination requires data controllers to take appropriate measures to minimise opacity and errors, so as to prevent discrimination arising from the processing of inaccurate health data or the use of incorrect statistical and mathematical techniques.
  5. Data Protection Impact Assessment (DPIA)

According to the IDPA, conducting a Data Protection Impact Assessment is essential for the lawful use of AI systems in national healthcare services, as such use involves the systematic and extensive processing of sensitive data concerning vulnerable individuals.

The DPIA should be conducted at the national level to ensure a thorough evaluation of all factors that could affect the processing of personal data, particularly the risks associated with a database containing health data for the entire population.

The IDPA’s statement is applicable not only to national healthcare services but also to any AI system that involves the organised as well as large-scale processing of patients’ health data.

  6. Data quality

Under Article 5(1)(d) of the GDPR, organisations are obligated to guarantee the relevance and accuracy of personal data. It is essential for healthcare operators to comply with this rule in order to protect the interests of patients. Processing inaccurate data can have serious repercussions for the safety and health of patients.

Therefore, it is necessary for the stakeholders to take suitable steps in order to ensure data accuracy and effectively deal with the risks associated with:

  • relying on systems without rigorous scientific validation;
  • lacking control over the processed data; and
  • making decisions based on inappropriate assumptions.
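In engineering terms, these steps often translate into automated validation of records before they reach an AI system. The sketch below is purely illustrative and not part of the Decalogue; the field names and plausibility ranges are assumptions chosen for the example. It rejects records with missing or implausible values so that inaccurate data never enters processing.

```python
from dataclasses import dataclass


@dataclass
class PatientRecord:
    # Hypothetical minimal record; real schemas are far richer.
    patient_id: str
    age: int
    systolic_bp: int  # mmHg


def validate(record: PatientRecord) -> list[str]:
    """Return a list of data-quality problems; an empty list means the record passes."""
    problems = []
    if not record.patient_id:
        problems.append("missing patient_id")
    if not 0 <= record.age <= 120:
        problems.append(f"implausible age: {record.age}")
    if not 50 <= record.systolic_bp <= 260:
        problems.append(f"implausible systolic_bp: {record.systolic_bp}")
    return problems


def filter_valid(records: list[PatientRecord]) -> list[PatientRecord]:
    # Only records with no detected problems may be processed further.
    return [r for r in records if not validate(r)]
```

Checks like these address only a narrow slice of Article 5(1)(d); scientific validation of the system itself remains a separate, broader obligation.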
  7. Data integrity and confidentiality

Under Article 5(1)(f) of the GDPR, organisations must process personal data in a manner that ensures appropriate security of the data, in line with the principle of integrity and confidentiality. The IDPA emphasises that when using deterministic and stochastic analysis models based on machine-learning techniques, there are substantial risks of bias, which can lead to adverse consequences for the individuals whose data is being analysed.

For this reason, the IDPA places great emphasis on organisations providing detailed indications regarding:

  • the algorithmic logic the AI system uses to train itself and generate its output;
  • the checks carried out to prevent bias;
  • the corrective measures taken to address and rectify bias; and
  • the risks associated with both deterministic and stochastic analyses.
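One common way to operationalise such bias checks is to compare a model's positive-prediction rate across patient subgroups and flag disparities above a tolerance. The sketch below is a minimal, hypothetical example of this idea; the grouping key, the 0.2 tolerance, and the demographic-parity style of check are assumptions of the illustration, not something the IDPA prescribes.

```python
from collections import defaultdict


def positive_rate_by_group(predictions, groups):
    """predictions: 0/1 model outputs; groups: subgroup label per prediction."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}


def disparity_alert(predictions, groups, tolerance=0.2):
    """Flag when the gap between the highest and lowest subgroup
    positive rates exceeds the tolerance."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values()) > tolerance
```

A flagged disparity is a signal for human investigation, not proof of unlawful discrimination; an unbalanced positive rate can also reflect genuine clinical differences between groups.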
  8. Fairness and transparency

Organisations in the healthcare sector that utilise AI will need to implement the following measures to guarantee compliance with the principles of fairness and transparency, in addition to the commonly required measures:

  • explaining the AI system’s logic and data processing;
  • clarifying whether healthcare professionals using AI bear liability;
  • highlighting AI’s diagnostic and therapeutic benefits;
  • ensuring healthcare practitioners can intervene when AI systems are used for treatment.
  9. Human oversight

To mitigate the significant risks of training the algorithm on incorrect information or relying on assumptions made by the system, humans must play a central role both in the training phase and in the decision-making process.
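In practice, the non-exclusivity principle is often implemented as a human-in-the-loop gate: the system may act automatically only when it is sufficiently confident, and routes everything else to a clinician. The function below is an illustrative sketch; the confidence threshold and the return labels are assumptions for the example.

```python
def triage_decision(model_confidence: float, model_label: str,
                    threshold: float = 0.95) -> str:
    """Return the model's label only above a confidence threshold;
    otherwise escalate the case for mandatory human review."""
    if model_confidence >= threshold:
        return model_label
    return "REFER_TO_CLINICIAN"
```

A real deployment would also log every automated decision so that the human-review rate itself can be audited.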

  10. Data protection, dignity, and personal identity

The IDPA concludes by emphasising the significance of employing ethics as the foundation for governing the use of AI. Ethics should play a crucial role in guiding organisations when selecting suppliers and business partners who adhere to the principles outlined in the Decalogue.

Compliance with data protection legislation will be essential for companies creating, distributing, or employing AI in order to ensure the success of their business. These organisations should be able to demonstrate their trustworthiness on data protection. Adopting a privacy-centric strategy can be crucial for guaranteeing business success.