1) Francesco Vigna - "Codes of Conduct for the Healthcare System in a Big Data and Analytics Scenario". Supervisors: M. Palmirani, M. Durante, L. Ratti
The thesis will focus on the issues raised by the implementation of AI and Big Data technologies in the healthcare context, from a data protection law perspective, and on whether self- and co-regulation mechanisms could help address these problems.
In recent years, AI technologies have spread across several medical applications, from robotic surgery to smart pills and mobile health. Surgical robots can now assist human surgeons by accomplishing simple tasks during operations, such as suturing tissues. ML software, on the other hand, can mine large amounts of data to predict the onset or the evolution of a disease.
All these technologies work thanks to the collection and processing of large amounts of data gathered from different sources; in other words, they rely on Big Data analytics.
Especially in healthcare, such data are often personal data, gathered directly from patients, from hospital records, or even from new and innovative sources such as wearable devices or mobile apps.
Without questioning the great opportunities that such AI technologies can bring to healthcare processes and clinical research, the processing of Big Data in healthcare must be the object of further investigation. Several risks could indeed arise for individuals and patients, relating not only to privacy violations but also to discrimination and unfair decisions made by AI systems.
Several factors could impact the accuracy of an AI system, from biases introduced during data collection or dataset testing, to black-box algorithms that can result in unexplainable decisions.
EU data protection law introduces several requirements meant to address the risks of discrimination and unfair decisions adopted by AI systems. Moreover, other EU initiatives (e.g., the recent AI Act proposal) aim to provide a secure environment in which AI systems can be developed safely.
Both the GDPR and other connected regulations introduce the opportunity to rely on self- and co-regulation mechanisms in order to give more practical enforcement to technical requirements in complex environments such as AI and healthcare. Such instruments (codes of conduct, certification mechanisms, standardisation, etc.) are meant to provide flexibility in the application of rules, but also to help private and public institutions demonstrate compliance with legal requirements.
It should be borne in mind that self- and co-regulation mechanisms are also subject to several shortcomings, from legitimacy and constitutional issues to the difficulty of ensuring proper enforcement.
In conclusion, the main goal of the thesis is to understand to what extent such self- and co-regulation mechanisms could be implemented in healthcare to address the above-mentioned issues raised by the application of AI technologies and Big Data analytics.