This programme is structured along five "Internet of" dimensions: Data, Things, Persons, Healthcare and Money, which together compose the Internet of Everything. To remain innovative, Europe requires a new generation of highly specialised researchers who will deal with these topics.
As genomic data sharing by definition involves very sensitive data, adequate data protection is essential from both a legal and an ethical point of view. However, data protection is only part of a general framework of fundamental rights. As many benefits might be achieved through the sharing of genomic data, there is a need for progressive (but responsible) data sharing. The GDPR and its national implementations therefore contain several exemptions, e.g. for research purposes, and data protection has to be balanced with other fundamental rights and interests.
The aim of the project is therefore to analyse different models of genomic data sharing in this context, including data sharing through anonymisation, data sharing through consent and data sharing through proportional measures.
The research question of my thesis is quite straightforward: I ask whether there is a need for specific rules in the IoE environment of the house. I suggest that, in order to answer this question, it is necessary to investigate whether present rules adapted to new scenarios brought forward by the ‘Digital Revolution’ can be described as ‘new rules because of new contexts’ or whether it is preferable to start from scratch.
The provisional answer is that the reasons for which liability has been structured over the centuries do not change. What changes is the ability of the legal system to adapt to new challenges. Specifically, ascertaining causality can at times be complicated by the large number of stakeholders involved and by the new ways in which objects function. The producer of an IoE object is no longer a single entity: operators and intermediaries (e.g. software producers and cloud service providers) may be entities distinct from the manufacturer and nevertheless bear, de facto, a significant degree of responsibility in creating products that can be defective. It is suggested to take a data-driven perspective to understand how the IoT environment functions and to go to the ‘source’ of new damages, if any. Probably, the most interesting findings will concern whether new immaterial damages arise from the functioning of IoE objects for the house; the main assumption is that the current way of processing data (both personal and non-personal) will be the main source of known and unknown damages.
This theme, although mainly connected to law disciplines, has a transversal social impact: it is vital both for consumers and for businesses to understand what kind of rules apply in a new context such as the connected house in the aftermath of the pandemic.
To do so, I will follow a methodology that relies mainly on the legal analysis of EU and national sources.
However, the legal analysis will be possible only after a review of the technological state of the art of domestic IoT objects since, as noted, the way these objects process data is believed to be the main new source of damages.
Moreover, the Covid-19 pandemic has made it clear that the house will be a new centre of professional and non-professional activities, such as rehabilitation from traumas and injuries. At the same time, the environmental impact of the IoE technologies employed by the connected home will be huge, and solutions are needed to create more environmentally sustainable and better-quality technology. In a word: a more environmentally responsible domestic IoE.
The expected result is to outline a structure of remedies that can create an environment of consumer trust in new technologies. Moreover, clarifying private law rules will also benefit companies from an intellectual property and competition law perspective.
The research project focuses on the increasingly interwoven (cyber)security and privacy & data protection issues brought about by the Internet of Things (IoT) domain.
The first part of the project aims at providing a functional and sufficiently comprehensive definition of the “Internet of Things”. The threefold analysis sheds light on i) the architecture taxonomy of IoT systems; ii) the so-called IoT verticals, that is, the application domains; and iii) the blurred concept of “resource-constrained” devices. This work relies on the assumption that IoT ecosystems always involve a constrained scenario in terms of computational power, regardless of potentially non-constrained technologies embedded in the system.
The second part of the work aims at disentangling the “security debate” from a philosophical and epistemological perspective. This reflection delves into the different scopes and rationales of blurred concepts, such as “cybersecurity”, or others, like “security” and “safety”, which are highly intertwined – especially in the context of IoT – and sometimes used interchangeably. Through a doctrinal literature review, the resulting theoretical framework shall serve as a solid foundation for the legal analysis of the EU legal frameworks regulating IoT security that follows.
The third part of the project maps out the disparate legal frameworks regulating IoT security in the EU. The “domain-agnostic” approach of the work is reflected in the legal analysis as well, as sectorial norms (e.g., the Medical Devices Regulation) fall outside the scope of the thesis: the legal analysis – which considers different legislative texts such as the NIS Directive, the Proposal for a NIS 2, the Cybersecurity Act and those directives and regulations of the New Legislative Framework that are relevant to the IoT domain (e.g., the Radio Equipment Directive) – aims at casting light on common security baseline requirements applicable to IoT manufacturers.
The legal analysis of the fourth part focuses on three normative challenges, in terms of fundamental rights, in particular, the right to privacy and the right to the protection of personal data (Arts. 7 and 8 of the EU Charter of Fundamental Rights respectively), brought about by the structural data and metadata sharing of the ubiquitous Internet of Things. These are: i) the realignment of the traditional matters of privacy and data protection that can be observed under IoT data and metadata sharing; ii) the case where the rights to privacy and personal data protection need to be balanced, or involved in trade-offs, with cybersecurity technologies designed for and deployed by IoT systems; and, iii) privacy and data protection risks that are likely to arise in the context of the analysis of encrypted IoT traffic, that is, metadata analysis.
The fifth part highlights, from an information security viewpoint, the main IoT security threats that can exploit the privacy vulnerabilities identified in the previous sections. The goal is to capture the whole threat model. In other words, the threat landscape is classified according to the consequences that attack scenarios pose either to the system (such as service compromise and sensing-domain attacks) or to the underlying data sharing (such as packet crafting, packet alteration, and traffic analysis, whose effects were examined in depth in the previous part).
Part six, in line with the holistic methodological approach of the work, accounts for a risk analysis methodology for the IoT. Risk assessment is one of the core elements around which the entire EU legal framework on cybersecurity and privacy & data protection is clustered. In particular, the first section of this chapter attempts to reconcile the different rationales of traditional information security risk management models (e.g., the ISO and NIST frameworks) with the fundamental rights-based model of the GDPR. Then, the CNIL DPIA for IoT devices is critically analysed so as to investigate whether, and to what extent, the methodology assesses the cybersecurity risks to individuals’ rights and freedoms in the specific context of IoT devices.
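The traditional risk management models mentioned above typically score risks by combining likelihood and impact. A minimal sketch of such a scoring function is given below; the thresholds and the qualitative labels are illustrative assumptions, not the CNIL DPIA methodology or any specific ISO/NIST scale.

```python
# Illustrative sketch of a likelihood-x-impact risk matrix, the kind of
# mechanism underlying traditional information-security risk management
# models. Thresholds and labels here are assumptions for illustration only.

LEVELS = ["negligible", "limited", "significant", "maximum"]

def risk_level(likelihood: int, impact: int) -> str:
    """Map 1-4 likelihood and impact scores onto a qualitative risk level."""
    if not (1 <= likelihood <= 4 and 1 <= impact <= 4):
        raise ValueError("scores must be between 1 and 4")
    score = likelihood * impact  # ranges from 1 to 16
    if score <= 2:
        return LEVELS[0]
    if score <= 6:
        return LEVELS[1]
    if score <= 9:
        return LEVELS[2]
    return LEVELS[3]

print(risk_level(2, 3))  # 'limited' (score 6)
print(risk_level(4, 4))  # 'maximum' (score 16)
```

The fundamental rights-based model of the GDPR differs precisely in what "impact" measures: not harm to the organisation's assets, but risk to individuals' rights and freedoms.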
In conclusion, the final part of the work presents the horizontal legislation on cybersecurity for connected devices proposed by the Council and endorsed by the Commission in the new Cybersecurity Strategy.
His research focuses on the current landscape of personal data management and its relationship to user privacy within the Internet of Persons (IoP). Its main objective is to design distributed ledger-based systems that support the right to the protection of personal data while fostering data portability for social good and economic exploitation.
In his study he considers the critical aspects that make specific types of personal data, such as geodata and dynamic data, difficult to protect. Meanwhile, the experimentation related to the use of decentralized architectures aims to provide systems that store and transfer personal data in a transparent and non-centralized manner. Furthermore, part of his research involves the representation and reasoning with policies in a distributed execution, for the management of the flows and uses of personal data on the basis of individual free choice and self-determination.
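The policy representation and reasoning described above can be illustrated with a minimal sketch: a data subject's sharing preferences expressed as machine-readable rules, checked against a requested data flow before any transfer takes place. The rule structure and names below are illustrative assumptions, not the author's actual system.

```python
# Illustrative sketch (not the actual research system): a data subject's
# sharing policy as machine-readable rules, evaluated against a requested
# use of personal data before the flow is allowed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    category: str  # data category, e.g. "geodata"
    purpose: str   # purpose of use, e.g. "research"
    allow: bool

@dataclass(frozen=True)
class Request:
    category: str
    purpose: str

def evaluate(policy: list[Rule], request: Request) -> bool:
    """Return True only if an explicit rule permits the request.

    Rules are checked in order; the first match decides. With no
    matching rule, the request is denied (privacy-preserving default).
    """
    for rule in policy:
        if rule.category == request.category and rule.purpose == request.purpose:
            return rule.allow
    return False

policy = [
    Rule("geodata", "research", True),
    Rule("geodata", "marketing", False),
]
print(evaluate(policy, Request("geodata", "research")))   # True
print(evaluate(policy, Request("geodata", "marketing")))  # False
print(evaluate(policy, Request("health", "research")))    # False (default deny)
```

In a distributed execution, each evaluation and its outcome could additionally be recorded on a ledger, giving the transparency and traceability the research aims for.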
Sophisticated Internet of Everything (“IoE”) devices can now process big data to monitor undesirable events, such as a collapse of vital signs detected by wearable wireless sensors or domestic accidents involving elderly people, and to manage the distribution of ambulances and the availability of hospitals. The Covid-19 pandemic has shown that big data-powered technologies can be applied in emergencies that pose a high risk to the health of society at large or an immediate danger to the health of an individual requiring urgent intervention.
The use of big data-powered technologies in fast-paced healthcare situations such as emergencies raises several questions concerning the precise definition of an emergency, the agencies involved, and the procedures used, which may require in-depth analysis of whether a situation qualifies as an emergency from an empirical and legal perspective. The latter is of particular significance as the legal qualification of an emergency may also vary across jurisdictions. Currently, research in the healthcare sector focuses either on the development of various digital medical devices or on the regulatory requirements applicable to the industry. Yet the impact of new regulatory proposals and new technologies on the privacy and data protection of individuals and communities tends to be overlooked.
Further, healthcare research tends to overlook the two dimensions of healthcare emergencies, the public and the individual dimension, and their implications for data protection and privacy. The public dimension refers to emergencies such as global disease outbreaks (e.g., Covid-19, SARS). The individual dimension instead refers to a loss of vital signs by an individual that would not qualify as a public health emergency but could have a devastating impact on that individual’s wellbeing. Therefore, the research aims at exploring the correlation between the two dimensions of emergency and the regulatory and technical challenges arising from each.
Lastly, there is a gap in translating and applying fundamental legal research to concrete scenarios involving specific ICT technologies used in the healthcare sector. While the ethical-legal research world focuses on fostering a high-level discussion on big data, healthcare and IoE, the ICT sector wonders what all this means for its specific scenarios.
The research, therefore, aims at exploring the complex issues raised by big data-powered healthcare technologies in emergencies. It analyses how individuals’ and communities’ rights to privacy and data protection are affected by these technologies in the digital environment.
The COVID-19 public health crisis has accelerated the transformation of health systems to become more closely tied to citizens/patients and increasingly dependent on the provision and use of telehealth services. The delivery of healthcare services at a distance by means of information and communications technologies has become an essential tool in building resilient health systems and in facilitating access to healthcare. One of the major promises of an Internet of Everything environment in healthcare (‘Internet of Healthcare’), and especially in telehealth, is that the use of Internet of Things (IoT)-enabled devices (‘Internet of Health Things’) and concomitant enabling technologies could help to interconnect data ecosystems and leverage data concerning health. IoT-enabled telehealth systems deployed in conjunction with AI systems could facilitate the smart transformation of healthcare from a merely reactive system to a data-driven and person-centred system that provides remote health promotion, diagnosis, monitoring and treatment services, integrated real-time response solutions, as well as prospective insights. However, the realisation of these health-related benefits necessitates the processing of vast amounts of data concerning health. These operations and the use of new enabling technologies pose significant risks to privacy and personal data, and call into question the applicability of existing and proposed legal concepts. The research analyses the adequacy of EU privacy, data protection, data governance, AI governance and other regulatory rules in the context of IoT-enabled and AI-assisted telehealth systems. In addition, the research aims to identify technical and organisational measures (best practices) that could facilitate the effective implementation of normative, ethical and security principles in these information systems.
The proposed research seeks to elaborate on the ethical and legal issues of eHealth concerning the sharing of health data, with an emphasis on a potential moral responsibility of patients to share data with healthcare providers and the healthcare system in general. In particular, the project answers the following central research question: ‘How would a moral duty of patients to transfer (health) data for the benefit of health care improvement, research, and public health in the eHealth sector sit within the existing confidentiality, privacy, and data protection legislation?’.
Care providers and patients use digital healthcare services for the improvement of the patients’ health. Such so-called ‘eHealth services’ have the potential to improve the medical treatment of individuals but also to benefit public health and the provision of healthcare in general. Consequently, the exchange of personal health data plays a vital role with regard to the enhancement of healthcare for individuals and society overall. The need for patient data may raise the question about a possible moral responsibility or moral duty of patients to share health data.
Furthermore, challenges arise as European and national confidentiality, privacy, and data protection legislation appears to constitute an impediment to a possible moral duty of patients to share data and to the proper use of medical data overall. The project aims to examine the existing confidentiality, privacy, and data protection legislation, considering that sharing health data should benefit the improvement of the healthcare system, research, and public health overall while ensuring patients’ rights. Attention will be paid in particular to the General Data Protection Regulation regarding the processing of data concerning health, as well as to the recommendation of the Council of Europe on the protection of health-related data, which provides soft-law guidance on the sharing of such data. Relevant national legislation will also be explored.
Distributed Ledger Technologies (DLTs) are fairly new and therefore require in-depth analysis. The first aim of this research is to provide a comprehensive account of the risks associated with the use of DLTs in transacting and managing securities, assessing the risks, advantages and drawbacks of DLTs in this specific domain. In particular, this research also analyses the trust issues of blockchain and smart contracts from both theoretical and practical perspectives. First of all, the technology’s prominent characteristics lead to the argument that blockchain is a form of technology that generates trust: instead of trusting institutions, people trust the technology. However, we argue that such trust in technology requires a form of trust displacement. Trust in the technology is only warranted on the basis of an argument that leads to confidence in the correct implementation of the software code, which in turn builds on trust in the platform that offers the technology and on trust in the institutions that have verified the technology and how it relates to the real world. Specifically, institutions should verify the sustainability of the business model of the platform that hosts the smart contract, the way the mechanisms of the blockchain have been implemented, the translation of the verbal agreement into the smart contract, and the reliability of the oracle that connects blockchain execution to the real world. This means that blockchain and smart contracts can only generate trust when they are embedded in a governance structure that specifies roles and responsibilities for institutions to verify and monitor the reliability of the technology. This claim is then illustrated by examples of blockchain and smart contract applications in the securities industry. On the other hand, DLTs are generally used only to trace end results.
In this research, we propose that a reasoning system can be put in place for making decisions and executing transactions, in order to enhance auditability and transparency, and ultimately to provide explainability to blockchain users so as to comply with legal requirements. We construct the Intelligent Human-input-based Blockchain Oracle (IHiBO), a cross-chain oracle that enables the execution and traceability of formal argumentation and negotiation processes involving the intervention of human experts. We take as reference the decision-making processes of fund management, as trust is of crucial importance in such “trust services”. The architecture and implementation of IHiBO leverage two-layer DLTs, smart contracts, and argumentation and negotiation in a multi-agent setup. We provide experimental results supporting our discussion, namely that in the use case we have considered our methodology can increase principals’ trust in trusted services.
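The formal argumentation whose execution IHiBO traces can be illustrated with a minimal sketch of Dung-style abstract argumentation: given arguments and an attack relation, compute the grounded extension, i.e. the set of sceptically acceptable arguments. This is a generic textbook construction offered for illustration, not IHiBO's actual implementation or API.

```python
# Minimal sketch of Dung-style abstract argumentation (illustrative only;
# not IHiBO's actual code). Arguments attack one another; the grounded
# extension collects the arguments that are sceptically acceptable --
# the kind of outcome an oracle could record on-chain for traceability.

def grounded_extension(arguments: set, attacks: set) -> set:
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: set of argument labels
    attacks:   set of (attacker, target) pairs
    """
    def defended(candidate, extension):
        # candidate is defended if each of its attackers is itself
        # attacked by some argument already in the extension
        return all(
            any((defender, attacker) in attacks for defender in extension)
            for (attacker, target) in attacks
            if target == candidate
        )

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Example: a attacks b, b attacks c. a is unattacked; c is defended by a.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
# ['a', 'c']
```

In a setting like the one described, human experts would supply the arguments and attacks, the computation would run off-chain, and the inputs and the resulting extension would be anchored on the ledger so the decision can later be audited and explained.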
Within the structure of the LAST-JD-RIoE program, this project belongs to the cluster “Internet of Money (IoM)”, where it covers the legal part. The research is being conducted at the Autonomous University of Barcelona (beneficiary university), at KU Leuven – CiTiP, and at the University of Bologna. Content-wise, it addresses Distributed Ledger Technologies (DLTs) between anonymity and transparency, and it focuses on the impact of these features on the EU framework to combat money laundering and the financing of terrorism (AML/CFT). Because the IoM comprises DLT-based ecosystems whose anonymity and transparency range across a spectrum of combinations and degrees, the relevant monetary instruments – i.e., cryptocurrencies, defined as payment-type cryptoassets – inherit a hybrid nature. While cryptocurrencies purportedly promote anonymity, anonymity is not in fact a property of their underlying technology. This ambivalence adds to the regulatory challenge of how best to mitigate the risk that they are abused for illicit financial purposes, and this “anonymity problem” is usually addressed through AML/CFT rules both at the EU level and beyond it.
The tension between anonymity and transparency is not a concern of the IoM exclusively. On the contrary, the strain between confidentiality/privacy and auditability/transparency has long been debated both in the financial domain and in relation to exchanging information online. Likewise, the application of advanced cryptography to payments to enable “anonymous” transactions does not necessarily involve DLTs/blockchains. Nonetheless, in its preliminary phase this research has identified socio-technical peculiarities of the IoM that inform not only its AML/CFT challenges, but also possible regulatory strategies. The first part of the project focuses on terminological and conceptual disambiguation efforts. In this respect, it analyses the IoM from a phenomenological standpoint to compensate for the frequent absence of agreed-upon legal and technical definitions and for the constant evolution of the domain, thus disengaging from the so-called “blockchain hype”. By addressing privacy enhancements, traceability, and enhanced disintermediation (e.g., DeFi, self-hosted wallets), it builds the background against which to examine the evolution of the international and EU-level AML/CFT framework applied to cryptocurrencies. In this respect, it heeds the activity of the Financial Action Task Force (FATF) and the role of standardization, while it elaborates on the debate on balancing different needs and regulatory requirements in Central Bank Digital Currencies (CBDCs).
Against this backdrop, the angle of this cross-disciplinary PhD project is to acknowledge specificities – i.e., differences between cryptocurrencies and between ways of managing and transacting with them – while averting the risk of overfitting. Indeed, in the field at hand any useful regulatory output must suit an ever-evolving context. The final objective is to provide EU-level regulatory guidelines for a legitimate and effective prevention of the misuse of the financial system for illicit purposes by means of cryptoassets. In light of the preliminary phase of the investigation, the methodological foundations of the project currently draw on, inter alia, socio-technical systems, co-regulation, regulation-by-design and compliance-by/through-design. Similarly, this work endorses the value of leveraging techno-legal taxonomies to devise regulatory strategies, in this case for the mitigation of money laundering and terrorist financing by means of cryptocurrencies.