Authors: Cristina Cocito, Thomas Marquenie, Paul De Hert
Abstract: This article reflects on the dominant concepts used in contemporary legal discourse, particularly in the GDPR and the European Artificial Intelligence Act, to understand, identify and address the problems raised by AI systems: risk, harm and damage. Our study questions how far these dominant concepts sufficiently capture the problems AI presents, and whether they guarantee a comprehensive approach to identifying those problems. Building on pragmatist methodologies of problem inquiry (Dewey and Bergson), we argue that while some existing conceptual paradigms may be more suitable than others, they are all located too far ahead in the problem inquiry process as defined by pragmatists. Existing paradigms for problem identification, anchored to preset categories of problems, risk marginalising (other) elements, such as feelings, concerns or other problematic issues. This study ultimately calls for further research to explore more critically how concepts such as risk, harm and damage are used in the literature to map the problems of AI systems. This gives rise to a broader call for research into methodologies that can pragmatically frame the challenges of AI systems in order to address the problems they raise today more effectively and comprehensively.
PubDate: 2024-12-30
Issue No: Vol. 15, No. 3 (2024)

Authors: Andreas Häuselmann, Bart Custers
Abstract: Inferred data, such as predictions and emotional states, may have limited accuracy or may even be incorrect from the perspective of data subjects. According to Article 16 of the General Data Protection Regulation (GDPR), data subjects can then invoke their right to rectification. To rectify personal data, data subjects must provide objectively verifiable evidence that the personal data envisaged to replace the data currently processed by the data controller is accurate (‘the standard of objective verifiability’). This causes three problems. First, whereas the standard of objective verifiability is easily met for factual data, predictions are not objectively verifiable, mainly because they relate to future conduct or events that have yet to happen (the verifiability problem). Second, the accuracy of subjective personal data, such as emotion data, cannot be proven objectively (the objectivity problem). Third, to effectively rectify inferred personal data, data subjects must be aware that the data is inaccurate (the awareness problem). This is often not the case because inferred data are treated as trade secrets and are not shared with data subjects, even when they invoke their right of access.
PubDate: 2024-12-30
Issue No: Vol. 15, No. 3 (2024)

Authors: Stergios Aidinlis, David Barnard-Wills, Leanne Cochrane, Krzysztof Garstka, Agata Gurzawska, Joshua Hughes
Abstract: European law enforcement agencies (LEAs) increasingly seek to make additional use of the personal data they have gathered, particularly for the purpose of research. This raises practical data protection challenges for these agencies and their research partners. LEAs may be uncertain about which data protection instrument, if any, should govern such processing; a question best answered by disentangling the blurry boundary between operational and research activities. This article takes on that task by examining the applicability of the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED) to LEA research activities, in particular those based on LEA-held personal data. It also considers the practical implications of choosing one instrument over the other and gives guidance on the legal choices that follow, with respect to issues such as identifying the legal basis for processing and setting the data controllership arrangements.
PubDate: 2024-12-30
Issue No: Vol. 15, No. 3 (2024)

Authors: Gaia Cavagnoli Micali
Abstract: Organised crime has always adapted to different markets from which to profit. Today, one such market is that of cryptocurrency. Concurrently, EU countries have developed an interest in the Italian antimafia legal system, whose effective pillars include the patrimonial preventive measures of confiscation and sequestration. But what happens if the assets to be preventively confiscated are cryptocurrencies? This paper assesses whether these measures can be considered equally effective in the context of cryptocurrency. Although the measures have existed for more than forty years, the virtual nature of cryptocurrency has been shown not to constitute an obstacle per se. However, it has been argued that, in practice, cryptocurrency as the object of these measures is incompatible with the existing law, the pseudo-anonymity of the blockchain, and mixing services. Some reflections and conclusions on how these incompatibilities might be overcome are provided, aimed especially at those European legislators who are considering transposing Italian antimafia law into their jurisdictions.
PubDate: 2024-12-30
Issue No: Vol. 15, No. 3 (2024)

Authors: Sofia Palmieri
Abstract: The use of artificial intelligence (AI), along with its possible risks and promised benefits, has attracted much attention, filling pages of scientific literature. At the same time, the legal literature has been busy outlining the legal framework applicable to AI systems used in medicine, chiefly the Medical Device Regulation (MDR) and the AI Act. The literature has already pointed out the gaps in this legal framework, emphasising its limited significance for AI systems classified as ‘minimal risk’ under the AI Act. This paper builds upon that literature and provides an overview of the broader product safety framework applicable to medical AI. While the MDR remains the main co-regulator of medical AI, numerous other regulations interact with the AI Act in regulating it. Starting from the shortcomings of the relationship between the MDR and the AI Act, this paper maps the product safety framework applicable to medical AI. Referring in particular to other regulations within the EU New Legislative Framework, it sets out the safety framework relevant to AI systems that, for different reasons, are classified as minimal risk under the AI Act.
PubDate: 2024-12-30
Issue No: Vol. 15, No. 3 (2024)