
Browsing by Author "Cordova Ramirez, Jorge"


  • Cordova Ramirez, Jorge (2023)
    This thesis investigates how the transparency framework of the GDPR can support effective accountability of automated decision-making (ADM) systems. To do this, I pose the following question: what are the limits of the transparency framework, presented in the GDPR, in effectively achieving accountability of ADM systems? ADM is now used to decide many aspects of our lives. By employing algorithmic technologies such as machine learning (ML), these systems can use available data as a defining factor in future decisions. Compared to human decision-making, ML-based ADM can be more efficient and save resources for businesses and governments. However, these systems carry their own risks. They can be opaque about how data is processed and about the reasons behind their decisions. This opacity gives systems’ owners the opportunity to exercise undesirable power over individuals. Indeed, even unintentionally, algorithmic decision systems can be biased and produce unfair or discriminatory decisions due to their technically complex nature. To counteract such information and power asymmetries between decision-makers and subjects, the demanded solutions have long been transparency and accountability: the former to access and observe systems, the latter to justify, challenge, and correct them. These ideals have been adopted by the GDPR as guiding data protection principles underlying its regulatory framework. In this work, I observe that the GDPR protects individuals’ rights and freedoms by guaranteeing accountable ADM, but that its accountability goals depend on how well the regulation supports systems’ transparency. Thus, to determine the success of accountability, the transparency framework must be assessed. For this assessment, I begin by setting a theoretical baseline, namely, the level of transparency necessary to achieve accountability of ADM systems.
    Based on the work of other authors, I establish that the legislation should optimally provide transparency to detect and correct potential discrimination, justify decisions, and allow contestation and correction of those decisions when necessary. This baseline also identifies the specific elements of ADM systems that should be open to evaluation for such accountability. Against this background, I analyse the law to test to what extent the GDPR’s transparency framework attains the standards set in the baseline. The analysis covers Articles 12, 14, 15, 22, 25, and 35, as those with the most significant transparency implications for ADM. After examining the text of the law in conjunction with interpretations offered by EU authorities and legal scholarship, I find that the GDPR contains important individual rights to contest and correct decisions. However, the law’s phrasing constrains the disclosure of the elements necessary for the proper justification of decisions, making it difficult for individuals to enforce their rights. Furthermore, the legislation imposes obligations on data controllers to continuously evaluate systems in order to assess and address their potential risks, as well as an obligation to design more transparent and accountable systems. These obligations could aid in detecting and correcting potential discrimination. Yet they, too, are limited by the text of the law in their capacity to deliver less opaque and complex ADM systems. As a result, I conclude that, while the GDPR takes significant steps towards accountability, its transparency framework remains limited in its support for the evaluation, justification, and thus correction of complex ADM systems and their decisions, significantly diminishing the legislation’s accountability promises.