Browsing by Subject "Privacy"

  • Reunamo, Antti (2020)
    The popularity of mobile instant messaging applications has grown enormously during the last ten years, and people use them to exchange private and personal information on a daily basis. These applications can be freely installed from online marketplaces, and an average user may have several of them installed on a single device. The amount of information that a third-party eavesdropper can extract from these messaging applications through network traffic analysis has therefore grown significantly as well. The security features of these applications have also developed over the years, and the communication between the applications and the backend server infrastructure nowadays practically always employs encryption. More recently, advanced end-to-end encryption methods have been introduced to hide the content of the exchanged data even from the messaging service providers. Machine learning techniques have been used successfully to analyse encrypted network traffic, and previous research has shown that this approach can effectively detect which mobile applications are in use and which actions users are performing in them, regardless of encryption. While the eavesdropper cannot access the actual content of the messages and other transferred data, these methods can still lead to serious privacy compromises. This thesis discusses the present state of machine learning-based identification of applications and user actions, how feasible it would be to actually perform such detection in a Wi-Fi network, and what kinds of privacy concerns would arise.
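    As a purely illustrative aside on the approach summarised above (not part of the thesis itself): detection of this kind typically trains a classifier on side-channel features of encrypted flows, such as packet sizes, directions and timing, which remain observable despite encryption. The minimal Python sketch below shows the general shape of such a pipeline using scikit-learn; the three app profiles, the four flow features and all data are invented for illustration.

      # Hypothetical sketch: identify which messaging app produced an encrypted
      # flow from side-channel features. All profiles and data are synthetic.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      def synthetic_flow(app_id):
          """Per-flow summary features for a made-up app traffic profile."""
          mean_size = 200 + 150 * app_id + rng.normal(0, 20)     # mean packet size (bytes)
          mean_gap = 0.05 + 0.02 * app_id + rng.normal(0, 0.005)  # mean inter-arrival time (s)
          up_ratio = 0.3 + 0.1 * app_id + rng.normal(0, 0.03)     # share of uplink packets
          burstiness = 1.0 + 0.5 * app_id + rng.normal(0, 0.1)    # variance-to-mean of packet gaps
          return [mean_size, mean_gap, up_ratio, burstiness]

      # Three hypothetical messaging apps, 500 observed flows each.
      X = np.array([synthetic_flow(a) for a in range(3) for _ in range(500)])
      y = np.repeat(np.arange(3), 500)

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=0, stratify=y)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)
      print("app-identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    In a real setting the features would be computed from captured Wi-Fi frames rather than generated, and the labels would come from controlled capture sessions; the sketch only conveys the pipeline shape that makes such identification possible despite encryption.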
  • Siintola, Saara (2023)
    The purpose of my thesis is to study the interrelationship between privacy and competition law in the context of the digital economy by evaluating both current and possible future paths of development. A starting point for the research is the assumption that digitisation and datafication have transformed market dynamics and created pressure to integrate perspectives from the traditionally separate regimes of privacy and competition law. The research combines doctrinal and legal-theoretical methods to determine how the privacy-competition law interrelationship has been construed in EU competition case law so far, and how it should ideally be developed in the future, from the perspective of the goal of non-dominance, a conception of freedom developed within the political philosophy of republicanism which I argue forms the main underlying objective of both privacy and competition law. I will argue that developments in the case law show a continuous movement away from so-called separationist accounts of the privacy-competition interrelationship towards increasing integrationism. I will further argue that this is likely the optimal development from a non-dominance point of view, especially if combined with new data-centered special legislation.
  • Bhardwaj, Shivam (2020)
    The banking and financial sector has often been synonymous with established names, some with a centuries-old presence. In the recent past, these incumbents have experienced consequential disruption from new entrants and rapidly changing consumer demands. These disruptions to the status quo have been characterised by a shift towards the adoption of technology and artificial intelligence, particularly in the services and products offered to end customers. The changing business climate in the financial sector has raised many convoluted questions for regulators. These complications cover a vast set of issues, from concerns relating to the privacy of end users' data, to the increasing vulnerability of the financial market, to disproportionately increased compliance requirements for new entrants, all of which form part of the mesh of questions that have arisen in the wake of new services and operations being designed with the aid and assistance of artificial intelligence, machine learning and big data analytics. It is against this background that this Thesis seeks to explore the trajectory of the development of the legal landscape for regulating artificial intelligence, both in general and specifically in the financial and banking sector, particularly in the European Union. During the analysis, existing legal enactments, such as the General Data Protection Regulation, have been scrutinised and certain observations have been made regarding the areas that still remain unregulated or open to debate under the law as it stands today. In the same vein, an attempt has been made to explore the emerging discussion on a dedicated legal regime for artificial intelligence in the European Union, and those observations have been viewed from the perspective of the financial sector, thereby creating thematic underpinnings that ought to form part of any legal instrument aiming to optimally regulate technology in the financial sector. To concretise the actual application of such a legal instrument, a European Union member state has been identified and the evolution of the regulatory regime in its financial sector has been discussed with the said member state's financial supervisory authority, thus highlighting the crucial role of law-making and enactment bodies in creating and sustaining a technologically innovative financial and banking sector. The themes recognised in this Thesis could be the building blocks upon which future legal discourse on artificial intelligence and the financial sector could be structured.
  • Hiilloskivi-Knox, Miina (2024)
    The Cambridge Analytica scandal placed the spotlight on how social media users' personal data can be exploited for political gain, consequently putting liberal democracy at risk. The scandal motivated this thesis, which evaluates how the common values of the Union enshrined in Article 2 of the Treaty on European Union (TEU) are affected by digital platforms. The thesis focuses on the rule of law, fundamental rights, and liberal democracy as the three principles of the Union's common values. It considers the close-knit, interdependent relationships between the three principles and evaluates the threats posed by digital platforms to the framework of values established by Article 2 TEU. Through analysing case law on data protection and existing research on Article 2 TEU, it is concluded that the business models of the biggest digital platforms put the three principles at risk. Therefore, it is recommended that legislators, the Court of Justice of the European Union, and local data protection authorities take a firm stand against digital platforms to uphold data protection and the common values of the Union.
  • Pfau, Diana Victoria (2021)
    Surveillance Capitalism, as described by Shoshana Zuboff, is a mutation of capitalism in which the main commodity to be traded is behavioural surplus, or personal data. As the formation of Surveillance Capitalism was significantly furthered by Artificial Intelligence (AI), AI is a central topic of the thesis. Personalisation, which will oftentimes involve the use of AI tools, is based on the collection of large amounts of personal data and bears several risks for data subjects. In Chapter I, I introduce the underlying research questions: firstly, what effects the use of AI in Surveillance Capitalism has on democracy in the light of personalisation of advertisement, news provision, and propaganda; secondly, whether the General Data Protection Regulation (GDPR) and the Charter of Fundamental Rights of the European Union react to these effects appropriately or whether there is still need for additional legislation. In Chapter II, I determine a working definition of Artificial Intelligence. Additionally, the applicability of the GDPR together with potential problems is introduced. A special focus here lies on the underlying rationale of the GDPR. This topic is evaluated on several occasions during the thesis and reveals that the GDPR's focus on enabling the data subject to exercise control over his or her information conflicts with the underlying rationale of Surveillance Capitalism. In Chapter III, four steps of examination follow. In a first step, I introduce the concept of Surveillance Capitalism. Personalised advertisement is examined together with consent as a legal basis for the processing of personal data. During this examination, profiling, inferences, and the data processing principles of the GDPR are explored in the context of personalisation and AI, with a focus on how individuals and democracy can be affected. It is found that there is a lack of protection when it comes to the use of consent as a legal basis for privacy-intrusive personalised advertisement, and that the data subject is unlikely to be able to make an informed decision when asked for consent. Data minimisation, purpose limitation and storage limitation, as important data processing principles, prove to be at odds with the application of Artificial Intelligence in the context of personalisation; especially when it comes to the deletion of data, further research in AI will be necessary to enable adherence to the storage limitation. In a second step, I examine personalised news and propaganda according to their potential impacts on individuals and democracy. Explicit consent as a legal basis for the processing of special categories of data is examined together with the concept of data protection by design as stipulated in Article 25 GDPR. While explicit consent is found to likely suffer from the same weaknesses as "regular" consent, I propose that data protection by design could solve some of the arising issues if the norm is strengthened in the future. In a third step, I evaluate whether the right to receive and impart information laid down in the Charter of Fundamental Rights of the European Union provides for a right to receive unbiased, or unpersonalised, information. While there are indications that such a right could be acknowledged, its scope is so far unclear. In a fourth step, I examine the proposal for a European Artificial Intelligence Act, with the unfortunate outcome that this Act might not be able to fill the discovered gaps left by the GDPR.
    I conclude that, taking into consideration all findings of the research, the use of AI in personalisation can significantly harm democracy by potentially impacting the freedom of political discourse, provoking social inequalities, and influencing legislation and science through heavy investment and lobbying. Ultimately, the GDPR leaves significant gaps due to the incompatibility of the underlying rationales of the GDPR and Surveillance Capitalism, and there is a need to protect data subjects further. I propose that future legislation on the use of AI in personalisation should react appropriately to the rationale of Surveillance Capitalism.