Browsing by Subject "fairness"
Now showing items 1-2 of 2
- (2020) The use of machine learning and algorithms in decision-making processes in our everyday life has been growing rapidly. The uses range from bank loans and taxation to criminal sentences and child care decisions. Because of the potentially high importance of such decisions, we need to make sure that the algorithms used are as unbiased as possible. The purpose of this thesis is to provide an overview of the possible biases in algorithm-assisted decision making, how these biases affect the decision-making process, and to go through some proposals on how to tackle these biases. Some of the proposed solutions are more technical, including algorithms and different ways to filter bias from the machine learning phase. Other solutions are more societal and legal, and address what we need to take into account when deciding how bias can be reduced through legislation or by enlightening people on the issues of data mining and big data.
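The first abstract refers to technical ways of measuring and filtering bias but does not name a specific metric. As a hedged illustration only, not a method taken from the thesis, the sketch below computes a demographic parity gap, a common fairness measure, for a set of binary decisions; the function name and the toy data are invented for this example.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups.

    `decisions` is a 0/1 array of algorithmic outcomes (e.g. loan granted),
    `group` is a 0/1 array marking membership in a protected group.
    A gap near 0 means both groups receive positive decisions at similar rates.
    """
    decisions = np.asarray(decisions, dtype=float)
    group = np.asarray(group, dtype=bool)
    rate_a = decisions[group].mean()
    rate_b = decisions[~group].mean()
    return abs(rate_a - rate_b)

# Toy example: 60% vs. 20% positive rate between the groups -> gap of 0.4.
print(demographic_parity_gap([1, 1, 1, 0, 0, 1, 0, 0, 0, 0],
                             [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))
```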
- (2024) As privacy gains importance in the field of machine learning, numerous algorithms, such as differentially private stochastic gradient descent (DPSGD), have emerged to ensure privacy guarantees. Concurrently, fairness is garnering increasing attention, prompting research aimed at achieving fairness within the constraints of differential privacy. This thesis delves into algorithms designed to enhance fairness in the realm of differentially private deep learning and explores their mechanisms. It examines the role of normalization, a technique applied to these algorithms in practice, to elucidate its impact on fairness. Additionally, this thesis formalizes a hyperparameter tuning protocol to accurately assess the performance of these algorithms. Experiments across various datasets and neural network architectures were conducted to test our hypotheses under this tuning protocol. The decoupling of hyperparameters, allowing each to independently control specific properties of the algorithm, has proven to enhance performance. However, certain mechanisms, such as discarding samples with large norms and allowing unbounded hyperparameter adaptation, may significantly compromise fairness. Our experiments also confirm the critical role of hyperparameter values in influencing fairness, emphasizing the necessity of precise tuning to ensure equitable outcomes. Additionally, we observed differing convergence rates across algorithms, which affect the number of trials needed to identify optimal hyperparameter settings. This thesis aims to offer detailed perspectives on understanding fairness in differentially private deep learning and provides insights into designing algorithms that can more effectively enhance fairness.
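The second abstract builds on differentially private stochastic gradient descent (DPSGD) and mentions mechanisms such as handling per-sample gradients with large norms. As a hedged sketch of the general DPSGD recipe, not the thesis's implementation, datasets, or hyperparameters, the following NumPy code clips each per-example gradient, adds calibrated Gaussian noise, and averages; the logistic-regression loss, function name, and default parameter values are assumptions chosen for illustration.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD step for logistic regression on a minibatch.

    Per-example gradients are clipped to L2 norm `clip_norm`, summed,
    perturbed with Gaussian noise of scale `noise_multiplier * clip_norm`,
    and averaged before the parameter update.
    """
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X    # log-loss gradient per example
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Add calibrated Gaussian noise to the summed gradient, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean_grad

# Toy usage on synthetic data (shapes only; not the thesis's experiments).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```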