
Browsing by Author "Zhao, Linzh"


  • Zhao, Linzh (2024)
    As privacy gains consensus as a priority in the field of machine learning, numerous algorithms, such as differentially private stochastic gradient descent (DPSGD), have emerged to provide privacy guarantees. Concurrently, fairness is attracting increasing attention, prompting research aimed at achieving fairness within the constraints of differential privacy. This thesis examines algorithms designed to enhance fairness in differentially private deep learning and explores their mechanisms. It investigates the role of normalization, a technique applied to these algorithms in practice, to elucidate its impact on fairness. Additionally, it formalizes a hyperparameter tuning protocol to assess the performance of these algorithms accurately. Experiments across various datasets and neural network architectures were conducted to test our hypotheses under this tuning protocol. Decoupling the hyperparameters, so that each independently controls a specific property of the algorithm, has proven to enhance performance. However, certain mechanisms, such as discarding samples with large gradient norms or allowing unbounded hyperparameter adaptation, may significantly compromise fairness. Our experiments also confirm the critical role of hyperparameter values in influencing fairness, emphasizing the necessity of precise tuning to ensure equitable outcomes. Additionally, we observed differing convergence rates across algorithms, which affect the number of trials needed to identify optimal hyperparameter settings. This thesis aims to offer a detailed perspective on fairness in differentially private deep learning and provides insights into designing algorithms that more effectively enhance fairness.
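    The DPSGD mechanism the abstract refers to clips each per-sample gradient to a fixed norm bound before averaging and adds Gaussian noise scaled to that bound; clipping is also the step where fairness concerns arise, since samples with large gradients are scaled down. A minimal NumPy sketch of one such update step, with illustrative names and a plain parameter vector rather than the thesis's actual neural-network setup:

    ```python
    # Minimal sketch of one DPSGD step: per-sample gradient clipping
    # followed by Gaussian noise. Function names, shapes, and default
    # values are illustrative assumptions, not the thesis's code.
    import numpy as np

    def dpsgd_step(params, per_sample_grads, clip_norm=1.0,
                   noise_multiplier=1.0, lr=0.1,
                   rng=np.random.default_rng(0)):
        """Clip each sample's gradient, sum, add noise, average, update."""
        clipped = []
        for g in per_sample_grads:
            norm = np.linalg.norm(g)
            # Scale down (never up) so every per-sample gradient
            # has L2 norm at most clip_norm.
            clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
        # Gaussian noise calibrated to the clipping bound provides
        # the differential-privacy guarantee.
        noise = rng.normal(0.0, noise_multiplier * clip_norm,
                           size=params.shape)
        noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
        return params - lr * noisy_mean
    ```

    Note that `clip_norm`, `noise_multiplier`, and `lr` are exactly the kind of coupled hyperparameters the abstract discusses: clipping rescales the effective step size, so tuning them jointly (or decoupling them) changes both utility and fairness.
    
    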