
Browsing by Subject "hyperparameter optimization"


  • Tobaben, Marlon (2022)
    Using machine learning to improve health care has gained popularity. However, most research in machine learning for health has ignored privacy attacks against the models. Differential privacy (DP) is the state-of-the-art concept for protecting individuals' data from privacy attacks. Using optimization algorithms such as DP stochastic gradient descent (DP-SGD), one can train deep learning models under DP guarantees. This thesis analyzes the impact of changes to the hyperparameters and the neural architecture on the utility/privacy tradeoff, the main tradeoff in DP, for models trained on the MIMIC-III dataset. The analyzed hyperparameters are the noise multiplier, clipping bound, and batch size. The experiments examine neural architecture changes regarding the depth and width of the model, activation functions, and group normalization. The thesis reports the impact of each individual change independently of other factors using Bayesian optimization and thus overcomes the limitations of earlier work. For the analyzed models, the utility is more sensitive to changes to the clipping bound than to the other two hyperparameters. Furthermore, the utility/privacy tradeoff does not improve when allowing for more training runtime. Changes to the width and depth of the model have a higher impact than other modifications of the neural architecture. Finally, the thesis discusses the impact of the findings and the limitations of the experiment design, and recommends directions for future work.
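As a rough illustration of how the three analyzed hyperparameters interact (a minimal sketch, not code from the thesis; the function name and interface are invented for illustration), a single DP-SGD step clips each per-example gradient to the clipping bound, averages over the batch, and adds Gaussian noise scaled by the noise multiplier:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clipping_bound, noise_multiplier, lr, rng):
    """One DP-SGD update (illustrative): clip each per-example gradient to
    L2 norm <= clipping_bound, average over the batch, then add Gaussian
    noise with std noise_multiplier * clipping_bound / batch_size."""
    batch_size = len(per_example_grads)
    # Rescale any gradient whose norm exceeds the clipping bound.
    clipped = [
        g * min(1.0, clipping_bound / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    # Noise scale is tied to the clipping bound (the gradient sensitivity).
    noisy_mean = np.mean(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clipping_bound / batch_size, size=params.shape
    )
    return params - lr * noisy_mean
```

Note that the clipping bound enters the update twice, rescaling the gradients and setting the noise scale, which offers one intuition for why utility can be especially sensitive to it.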
  • Zhao, Linzh (2024)
    As the importance of privacy gains consensus in the field of machine learning, numerous algorithms, such as differentially private stochastic gradient descent (DPSGD), have emerged to ensure privacy guarantees. Concurrently, fairness is garnering increasing attention, prompting research aimed at achieving fairness within the constraints of differential privacy. This thesis delves into algorithms designed to enhance fairness in the realm of differentially private deep learning and explores their mechanisms. It examines the role of normalization, a technique applied to these algorithms in practice, to elucidate its impact on fairness. Additionally, this thesis formalizes a hyperparameter tuning protocol to accurately assess the performance of these algorithms. Experiments across various datasets and neural network architectures were conducted to test our hypotheses under this tuning protocol. The decoupling of hyperparameters, allowing each to independently control specific properties of the algorithm, has proven to enhance performance. However, certain mechanisms, such as discarding samples with large norms and allowing unbounded hyperparameter adaptation, may significantly compromise fairness. Our experiments also confirm the critical role of hyperparameter values in influencing fairness, emphasizing the necessity of precise tuning to ensure equitable outcomes. Additionally, we observed differing convergence rates across algorithms, which affect the number of trials needed to identify optimal hyperparameter settings. This thesis aims to offer detailed perspectives on understanding fairness in differentially private deep learning and provides insights into designing algorithms that can more effectively enhance fairness.
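A fixed-budget hyperparameter tuning protocol of the kind described above can be sketched as a simple search loop. This is a simplified illustration using random search rather than any specific method from the thesis; `train_eval` is a hypothetical callback that trains one configuration and returns a validation utility together with a fairness gap:

```python
import random

def tune(train_eval, search_space, n_trials, seed=0):
    """Illustrative fixed-budget tuning protocol: sample hyperparameter
    configurations at random, keep the one with the best validation
    utility, and record each trial's fairness gap so the
    utility/fairness tradeoff stays visible."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        # Sample one value per hyperparameter from its candidate set.
        config = {name: rng.choice(values) for name, values in search_space.items()}
        utility, fairness_gap = train_eval(config)
        trial = {"config": config, "utility": utility, "fairness_gap": fairness_gap}
        if best is None or trial["utility"] > best["utility"]:
            best = trial
    return best
```

Fixing the trial budget and the search space up front is what makes comparisons across algorithms fair: an algorithm that converges more slowly, as observed in the thesis, needs more trials to reach its best settings, and an unbounded protocol would hide that cost.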