
Browsing by Subject "Efficient Computing"


  • Rodriguez Beltran, Sebastian (2024)
    DP-SGD (Differentially Private Stochastic Gradient Descent) is the gold standard approach for implementing privacy in deep learning settings. DP-SGD achieves this by clipping per-sample gradients during training and then injecting noise into the aggregated update. The algorithm aims to limit the impact of any single data point from the training dataset on the model. Clipping bounds how much information an individual sample's gradient can contribute, which limits the chance that an inference attack recovers the data used to train the model. While DP-SGD ensures the privacy of the model, there is no free lunch: privacy comes at the cost of a utility-privacy trade-off and an increase in the computational resources needed for training. This thesis evaluates different DP-SGD implementations in terms of performance and computational efficiency. We compare optimized clipping algorithms, different GPU architectures, speed-ups from compilation, lower-precision data representations, and distributed training. These strategies effectively reduce the computational cost of adding privacy to deep learning training compared to the non-private baseline.
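
    As a brief illustration of the clipping-and-noise mechanism described in the abstract, the following is a minimal sketch of one DP-SGD step in plain PyTorch. It is not the thesis's implementation; the toy model, clipping norm, noise multiplier, and learning rate are arbitrary assumptions chosen only to show the per-sample clipping followed by Gaussian noise injection.

        # Minimal illustrative DP-SGD step (sketch, not the thesis code).
        # Assumptions: toy linear model, clip_norm=1.0, noise_multiplier=1.1, lr=0.05.
        import torch
        from torch import nn

        model = nn.Linear(10, 2)                 # toy model (assumption)
        loss_fn = nn.CrossEntropyLoss()
        clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05  # assumed hyperparameters

        def dp_sgd_step(batch_x, batch_y):
            # Accumulator for the sum of clipped per-sample gradients.
            summed_grads = [torch.zeros_like(p) for p in model.parameters()]
            for x, y in zip(batch_x, batch_y):
                # Compute the gradient of a single sample.
                model.zero_grad()
                loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
                loss.backward()
                grads = [p.grad.detach().clone() for p in model.parameters()]
                # Clip the per-sample gradient to at most clip_norm in L2 norm.
                total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
                scale = min(1.0, clip_norm / (total_norm + 1e-6))
                for acc, g in zip(summed_grads, grads):
                    acc += g * scale
            # Add Gaussian noise calibrated to the clipping norm, average, and update.
            with torch.no_grad():
                for p, g in zip(model.parameters(), summed_grads):
                    noise = torch.normal(0.0, noise_multiplier * clip_norm, size=g.shape)
                    p -= lr * (g + noise) / len(batch_x)

        # Example usage on a random batch of 8 samples:
        dp_sgd_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))

    The per-sample loop above is the naive, slow way to obtain per-example gradients; the optimized clipping algorithms, compilation, lower precision, and distributed training compared in the thesis target exactly this overhead.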