Balancing Accuracy, Fairness and Privacy in Machine Learning through Adversarial Learning


Alexander Eponeshnikov
Rustem Sabitov
Gulnara Smirnova
Shamil Sabitov

Abstract

This paper investigates balancing accuracy, fairness, and privacy in machine learning through adversarial learning. Differential privacy (DP) provides strong guarantees for protecting individual privacy in datasets; however, it can degrade model accuracy and the fairness of decisions. This paper explores how integrating DP into LAFTR (Learning Adversarially Fair and Transferable Representations), an adversarial learning framework, affects fairness and accuracy metrics. Experiments were conducted on the Adult income dataset, classifying individuals into high- and low-income groups based on features such as age and education, with gender treated as the sensitive attribute. Models were trained with different levels of DP noise (controlled by the epsilon hyperparameter) added to individual modules: the encoder, the classifier, and the adversary. Results show that adding DP consistently improves fairness metrics such as demographic parity and equalized odds by 3-5% compared to an unfair classifier, at the cost of a 1-3% reduction in accuracy. Stronger adversary models further improve fairness but require careful tuning to avoid instability during training. Overall, with proper configuration, DP models can achieve high fairness with minimal loss of accuracy relative to an unfair classifier. The study provides insight into balancing the competing objectives of privacy, fairness, and accuracy in machine learning models.
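The abstract describes a three-module pipeline: an encoder that produces a representation, a classifier that predicts income from it, and an adversary that tries to recover gender. The sketch below illustrates that setup under stated assumptions; it is not the authors' implementation. The network sizes, adversary weight gamma, and noise scale sigma are invented for the example, the data is synthetic, and whole-batch gradient clipping plus Gaussian noise is a crude stand-in for per-example DP-SGD, in which a privacy accountant would derive sigma from a target epsilon.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the Adult dataset: x = features, y = income label,
# a = sensitive attribute (gender). Sizes are arbitrary.
n, d = 512, 16
x = torch.randn(n, d)
y = torch.randint(0, 2, (n,)).float()
a = torch.randint(0, 2, (n,)).float()

encoder = nn.Sequential(nn.Linear(d, 8), nn.ReLU())  # learns representation z
classifier = nn.Linear(8, 1)                         # predicts income from z
adversary = nn.Linear(8, 1)                          # predicts gender from z

bce = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

gamma = 1.0  # weight of the adversarial (fairness) term -- assumed value
sigma = 0.1  # noise scale; a DP accountant would derive it from epsilon

for step in range(200):
    # Adversary step: learn to recover the sensitive attribute from z.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(encoder(x).detach()).squeeze(1), a)
    adv_loss.backward()
    opt_adv.step()

    # Main step: predict income well while making z uninformative about a.
    opt_main.zero_grad()
    z = encoder(x)
    clf_loss = bce(classifier(z).squeeze(1), y)
    fair_loss = bce(adversary(z).squeeze(1), a)
    (clf_loss - gamma * fair_loss).backward()
    # Crude DP-style step on the encoder: clip gradients, then add noise.
    torch.nn.utils.clip_grad_norm_(encoder.parameters(), max_norm=1.0)
    for p in encoder.parameters():
        p.grad += sigma * torch.randn_like(p.grad)
    opt_main.step()

# Fairness metrics on the training data (0 = perfectly fair).
with torch.no_grad():
    yhat = (classifier(encoder(x)).squeeze(1) > 0).float()

def pos_rate(mask):
    return yhat[mask].mean().item() if mask.any() else 0.0

dp_gap = abs(pos_rate(a == 1) - pos_rate(a == 0))  # demographic parity gap
eo_gap = max(                                      # equalized odds gap
    abs(pos_rate((a == 1) & (y == 1)) - pos_rate((a == 0) & (y == 1))),
    abs(pos_rate((a == 1) & (y == 0)) - pos_rate((a == 0) & (y == 0))),
)
print(f"DP gap: {dp_gap:.3f}  EO gap: {eo_gap:.3f}")
```

The sign on the fairness term is the essential design choice: the main step subtracts the adversary's loss, so the encoder is pushed toward representations from which gender cannot be predicted, which is what drives the demographic parity and equalized odds gaps down.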



How to Cite
Eponeshnikov, A., Sabitov, R., Smirnova, G., & Sabitov, S. (2023). Balancing Accuracy, Fairness and Privacy in Machine Learning through Adversarial Learning. Advances in Systems Science and Applications, 23(4), 40-59. https://doi.org/10.25728/assa.2023.23.04.1442