Bias Detection and Fairness Optimization in Machine Learning Algorithms

Authors

  • Rachna, Research Scholar

Keywords

Machine Learning, Algorithmic Bias, Fairness in AI, Bias Detection, Fairness Optimization

Abstract

The growing use of machine learning algorithms in decision-making systems has raised serious concerns about bias and unfairness, especially in high-stakes domains such as healthcare, banking, hiring, and law enforcement. Machine learning models can become biased through historical or imbalanced datasets, poor feature selection, or algorithmic design decisions, and the resulting discriminatory outcomes can deepen socioeconomic disparities. This article examines the kinds and origins of bias in machine learning algorithms together with current methods for detecting bias and evaluating fairness, and surveys techniques for reducing bias at the pre-processing, in-processing, and post-processing stages without sacrificing model performance. Bias detection techniques for evaluating algorithmic behaviour are reviewed, including disparate impact analysis, statistical parity difference, and fairness metrics computed across protected attributes. Finally, we discuss the advantages and limitations of several fairness optimization methods, such as data re-sampling, adversarial debiasing, and constraint-based learning.
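
To make the abstract's vocabulary concrete, the following minimal Python sketch (illustrative only, not taken from the paper; the function names and toy data are assumptions) computes statistical parity difference and the disparate impact ratio for binary predictions and a binary protected attribute, and derives simple group-label reweighting weights of the kind used by pre-processing re-sampling approaches.

    # Illustrative sketch only (not the paper's code): two bias-detection
    # metrics and a simple pre-processing reweighting, for binary
    # predictions/labels and a binary protected attribute s.
    import numpy as np

    def statistical_parity_difference(y_hat, s):
        """P(y_hat=1 | s=1) - P(y_hat=1 | s=0); 0 indicates parity."""
        y_hat, s = np.asarray(y_hat), np.asarray(s)
        return y_hat[s == 1].mean() - y_hat[s == 0].mean()

    def disparate_impact_ratio(y_hat, s):
        """P(y_hat=1 | s=1) / P(y_hat=1 | s=0); values below 0.8 are
        commonly flagged under the 'four-fifths' rule."""
        y_hat, s = np.asarray(y_hat), np.asarray(s)
        return y_hat[s == 1].mean() / y_hat[s == 0].mean()

    def reweighing_weights(y, s):
        """Pre-processing weights w(s, y) = P(s) * P(y) / P(s, y), which
        make the label statistically independent of the protected
        attribute under the weighted empirical distribution."""
        y, s = np.asarray(y), np.asarray(s)
        w = np.zeros(len(y), dtype=float)
        for g in (0, 1):
            for lbl in (0, 1):
                cell = (s == g) & (y == lbl)
                if cell.any():
                    w[cell] = (s == g).mean() * (y == lbl).mean() / cell.mean()
        return w

    # Toy audit: 8 individuals, 4 per group (s = 1 marks the protected group).
    y_hat = np.array([1, 0, 1, 0, 1, 1, 1, 0])
    s     = np.array([1, 1, 1, 1, 0, 0, 0, 0])
    print(statistical_parity_difference(y_hat, s))  # 0.50 - 0.75 = -0.25
    print(disparate_impact_ratio(y_hat, s))         # 0.50 / 0.75 ≈ 0.67
    print(reweighing_weights(y_hat, s))             # up-weights under-represented cells

In practice, an audit would rely on maintained libraries such as Fairlearn or AIF360, which implement these metrics and reweighing alongside in-processing methods such as adversarial debiasing and constraint-based reductions.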

Published

2025-09-30

Section

Original Research Articles

How to Cite

Rachna. (2025). Bias detection and fairness optimization in machine learning algorithms. International Journal of Artificial Intelligence, Computer Science, Management and Technology, 2(3), 12-16. https://ijacmt.com/index.php/j/article/view/32
