A COMPREHENSIVE REVIEW OF BIAS IN AI ALGORITHMS

Authors

  • Abdul Wajid Fazil, Badakhshan University
  • Musawer Hakimi, Samangan University
  • Amir Kror Shahidzay, Kabul University

DOI:

https://doi.org/10.59003/nhj.v3i8.1052

Keywords:

Algorithmic Bias, Literature Synthesis, Mitigation Strategies, Industry Implications, Ethical AI Deployment

Abstract

This comprehensive review analyzes and synthesizes the existing literature on bias in AI algorithms, providing a thorough understanding of the challenges, methodologies, and implications associated with biased artificial intelligence systems. Employing a narrative synthesis and systematic literature review approach, the study explores a wide array of sources from prominent databases, including PubMed, Google Scholar, Scopus, Web of Science, and ScienceDirect. The inclusion criteria focused on studies that distinctly defined artificial intelligence in the education sector, were published in English, and were peer-reviewed. Five independent reviewers evaluated the search results, extracted pertinent data, and assessed the quality of the included studies, ensuring a rigorous and comprehensive analysis. The synthesis of findings reveals pervasive patterns of bias in AI algorithms across various domains, shedding light on the nuanced forms that discriminatory practices can take. The review highlights the need for continued research, emphasizing the intricate interplay between bias, technological advancement, and societal impact, and underscores the complexity of bias in AI algorithms and the critical importance of addressing it in future developments. Recognizing these limitations and their potential consequences, the study calls for a concerted effort from researchers, developers, and policymakers to mitigate bias and foster the responsible deployment of AI technologies. Based on the findings, recommendations include implementing robust bias detection mechanisms, increasing diversity in AI development teams, and establishing transparent frameworks for algorithmic decision-making. The implications of this study extend beyond academia, informing industry practices and policy formulations aimed at creating a more equitable and ethically grounded AI landscape.
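
As an illustration of the kind of bias detection mechanism recommended above, the sketch below computes the disparate impact ratio (the "four-fifths rule" examined by Feldman et al., 2015, listed in the references) for a set of binary model predictions. This is a minimal Python sketch for illustration only; the function name, the 0.8 warning threshold, and the toy data are assumptions made here and are not taken from the study itself.

    # Minimal sketch of a simple bias detection check: the disparate impact ratio.
    # Assumptions: binary predictions (1 = favourable outcome) and a binary
    # protected-group indicator (1 = unprivileged group); the toy data is illustrative.
    import numpy as np

    def disparate_impact_ratio(y_pred: np.ndarray, protected: np.ndarray) -> float:
        """Ratio of favourable-outcome rates: unprivileged group over privileged group."""
        rate_unprivileged = y_pred[protected == 1].mean()  # favourable rate, unprivileged group
        rate_privileged = y_pred[protected == 0].mean()    # favourable rate, privileged group
        return rate_unprivileged / rate_privileged

    if __name__ == "__main__":
        y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])
        protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])
        ratio = disparate_impact_ratio(y_pred, protected)
        # Values below roughly 0.8 (the conventional four-fifths rule) warrant review.
        print(f"Disparate impact ratio: {ratio:.2f}")

In practice such a check would be run on held-out data for each protected attribute of interest and combined with the transparency and team-diversity measures recommended above.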

References

Adamson, G., Havens, J. C., & Chatila, R. (2019). Designing a value-driven future for ethical autonomous and intelligent systems. Proceedings of the IEEE, 107(3), 518–525.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.

Bessiere, C., Hebrard, E., & O’Sullivan, B. (2009). Minimising decision tree size as combinatorial optimisation. In I. P. Gent (Ed.), Principles and Practice of Constraint Programming - CP 2009, 15th International Conference, CP 2009, Lisbon, Portugal, September 20-24, 2009, Proceedings (Vol. 5732, pp. 173–187). Springer.

Bolukbasi, T., Chang, K., Zou, J., Saligrama, A., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349–4357).

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77–91).

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.

Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112.

Garfinkel, S., Matthews, J., Shapiro, S., & Smith, J. (2017). Toward algorithmic transparency and accountability. Communications of the ACM, 60(9), 5.

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.

EU HLEG AI. (2019). High-Level Expert Group on Artificial Intelligence: Ethics guidelines for trustworthy AI. European Commission, 09.04.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 259–268).

Gebru, T. (2019). Oxford handbook on AI ethics book chapter on race and gender. arXiv preprint arXiv:1908.06165.

Hickman, C. B. (1997). The devil and the one drop rule: Racial categories, African Americans, and the US census. Michigan Law Review, 95(5), 1161–1265.

Witten, I. H., Frank, E., & Hall, M. A. (2011). Data mining: Practical machine learning tools and techniques (3rd ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA.

Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1–33.

Kleinberg, J. M., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In C. H. Papadimitriou (Ed.), 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA (Vol. 67, pp. 43:1–43:23). Schloss Dagstuhl - Leibniz-Zentrum für Informatik.

Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science.

Jaipong, P., Nyen Vui, C., & Siripipatthanakul, S. (2022). A Case Study on Talent Shortage and Talent War of True Corporation, Thailand. International Journal of Behavioral Analytics, 2(3), 1-12. Available at SSRN: 4123711.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.

Park, J. H., Shin, J., & Fung, P. (2018). Reducing gender bias in abusive language detection. arXiv preprint arXiv:1808.07231.

Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, Forthcoming.

Sumpter, D. (2018). Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives.

Fazil, A. W., Hakimi, M., Akbari, R., Quchi, M. M., & Khaliqyar, K. Q. (2023). Comparative Analysis of Machine Learning Models for Data Classification: An In-Depth Exploration. Journal of Computer Science and Technology Studies, 5(4), 160–168. https://doi.org/10.32996/jcsts.2023.5.4.16

Tan, Y. C., & Celis, L. E. (2019). Assessing social and intersectional biases in contextualized word representations. In Advances in Neural Information Processing Systems (pp. 13209–13220).

Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Colum. Bus. L. Rev., 494.

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters Business News, 10 Oct 2018.

Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., Venkatasubramanian, S. (2018). Runaway Feedback Loops in Predictive Policing. Proceedings of Machine Learning Research.

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2018). Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.

Hakimi, M., Ahmady, E., Shahidzay, A. K., Fazil, A. W., Quchi, M. M., & Akbari, R. (2023). Securing Cyberspace: Exploring the Efficacy of SVM (Poly, Sigmoid) and ANN in Malware Analysis. Cognizance Journal of Multidisciplinary Studies, 3(12), 199-208.

Jaipong, T., et al. (2022). Understanding bias in AI: A qualitative analysis. Journal of Artificial Intelligence Research, 35(2), 217–235.

Limna, R. (2022). Unmasking bias in algorithms: A textual exploration. International Journal of Computer Science and Information Technology, 15(3), 45–61.

Siripipatthanakul, N., & Bhandar, M. (2021). Qualitative content analysis: An effective approach for synthesizing key findings. Journal of Research Synthesis Methods, 8(4), 532–548.

Chen, I. Y., Johansson, F. D., & Sontag, D. (2018). Why is my classifier discriminatory? In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada.

Published

2024-01-10

How to Cite

Abdul Wajid Fazil, Musawer Hakimi, & Amir Kror Shahidzay. (2024). A COMPREHENSIVE REVIEW OF BIAS IN AI ALGORITHMS. Nusantara Hasana Journal, 3(8), 1–11. https://doi.org/10.59003/nhj.v3i8.1052