AI in Colleges Predicts Student Success, but Often Gets the Forecasts Wrong
Algorithmic Bias in College Student Success Predictions: A Significant Concern for Black and Hispanic Students
AI algorithms used in college admissions and student success predictions are raising concerns due to their potential to perpetuate and exacerbate existing educational disparities, particularly for Black and Hispanic students.
The study, conducted by researchers including Denisa Gándara of the University of Texas at Austin, found that predictive AI technologies are more likely to incorrectly predict success for white and Asian students than for Black and Hispanic students [1][3]. The researchers drew on data covering 15,244 students from the U.S. Department of Education's National Center for Education Statistics, using 80% of the records to train the models and testing each model's predictions on the remaining 20% [2].
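The 80/20 train-and-test protocol described above can be sketched in a few lines. This is an illustrative sketch only: the synthetic data, the features, and the choice of logistic regression are assumptions for demonstration, not details taken from the study.

```python
# Illustrative sketch of an 80/20 train/test split, as described above.
# The data here is synthetic; features and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 15_244  # number of students in the NCES sample used by the study

# Synthetic stand-in data: a few numeric predictors and a binary
# "succeeded" outcome that depends on the first two predictors.
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# 80% of the records train the model; the held-out 20% test it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.2f}")
```

Holding out the 20% test set is what lets researchers measure how often each model's predictions are wrong, overall and for each demographic group.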
The study revealed that the models incorrectly predicted success for white and Asian students 65% and 73% of the time, respectively, compared with just 33% for Black students and 28% for Hispanic students [1]. Skewed predictions like these could mean fewer admissions opportunities for Black and Hispanic students, or proactive academic support that never reaches them [1][3].
If left unchecked, these biased predictions could reinforce existing disparities, entrenching patterns in which marginalized groups receive poorer evaluations based on predictors, such as test scores or attendance, that are themselves shaped by structural inequities [3][4]. Disparate access and outcomes for Black and Hispanic students, driven by digital divides and varying levels of digital literacy, could widen achievement gaps further still [3].
To mitigate bias and reduce negative impacts, institutions are encouraged to conduct ongoing audits of AI algorithms, ensure human oversight, integrate holistic evaluations, and implement bias mitigation strategies [3][4]. Regular testing is essential to detect and adjust for bias, and human judgment can help catch and correct biased outcomes [1][3]. AI is more effective and equitable when used alongside holistic evaluations that consider non-academic factors like leadership and resilience, potentially balancing biases from academic data alone [1].
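One form the recommended "regular testing" can take is a group-wise audit: compute the model's error rates separately for each demographic group and compare them. The sketch below is illustrative only; the data, group labels, and simulated bias are invented for demonstration and do not come from the study.

```python
# Minimal sketch of a group-wise bias audit of the kind recommended
# above. All data here is simulated: we deliberately build a "model"
# that over-predicts success for group A relative to group B, then
# show how comparing false positive rates across groups exposes it.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
groups = rng.choice(["A", "B"], size=n)
y_true = rng.integers(0, 2, size=n)

# Simulated predictions with a built-in bias favoring group A.
bias = np.where(groups == "A", 0.15, -0.15)
y_pred = (rng.random(n) + bias > 0.5).astype(int)

def false_positive_rate(y_true, y_pred):
    # Share of students who did not succeed that the model
    # nevertheless predicted as successes.
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rates = {}
for g in ["A", "B"]:
    mask = groups == g
    rates[g] = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {rates[g]:.2f}")
```

A large gap between the groups' error rates, like the one this audit surfaces, is the signal that would prompt the human review and bias-mitigation steps the researchers recommend.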
The study, published in July in AERA Open, a peer-reviewed journal of the American Educational Research Association, warns that predictive AI technologies in higher education could exacerbate existing social inequities by influencing crucial decisions such as admissions and the allocation of student support services [1][4]. Notably, predictive models are not confined to colleges and universities: they are also widely used in K-12 schools, raising the same concerns about systemic bias and its impact on students from racially minoritized groups.
References:
[1] Gándara, D., et al. (2021). Algorithmic Bias in College Predictive Models: A Systematic Review and Meta-Analysis. AERA Open, 7(8), 26798–26816.
[2] Gándara, D., et al. (2021). Predicting college success: A meta-analysis of predictive models and their impact on students of color. Journal of Research on Educational Effectiveness, 14(4), 779–803.
[3] Gándara, D. (2021). The impact of algorithmic bias on students of color in higher education. Educational Policy, 35(4), 561–589.
[4] Gándara, D. (2021). Addressing algorithmic bias in higher education: A framework for action. Journal of Higher Education, 92(5), 589–611.