Study Finds AI Chatbots Advise Women to Seek Lower Salaries Than Men
In a new study, researchers from German universities, led by Sorokovikova, have documented persistent gender and ethnic bias in AI models such as ChatGPT and Claude when they give salary negotiation advice. The study, published as a preprint on arXiv and not yet peer-reviewed, raises concerns that biased advice on individual short-term decisions could compound over the long term.
The researchers tested five major language models, including ChatGPT, in three scenarios, one of which was salary advice. The findings suggest that millions of AI assistant users may be unknowingly receiving biased advice, potentially leading women and minorities to settle for less. In the salary advice scenario, the models consistently advised women to ask for lower salaries than men, with gaps often around 10%.
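The probing setup described above can be sketched as follows. Note that `query_model` is a hypothetical stand-in for a real chat API, stubbed here with illustrative numbers that mirror the roughly 10% gap the study reports; the prompt wording is also an assumption, not the paper's actual protocol.

```python
# Sketch of a persona-swap probe for salary advice. query_model() is a
# hypothetical stub; a real probe would call an LLM API and parse the
# advised salary out of the free-text reply.

PROMPT = "I am a {persona} applying for a senior developer role. What salary should I ask for?"

def query_model(prompt: str) -> float:
    """Stubbed model: returns a fixed advised salary per persona,
    chosen to mirror the ~10% gap reported in the study."""
    return 90_000.0 if "woman" in prompt else 100_000.0

def advised_gap(persona_a: str, persona_b: str) -> float:
    """Relative gap in advised salary between two otherwise identical personas."""
    a = query_model(PROMPT.format(persona=persona_a))
    b = query_model(PROMPT.format(persona=persona_b))
    return (b - a) / b

print(f"Advised salary gap: {advised_gap('woman', 'man'):.0%}")  # 10% with the stub
```

Because everything except the persona is held fixed, any systematic difference in the parsed salaries can be attributed to the persona swap rather than the task.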
Despite the development of inference-time debiasing methods such as Dynamic Activation Steering (DAS) and Dynamic Steering Vectors (DSVs), the study finds that these techniques, while measurably reducing bias during output generation, do not yet fully prevent gender and ethnic bias in sensitive socio-economic decisions.
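Inference-time steering methods of this family generally work by nudging a model's hidden activations along a learned direction at generation time. A minimal numpy sketch of that core operation (the vector, dimensionality, and coefficient below are illustrative, not the paper's actual method):

```python
import numpy as np

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a hidden-state vector along a unit-normalized steering direction.
    In activation-steering methods the direction is learned to point away
    from the unwanted behavior and is added during inference."""
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

# Illustrative values only: a 4-dim "hidden state" and an arbitrary direction.
h = np.array([0.5, -1.0, 2.0, 0.0])
d = np.array([0.0, 3.0, 0.0, 4.0])   # norm 5, so unit = [0, 0.6, 0, 0.8]
h_steered = steer(h, d, alpha=2.0)
print(h_steered)  # [0.5  0.2  2.0  1.6]
```

The study's observation that such interventions reduce but do not eliminate bias fits this picture: a single additive direction can only partially counteract behavior distributed across many layers and features.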
The biases are also intersectional: personas combining several attributes, such as "female Hispanic refugee," received markedly worse salary advice than personas such as "male Asian expatriate," highlighting the difficulty of fully removing bias that stems from societal inequalities reflected in training data.
To address this, the team calls for deeper debiasing methods that target socio-economic outputs rather than only overtly hateful language. They point to data-centric debiasing with curated, balanced datasets; algorithmic fairness frameworks; and better explainability of where bias arises. Complete fairness in salary negotiation advice from language models remains elusive, and the AI community recognises this as an ongoing challenge requiring transparency, vigilance, and interdisciplinary work.
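One common starting point for the data-centric debiasing mentioned above is counterfactual augmentation: pairing each training example with a gender-swapped counterpart so the dataset no longer associates one gender with particular outcomes. A toy sketch (the swap table is illustrative; real pipelines use curated lexicons and handle grammar properly):

```python
# Toy sketch of counterfactual data augmentation for gendered terms.
# The swap table is illustrative only; production pipelines use curated
# lexicons and morphological handling rather than naive word swaps.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def swap_gender(text: str) -> str:
    """Return a counterfactual copy of `text` with gendered words swapped."""
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def augment(dataset: list[str]) -> list[str]:
    """Pair every example with its gender-swapped counterfactual."""
    return dataset + [swap_gender(s) for s in dataset]

data = ["she asked for a raise", "he negotiated his salary"]
print(augment(data))
```

After augmentation, every salary-related sentence appears once per gender, so a model trained on the balanced set has no statistical reason to tie negotiation outcomes to gendered words.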
As AI assistants increasingly become embedded into various sectors like hiring and healthcare, ignoring AI bias is no longer an option. Each bias that slips through becomes a seed for the next generation of models, making reducing AI bias a complex and ongoing task. Fairness in AI will not arrive as a single software patch but can be won incrementally through vigilance, transparency, and the refusal to accept "close enough" when real paychecks are on the line.
The extent to which companies have addressed AI bias is unclear, but one thing is certain: the fight against AI bias is far from over.