
Advocating for the Endorsement of AI systems that combat Human Prejudice by Regulatory Bodies in the U.S.

Regulatory Bodies in the U.S. Should Favor the Implementation of Artificial Intelligence Designed to Combat Prejudice Against Humans


In a bid to address the growing use of artificial intelligence (AI) in education, U.S. regulators have outlined a comprehensive plan to reduce bias, ensure successful adoption, and maintain public trust. The proposed approach, built around ten key principles, is a response to the Boston public school system's controversial algorithmic system for school busing and reconfigured school start times.

The plan, spearheaded by the Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), Department of Justice's Civil Rights Division (DOJ), and Equal Employment Opportunity Commission (EEOC), aims to embed transparency and explainability in AI systems, proactively mitigate bias, involve diverse stakeholders in development, and establish clear accountability and governance structures.

One of the key aspects of the plan is to design AI systems with clear explainability and interpretability features. This will help users and regulators understand how decisions are made, reducing suspicion and fostering trust. Regulators must also enforce proactive and continuous measures to detect and mitigate bias, including conducting equity assessments, using diverse, representative data sets, and testing AI systems for discriminatory outcomes regularly.
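One way to make "testing AI systems for discriminatory outcomes regularly" concrete is an adverse-impact check such as the four-fifths rule used in employment-selection auditing. The sketch below is illustrative only; the group names, counts, and 0.8 threshold are assumptions, not part of the regulators' plan.

```python
# Minimal sketch of a disparate-impact audit using the "four-fifths rule":
# a group's selection rate should be at least 80% of the rate of the
# most-favored group. All data here is made up for illustration.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True per group if its rate clears threshold * best rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top >= threshold for g, r in rates.items()}

# Hypothetical audit data: group -> (applicants selected, applicants total)
audit = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(audit))
# group_b's rate (0.30) is only 62.5% of group_a's (0.48), so it is flagged
```

A real equity assessment would also test statistical significance and intersectional subgroups, but the ratio test above is the kind of regularly scheduled check the plan envisions.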

Involving diverse domain experts, independent parties, and affected communities in AI system development is another crucial aspect of the plan. This will ensure multiple perspectives and reduce unintended bias before deployment. AI systems must also undergo thorough pre-deployment testing, risk assessment, and mitigation strategies, with continuous post-deployment monitoring to track performance, bias, and risks.
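The "continuous post-deployment monitoring" step could be sketched as a drift check: compare live fairness metrics against their pre-deployment baselines and flag any that move beyond a tolerance. The metric names, values, and tolerance below are hypothetical.

```python
# Hypothetical post-deployment monitoring sketch: alert when a live
# fairness metric drifts beyond a tolerance from its pre-deployment
# baseline. Metric names and numbers are illustrative assumptions.

def drifted_metrics(baseline, live, tolerance=0.05):
    """Return {metric: (baseline, live)} for metrics outside tolerance."""
    return {m: (baseline[m], live[m])
            for m in baseline
            if abs(live[m] - baseline[m]) > tolerance}

baseline = {"approval_rate_gap": 0.02, "false_positive_gap": 0.03}
live = {"approval_rate_gap": 0.09, "false_positive_gap": 0.04}
print(drifted_metrics(baseline, live))
# only approval_rate_gap (0.02 -> 0.09) exceeds the 0.05 tolerance
```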

The plan also calls for the creation of dedicated governance roles and cross-functional oversight teams, comprehensive AI lifecycle policies, regular reporting and public communication, citizen feedback and advisory mechanisms, and the use of regulatory sandboxes and controlled testing environments.

The Boston public school system's algorithmic solution for school busing was called a "marvel" by the Boston Globe, promising cost savings, environmental benefits, and health improvements. However, the system was scrapped after public pushback; the primary concerns were disruptive changes to school schedules, such as earlier bell times for some elementary school students and conflicts with extracurricular activities for some high school students.

The controversy surrounding the Boston school system's algorithm highlights the need for a balanced approach to AI adoption. Regulators should tone down the unhelpful rhetoric that depicts AI as a threat to civil rights and instead treat this emerging technology with an even hand. By systematically applying these principles, U.S. regulators can reduce AI bias, encourage responsible innovation, and maintain public confidence in AI technologies in critical sectors such as healthcare, justice, and finance.

The American Academy of Pediatrics recommends that teenagers not start their school day before 8:30 AM, but only about 17 percent of U.S. high schools comply. Had it been implemented, the algorithm might have produced a more equitable system than the one it replaced, with a majority of students in every ethnic group enjoying start times in the desirable 8:00 AM to 9:00 AM window.
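The equity claim above can be framed as a simple computable check: for each group, what share of students has a start time inside the 8:00-9:00 AM window? The groups and times below are made-up illustration, not Boston's actual data.

```python
from datetime import time

# Illustrative check of the "majority in every group starts between
# 8:00 and 9:00 AM" criterion. All group data here is invented.

WINDOW_START, WINDOW_END = time(8, 0), time(9, 0)

def share_in_window(start_times):
    """Fraction of a group's start times inside the desirable window."""
    hits = sum(WINDOW_START <= t <= WINDOW_END for t in start_times)
    return hits / len(start_times)

groups = {
    "group_a": [time(8, 15), time(8, 30), time(7, 30)],
    "group_b": [time(8, 45), time(9, 30), time(8, 10)],
}
equitable = all(share_in_window(ts) > 0.5 for ts in groups.values())
print(equitable)  # True: each group has 2 of 3 starts in the window
```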

The pushback against the Boston public school system's algorithm underscores the importance of involving the public in choosing among multiple trade-offs. AI systems should provide flexibility to involve the public in these decisions, fostering transparency and promoting trust. Ellen Goodman, a Rutgers law professor, describes the pushback as "algorithmic scapegoating," emphasizing the need for a balanced approach to AI adoption that considers the potential benefits and drawbacks.

The DOJ can use its visibility and platform within the civil rights community to engage with communities to help AI tools gain legitimacy. U.S. regulatory agencies can also help by identifying and amplifying automated tools that reduce bias in employment decisions (EEOC) and lending (CFPB). By respecting state-level regulatory experimentation and fostering ongoing education and competency building, U.S. regulators can continue to evolve their approach to AI governance, aligning with the White House's AI Bill of Rights and best practices in the field.


  1. The plan, led by the FTC, CFPB, DOJ, and EEOC, emphasizes the importance of creating AI systems with clear explainability and interpretability, to foster trust and understanding among users and regulators.
  2. To minimize bias in AI systems, regulators must enforce proactive and continuous measures, such as conducting equity assessments, using diverse data sets, and regularly testing AI systems for discriminatory outcomes.
  3. Engaging diverse domain experts, independent parties, and affected communities in AI system development is crucial for reducing unintended bias and ensuring multiple perspectives in the AI governance approach.
