
To Ensure Inclusivity, the Biden Administration Must Double Down on AI Development Initiatives

Miriam Vogel
April 22, 2021

The State of AI and U.S. Global Leadership

A recent report from the National Security Commission on Artificial Intelligence (NSCAI) conveyed a stark assessment: the United States is currently unprepared for the challenges and opportunities presented by the age of artificial intelligence. This finding necessitates a critical examination of two central issues requiring prompt attention.

Specifically, can the U.S. maintain its position as a leading global power if it lags behind in the advancement and implementation of AI? Furthermore, what concrete steps can be taken to alter the current course?

The Risk of Automated Inequality

Without careful oversight, even ostensibly unbiased AI systems possess the potential to reinforce existing societal inequalities and effectively automate discriminatory practices.

Instances of such tech-driven harm have already been observed in areas like credit scoring, healthcare provision, and targeted advertising.

The Need for Clear AI Governance

To keep these issues from recurring and escalating at scale, the Biden administration must clearly articulate how existing legal frameworks apply to AI and machine learning models.

This clarification should address both the evaluation of AI use by private sector entities and the regulation of AI applications within governmental systems.

Positive Steps and Remaining Challenges

The administration has taken positive initial steps, including strategic appointments to key technology policy roles.

The issuance of an executive order on the first day of the administration, establishing an Equitable Data Working Group, has also provided reassurance to those concerned about both U.S. commitment to AI development and the pursuit of digital equity.

However, this progress will be unsustainable without a firm commitment to translating AI funding into tangible resources and establishing the necessary leadership and organizational structures to ensure responsible development and deployment.

Sustained resolve is crucial to safeguard the future of AI and its impact on society.

Shifting Focus in AI Policy and Equity

A substantial shift is underway in federal AI policy, accompanied by stronger commitments to equity in the technology sector. Several key appointments by the Biden administration – including Dr. Alondra Nelson as deputy director at OSTP, Tim Wu at the NEC, and Kurt Campbell (previously a senior advisor) at the NSC – signal a heightened focus on developing inclusive AI with in-house expertise.

The final report from the NSCAI contains recommendations that could be vital for establishing stronger foundations for inclusive AI development. These include the creation of new talent pathways through a U.S. Digital Service Academy, designed to educate both current and prospective employees.

The Proposed Technology Competitiveness Council

The report further suggests the formation of a new Technology Competitiveness Council, to be chaired by the Vice President. This council could be instrumental in maintaining national dedication to AI leadership as a top-level priority.

Positioning Vice President Harris as the administration’s lead on AI is a strategic move, given her collaborative relationship with the President, her understanding of technology policy, and her commitment to civil rights.

Inclusive AI development requires sustained attention and leadership. The proposed structures aim to provide both.

Leading Through Responsible AI Implementation

Artificial intelligence possesses significant potential for enhancing efficiency, demonstrated by its capacity to rapidly analyze large datasets like applicant resumes. However, this power also carries the risk of amplifying existing biases, as exemplified by the Amazon recruitment tool that favored male applicants and the practice of “digital redlining” in credit scoring based on racial factors.

To proactively address these concerns, the Biden administration should issue an executive order. This order would task federal agencies with exploring innovative applications of AI to improve governmental functions.

Furthermore, the order must require rigorous assessments of AI systems utilized by the U.S. Government. These evaluations should confirm that the systems are not inadvertently perpetuating discriminatory results.

A consistent schedule for evaluating AI systems is essential. This will ensure that any embedded biases are identified and mitigated, preventing recommendations that contradict our commitment to democratic and inclusive principles. Regular reevaluation is also crucial, given the continuous evolution and learning capabilities of AI.

Establishing a robust AI governance framework is especially vital within the U.S. Government. This is due to the legal obligation to provide due process protections when denying benefits to citizens.

For example, when AI is employed to determine Medicaid benefit allocation, and those benefits are altered or denied based on algorithmic decisions, the government must be able to clearly articulate the rationale behind the outcome – a concept known as technological due process.

Delegating decisions to automated systems without transparency, clear guidelines, and human oversight risks undermining this fundamental constitutional right.

The administration also holds considerable influence over the implementation of AI safeguards by private sector companies. This influence stems from its substantial procurement power.

Federal contract expenditures were projected to surpass $600 billion in fiscal year 2020, even prior to the inclusion of pandemic-related economic stimulus funding.

The U.S. Government could achieve significant impact by establishing a standardized checklist for the federal procurement of AI systems. This would guarantee a thorough and consistently applied process, incorporating essential civil rights considerations.

Key Considerations for AI Procurement:

  • Bias Detection: Systems must be evaluated for inherent biases (an illustrative check follows this list).
  • Explainability: Decision-making processes should be transparent and understandable.
  • Human Oversight: Maintain human review and intervention capabilities.
  • Due Process: Ensure adherence to legal requirements for benefit allocation.
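
To make the bias detection item concrete, the sketch below shows one common statistical screen – the "four-fifths rule," which compares selection rates across demographic groups – that an agency or vendor might run against a system's outputs during procurement review. It is a minimal illustration in Python with invented data and group labels, not a prescribed federal methodology or a complete audit.

    # Minimal illustrative bias screen: the "four-fifths rule" compares selection
    # rates across groups. Group labels, data, and threshold below are invented.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs; selected is True/False."""
        totals, picked = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            if was_selected:
                picked[group] += 1
        return {g: picked[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Lowest group selection rate divided by the highest; < 0.8 is a common flag."""
        return min(rates.values()) / max(rates.values())

    # Hypothetical outcomes from an AI resume-screening tool
    outcomes = ([("group_a", True)] * 48 + [("group_a", False)] * 52
                + [("group_b", True)] * 30 + [("group_b", False)] * 70)

    rates = selection_rates(outcomes)
    print(rates)                          # {'group_a': 0.48, 'group_b': 0.3}
    print(disparate_impact_ratio(rates))  # 0.625 -> below 0.8, flag for human review

In practice, such a screen would be only one line item on the checklist, paired with explainability documentation, human review, and the due process safeguards discussed above.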

By prioritizing these elements, the U.S. can demonstrate global leadership in the responsible development and deployment of artificial intelligence.

Safeguarding Against AI-Driven Discrimination

The government possesses a significant tool for shielding citizens from the potential harms of AI: its capacity for investigation and prosecution. A directive issued by the executive branch, instructing agencies to define how existing laws and regulations – including the ADA, Fair Housing Act, Fair Lending laws, and the Civil Rights Act – apply when decisions are made using AI systems, could initiate widespread change.

Companies operating within the United States would be strongly incentivized to thoroughly examine their AI systems for biases affecting protected groups.

Individuals with lower incomes are particularly susceptible to the adverse consequences of AI technologies. This vulnerability is especially evident in credit and loan applications, because these individuals often lack access to conventional financial services or the ability to score well under traditional assessment methods.

Consequently, the data derived from these circumstances is frequently utilized in the creation of AI systems that automate these very decisions.

The Consumer Financial Protection Bureau (CFPB) is uniquely positioned to hold financial institutions accountable for discriminatory lending practices that stem from biased AI systems. An executive order would serve as a catalyst for clear statements on how AI-enabled systems will be evaluated.

This would provide companies with notice and enhance public protection through well-defined expectations for AI implementation.

A clear legal pathway exists for holding individuals accountable for discriminatory actions, and a due process violation occurs when public benefits are denied arbitrarily or without justification. In theory, these liabilities and rights should seamlessly extend to situations involving AI systems.

However, a review of agency actions and existing legal precedents – or, more accurately, the absence thereof – suggests this is not currently the case.

The current administration has already demonstrated positive momentum, such as rescinding a proposed HUD rule that would have severely limited legal challenges against discriminatory AI practices. Moving forward, federal agencies with investigative and prosecutorial powers should clarify which AI practices will be subject to review.

They should also specify which existing laws will be applied and by whom – for example, HUD for housing discrimination, the CFPB for credit and lending, and the Department of Labor for AI’s role in hiring, performance evaluations, and terminations.

This proactive approach would also establish valuable precedents for private legal actions and complaints.

The Biden administration has taken promising initial steps to signal its commitment to fostering inclusive and less discriminatory AI. However, it must also prioritize internal reforms.

This includes directing federal agencies to ensure the development, acquisition, and deployment of AI – both internally and by its contractors – is conducted in a manner that safeguards privacy, civil rights, civil liberties, and core American values.