
Google AI in High-Risk Domains: Human Supervision Required

December 17, 2024

Google Clarifies AI Decision-Making in High-Risk Areas

Recent updates to Google’s terms of service clarify that customers can use its generative AI tools to make “automated decisions” in sensitive areas such as healthcare, so long as there is appropriate human oversight.

Updated Generative AI Policy

The company’s revised Generative AI Prohibited Use Policy, published on Tuesday, states that Google’s generative AI may be used to make “automated decisions” that could have a “material detrimental impact on individual rights.”

Human supervision is the key requirement. Provided a human is in the loop, customers can use Google’s generative AI to make decisions about employment, housing, insurance, social welfare, and other “high-risk” areas.

Understanding Automated Decisions

In AI, “automated decisions” are decisions an AI system makes based on both factual and inferred data. A system might, for example, decide whether to grant a loan, or screen candidates for a job opening.

Previous Terms and Clarification

The earlier version of Google’s terms appeared to impose a blanket ban on high-risk automated decision-making with its generative AI. But Google told TechCrunch that customers were always allowed to use its generative AI for such decisions, as long as a human supervised them.

A Google spokesperson said via email that the human supervision requirement was always part of the policy for all high-risk areas, and that the update simply recategorizes some terms and spells out examples more explicitly for users.

Comparison with Competitors

Google’s top AI rivals, OpenAI and Anthropic, have stricter rules governing the use of their AI in high-risk automated decision-making.

  • OpenAI expressly forbids the use of its services for automated decisions related to credit, employment, housing, education, social scoring, and insurance.
  • Anthropic permits the use of its AI in fields like law, insurance, and healthcare, but only under the guidance of a “qualified professional,” and mandates disclosure of AI usage for these purposes.

Regulatory Scrutiny and Potential Bias

AI systems that make automated decisions affecting individuals face growing scrutiny from regulators, who have raised concerns about the technology’s potential to bias outcomes.

Research indicates that AI used in areas like credit and mortgage application approvals can inadvertently perpetuate existing discriminatory practices.

Concerns Regarding Social Scoring

The nonprofit Human Rights Watch has called for a ban on “social scoring” systems, arguing that they threaten to disrupt people’s access to social security support, compromise their privacy, and profile them in prejudicial ways.

EU AI Act

The EU’s AI Act subjects high-risk AI systems, including those involved in credit and employment decisions, to stringent oversight. Providers must register in a database, implement quality and risk management protocols, employ human supervisors, and report incidents to relevant authorities.

US State Regulations

In the United States, Colorado has recently enacted legislation requiring AI developers to disclose information about “high-risk” AI systems and publish summaries of their capabilities and limitations.

New York City has also implemented a law prohibiting employers from using automated tools for candidate screening without a bias audit conducted within the preceding year.

