Google has changed its terms to clarify that customers can deploy its generative AI tools to make “automated decisions” in “high-risk” domains, like healthcare, so long as there’s a human in the loop.
According to the company’s updated Generative AI Prohibited Use Policy, published on Tuesday, customers may use Google’s generative AI to make “automated decisions” that could have a “material detrimental impact on individual rights.” Provided that a human supervises in some capacity, customers can use Google’s generative AI to make decisions about employment, housing, insurance, social welfare, and other “high-risk” areas.
In the context of AI, automated decisions are decisions an AI system makes based on both factual and inferred data. A system might make an automated decision to award a loan, for example, or to screen a job candidate.
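For illustration only, here is a minimal sketch in Python of what a fully automated decision could look like: a hypothetical loan screener that approves or rejects applicants with no human in the loop. The class, thresholds, and scores are invented for the example and don’t reflect any real lender’s or vendor’s system.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int             # factual data, e.g. pulled from a credit bureau
    inferred_default_risk: float  # inferred data, produced by a model

def automated_loan_decision(applicant: Applicant) -> str:
    """Approve or reject with no human review: an automated decision."""
    if applicant.credit_score >= 650 and applicant.inferred_default_risk < 0.2:
        return "approve"
    return "reject"

print(automated_loan_decision(Applicant(credit_score=700, inferred_default_risk=0.1)))
# -> approve
```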
The previous draft of Google’s terms implied a blanket ban on high-risk automated decision making involving the company’s generative AI. But Google tells TechCrunch that customers could always use its generative AI for automated decision making, even in high-risk applications, as long as a human was supervising.
“The human supervision requirement was always in our policy, for all high-risk domains,” a Google spokesperson said when reached for comment via email. “[W]e’re recategorizing some items [in our terms] and calling out some examples more explicitly to be clearer for users.”
Google’s top AI rivals, OpenAI and Anthropic, have more stringent rules governing the use of their AI in high-risk automated decision making. For example, OpenAI prohibits the use of its services for automated decisions relating to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to be used in law, insurance, healthcare, and other high-risk areas for automated decision making, but only under the supervision of a “qualified professional” — and it requires customers to disclose they’re using AI for this purpose.
AI that makes automated decisions affecting individuals has attracted scrutiny from regulators, who’ve expressed concerns about the technology’s potential to produce biased outcomes. Studies show, for example, that AI used to make decisions such as approving credit and mortgage applications can perpetuate historical discrimination.
The nonprofit Human Rights Watch has called for a ban on “social scoring” systems in particular, which the organization says threaten to disrupt people’s access to social security support, compromise their privacy, and profile them in prejudicial ways.
Under the AI Act in the EU, high-risk AI systems, including those that make individual credit and employment decisions, face the most oversight. Providers of these systems must register in a database, perform quality and risk management, employ human supervisors, and report incidents to the relevant authorities, among other requirements.
In the U.S., Colorado recently passed a law mandating that AI developers disclose information about “high-risk” AI systems, and publish statements summarizing the systems’ capabilities and limitations. New York City, meanwhile, prohibits employers from using automated tools to screen a candidate for employment decisions unless the tool has been subject to a bias audit within the prior year.
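Such audits generally report each group’s selection rate and its ratio to the highest group’s rate. As a rough sketch of that arithmetic, with invented numbers, the snippet below flags ratios under the four-fifths rule of thumb from U.S. employment guidance; the flag threshold is illustrative, not a requirement of the New York City law.

```python
# Illustrative bias-audit arithmetic: per-group selection rates and impact
# ratios against the highest-rate group. All numbers are made up.
selected = {
    "group_a": (48, 100),  # (candidates advanced, candidates screened)
    "group_b": (30, 100),
}

rates = {group: advanced / total for group, (advanced, total) in selected.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```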
In short, Google’s updated terms permit customers to use its artificial intelligence (AI) technology in “high-risk” domains, such as healthcare and finance, provided that there is human supervision. Rather than a shift in approach, the update makes explicit a policy the company says has been in place all along.
Background
Google has been a leader in the development of AI technology, with a range of products and services built on machine learning and other AI techniques. The company has also been cautious about the risks and consequences of AI, particularly in high-risk domains such as healthcare and finance.
High-Risk Domains
High-risk domains are areas where the consequences of AI errors or failures could be severe. In healthcare, for example, AI errors could lead to misdiagnoses or inappropriate treatments. In finance, AI errors could lead to significant financial losses or instability.
Google’s Approach
Google’s approach to AI in high-risk domains centers on human supervision. The company’s position is that AI can be a powerful tool in these domains, but only when paired with human oversight and review, a pattern sketched in the example below.
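As a rough sketch of that pattern (this is not Google’s API; the scoring function, threshold, and queue are placeholders), a human-in-the-loop flow keeps the model in an advisory role and routes every high-risk decision through a person:

```python
from queue import Queue

review_queue: Queue = Queue()

def model_assess(case: dict) -> dict:
    """Stand-in for a generative AI system's assessment; score is illustrative."""
    score = 0.35
    return {"recommendation": "approve" if score < 0.5 else "reject", "score": score}

def propose(case: dict) -> None:
    # The AI only recommends; nothing is final until a human signs off.
    review_queue.put({"case": case, "ai": model_assess(case)})

def human_review(reviewer: str) -> dict:
    item = review_queue.get()
    # A real reviewer would inspect the evidence and may override the AI.
    item["final_decision"] = item["ai"]["recommendation"]
    item["signed_off_by"] = reviewer
    return item

propose({"applicant_id": 123})
print(human_review("reviewer@example.com"))
```

The design point is that the model’s output is a recommendation object, not an action; only the human review step produces a final, signed-off decision.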
Google’s AI Principles
Google’s approach to AI is guided by a set of published principles that emphasize responsibility, transparency, and accountability. These principles include:
1. Be socially beneficial: AI should be used in ways that benefit society.
2. Avoid creating or reinforcing unfair bias: Systems should be designed and evaluated so they do not create or reinforce unfair bias.
3. Be built and tested for safety: AI should be designed and tested to be safe and reliable.
4. Be accountable to people: There should be clear lines of responsibility and accountability for AI systems.
5. Incorporate privacy design principles: Systems should be built with privacy in mind, including data minimization and transparency.
Benefits of Human Supervision
Human supervision is essential in high-risk domains, where the consequences of AI errors or failures could be severe. It provides several benefits, including:
1. Improved accuracy: A human reviewer adds a layer of scrutiny that can catch model mistakes before they take effect.
2. Reduced risk: Escalating uncertain or consequential cases to a person makes it less likely that an error reaches the individual affected.
3. Increased transparency: A reviewer can document how and why each AI-assisted decision was made.
4. Accountability: A named person remains responsible for every decision.
Challenges and Limitations
While human supervision is essential in high-risk domains, there are also challenges and limitations to consider. These include:
1. Scalability: Human supervision can be time-consuming and resource-intensive, making it challenging to scale.
2. Expertise: Human supervision requires expertise in the relevant domain, which can be a challenge to find.
3. Bias: Human supervision can also introduce bias, particularly if the human supervisors are not diverse or representative of the population.
On balance, human supervision offers real benefits in high-risk domains: improved accuracy, reduced risk, increased transparency, and accountability. The challenges of scalability, expertise, and reviewer bias are real, but they underscore why oversight requirements like Google’s matter as AI continues to evolve.
Google’s clarified policy has benefits for a range of stakeholders, from its customers to the wider AI community and society at large.
Benefits for Customers
For customers, the advantages largely mirror the benefits of human supervision outlined above: more accurate and reliable decisions, transparency into how AI-assisted decisions are made, lower risk of errors with severe consequences, and clear accountability for outcomes in high-risk domains.
Benefits for Google
1. Increased Adoption: By allowing customers to use its AI technology in high-risk domains, Google can increase adoption and revenue from its AI offerings.
2. Improved Reputation: Google’s decision to prioritize human supervision in high-risk domains demonstrates its commitment to responsible AI development and deployment.
3. Enhanced Collaboration: Supporting customers in high-risk domains can deepen collaboration and spur innovation in AI research and development.
4. Increased Competitiveness: By offering AI solutions in high-risk domains, Google can differentiate itself from competitors and establish itself as a leader in AI innovation.
Benefits for the AI Community
1. Advancements in AI Research: Google’s decision to allow AI in high-risk domains can drive advancements in AI research, particularly in areas such as explainability, transparency, and accountability.
2. Increased Adoption of AI in High-Risk Domains: Google’s example can encourage other companies to deploy AI in these areas, spurring further innovation and progress.
3. Improved AI Governance: Google’s decision to prioritize human supervision in high-risk domains can inform the development of AI governance frameworks and regulations.
4. Enhanced Public Trust in AI: Google’s decision to prioritize human supervision and accountability in high-risk domains can help build public trust in AI systems.
Benefits for Society
1. Improved Healthcare Outcomes: Supervised AI in healthcare can support more accurate diagnoses and more effective treatments.
2. Increased Financial Stability: Supervised AI in finance can sharpen risk assessments and portfolio management.
3. Enhanced Public Safety: Supervised AI can improve emergency response systems and crime prevention strategies.
4. Increased Economic Growth: Supervised AI across industries can make supply chains and business operations more efficient.
Conclusion
Google’s clarified policy, allowing customers to use its AI technology in high-risk domains under human supervision, has significant benefits for customers, Google, the AI community, and society as a whole. By making human supervision and accountability the condition of use, Google can help build trust in AI systems and support continued advances in AI research and development.