Navigating the Emerging Regulatory Landscape for AI: A Comprehensive Guide for 2023

By: Lucosky Brookman
Introduction

The regulatory landscape for artificial intelligence (AI) in the United States has largely been a patchwork of state-level initiatives, with no comprehensive federal oversight. However, this is set to change in 2023 as several states, including Colorado, Connecticut, Virginia, and potentially California, as well as New York City, are introducing new regulations that will directly impact AI governance. Companies operating in these jurisdictions must be proactive in updating their compliance programs to adapt to these new legal frameworks.

The New Regulatory Paradigm

States Taking the Lead: Colorado, Connecticut, and Virginia

In the past year, Colorado, Connecticut, and Virginia have passed laws that focus on automated decision-making processes. These laws share common elements and introduce two key mandates:

  1. Consumer Opt-Out Rights: The laws grant consumers the right to opt out of automated decisions in sectors such as financial services, housing, insurance, education, criminal justice, employment, healthcare, and access to essential goods or services. In Colorado, a business may be exempt from honoring an opt-out request if a human exercises meaningful oversight over the automated decision and the business can provide the consumer with a detailed explanation of the decision-making process.
  2. Data Protection Assessments: Companies are required to perform and document risk assessments before deploying AI or other algorithmic tools that could pose a foreseeable risk to consumers. These risks include unfair or deceptive treatment, financial or physical harm, intrusion into personal privacy, and other forms of substantial injury.

California’s Pending Regulations

The California Privacy Rights Act of 2020 laid the groundwork for AI regulation in the state, although specific rules on automated decision-making have yet to be released. The California Privacy Protection Agency is expected to finalize these rules, but the timeline remains uncertain.

New York City’s Approach

Effective January 1, 2023, New York City requires employers and employment agencies that use automated decision-making tools for employment screening to conduct annual bias audits. These audits must be performed by an independent auditor and must assess potential biases related to race, ethnicity, and sex.

Existing State Laws on AI

California and Illinois have already enacted laws that touch on AI. California prohibits deceptive use of chatbots, while Illinois requires employers to notify and obtain consent from applicants when AI is used in video job interviews.

Compliance Steps for Businesses

  1. Inventory of Automated Systems: The first step is to identify all AI and automated decision-making systems used in the company. This may require creating an inventory from scratch.
  2. Risk Assessment: Companies should conduct a thorough risk assessment for each automated system, which requires understanding the system's architecture, including both its open-source and proprietary components.
  3. Risk Mitigation: Once risks are identified, companies should take reasonable steps to mitigate them. This includes providing explanations for decisions made by AI systems and conducting bias audits.
  4. Documentation: Companies must maintain records of data protection assessments and bias audits. This documentation could serve as evidence of due diligence in case of legal challenges.

Global Trends in AI Regulation

The U.S. is not alone in its efforts to regulate AI. The European Union's General Data Protection Regulation (GDPR) already contains provisions on automated decision-making, and the EU is expected to adopt its proposed Artificial Intelligence Act next year. Other countries, including China, Brazil, and South Africa, are also developing AI regulations.

Conclusion

The regulatory focus on AI is intensifying both domestically and globally. Companies that develop, sell, or use AI systems must prepare their compliance programs for the new laws taking effect next year. Failure to do so could expose them to government investigations and private litigation. It is therefore crucial for businesses to stay ahead of the curve in this rapidly evolving legal landscape.