
AI Regulation: Will States Lead the Way?

January 25, 2025

AI Regulation: A Look Ahead to 2025

The previous year proved eventful for legislators and lobbyists working on artificial intelligence, particularly in California, where Governor Gavin Newsom signed 18 new AI-related laws while vetoing other significant AI legislation.

Mark Weatherford suggests that 2025 may witness a similar level of activity, especially at the state level. Having observed the “sausage making of policy and legislation” at both state and federal levels, Weatherford brings extensive experience as a former chief information security officer for California and Colorado, and as a Deputy Under Secretary for Cybersecurity under President Barack Obama.

The Core of the Issue: Raising the Conversation

Weatherford explains that his role, regardless of his specific title, consistently centers on elevating the discourse surrounding security and privacy to effectively shape policy creation.

He recently joined synthetic data company Gretel as its vice president of policy and standards. This prompted a discussion about the future of AI regulation and why states are anticipated to take the lead.

This interview has been condensed for brevity and clarity.

Navigating the Complexity of AI Regulation

Many in the tech industry have observed congressional hearings on social media and related topics with concern, noting a gap in understanding among some elected officials. How confident are you that lawmakers can acquire the necessary context to make well-informed regulatory decisions?

I am very confident in their potential to achieve this understanding. However, the timeline for doing so is less certain. AI is evolving at an incredibly rapid pace; issues debated just a month ago have already transformed.

Therefore, while government will eventually reach a point of informed decision-making, it requires assistance in the form of guidance, staffing, and education.

The U.S. House of Representatives recently released a 230-page report from its task force on artificial intelligence, following a year of deliberation. The process of policy creation often involves compromise between partisan organizations, leading to diluted outcomes and extended timelines. Furthermore, the transition to a new administration introduces uncertainty regarding the prioritization of specific issues.

State-Level Regulation: A Likely Trend

It appears your perspective is that we may see more regulatory progress on the state level in 2025 than at the federal level. Is this accurate?

Absolutely. California’s Governor Newsom signed 12 pieces of AI-related legislation in recent months (TechCrunch reports a total of 18). He did, however, veto a major bill that would have significantly increased testing requirements and slowed down development.

Speaking at the California Cybersecurity Education Summit, I highlighted the extensive legislative activity occurring across the U.S., with over 400 bills introduced at the state level in the past year.

The Need for Harmonization

A significant concern, prevalent in technology and cybersecurity, is the need for harmonization of regulations. The Department of Homeland Security and Harry Coker at the White House are using the term "harmonization" to describe the effort to avoid a fragmented regulatory landscape.

This fragmentation creates challenges for companies attempting to comply with diverse laws and regulations across different states. Increased activity at the state level, coupled with efforts toward harmonization, is anticipated.

Challenges and Incentives for Harmonization

Harmonization seems like a desirable goal, but what mechanisms are in place to achieve it? What incentives do states have to align their laws and regulations?

Frankly, there isn’t a strong incentive for states to harmonize regulations beyond the observation that similar language is appearing in different states, suggesting mutual awareness of each other’s efforts.

A strategic, coordinated plan among all states is unlikely to materialize.

California’s Influence and Future Steps

Do you anticipate other states will emulate California’s approach?

California often serves as a pioneer in tech legislation, undertaking the extensive research and groundwork that others can then build upon. The 12 bills recently passed by Governor Newsom covered a wide range of topics, demonstrating a comprehensive approach.

While the Governor vetoed the more prominent regulation, it’s likely California will pursue stricter measures in 2025.

Your assessment is that, on the federal level, there is interest, as evidenced by the House report, but major legislation in 2025 is not necessarily a certainty. Is that correct?

That is my expectation. The emphasis of the new Congress may be on reducing regulation. However, technology, particularly concerning privacy and cybersecurity, often enjoys bipartisan support.

While I am generally not a proponent of excessive regulation, it is essential when the safety and security of society are at stake, as is the case with AI.

The Role of Synthetic Data

Gretel operates in the synthetic data space. Do you believe that increased regulation will drive the industry toward greater adoption of synthetic data?

I believe synthetic data represents the future of AI. Without data, AI cannot exist, and the quality of data is becoming increasingly critical as existing datasets are depleted. There will be a growing need for high-quality synthetic data that ensures privacy, eliminates bias, and addresses other non-technical considerations. I am fully convinced of this.

Addressing Bias Concerns

Some argue that synthetic data could potentially amplify existing biases present in the original data, rather than solving the problem. What is your response?

Our customers believe we have addressed this concern. We utilize a concept called the “flywheel of data generation,” where controls are built in to ensure that the generated data does not worsen over time, but remains stable or improves with each iteration.
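Weatherford doesn't describe Gretel's implementation, but the idea of a quality-gated generation loop can be sketched in a few lines. The following is a minimal illustrative sketch, not Gretel's actual system: `quality_score` and `generate_candidate` are hypothetical placeholders standing in for a real fidelity/privacy metric and a real synthetic-data generator, and the gate simply refuses any iteration whose measured quality falls below the current baseline.

```python
import random

def quality_score(dataset):
    """Hypothetical quality metric (e.g., a fidelity/privacy score in [0, 1])."""
    # Placeholder: average of per-record scores.
    return sum(dataset) / len(dataset)

def generate_candidate(dataset):
    """Hypothetical generator: derive a new synthetic dataset from the current one."""
    # Placeholder: small random perturbation of each record's score.
    return [min(1.0, max(0.0, x + random.uniform(-0.05, 0.1))) for x in dataset]

def flywheel(dataset, iterations=10):
    """Quality-gated iteration: accept a new generation only if its
    measured quality meets or exceeds the current baseline, so the
    data never degrades across iterations."""
    baseline = quality_score(dataset)
    for _ in range(iterations):
        candidate = generate_candidate(dataset)
        score = quality_score(candidate)
        if score >= baseline:  # gate: never accept a regression
            dataset, baseline = candidate, score
    return dataset, baseline
```

Because the gate only ever accepts candidates whose score matches or exceeds the baseline, quality is monotonically non-decreasing by construction — the property Weatherford describes, that the data "remains stable or improves with each iteration."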

AI “Censorship” and Potential Regulation

There are concerns from some Trump-aligned figures in Silicon Valley about AI “censorship” – the guardrails companies place on generative AI content. Do you think this will be regulated, and should it be?

The government possesses administrative tools to address perceived risks to society, and it is likely to act when necessary.

Finding the right balance between reasonable content moderation and restrictive censorship will be challenging. The incoming administration’s focus on “less regulation” suggests guidance may come through non-legislative means, such as NIST guidelines or interagency statements.

The Path Forward for AI Regulation

There is a wide spectrum of opinions on AI, ranging from utopian to dystopian. How can regulation effectively encompass these divergent views?

We must carefully manage the proliferation of AI applications. The emergence of deepfakes and other harms highlights the need for legislation that governs how AI is used while reinforcing existing laws — new statutes that address the AI-specific component current regulations lack.

It is crucial for those of us in the technology sector to remember that concepts we consider commonplace are often unfamiliar to those outside of it. We must communicate about AI in a way that is accessible to non-technical audiences.

Despite the challenges, I remain optimistic about the future of AI. While there may be some turbulent years ahead as people become more familiar with the technology, legislation will play a vital role in fostering understanding and establishing appropriate safeguards.


Tags: AI regulation, artificial intelligence, state laws, US policy, technology policy