Washington State AI Task Force works on getting transparency and disclosure right

Members of the Washington State AI Task Force recently gathered to discuss two AI disclosure and transparency bills that are expected to be introduced in Jan. 2026. (Illustration: Getty Images for Unsplash+)

July 22, 2025 — Over the past two years, a number of states have established AI task forces or working groups to advise their legislatures on the best way to govern this fast-emerging technology.

These are public bodies that abide by state open-meeting laws, but for the most part they do their work uncovered by local or national media. They’re not hiding anything—it’s just that the work of debating, discussing, and creating consensus can be slow, time-consuming, and at times downright boring to the general public. At the Transparency Coalition, we salute those who serve on these task forces. When many stakeholders come at issues and proposals with hard questions based on experience and expertise, the result can be better and stronger bills.

The Washington State AI Task Force, created by the state legislature in 2024, recently met to discuss two bills high on our priority list: HB 1168 and HB 1170. These are bills modeled after California’s AI disclosure and AI training data transparency bills, SB 942 and AB 2013, respectively. Both of the California bills were signed into law last September, and take effect on Jan. 1, 2026.

the Washington bills

Both bills, authored by Rep. Clyde Shavers (D-Whidbey Island), progressed in the 2025 legislative session but were not adopted.

  • HB 1168 would require developers to publish information about AI training datasets.

  • HB 1170 would require the inclusion of disclosures in AI-generated digital content.

Revised versions of the bills are expected to be re-introduced in the 2026 legislative session.

Last week’s AI Task Force meeting featured Rep. Shavers offering an overview of both bills, followed by questions and a discussion of concerns and potential improvements to the language in the respective bills.

video of full meeting

A video of the full one-hour task force meeting is available below.

discussion highlights

Rep. Shavers gave an overview of the two bills within the context of AI governance locally and globally.

HB 1168 and HB 1170, he emphasized, “are closely modeled on two laws that have already been enacted in California.”

“In January [2026], these two California laws will come into effect, and AI companies will have to comply with them. California’s AB 2013 establishes training data transparency requirements for AI developers. And SB 942, the California AI Transparency Act, requires AI-generated content labeling and detection tools.”

Shavers’ bills “are consistent with the regulatory environment that's already developing in California,” he said.

“With these bills, we are really ensuring there's public insight, that we’re building public trust, there's accountability and fairness.” Shavers mentioned that he’s heard concerns about “policy consistency,” and is crafting his bills to create a consistent governance standard for AI, not a conflicting law.

drawing the line between AI developers and deployers

Task force member Leah Koshiyama, Senior Director of Responsible AI & Tech at Salesforce, offered her perspective as an executive at a company that uses and adapts AI models within its own services.

Definitions, she said, “are very critical in this conversation. The words ‘developer’ and ‘services’ are within the bill and I want to make sure that we have a really concrete distinction of what those two are. Because they are vastly different. The accountability for each of them should be very, very different. Services are typically [built] on top of these AI systems that you are describing, and they are fully different.”

In other words, she said, service providers building their offerings on top of AI models they themselves did not create should not be subject to the same transparency requirements as the developers of the AI models.

“It is more for the developers—the OpenAIs, the Anthropics—that are doing the model creation. I just want [to gain some] clarity on how we might address that distinction in the bill.”

Task force member Crystal Leatherman, representing the Washington Retail Association, seconded Koshiyama’s concerns.

“Somebody who is utilizing [an] AI tool,” she said, “if they do some form of back-end modification…they could unintentionally be considered a developer. So I think [the bill] could benefit from tightening up the definition of developer.”

clarifying language: AI model or AI system?

Ryan Harkins, a task force member and Senior Director of Public Policy at Microsoft, suggested clarifying the language around “AI model” and “AI system.”

“I think there's a conversation to be had about whether the California law that passed contains a structural error in that it refers to providing transparency with respect to data sets that were used to train systems rather than models,” he said.

This is an ongoing challenge in public policy. Most legislators and members of the public use “AI model” and “AI system” interchangeably, but in the technical world the terms have more specific meanings. An AI model is the underlying foundation upon which AI systems are built. ChatGPT, for example, is an AI system built on top of AI models developed by OpenAI, such as GPT-3.5 and GPT-4.

privacy settings: a fine needle to thread

Two task force members raised separate concerns about privacy requirements and settings within HB 1170.

Katy Ruckle is a task force member and State Chief Privacy Officer at Washington Technology Solutions, the state agency that oversees the state government’s digital infrastructure. She mentioned that “the definition of personal information [in HB 1170] is narrower than it is in HB 1168. And from the privacy perspective, I think the broader definition of personal information is a better way to go.”

Microsoft’s Ryan Harkins raised a different question about privacy in HB 1170. “The law that passed in California last year would actually prohibit the inclusion of personal information,” he said. That is, it would bar personal information from appearing in the disclosures or in the cryptographically signed metadata that accompanies the content.

“There are a variety of scenarios in which content creators may want or choose to include their personal information with the content,” Harkins said. That information “can in fact help enhance the broader trust, because a consumer of the content can know, oh, this was created by The New York Times or The Wall Street Journal or what have you.”

AI ‘detection tools’ or AI ‘provenance tools’?

Harkins also raised pertinent questions about the AI disclosure required in HB 1170 and its California model, SB 942.

“The bill has referred to ‘AI detection tools,’” he said, “and we think it’s actually more accurate to refer to ‘provenance detection tools.’”

The issue, Harkins said, is that many probabilistic identifiers “that purport to determine whether something has been generated by AI are just not accurate.”

It would be more accurate, he said, “to determine whether cryptographically signed metadata is, in fact, present in a particular file or not.” He added: “We also think that, rather than requiring disclosures to be permanent or extraordinarily difficult to remove, since in some cases it can be fairly easy to remove disclosures, you should instead require them to be difficult to remove or recoverable, because there are techniques you can use to recover information if metadata is removed.”

clarifying who is covered

Sean DeWitz, representing the Washington Hospitality Association, asked for clarification around which AI developers would be covered by the transparency requirements in HB 1168.

The threshold of “one million users monthly is actually a very low number, especially worldwide,” he said. “So maybe confining it to within Washington State or looking at increasing that number.”

Rep. Shavers clarified that the threshold for AI developers covered by HB 1168 is one million monthly users in Washington State, not nationally or globally. “So we’re talking about OpenAI, Meta, and Google,” companies of that size. “We're not talking about small businesses,” Shavers added. “As this moves forward, I will have my team ensure that we get data points on” finding the right monthly-user threshold.

DeWitz also asked about the exclusion of state and tribal agencies from the requirements in HB 1170.

Shavers said: “In terms of the government and tribal communities, so my understanding is…we're looking at developers like OpenAI” creating AI models for use on a large public scale. “Washington State is not creating [those AI models]. So in our mind, when we exempted government and tribal communities, it was just a common sense exemption. It doesn't make sense to include them in this when they're not going to be involved in the first place.”

what’s next

Subcommittee meetings of the Washington State AI Task Force happen roughly once a week and are usually open to the public via video. For more information about the Task Force and its past and upcoming meetings, see the group’s web page.

The Washington State Legislature is scheduled to re-convene on Jan. 12, 2026, for a 60-day session.
