Progressives for AI

Same technology, opposite purposes

Issue #7 · March 2026

Quick Take · News · Put AI to Work · Looking Ahead

In this issue

  • AI scribes are giving doctors back their evenings. Health insurers are using the same technology to deny your claims faster. Washington state just drew a line.
  • Washington passed five AI bills in a single session. Oregon and California are moving too. The White House wants to override all of them.
  • POLITICO journalists proved that AI contract language actually works when tested.
  • This week's tools: two organizing apps built in a weekend that are changing how campaigns listen to voters.

Quick Take

Here's a thing that's actually happening right now: doctors at Kaiser Permanente are looking their patients in the eye again. AI scribes (tools that listen to the conversation and write the clinical notes afterward) saved Kaiser's physicians the equivalent of 1,794 full workdays of documentation time last year. Doctors report getting back about an hour a day they used to spend typing into a screen. Patients say the visits feel more human.

That's a genuine, measurable good.

Here's the other thing that's happening right now: health insurance companies are deploying AI to process prior authorization requests — the step between your doctor ordering something and your insurer agreeing to pay for it. Less human review. Faster denials. Same underlying technology.

AI helping your doctor listen to you. AI helping your insurer ignore you. Same technology, opposite purposes.

That's the whole AI fight in one frame. The question was never "is AI good or bad?" It was always "who's pointing it at whom, and who gets to decide?" States are starting to answer that question. Washington just passed five AI bills in a single legislative session, including one that forces insurers to report how they're using AI to deny claims. But eight days later, the White House released a framework designed to wipe out every state-level AI protection in the country.

Let's get into it.

1,794

Working days of documentation time saved by AI scribes across 2.5 million patient visits at Kaiser Permanente last year. Doctors report saving about an hour a day. Patient satisfaction went up.

Source: NEJM Catalyst, 2025


AI News Roundup

AI is giving doctors back their evenings. It's also helping insurers say no.

A doctor talking to a patient in a medical office

Photo by Vitaly Gariev / Unsplash

What happened: A wave of peer-reviewed studies published in the past year shows AI clinical scribes are delivering real results. A University of Wisconsin randomized trial found providers saved 30 minutes a day and saw measurable drops in burnout. A UCLA trial across 14 specialties found a 7% improvement in burnout scores, with fewer than 10% of patients declining to use the technology. At UI Health in Chicago, patient satisfaction scores for "my provider explained things clearly" jumped from 91% to 97% after AI scribes were deployed.

The context matters: doctors currently spend nearly half their workday on electronic health records and desk work, and almost two hours every evening catching up on documentation after their kids are in bed. Nearly half of primary care physicians are burned out. AI scribes are making a specific group of people's lives measurably better.

But the same AI capabilities that help doctors are being deployed very differently by health insurers. AI-powered prior authorization systems can process claims faster, which in practice often means denying them faster, with less human review. That's why Washington passed SB 5395, requiring health insurers to report quarterly on how AI is involved in claim decisions.

Why this matters: This is the clearest illustration of why blanket AI regulation — whether "ban it all" or "deregulate everything" — misses the point. AI helping a burned-out doctor spend more time listening to you is good. AI helping an insurer process your denial without a human ever looking at it is bad. The technology is the same. The difference is who it serves and whether anyone's watching.

Washington's approach (don't ban the technology, require transparency about how it's being used) is the kind of regulation progressives should be championing everywhere. And it's exactly the kind of state-level protection that the federal preemption framework (see below) would override.

What you can do

Next time you deal with a prior authorization denial, ask your insurer directly: "Was AI involved in this decision? Can I see the criteria?" They probably won't answer. That's the point. Transparency shouldn't require a state law, but right now it does. If your organization works on healthcare access, connect AI transparency to your existing advocacy, and help your members ask these questions routinely. If your state is considering healthcare AI legislation, the Brennan Center's AI legislation tracker can help you follow it. And if you want to go deeper on the doctor side: the AMA's research on EHR burden is a good primer on what AI scribes are actually solving.


States are passing AI protections. The White House wants to wipe them out.

What happened: Before adjourning on March 12, Washington's legislature passed five AI bills in one session: content disclosure, chatbot safety for kids, restrictions on AI in health insurance prior authorizations (SB 5395), bans on AI-generated child sexual abuse material, and property rights in digital likenesses. Oregon's chatbot safety bill passed 52-0 in the House. California has two workplace AI bills moving through committee with bipartisan support.

Then on March 20, the White House released a national AI framework directing Congress to preempt state AI laws. The administration calls it preventing "a patchwork of fifty discordant rules." The framework calls for no new oversight body and reduced liability for AI companies, and shifts child safety responsibility from platforms to parents. House Republican leadership immediately backed it.

Why this matters: Last issue, we noted that legal analysts think state AI protections are more durable than anyone expected — executive orders alone probably can't preempt state law. That's still true. But this framework isn't an executive order. It's a roadmap for Congress to do the preempting instead. If it works, every state-level protection that progressives spent years building — Colorado's AI Act, Illinois' disclosure requirements, California's worker protections, and now Washington's health insurer transparency rules — gets overridden by a single federal standard written with industry input.

The states that moved fastest are proving that AI regulation works and has bipartisan support. Oregon's chatbot bill didn't pass 52-0 because it was controversial. Washington's health insurer transparency bill passed because people understand what it means when a computer denies your insurance claim. That real-world track record is exactly what the preemption push is trying to short-circuit.

What you can do

Contact your U.S. senators and representative. The specific ask: oppose any federal AI legislation that preempts stronger state protections. If you've never written to a legislator before (or even if you have), text RESIST to 50409. Resistbot will walk you through drafting and sending a letter in about two minutes. The Brennan Center's AI legislation tracker can show you what protections your state has already passed or is considering. If your state has AI legislation in progress, contact your state legislators too. Tell them their work matters and you're paying attention.


Progressive AI Win

POLITICO journalists won an AI fight — because their union contract was ready

Here's a story you probably didn't hear about. Last December, the PEN Guild (the union representing POLITICO journalists, part of the NewsGuild-CWA) won a landmark arbitration against their employer over AI. POLITICO had deployed two AI-powered products: a "Live Summaries" feature during the 2024 Democratic National Convention and a "Capitol AI Report-Builder" for paying subscribers. Neither involved consulting the journalists whose reporting the AI was summarizing. No notice. No bargaining. No human oversight requirements.

The union had negotiated AI-specific contract language before any of this happened. When POLITICO deployed the tools anyway, the union filed a grievance. The arbitrator ruled that POLITICO violated the collective bargaining agreement and ordered a 60-day bargaining period with a negotiated remedy.

Here's why this story matters: corporate AI ethics policies are voluntary — companies can change them whenever they want (see: every AI safety pledge that's been quietly dropped in the past year). Contract language is enforceable. The CWA now has 58 newsroom contracts with AI provisions. ZeniMax workers at Microsoft negotiated the first video game industry AI contract. Frontier Communications workers in California won a requirement for joint deliberation before any AI tool gets deployed.

The pattern: workers who wrote the rules before AI arrived had leverage. Workers who didn't are catching up.

What you can do

If you're in a union, ask your rep whether your current contract addresses AI. If it doesn't, the CWA has published model provisions that other unions can adapt. If you're not in a union but your employer is rolling out AI tools, this is a concrete example of why collective bargaining matters. Individual employees couldn't have won this case. Share it with anyone who thinks unions are outdated.


Put AI to Work

Practical ways progressives can use AI this week

Field organizing tools built in a weekend

The Cooperative Impact Lab ran a generative AI hackathon with progressive organizations last year. Two of the tools that came out of it are worth knowing about, not because they're polished products, but because they show what a small team can build in a couple of days when they understand their community's actual needs.

Fair Count's "Community Voice" tool started with a simple insight: canvassers know things that databases don't capture. Fair Count, a Georgia-based census and voting rights organization, had canvassers in Mississippi record 30-to-90-second voice memos right at the doorstep after each conversation. They collected 120 of these and ran them through AI transcription and sentiment analysis.

What came out: county-level breakdowns of what people actually care about. Voting intention mapped by neighborhood. Pre-canvassing "topic primers" so the next organizer who knocks on that door walks in prepared. The kind of strategic intelligence that well-funded campaigns pay consultants six figures for — built by hackathon participants in a weekend.

Dr. Jeanine Abrams McLean, Fair Count's president: "The fact that hackathon participants were able to create a functioning tool in that amount of time was really mind-blowing."
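If you're curious what the roll-up step of a tool like this looks like, here's a minimal sketch in Python. It assumes the voice memos have already been transcribed and sentiment-scored upstream (the part an AI service would handle); the field names, counties, and sample records are invented for illustration, not Fair Count's actual data.

```python
# Hypothetical sketch of the aggregation step in a Fair Count-style tool.
# Assumes each doorstep memo has already been transcribed and
# sentiment-scored; all field names and sample data are invented.
from collections import defaultdict
from statistics import mean

# Each record: county, top issue mentioned, sentiment score in [-1, 1],
# and whether the voter indicated an intention to vote.
MEMOS = [
    {"county": "Hinds",  "issue": "water infrastructure", "sentiment": -0.4, "will_vote": True},
    {"county": "Hinds",  "issue": "water infrastructure", "sentiment": -0.7, "will_vote": False},
    {"county": "Hinds",  "issue": "schools",              "sentiment":  0.2, "will_vote": True},
    {"county": "DeSoto", "issue": "healthcare costs",     "sentiment": -0.1, "will_vote": True},
]

def county_breakdown(memos):
    """Roll doorstep memos up into per-county issue rankings,
    average sentiment, and voting-intention rate."""
    by_county = defaultdict(list)
    for memo in memos:
        by_county[memo["county"]].append(memo)
    report = {}
    for county, records in by_county.items():
        issue_counts = defaultdict(int)
        for r in records:
            issue_counts[r["issue"]] += 1
        report[county] = {
            "top_issues": sorted(issue_counts, key=issue_counts.get, reverse=True),
            "avg_sentiment": round(mean(r["sentiment"] for r in records), 2),
            "intend_to_vote": sum(r["will_vote"] for r in records) / len(records),
        }
    return report

print(county_breakdown(MEMOS))
```

The point isn't the code, it's the shape: once the hard part (transcription) is delegated to an AI service, the strategic intelligence layer is a weekend's worth of straightforward aggregation.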

AAPI Victory Alliance's "Truth Tea" tackles a different problem: disinformation targeting communities in languages that English-speaking rapid response teams can't monitor. The tool identifies political disinformation in video content (it was tested first with Hindi-language material), analyzes the narratives, and generates shareable counter-messaging in the target language. Thirty-four percent of AAPI voters have limited English proficiency. A two-person comms team can't monitor Hindi and Tagalog and Mandarin social media feeds manually. AI can.
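A stripped-down sketch of the monitoring step, for the curious: a simple keyword filter stands in here for the model-based narrative detection the real tool presumably uses, and the narratives, keywords, and transcripts are invented placeholders, not actual campaign data.

```python
# Hypothetical sketch of a Truth Tea-style monitoring step: flag video
# transcripts (assumed already machine-translated to English) that match
# known disinformation narratives. Keyword matching stands in for the
# model-based detection a real tool would use; all data is invented.
NARRATIVES = {
    "mail_ballot_fraud": ["mail ballot", "stolen votes", "fake ballots"],
    "noncitizen_voting": ["noncitizens voting", "illegal voters"],
}

def flag_transcripts(transcripts):
    """Return {narrative: [transcript ids]} for transcripts whose text
    contains any keyword associated with that narrative."""
    hits = {name: [] for name in NARRATIVES}
    for tid, text in transcripts.items():
        lowered = text.lower()
        for name, keywords in NARRATIVES.items():
            if any(kw in lowered for kw in keywords):
                hits[name].append(tid)
    return hits

sample = {
    "vid-01": "Caller claims fake ballots were mailed out by the thousands.",
    "vid-02": "A segment on local school board funding.",
}
print(flag_transcripts(sample))
```

Flagged transcripts would then go to a human reviewer before any counter-messaging is drafted, which is where the language expertise of a team like AAPI Victory Alliance's actually matters.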

If you want to explore what's out there: The Higher Ground Labs AI Resource Guide (updated March 15) is a solid collection of vetted AI tools for progressive campaigns and advocacy organizations, organized by use case with case studies and prompt templates. The AI Campaign Stack directory is community-powered and growing.


From our friends

Change Agent

Your org deserves its own AI. Not Big Tech's.

Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.

Learn more

Looking Ahead

The AFL-CIO's inaugural Workers First AI Summit is this Thursday, March 26, in Washington. AFL-CIO President Liz Shuler is delivering the opening keynote, with MIT's Max Tegmark giving the lunch keynote. Rep. Ro Khanna and Randi Weingarten from the American Federation of Teachers are on panels. The summit centers the AFL-CIO's demand for enforceable AI guardrails, worker inclusion in how AI gets deployed, and protection from algorithmic surveillance and displacement.

This is the biggest organized labor response to AI to date. And it's happening the same week the White House is trying to strip away state-level protections.

The through-line of everything in this issue: the people who showed up prepared got better outcomes. Washington legislators who passed AI bills before the preemption push have a track record to defend. POLITICO journalists who negotiated AI contract language before management deployed AI tools had legal standing to fight. Kaiser doctors who adopted AI scribes are spending more time with patients. Fair Count organizers who experimented with voice memos during a hackathon have field intelligence their competitors don't.

The people who waited got AI pointed at them by someone else.

This is still early. The tools are still being built, the laws are still being written, and the people who show up now get to shape what comes next. If you're reading this newsletter, you're already closer to the front of this than most people in our movement. Use that.

Until next time,
Jordan

Know someone who should read this?

Share the issue that resonated most.

Bluesky LinkedIn Email

Read past issues on the web · Subscribe via RSS · Website