Progressives for AI
The rules are working
Issue #8 · March 2026
Quick Take · News · Put AI to Work · Looking Ahead
In this issue
We've been saying since issue one: the answer to bad AI isn't less AI. It's enforceable rules, human oversight, and showing up with real ideas.
This week, that argument landed in a courtroom. A federal judge blocked the Pentagon from blacklisting Anthropic over its refusal to drop ethics guardrails on military AI. She called it "classic illegal First Amendment retaliation." Meanwhile, the White House AI czar position is vacant with no replacement planned, which means the real AI regulation fight is at the state level, where progressives have the most leverage right now.
Let's get into it.
300+
Google and OpenAI employees signed an open letter supporting Anthropic's red lines against the Pentagon. This week, a federal court proved their solidarity was on the right side. Collective action still works, even in tech.

Photo by Brett Sayles / Pexels
What happened: David Sacks, the venture capitalist Trump appointed as Special Advisor on AI and Crypto, hit his legal 130-day term limit and moved to an advisory role. The administration says it won't appoint a replacement. There is no longer a White House AI czar.
Sacks was behind the push to preempt state-level AI protections that we covered in issues four and seven. With his departure, the administration's AI agenda loses its public champion. But the real action was already moving to state legislatures, and the pace is accelerating.
Since we started this newsletter in January, Washington passed five AI bills in a single session, including transparency requirements for AI in health insurance decisions. Oregon's chatbot safety bill passed 52-0. California has two workplace AI bills moving with bipartisan support. Virginia advanced an AI regulation bill 39-1. Colorado's AI Act is in effect. Montana wrote the right to use AI into law. And we tracked in issue six how New York is debating whether AI should be allowed to answer questions in licensed professions, with progressives on both sides pushing for thoughtful access, not gatekeeping.
Federal AI policy is stalled or hostile. State policy is where the real work is happening, and it's where progressive voices can have the most impact right now.
What you can do
Find out what AI legislation is active in your state using the Transparency Coalition's tracker or the Brennan Center's AI tracker. Then contact your state legislators. The specific ask depends on where you are: if your state has AI protections, tell them to hold the line against federal preemption. If it doesn't, point them to Washington and Oregon as models. If you work at an organization that uses AI, invite your legislators to see it in action. Nothing is more persuasive than a constituent demonstrating how AI helps their mission and explaining, from firsthand experience, why responsible regulation matters.

Photo by Bastian Riccardi / Pexels
What happened: Samsung ran AI-generated video ads on TikTok without the disclosures required by TikTok's own policies. The exact same videos were labeled as AI-made on YouTube, where Google actually enforces its rules. The kicker: both TikTok and Samsung are members of the Content Authenticity Initiative, an industry group that exists specifically to promote AI transparency standards.
This isn't a one-off. Companies join voluntary AI transparency initiatives, put out press releases about responsible AI, and then ignore their own rules when nobody's watching. YouTube labels the same ads because it has enforcement mechanisms. TikTok doesn't because it doesn't.
Why this matters: We've been beating this drum since issue three: voluntary AI commitments evaporate without enforcement. Anthropic's ethical position held this week because binding legal standards backed it up. TikTok's transparency policy collapsed because nothing did. Every story we've covered points the same direction: companies do the right thing when the rules require it. Not before.
What you can do
This is a concrete example you can use. When AI companies or industry groups in your state argue that self-regulation is sufficient, point to the TikTok/Samsung case: same ads, same companies, different outcomes based solely on whether anyone enforces the rules. If your state is debating AI disclosure or transparency legislation, bring this to your testimony or your legislator meeting. The argument writes itself.
A federal court backed enforceable AI standards, and it matters. Judge Rita Lin granted Anthropic a preliminary injunction against the Pentagon's attempt to blacklist it as a "supply chain risk" for maintaining ethics guardrails on military AI. Her 43-page ruling called the designation "Orwellian" and the blacklisting "classic illegal First Amendment retaliation." The Pentagon CTO's office says the ban "still stands" despite the ruling, setting up a potential contempt fight. The takeaway: voluntary promises are rewritable. (Anthropic itself rewrote its safety pledge weeks earlier.) Court orders aren't. The progressive case for enforceable rules just got 43 pages of federal case law backing it up.
Progressive AI Win
The SPLC's own staff organized for AI protections — and won
The Southern Poverty Law Center's union, a NewsGuild-CWA unit, ratified a new contract this month that includes explicit AI protections alongside remote work preservation. It took nine months of bargaining. And it's fitting: a progressive nonprofit whose mission is defending civil rights, whose own staff used collective bargaining to make sure AI doesn't undermine their work.
They're not alone. Fifty-eight NewsGuild-CWA bargaining units have ratified contracts with AI protection language. ProPublica's guild authorized a strike over AI protections, the first in U.S. journalism history. The workers who write the rules before AI arrives have leverage. The ones who don't are catching up.
If your organization uses AI (and it should), the question isn't whether to adopt it. It's whether the people doing the work get a voice in how. The CWA's model AI provisions are public and adaptable. Share them with anyone negotiating a contract right now.
Practical ways progressives can use AI this week
If your organization tracks policy issues, you know how this goes. Searches pile up. Google Alerts sends you junk. Important stories slip through because they didn't use the right keywords.
Here's a practical setup that takes about 30 minutes and uses free tools:
Use Bluesky's custom feeds. Bluesky's open protocol means anyone can build algorithmic feeds, no corporate gatekeeper. The Bluesky Feed Creator lets you create keyword-based feeds without writing code. Build one for your issue area: "housing justice" + "AI" + "automation," or "reproductive rights" + "legislation" + "2026." Pin it and check it daily alongside your main timeline.
Set up RSS monitoring with AI filtering. Most news sites still publish RSS feeds. Use a free reader like NetNewsWire (Mac/iOS) or Feeder (web/Android) to subscribe to outlets that cover your issues. For higher-volume feeds, paste your latest batch of headlines into Claude or ChatGPT and ask it to "flag the 3 most relevant stories for an organization focused on [your issue], and explain why each one matters for our work."
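If someone on your team is comfortable with a little scripting, the headline-batching step can be automated. Here's a minimal sketch using only Python's standard library; the function names and the issue-area placeholder are our own illustration, not any particular tool's API, and the prompt wording mirrors the one above.

```python
import urllib.request
import xml.etree.ElementTree as ET


def parse_titles(rss_xml: str, limit: int = 20) -> list[str]:
    """Pull the most recent item titles out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    titles = [item.findtext("title", default="").strip()
              for item in root.iter("item")]
    return titles[:limit]


def fetch_titles(feed_url: str, limit: int = 20) -> list[str]:
    """Download a feed and return its headlines (network required)."""
    with urllib.request.urlopen(feed_url) as resp:
        return parse_titles(resp.read().decode("utf-8"), limit)


def triage_prompt(headlines: list[str], issue: str) -> str:
    """Wrap a batch of headlines in the filtering prompt from above."""
    bullets = "\n".join(f"- {h}" for h in headlines)
    return (
        f"Flag the 3 most relevant stories for an organization focused on "
        f"{issue}, and explain why each one matters for our work.\n\n{bullets}"
    )
```

Paste the output of `triage_prompt` into Claude or ChatGPT by hand, or wire it up to an API if your org has a key.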
Create a weekly digest prompt. At the end of each week, paste your collected bookmarks and feed items into an AI with this prompt: "You're a policy analyst at a progressive advocacy org focused on [issue area]. Review these articles and produce a one-page briefing: what happened this week, what our supporters need to know, and one action item we could promote." That output can go straight into a staff Slack channel, board update, or supporter email.
The whole system is free and replaces hours of manual scanning each week. That's time back for actual organizing.
From our friends
Your org deserves its own AI. Not Big Tech's.
Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.
Learn more

I keep coming back to a line from issue two: "What if regulating AI actually makes AI better?"
This week gave us a real answer. A federal court said the Pentagon can't punish a company for maintaining ethical red lines. States are passing AI bills with bipartisan support while the White House AI czar's seat sits empty. And the SPLC's own workers bargained for AI protections and won. Enforceable standards create the trust that makes adoption possible. Without them, you get a race to the bottom where the company willing to do the most reckless thing wins.
Progressives have been making this argument about every other industry for a century. Clean air regulations didn't kill manufacturing. Seatbelt mandates didn't stop people from driving. Rules make systems work better for more people.
AI is no different. And this week, a federal judge agreed.
Until next time,
Jordan

