Progressives for AI

Your staff are already using it

Issue #11 · April 2026

Quick Take · News · Put AI to Work · Looking Ahead

In this issue

  • A panel question at ClientCon, and the number that's been rattling around in my head since: 78% of progressive nonprofits used AI last year; 42% have a policy.
  • The app launch boom that's really a permission story. Why your next volunteer dashboard could ship in two weeks.
  • A WGA East contract that shows what institutional AI governance actually looks like, and why it's a win worth copying.
  • Put AI to work: cybersecurity upgrades your org can evaluate this week.

Quick take

Last week at ActionKit ClientCon, I sat on a panel about how progressive organizations should approach AI. Someone in the audience asked whether we should be using these tools at all, given the ethical concerns: data scraping, labor displacement, environmental cost, corporate consolidation. The concerns are real. I said:

I don't think the people we're fighting for will thank us in 20 years for standing on the moral high ground and not using the best tools available to us.

Here's what I've been thinking about since: the moral high ground is already a fiction inside our own movement.

78% of progressive nonprofits used generative AI in their fundraising, marketing, or advocacy last year. Only 42% have a policy. Those figures come from the 216 progressive orgs in the M+R Benchmarks data, and other nonprofit sector surveys show a similar governance gap.

In org after org, the practical debate about whether to use AI is already over. Staff are using Claude, ChatGPT, Otter, Canva AI, whatever their work requires, sometimes on personal accounts, often without clear guidance, because clear guidance doesn't exist. The gap isn't adoption. It's permission. The costs of that gap are real, and they're growing.

This issue is about closing it. Let's get into it.

$33 vs. $20

Average hourly wage of U.S. workers in jobs with high AI exposure, versus jobs with low AI exposure. Workers in the most AI-exposed jobs earn about 65% more per hour on average. That's who's capturing the productivity gains right now, while progressive institutions that hold back from these tools sit out and watch the pay gap widen for the workers we say we're fighting for.

Source: Pew Research Center, 2025


AI news roundup

The app launch boom is a permission-gap story in disguise

What happened: App Store and Google Play releases jumped 60% year-over-year in Q1 2026, and the pace has accelerated since. April is up 104% worldwide, with iOS alone up 89%. Apple's marketing chief Greg Joswiak summed it up: "Rumors of the App Store's death in the AI age may have been greatly exaggerated." TechCrunch points to AI coding tools like Claude Code and Replit as the main driver, and the builders leading the surge aren't professional developers. They're people with ideas and a problem to solve, writing software for the first time because they finally can.

Why this matters: For the first time, a nonprofit organizer with a clear problem and two weeks of focus can ship a real tool. A volunteer dashboard. A case intake form. A phone-bank tool that routes calls based on your state's voter file. Custom software used to require a developer hire or a five-figure vendor contract. It doesn't anymore.

But the opportunity lives or dies on institutional permission. Staff can't deploy a tool their org hasn't sanctioned. Data policies written for a 2015 tech stack quietly block what 2026 tools make possible. The orgs that adapt will ship tools that fit their actual work. The ones that don't will keep paying enterprise vendors for software designed for someone else's problem.

What you can do

Audit whether your org has a real path for staff to prototype and ship internal tools. Not "write a 40-page proposal and wait six months." A real path: experiment, evaluate, deploy, iterate. If your answer is "we don't," that's the permission gap showing up somewhere new. Name it, bring it to your leadership, and offer to co-draft the approval process.

Source: TechCrunch, April 18


Dairy Queen is rolling out AI drive-thru workers across the US and Canada

What happened: Dairy Queen announced it's deploying AI voice agents from a company called Presto to handle drive-thru ordering at locations across North America. The AI is designed to speed up service and, per Presto's own marketing, increase "upselling conversion." DQ joins McDonald's, Wendy's, Taco Bell, White Castle, and a growing list of chains replacing front-line workers with conversational AI.

Why this matters: Food service employs millions of workers in the United States, disproportionately workers of color, immigrants, and young people. The "efficiency" frame is cover for wage suppression. Why raise drive-thru pay when you can swap the worker for a voice model that never asks for a raise? And this is happening with almost no public input. Municipalities aren't running AI impact assessments before these systems go live. State legislatures aren't tracking it. Most of us only find out it happened when we pull up to the drive-thru and a different voice takes our order.

There's a progressive labor lane here that almost nobody is running in. Worker voice in AI deployment shouldn't be a luxury reserved for knowledge-worker unions. It should be a baseline demand for any sector getting restructured by this technology.

What you can do

If your org works with food service unions, fast food campaigns, or living wage coalitions, put AI drive-thru deployment on your 2026 policy agenda. Push for state and municipal AI impact assessments before deployment, worker retraining requirements tied to AI layoffs, and mandatory public hearings. The chains are rolling this out faster than regulators are paying attention, and that window is exactly where advocacy can shape the norm.

Source: The Verge, April 18


Cerebras filed for an IPO, and AI infrastructure is becoming a consolidation story

What happened: Cerebras, an AI chip startup built around massive wafer-scale processors, filed for an IPO on the strength of a run of high-profile deals, reportedly including a major OpenAI contract and a new AWS distribution arrangement. The filing positions Cerebras as one of a small group of companies with the compute capacity to serve frontier AI training workloads.

Why this matters: The companies that own the compute own the roadmap. AI infrastructure is consolidating around a small group of players with deep hyperscaler relationships: Nvidia, Cerebras, Broadcom, AMD, and the custom silicon programs inside Google, Amazon, and Microsoft. No public option. No municipal compute cooperative. No antitrust pressure on the vertical integration between model companies and their chip suppliers. This is how every other technology consolidation has gone, and progressives have mostly watched it happen quietly because we haven't treated AI as infrastructure worth fighting over.

It is. If AI becomes as important to our clients and communities as we think it will, the question of who owns the rails matters as much as the question of which tools we use.

What you can do

Progressive policy agendas for 2026 and beyond should include public cloud computing options, AI antitrust enforcement, and infrastructure transparency requirements. If your org does tech, economic, or antitrust policy, this is a lane with very few advocates in it right now. Start by supporting groups like Public Knowledge and the AI Now Institute that track market concentration and push for alternatives.

Source: TechCrunch, April 18


Progressive AI Win

WGA East at CBS News 24/7 just ratified a contract with real AI guardrails

On April 14, 60 media workers at CBS News 24/7, represented by the Writers Guild of America East, unanimously ratified a new contract. This came after the first work stoppage at CBS News in decades. The AI language is specific and enforceable:

  • Advance notice before any new generative AI system gets deployed
  • Right to remove bylines and credits from AI-assisted work
  • Semi-annual union-management meetings dedicated to AI
  • Mandatory bargaining over AI's operational impact
  • 1.5x standard severance for staff laid off due to AI

From the bargaining committee: "Because of our members' solidarity, we won industry-leading gains in compensation, better severance and overtime compensation, protections around artificial intelligence, and important quality of life improvements."

This is the permission-gap thesis as a positive story. The union didn't waste energy debating whether AI should exist at the workplace. They spent that energy winning institutional governance over how it gets deployed: who gets notified, who gets paid, who has a voice at the table. That's the progressive move. Fight for authority over how AI gets used, not for the fiction that it won't be used at all.

If you're in a media union, pull this contract language into your next bargaining cycle. If you're anywhere else, the principles still translate: notice before deployment, effects bargaining, severance floors, disclosure rights. Pick the ones that fit and bring them to the table.

Source: WGA East, April 14


Put AI to work

Practical ways progressives can use AI this week

Upgrade your org's cybersecurity with AI tools

Progressive nonprofits are high-value targets. Voting rights groups. Reproductive health orgs. Immigrant rights legal services. Tenant unions. Labor organizations. The data you hold is exactly what hostile state AGs, litigants, and actual attackers want. Enterprise security tooling used to require enterprise budgets. That's changing fast, and AI is the reason.

Last week Anthropic unveiled Claude Mythos Preview, a new cybersecurity-focused model, and early reports suggest it's thawing the company's relationship with the Trump administration. The deeper story is that every major AI player is shipping security tooling right now, and the nonprofit pricing tiers are finally catching up. If your security posture was built in 2019, this is a real opening.

Steps to take this week:

  1. List the sensitive data your org actually holds. Donor records, client case files, legal matters, voter contacts, internal board documents. Be specific. Generalities hide the real risks.
  2. Map what protects each category. Endpoint antivirus, multi-factor auth, email filtering, backups, disk encryption, access logs.
  3. Identify the weakest link. There's always one.
  4. Get one demo from a nonprofit-friendly AI security vendor by Friday.
  5. Bring the findings to your next ops meeting with a concrete recommendation.

Your real AI-era risk has nothing to do with the tools taking your job. It's that your 2019-era security posture will leak donor records or client data while the org is still debating whether to pay for ChatGPT. Fix the thing that will actually hurt people first.


From our friends

Change Agent

Your org deserves its own AI. Not Big Tech's.

Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.

Learn more

Looking ahead

The question inside progressive institutions isn't "should we use AI." That one's settled, whether leadership has caught up or not. Your staff decided. The live question is whether your org catches up on purpose: policies that protect sensitive data, approval paths that let staff actually ship tools, bargaining rights that put workers at the deployment table, security budgets that belong in 2026 instead of 2019.

The people we're fighting for aren't going to thank us in 20 years for holding the moral high ground while the best tools available sat on the shelf. What they'll remember is whether we picked the tools up and used them well.

That's the permission gap closed. That's the work.

Until next time,
Jordan

Know someone who should read this?

Share the issue that resonated most.

Bluesky LinkedIn Email

Know someone who should be reading this?

Forward this email or send them the signup link:

Subscribe at progressivesforai.com

Read past issues on the web · Subscribe via RSS · Website