Notable links: May 1, 2026
AI and society, and how sustaining innovation has failed us.
Most Fridays, I share a handful of pieces that caught my eye at the intersection of technology, media, and society.
Did I miss something important? Send me an email to let me know.
The people do not yearn for automation
This piece is important to internalize — particularly for the terminally AI-pilled and people who might want to force everyone into using LLMs to do work they were previously doing themselves.
AI is incredibly unpopular, and it’s not because it’s bad at marketing. These are multi-billion-dollar companies that have attracted some of the brightest talent in Silicon Valley across all disciplines. AI vendors are not underdogs who just need to get their message across.
Indeed:
“You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.”
“Software brain” is a fantastic name for a worldview that sees everything as databases that can be controlled, normalized, and optimized. As Nilay Patel puts it: “the idea that we can force the real world to act like a computer and then have AI issue that computer instructions.” This is not a new problem that has arrived with AI: we’ve been talking about people who were very good at making software who therefore thought they were geniuses who could take on any global challenge for a very long time.
Taking human experience, which is beautifully ambiguous and nuanced and nondeterministic, and trying to fit it into a database shape, is inherently extractive. Nilay points out that it flattens people, which is totally true, but it also transfers ownership of that experience from their subjective truth into a centralized database that someone else controls, sets the standards for, and profits from.
And yes: computers should support people. People shouldn’t support computers. The idea that we’ll all be left behind if we don’t pour our experiences, information, source material, communications, creativity, and all the rest of it into a computer system is absurd and offensive. Extracting that experience, flattening it, and changing its ownership inherently devalues us, the humans who were its previous custodians. It certainly devalues labor, which is a problem in itself, but it also devalues all of the frictionful, living, breathing parts of being an actual human being.
The tools are useful. I think software development has probably changed forever. But they’re not useful for everything, and they’re not going to change everything. Everything isn’t a database. And if we think the world becomes better if we turn everything into one, we probably weren’t all that excited about humanity to begin with.
A three horizons framework for government reform
Important analysis from Jennifer Pahlka, founder of Code for America, that is about government technology and services but could just as easily be about news and journalism.
She introduces the Three Horizons framework for thinking about change and building towards a shared vision of the future. Here, Horizon 1 is the status quo, Horizon 2 represents improvements to that system, and Horizon 3 represents an improved system rather than an optimized present.
There are four kinds of innovation: research, sustaining, breakthrough, and disruptive. The first two don’t lead us anywhere new on their own; they might provide extra capacity and create more headroom, but they aren’t systemic change. Any fundamental problems with the status quo probably won’t go away. In contrast, breakthrough innovation brings in fresh ideas to solve problems in a new way, and disruptive innovation creates new systemic models that serve people in new ways.
Jennifer’s point is that a lot of government reform work — including Code for America — has been sustaining or incremental at best, which has relieved some pressure but hasn’t really changed anything. The same problems persist.
Philanthropic funding has compounded the problem by funding that kind of innovation instead of more radical solutions. This, for me, is the key sentence in her piece:
“Funders need to ask not just whether an investment does good but whether it changes the conditions under which good can be done at scale.”
And there’s a finite window for more aggressive change. It has been opened by the AI shift, changes in the US government, the COVID-19 pandemic, and other upheavals that have highlighted how poorly our current system has adapted.
In government, that need has become rather obvious, but it’s true in news too — another key part of our civic framework. (And this is also true for social media!) These same factors apply, and philanthropic funding has been similarly risk-averse, aiming for sustaining innovation that builds capacity rather than changing how everything works to serve people better. The fundamentals aren’t changing and they haven’t been serving us. We need to think much more radically, and we need to fund much more radically.
In that framework, it’s incredibly important to articulate what the more radical futures we could work towards actually are. Jennifer points out that there are multiple, potentially contradictory, possible futures — the point is not to coalesce into one agreed-upon Horizon 3 end state, but to be able to describe where any current change might be leading to. Where is this taking us, and why?
Let’s allow ourselves to imagine something better. And then, let’s finally go there.
Why AI alone cannot fix social problems
From the AI is a tool for people and not a replacement for them dept:
“AI is often framed as a tool for efficiency, but efficiency alone does not strengthen public systems without the underlying capacity being improved. Even when tasks are completed faster, the deeper constraints of the system do not automatically disappear. In many cases, AI ends up addressing the symptoms of these problems rather than their causes.”
If an institution — or an industry — is declining, adding AI won’t magically make it better. In the cases that these Cornell researchers highlight in this piece, there were meaningful improvements only when the underlying systems were working well and the human infrastructure around the software was well-developed.
Even beyond the lack of support for some regional needs (languages, dialects, accents) that created issues here, these systems worked best when the software was designed to support existing well-functioning human systems. When the human systems didn’t work, when there wasn’t human support, or when people were expected to adapt their processes to the needs of the software, the projects weren’t successful.
It isn’t a magic wand. There are important lessons here for news and other declining industries: adding software doesn’t absolve you of figuring out your underlying problems, and it will not solve them for you. It might even paper over them and make them worse.
It’s just another tool. Invest in your people.
Matt Mullenweg says “the wheels have fallen off” in wide-ranging WordPress critique
I’m going to put my neck on the line on this story about Matt Mullenweg’s criticism of WordPress’s open source release culture:
“WordPress co-founder Matt Mullenweg has delivered a wide-ranging critique of the WordPress project, saying it has spent years doing damage to itself and calling out a release culture he says produces ‘boring or mediocre crap.’”
It goes on to describe Mullenweg’s frustrations with an open source culture that prevents anything from being released without a wide-ranging discussion that brings dozens of people into the thread.
“We are not being killed by competition, I believe we have done this to ourselves. We did it by blindly following rules and ideals to a point when they became iatrogenic. […] By definition the things that will give us the biggest wins will be the most non-consensus, so we have to accept the occasional failure or mistake otherwise we will never have any wins.”
So here’s my controversial statement in 2026: on these points, Matt Mullenweg is completely right.
This bureaucratic, consensus-driven culture has also been a blight on other large open source projects, for example at Mozilla. Contributions should be made quickly, and product design should be opinionated rather than consensus-driven. The more a project seeks consensus, the less able it is to innovate.
That doesn’t mean it should be a fiefdom or a dictatorship. Governance structures have been well-established by co-operatives and similar organizations that allow people to be elected into key roles; if they underperform, the voting base can support someone else. But it’s far better to put your trust in an architect — and achieve consensus about that trust — than it is to try and reach broad consensus about every change. Otherwise it’s not just that nobody wants to try bold new ideas; they literally can’t.
This is distinct from web standards, for example, which need a consensus basis to prevent a single vendor from dominating how interoperability works. For example, Mozilla’s objection to the web Prompt API that Google proposed is good; that’s how those systems should work. But for an individual software project, moving quickly and genuinely innovating are vital.
Dave Winer has another take: that WordPress should be more of a platform and allow different people to build opinionated interfaces on top of it. I think that makes a ton of sense too; in that world, WordPress can be an ecosystem monolith, and the opinionated innovation is left to smaller entrepreneurs. That, to be honest, might work a lot better.
Apple fixes bug that let FBI extract deleted Signal messages after 404 Media coverage
You may remember the story about the bug in Apple’s on-device notifications database that allowed the FBI to retrieve the content of Signal messages. It’s good to see that it was treated as a genuine bug — and fixed.
Signal announced the change on Bluesky:
“We are very happy that today Apple issued a patch and a security advisory. This comes following 404 Media reporting that the FBI accessed Signal message notification content via iOS despite the app being deleted.”
That’s good, because as the linked post notes, this had been actively used in court:
“They were able to capture these chats bc [because] of the way she had notifications set up on her phone—anytime a notification pops up on the lock screen, Apple stores it in the internal memory of the device.”
There’s no doubt in my mind that the widespread coverage and outrage over the issue helped encourage Apple to fix it quickly. I’m grateful for the journalism and glad it was resolved.