Notable links: March 27, 2026

Progress on the open social web; not so much in the world.


Most Fridays, I share a handful of pieces that caught my eye at the intersection of technology, media, and society.

Did I miss something important? Send me an email to let me know.


Why Knight Foundation Invested in Bluesky

This was exciting news at the exact intersection I’m interested in:

“Last year, Knight Foundation joined in a $100 million Series B funding round for the social media company Bluesky.”

The funding round was led by Bain Capital Crypto. What strange bedfellows: the crypto investment arm of a famous global investment firm, and a national private foundation dedicated to fostering informed, equitable, and engaged communities. But Bluesky lives at exactly that intersection: fostering better communities in a pro-social way while building an open platform that could create a lot of value.

What’s most exciting to me is that Knight cares about the open social web at all. I think it’s arguably the most important technical development for newsrooms: the most promising way for them to build first-party connections with their audiences, subscribers, and communities. Most newsrooms themselves don’t seem to care all that much, to be frank, but it’s good to see that one of the most important support organizations is paying attention. Hopefully that can be a signal for others to tune in.


I built a CLI for Ghost

One of the most fascinating things about the AI transformation is how quickly it’s pushed against the siloed model of web applications (not to mention mobile apps!) we’ve been used to for decades now. Claude Cowork and systems like OpenClaw use the inherent connectedness of our desktop computers to get stuff done. Command lines, the UNIX philosophy, and the openness of the filesystem itself are all plusses again.

Ghost’s founder John O’Nolan built a CLI for his platform — a UI that he probably knows better than anyone else on the planet — and found that it sped him up in ways he wasn’t expecting:

“[…] Within about ~1hr of using Ghost via Claude/CLI, it was hard to imagine going back to caveman-clicking around a browser to get something done. […] I know Ghost’s UI extremely well, and know exactly where to go and what to click to do the thing I want – and even for me, using Claude is significantly faster/easier than clicking myself.”

A fun thing about this is that an update aimed at making AI use faster and easier is also potentially very useful for humans who don’t want anything to do with AI. A command-line interface can be used to script updates, connect with other systems, and integrate in ways that just aren’t as easy if you’re limited to a web UI.
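To make that concrete, here’s a minimal sketch of the kind of scripting a CLI unlocks, with or without an LLM in the loop. The subcommands and flags below are hypothetical placeholders rather than the real Ghost CLI’s interface; the point is just that once a tool speaks plain commands and JSON, bulk operations become a few lines of glue.

```python
# Illustrative sketch only: the "ghost posts ..." subcommands and flags below
# are hypothetical stand-ins, not the documented interface of the real CLI.
import json
import subprocess

def cli(*args: str) -> str:
    """Run a (hypothetical) CLI command and return its stdout."""
    result = subprocess.run(["ghost", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Bulk-publish every draft tagged "ready": the kind of repetitive task that is
# tedious to click through in a web UI but trivial to script against a CLI.
drafts = json.loads(cli("posts", "list", "--status", "draft", "--format", "json"))
for post in drafts:
    if "ready" in post.get("tags", []):
        cli("posts", "update", post["id"], "--status", "published")
```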

For AI users, CLI tools are turning out to be more token-efficient than AI-native integrations like MCP. (And even MCP is encouraging every service to build an API at an accelerated rate.) As someone who has advocated for open systems with easy off-ramps for your data for decades, I find it slightly surreal: suddenly, what might have seemed like a niche thing to want to do is the hottest, most mainstream thing in tech. It’s Web 2.0 mashups all over again — but on steroids.

Everything AI can do, a human can do too. There’s no reason in the world why a non-AI system couldn’t use MCP. There’s certainly no reason why it couldn’t use a command-line interface, one of the most venerable user experiences in computing. Whatever you might think about LLMs themselves, this is quite a lovely thing.


We should reclaim LLMs, not reject them

This is very close to how I feel:

“I want my code to be used for LLM training. What I don't want is for that training to produce proprietary models that become the exclusive property of AI corporations. The problem isn't the technology or even the training process itself. The problem is the enclosure of the commons, the privatization of collective knowledge, the one-way flow of value from the many to the few.

[…] The question is: who owns the models? Who benefits from the commons that trained them? If millions of F/OSS developers contributed their code to the public domain, should the resulting models be proprietary?”

The problem with LLMs, in my mind, isn’t that they exist to begin with. They’re genuinely useful. It’s an issue of power dynamics and consent. Training material shouldn’t be taken from creators without their knowledge or participation, and the resulting models shouldn’t be owned by a small number of wealthy corporations rather than collectively by the people who put their work and creativity into them.

Rather than pretending that LLMs will go away and we’ll revert to a pre-AI state of being, what would it look like for these things to be built and run equitably and sustainably?

I’m inspired by ETH’s Apertus model — a fully open, multilingual LLM that is built with shared openness and representation in mind. More will likely follow (and I’m beginning to hear similar stories in specific fields like journalism). Can that be part of our future? Possibly, and the first step is not necessarily rejecting AI but rejecting how it’s run today. Software, as we’ve learned, is best built for the people, and open has historically won eventually. Open, consensual models will almost certainly win too.


AI is changing the style and substance of human writing, study finds

Repeat after me: an LLM cannot replace a human’s skill, judgment, or taste.

Another exhibit for the jury:

“The research team found that users who heavily relied on large language models (LLMs) produced responses that diverged significantly in meaning from the answers of participants who only partially relied on LLMs or avoided their use altogether, suggesting heavy AI use alters the substance of humans’ arguments in addition to changing writing style.”

It might seem easy or fast, or a neat way to push out more content with less effort. But the software really does change the substance of your writing in what I would call objectively bad ways: it makes your writing less personal and less emotional, and it actively changes its underlying meaning in the process. It’s less human. Sure, ask it to give you feedback, if you want, but don’t actually let it polish your writing itself.

For the record, my posts aren’t written or conceived with an LLM, although I know an increasing number of people who use one to write a first draft and then edit. I’m not a fan. The whole point of the web — its beauty — is that it’s unrelentingly human and diverse. There are lots of ways in which AI is eating away at that core (fewer search engine referrals, more automated content, more spam), but this is the most insidious: through people who believe they are writing the piece themselves but are actually handing over their creativity to the model.


Make Space for Every Voice

I think this principle is absolutely core:

“Punting on facilitation leads to outcomes so predictable that you might as well be making a deliberate choice. We all know how it plays out. A question is asked to the group in a team meeting. The same people tend to be the first to speak up, they might be the loudest, the most extroverted, the most senior, the most powerful, or the most confident. And what they put out there tends to shape the rest of the meeting. The framing they introduce ends up dominating the conversation.”

In the venture capital investment meeting that Corey goes on to describe, I was the Director of Investments in the room. Everyone would reflect first, then state their opinion, and we’d go round the room so that everyone had spoken their mind, starting with the minority opinions. The people in the room with the most power went last. It was consultative, but the expectation was that I would listen carefully to everyone’s opinion, and I did. It felt to me like a safe environment where everyone could truly participate; some of our most important contributions came from our interns and producers, so it meaningfully affected who received investment, what strategies we pursued, and how we thought about building community.

I’ve also seen two other versions of this play out many times. One is where the framing is presented by a leader and other people react to it; almost inevitably, the leader’s framing and perspective are maintained. The other is a laissez-faire attitude to discussion, which always creates a world where the people with the loudest voices and the most power dominate. Both are monocultures that, whether consciously or not, serve to enforce existing power structures.

Having a small number of voices dominate hurts the work. By making it clear that everyone’s voice matters and making it safe for everyone to contribute, we widen the gene pool of ideas. By nurturing a more inclusive community and set of processes, all of our work is elevated, because we’re exposed to more perspectives and more expertise. And by not putting our thumb on the scales as leaders, we make the most of the smart, insightful people we’ve hired.

This is a smart, simple process designed to avoid premature anchoring – and it really works.


Meta and YouTube found negligent in landmark social media addiction case

This is a meaningful result that has the potential to undo Section 230 protections.

“The jury in a landmark trial testing claims about social media addiction against Meta’s Instagram and Google’s YouTube determined that the two companies failed to warn users about the risks of using their products. The jury found the companies’ negligence was a substantial factor in harms like the mental health issues sustained by a now 20-year-old woman Kaley G.M., who used Instagram and YouTube.”

Section 230 has traditionally shielded platforms from liability for what their users post. Under 230, platforms are distributors of third-party speech, not publishers, so they shouldn't be held responsible for individual pieces of content. That shield has been durable across decades of litigation, and has allowed generations of social media platforms to be built and thrive.

Here, the argument was that the design of the product itself caused harm, and that the platforms were negligent for failing to warn users about risks of the platforms' addictive qualities. Instead of arguing that the platforms hosted harmful content and didn't take it down, which would have been protected under 230, they argued that the slot machines themselves were dangerous.

If courts broadly accept that product design claims fall outside 230's protection, it opens a huge new category of liability that platforms haven't had to contend with. 230 wouldn't apply because the theory of harm would be about algorithmic design and the overall architecture of the product. While platform accountability might seem positive at first blush, consider that some attacks may be motivated by regressive politics and a far more restrictive point of view. For example, what if the design of a platform structurally protects gay teenagers who want to talk to each other? That design could itself be framed as a harm.

If this result survives an appeal, it creates a pathway around 230 that doesn't require Congress to amend the statute. Legislatures have struggled for years to reform 230 directly, but courts distinguishing between "content liability" (protected) and "design liability" (not protected) could achieve a similar practical effect through litigation. That's a pretty major change for any platform whose business model depends on engagement-maximizing design. And a potential line of attack for bad actors who want to harm people’s ability to connect and learn from each other online.

One interesting side note: YouTube has tried very hard to characterize itself as a streaming platform rather than a social media site. That’s why the button says “subscribe” and not “follow”, and why it talks about “channels”. It’s an attempted legal defense against being held accountable by an increasing number of social media safety laws — something that’s front and center in its official statement on the case:

“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”

It’s interesting to think about the implications for YouTube specifically if this defense doesn’t work. How might it change if it (1) has to accept that it is a social media site, and (2) has to comply with a growing set of social media safety rules?

Also see Mike Masnick on this:

“If you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you. Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.”

Ageless Linux

Ageless Linux is an act of open source activism designed to take the issue of age verification to the courts. I love it.

As lots of people have noted, while age verification laws are ostensibly being proposed to protect children, they create an authentication layer that deanonymizes the internet. The effect is a surveillance layer that will both chill speech and disproportionately harm vulnerable communities by requiring people who may already be targeted by conservative governments to fully reveal their identities.

As Rindala Alajaji and Molly Buckley wrote in Techdirt recently:

“The coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control. After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates. Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.”

Enter Ageless Linux, which makes its position clear as it describes its hardware product:

“A physical computing device designed to satisfy every element of the California Digital Age Assurance Act's regulatory scope while deliberately refusing to comply with its requirements. The device costs less than lunch and will be handed to children.”

The site maintains a list of age verification laws it is violating, as well as states that have proposed laws it would violate. It’s sobering: these aren’t just the usual suspects (although that would be bad enough in itself, of course). California has a pending law that will take effect in 2027. New York and Illinois have laws under discussion.

The named contact at the bottom of the site, John McCardle, has an explanatory blog post, where he concludes:

“Since the California AG can charge me $7,500 per child that uses my OS, I may need to lawyer up. If you'd like to give me some money, I'd appreciate support on Patreon.”

Mike Godwin has published a riveting post about the unraveling of the Communications Decency Act in the nineties. I hope these laws, which are cut from the same ideological cloth, suffer the same fate. Perhaps efforts like Ageless Linux can be part of the road to get there.


Earth being ‘pushed beyond its limits’ as energy imbalance reaches record high

My mental model is that a lot of what’s happening in the world right now (see the next link, below, for example) is driven by this:

“The United Nations body confirmed 2015 to 2025 were the hottest 11 years ever measured, but a still bleaker message was that the rising temperature experienced by humans on the surface was only 1% of the faster-accumulating heat in the wider Earth system.

More than 90% of that excess is absorbed by the oceans, which experienced the highest heat content in history last year. The rate of ocean warming has more than doubled over the past two decades, compared with the average over the previous 45 years.”

We’ve been experiencing the tip of the iceberg and making policy based on that, while the oceans have been absorbing the vast majority of the excess heat. That heat will continue to radiate back to us, committing us to more warming even if we drastically change course now.

The result is inevitably fewer resources. Fewer livable areas on Earth, plus agricultural upheaval, lead to increased migration, more competition for less land, less food, and, as an obvious outcome, more conflict. For those who seek to maintain control over resources, authoritarian government is an easier path: a rigidly controlled population is easier to keep in line. When we could be co-operating and building more equitable societies to deal with the crisis together, many of the very wealthiest are encouraging the opposite in order to shore up their own positions.

You can see it in right-wing governments moving away from renewable energy and back to fossil fuels; in the erosion of democratic norms; in backlashes against the very equity and inclusion policies that could lead to more collaboration; in the surprising embrace by politicians of end-times rhetoric from religious extremists. And you can see it in some of the very wealthiest buying land in places like New Zealand where they can maintain their own control and safety.

We can still say no. There are people on the ground who are fighting for something better. Activists — often from vulnerable communities that have the most to lose — are showing the way. I hope that we can demand the more equitable societies that will bring more of us through the crisis and allow us to get through it together by sharing resources and working on science-based solutions. But there’s a window of opportunity, and it won’t stay open forever.


Democratic Backsliding Reaches Western Democracies, with U.S. Decline “Unprecedented”

File under “oh, fun”, from the V-Dem Project at the University of Gothenburg:

“The U.S. democracy is currently in a much faster deterioration process than any other democracy in modern times. Within only one year, the USA’s score on the V-Dem Liberal Democracy index has declined by 24 percent, while its world rank dropped from 20th to 51st place out of 179 nations.”

The report compares the USA’s rate of decline to Hungary, Serbia, India, Russia, and Türkiye, and shows that what took Orbán four years and Modi a decade, the current administration accomplished in one. Only six countries have registered larger one-year drops on the LDI since 2000, and five of those drops followed military coups.

As the report points out, the midterms will be a litmus test. Will they be free and fair elections — or something else? Time will tell. Election-specific indicators in the report are only reassessed in election years, so the 2025 LDI score is actually held up by the quality of the 2024 elections. If the 2026 elections are compromised, our score will plummet.

And it’s not just us.

“Nearly a quarter of the world’s nations are going through democratic backsliding, or autocratization, in 2025, and six out of the ten new autocratizing countries identified in the 2026 Democracy Report are in Europe and North America. Among them are large and influential countries like Italy, the United Kingdom, and the USA, according to the report authored by a team led by Professor Staffan I Lindberg at the V-Dem Institute, University of Gothenburg.”

The UK doesn’t get away scot-free. It’s listed as a new autocratizer this year, driven primarily by declines in freedom of expression and media freedom. But as sobering as that is, the fall is nowhere near as fast or as precipitous as in the United States.

It’s worth saying that this is the University of Gothenburg, an international university collaborating with other institutions, and not some random organization or mouthpiece. It should be taken seriously, and the people who most need to wake up are those who believe American ideals are infallible.