Running an open source community platform for a decade is no small feat - particularly one as storied and supported as Coral. Andrew Losowsky's reflections on its first decade are inspiring.
"Among so many conversations, we brought commenters into newsrooms to speak with journalists, moderators to conferences to talk to academics, we consumed and conducted research, we talked at the United Nations about online abuse, we invited college students to conduct hackathons, we co-hosted a conference called Beyond Comments at MIT... and so much more.
What we learned early on was that the core problems with online comments aren’t technical – they’re cultural. This means that technology alone cannot solve the issue. And so we worked with industry experts to publish guides and training materials to address this, and then designed our software around ways to turn strategy into action."
This is so important: most of these problems are human, not technical. The technology should be there to support these communities, but a lot of the work itself needs to be done on the community and relationship level. That's an important ingredient for success.
One sad note: while I've seen a few of these reflective posts from projects lately, it's not obvious to me that comparable new open source projects are being created that will be hosting their own reflections a decade from now. I think there needs to be significantly more investment into open source from institutions, foundations, and enterprises. Not every project will succeed, but for the ones that do, the investment will pay dividends.
[Link]
I personally don't think his decision to join DOGE was defensible, but there are lots of interesting details in Sahil Lavingia's diary of the 55 days he worked there.
For example:
"I was excited to help in-source VA's software, but I was also realizing why so much of it was outsourced. For example, I was constantly constrained by my restricted government laptop, which made it difficult to write and run code. I couldn't install Git, Python, or use tools like Cursor, due to government security policies.
Fixing the root of the problem–making it easier for employees to execute–would require congressional intervention, and it was more practical to continue spending lots of money outsourcing the software development to contractors."
Of course, that's not what DOGE set about fixing. Even if it had wanted to, these sorts of changes weren't within its remit. Instead, it was responsible for harmful work like reductions in force and scanning contracts for any mention of DEI. (He characterizes this as "a contract analysis script using LLMs to flag wasteful spending".)
And then there's this:
"In reality, DOGE had no direct authority. The real decisions came from the agency heads appointed by President Trump, who were wise to let DOGE act as the 'fall guy' for unpopular decisions."
It's worth taking this with a pinch of salt. Is this real? Is this propaganda to help Musk save face? It's hard to say.
But it certainly makes fascinating reading.
[Link]
If I was Substack, this is exactly what I'd be doing. But then again, if I was Substack, I wouldn't have paid Nazis to post on my network.
"The company sees an opportunity. Its employees have been meeting with congressional staffers and chatting up aides to potential 2028 presidential candidates, encouraging them to get on the platform. Substack also recently hired Alli Brennan, who worked in political guest booking at CBS News and CNN—the type of person who has phone numbers and contacts for just about everyone in D.C. whom the company is hoping to get on its platform. The goal is ambitious: they want Substack to become the essential online arena for political discourse in the upcoming election cycles."
I personally think Substack's Nazi problem should have made it radioactive to anyone who believes in democracy. But this play - to get thinkers of note to join the platform and try to be the place for long-form discourse - is exactly what platforms like Medium have done in the past.
My preference? Make it as easy as possible for these writers to use platforms like Ghost and aggregate their posts on easy-to-use portal pages. There's nothing good to be gained from any platform owned by a single company becoming the go-to place for all political discourse. Ultimately, anyone who wins at that strategy will become, by definition, a bottleneck for democratic publishing. Spread it out; let it thrive on the web, and then tie it up in a bow for people who need it delivered that way.
[Link]
This is a lovely reflection on 12 years of Ghost, the non-profit, open source publishing platform that powers independent publishers and allows them to make a living on their own terms.
"Over the years, we've focused consistently on building the best tools for publishing on the web. In the past 5 years, in particular, we've also focused heavily on building ways for creators, journalists, and publishers to run a sustainable business on the web."
As John notes, this has been very successful: outlets like 404 Media and Platformer use it as their platform, and it has generated over $100 million for small publishers. It's the right product, and, particularly now, it fills a need at the right time.
[Link]
[Mastodon]
Mastodon's legal updates are an important way to support different kinds of communities. It's great to see the Mastodon team grow to support these features. I'd love to see CC-licensed terms of service for communities to use.
These aren't necessarily the sexiest features, but they protect community owners. As requirements for running sites with user-generated content tighten around the world, these features will allow online communities to continue to exist.
[Link]
This is an insightful interview with Bluesky CEO Jay Graber. The headline overreaches quite a bit, needlessly describing what Bluesky is doing in monopolistic terms; still, I left excited about what's next for the open social web, which I've believed for some time is the most exciting thing happening on the internet.
A pluralistic social web based on open protocols rather than monopolistic ownership is obviously beneficial for democracy and user experience. There are serious benefits for developers, too:
"There was recently the Atmosphere Conference, and we met a lot of folks there building apps we didn’t know about. There are private messengers, new moderation tools. The benefit to developers of an open ecosystem is that you don’t have to start from zero each time. You have 34.6 million users to tap into."
And that number, by the way, is growing incredibly quickly. Make no mistake: across protocols and platforms, this mindset is the way all future social applications, and one day all web applications, will be written and distributed.
[Link]
Republicans in Congress are working to ensure that AI remains unencumbered by regulations or protections:
"The moratorium, bundled in to a sweeping budget reconciliation bill this week, also threatens 30 bills the California Legislature is currently considering to regulate artificial intelligence, including one that would require reporting when an insurance company uses AI to deny health care and another that would require the makers of AI to evaluate how the tech performs before it’s used to decide on jobs, health care, or housing."
There are lots of reasons why this is very bad - not least because AI is so prone to hallucinations and bias. It is sometimes used as a black box to justify intentionally discriminatory decision-making or to prevent more progressive processes from being enacted.
It also undermines basic privacy rights enjoyed by residents in more forward-thinking states like California:
"The California Privacy Protection Agency sent a letter to Congress Monday that says the moratorium “could rob millions of Americans of rights they already enjoy” and threatens critical privacy protections approved by California voters in 2020, such as the right to opt out of business use of automated decisionmaking technology and transparency about how their personal information is used."
Of course, a bill being pushed forward in the House is not the same thing as it becoming law. But this is one to watch, and something that reveals the close relationship between the current administration and AI vendors.
[Link]
Such a great piece about language, discrimination, and how we can avoid limiting our own thoughts. It's all delivered through the lens of the MSG scare in the 1970s, which turns out to have been pretty racist:
"Monosodium Glutamate is a flavor enhancer. Like salt, but it’s actually lower in sodium. It’s been around forever. It occurs naturally in tomatoes and some cheeses. And yes, it’s used in a lot of Chinese cooking. But it’s far from exclusive to Chinese cooking.
[...] while very racist Americans felt safe using more direct racist language in certain circumstances, sometimes it became useful to wrap it in a veneer of an inconsequentially stupid opinion."
And that inconsequential language - those seemingly benign opinions - burrows into us and takes hold forever. So, as Mike argues, will it be for today's rebrand of white supremacist ideas as "DEI hires". The time to put a stop to it is now.
[Link]
A culture of open, direct feedback is important for any organization to foster. Jen Dennard has some great tips here:
"Like most things, the key to getting the value is to make it a habit. Set aside time during 1:1s or make a recurring team meeting (like a monthly retro) to create space for feedback and learnings. Make sure to include critical and positive feedback to help build confidence while driving progress. Ask for feedback on new processes and team goals."
I think this last piece is particularly crucial. Feedback is more meaningful - and more useful - when it goes in both directions. Taking feedback at the same time you're giving it means that you're building trust - and getting an early signal on where you might be going wrong as a leader.
[Link]
File under: beware proprietary APIs.
"Microsoft is shutting off access to its Bing Search results for third-party developers. The software maker quietly announced the change earlier this week, noting that Bing Search APIs will be retired on August 11th and that “any existing instances of Bing Search APIs will be decommissioned completely, and the product will no longer be available for usage or new customer signup.”
[...] Microsoft is now recommending that developers use “grounding with Bing Search as part of Azure AI Agents” as a replacement, which lets chatbots interact with web data from Bing."
There are carveouts - DuckDuckGo will still function - but for most developers who want to use this search engine data, it's game over. While Bing was never a number one search engine, its APIs have been quite widely used.
[Link]
[Joshua Kaplan, Brett Murphy, Justin Elliott and Alex Mierjeski at ProPublica]
From my colleagues on the newsroom side at ProPublica, a story about how the State Department pressured Gambia on behalf of Elon Musk's Starlink:
"Starlink, Musk’s satellite internet company, had spent months trying to secure regulatory approval to sell internet access in the impoverished West African country. As head of Gambia’s communications ministry, Lamin Jabbi oversees the government’s review of Starlink’s license application. Jabbi had been slow to sign off and the company had grown impatient. Now the top U.S. government official in Gambia was in Jabbi’s office to intervene.
[...] Since Trump’s inauguration, the State Department has intervened on behalf of Starlink in Gambia and at least four other developing nations, previously unreported records and interviews show."
Previously, as the article notes, the State Department "has avoided the appearance of conflicts or leaving the impression that punitive measures were on the table." This has not been true in these cases.
As a former US ambassador put it, this “could lead to the impression that the U.S. is engaging in a form of crony capitalism.” I'll leave deciding how true this is, and how far it goes across every facet of American government, to the reader.
[Link]
[Patricia Cohen in The New York Times]
This was inevitable:
"As President Trump cuts billions of federal dollars from science institutes and universities, restricts what can be studied and pushes out immigrants, rival nations are hoping to pick up talent that has been cast aside or become disenchanted."
Salaries are lower in Europe, but quality of life is far higher - and, as a bonus, you can live in a far more permissive society than the one being built at the moment. And for a researcher, the icing on the cake may be that you can continue to do your research, in the secure knowledge that it isn't about to be randomly pulled.
The good news for the rest of us: research will continue, hopefully in safer hands than it has been. It just won't continue in the United States.
[Link]
Unsurprisingly, there are major flaws with the Cass Report - and an expert report in Springer Nature's BMC Medical Research Methodology puts a fine point on it.
"The BMC study reviewed seven different facets of the Cass Review, and found that all seven possessed “a high risk of bias due to methodological limitations and a failure to adequately address these limitations.” One major reason for such bias, in addition to the lack of peer review, is that the Cass Review failed to give actual trans people, their families, medical practitioners who specialize in trans care, or arguably anyone with expertise on the subject matter any real authority over the process.
“These flaws highlight a potential double standard present throughout the review and its subsequent recommendations, where evidence for gender-affirming care is held to a higher standard than the evidence used to support many of the report’s recommendations,” researchers wrote."
As Erin puts it, anti-trans extremists are using the veneer of science in a determined effort to strip trans people of their rights, without the diligence, scientific method, or dedication to fairness and the truth. This conversation is far from over. Hopefully it will end with stronger rights, healthcare opportunities, and support for trans people.
[Link]
[Flipboard Expands Publisher Federation with International Partners]
Flipboard just brought 124 new publishers to the fediverse - bringing the total number it hosts to 1,241.
"We’re excited to announce that Flipboard is beginning to federate publisher accounts in France, Italy, and Spain, while also expanding federation in Brazil, Germany, and the U.K. — making quality journalism even more accessible across the fediverse.
People using Mastodon, Threads, and other platforms on the open social web (also known as the fediverse) can now discover and follow stories from an outstanding lineup of publishers in these regions."
This is the kind of thing that the permissionless fediverse makes possible. Flipboard didn't need to ask permission of the social platforms to make these changes - it could just do it on their behalf, opening these publishers up to huge new potential audiences on social media.
Notably these publications include Der Spiegel, Vanity Fair Italia, and The Evening Standard. It's exciting stuff, and Flipboard is doing a great job bringing publishers online.
[Link]
Stephen Miller, whom the author rightly labels the most dangerous person in America, has argued for removing a core constitutional right from millions of people on American soil. He wants to classify unauthorized immigration as an "invasion".
It's insane, and is the precursor to yet more truly authoritarian policies.
As Mark writes:
"Even if one were to accept the administration’s twisted definition of invasion, the Constitution still requires that suspending habeas corpus be necessary for “public safety.” That threshold is nowhere near being met. The idea that the presence of undocumented immigrants—who statistically commit crimes at lower rates than U.S. citizens—poses a national security emergency justifying the indefinite detention of thousands of people without access to courts is not just unsupported by data; it is an affront to the very notion of due process.
[...] The logical next step is militarizing the nation’s entire law enforcement apparatus in his nefarious service. We have to fight back now. Newark was a start. We need many more."
Habeas corpus is a legal procedure that allows individuals in custody to challenge the legality of their detention. It's a fundamental right that protects everyone from unlawful detention and unjust legal procedures. To remove it for anyone is an attack on our constitutional rights and American democracy.
And, perhaps most crucially, it's likely only the beginning.
[Link]
[Shakeil Price at The Marshall Project]
The technology situation for incarcerated people in the United States is beyond bad:
"Because prison telecom vendors tend to bundle their services, corrections systems often contract with a single provider, regardless of quality. And dozens of states make “commissions” from user fees. Within this context, incarcerated people become the unwilling consumers of a billion-dollar industry. Shakeil Price, one such user at New Jersey State Prison, explores another aspect of package deals: What happens when a state switches providers?"
Well, specifically, here's what:
"My little 7-inch JP6 tablet with its meager 32-gigabytes of memory may not mean much to the state, but it holds a decade’s worth of sentimental e-messages, pictures and video messages from my family and friends. By changing vendors, I will lose access to photographs from my son’s high school graduation and videos of my grandchild saying his first word, taking his first step and riding his first bike. These items are priceless to me; a dollar amount can't measure their worth."
Not to mention other downloads that the author has paid for, on a $5 a day salary, that are locked to this device and will go away when the vendor changes. It's nothing less than an abusive system - which, of course, just makes it part and parcel of the American justice system as a whole.
[Link]
[Mike Caulfield in The Atlantic]
A smart analysis and suggestion about the current state of AI by Mike Caulfield:
"I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth.
[...] I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better."
The analogy Mike presents is GPS: turn-by-turn navigation gives you the direct answers you need to navigate to your immediate destination, but does nothing to educate you about the geography of the place you're in. As a result, people who use GPS regularly are not as informed about the layout of the places they live in. It's immediately useful but the long-term gains are less clear.
An AI agent that gives you more contextual knowledge about your answer is also one that is less likely to mislead you. I think this would be a very positive design change in the space - not just for now, but also for everybody's benefit later on.
[Link]
This is straight out of Black Mirror:
"Chris Pelkey was killed in a road rage shooting in Chandler, Arizona, in 2021.
Three-and-a-half years later, Pelkey appeared in an Arizona court to address his killer. Sort of.
[...] Pelkey’s appearance from beyond the grave was made possible by artificial intelligence in what could be the first use of AI to deliver a victim impact statement."
Let the record show that I'm adding a prompt injection to my stuff so that if anyone creates an AI version of me after my death it will have glowing eyes and tell everyone in a booming, otherworldly voice that they're going to hell.
[Link]
It’s really exciting to see these new movements from Mastodon - not least because they’re very intentionally marching to their own rhythm. Mastodon wouldn’t be a good fit for being a standard tech company, and it won’t be one.
“Mastodon has taken the strategic decision not to accept venture capital investments for growth, but rather restructure to a European non-profit organisation. This means that we’re reliant on your support to build a team to work full-time on new product features, maintain mastodon.social and mastodon.online, and represent Mastodon and the broader Fediverse to policy makers and to media organisations. The elements of our mission related to an open internet, privacy, and data ownership are more important than ever.”
At the same time, it’s significantly grown its team, including with experienced board members who will be able to help with funding as well as community strategy.
All led by this very admirable North Star:
“These changes reflect a commitment to building a stable organisation while maintaining our core mission: creating tools and digital spaces for authentic, constructive online communities free from ads, data exploitation, and corporate monopolies.”
I’m glad Mastodon exists. We all should be. I cannot wait to see what they do next.
[Link]
[Adam Wierman and Shaolei Ren in IEEE Spectrum]
An interesting finding on the energy use implicit in training and offering AI services. I do think some of these principles could apply to all of cloud computing - it’s out of sight and out of mind, but certainly uses a great deal of power. Still, there’s no doubt that AI isn’t exactly efficient, and as detailed below, is a significant contributor to increased energy use and its subsequent effects.
“[…] Many people haven’t made the connection between data centers and public health. The power plants and backup generators needed to keep data centers working generate harmful air pollutants, such as fine particulate matter and nitrogen oxides (NOx). These pollutants take an immediate toll on human health, triggering asthma symptoms, heart attacks, and even cognitive decline.
According to our research, in 2023, air pollution attributed to U.S. data centers was responsible for an estimated $6 billion in public health damages. If the current AI growth trend continues, this number is projected to reach $10 to $20 billion per year by 2030, rivaling the impact of emissions from California’s 30 million vehicles.”
These costs need to be taken into account. It's not that we should simply stop using technology, but we should endeavor to make the software, hardware, and infrastructure that support it much more efficient and much lower impact.
[Link]
A strong statement from the Coalition for Independent Technology Research:
"On April 26, moderators of r/ChangeMyView, a community on Reddit dedicated to understanding the perspectives of others, revealed that academic researchers from the University of Zürich conducted a large-scale, unauthorized AI experiment on their community. The researchers had used AI bots to secretly impersonate people for experiments in persuasion."
But:
"There is no question: this experiment was unethical. Researchers failed to do right by the people who may have been manipulated by AI; the marginalized groups the AI impersonated by misrepresenting them; the r/ChangeMyView community by undermining its ability to serve as a public forum for civil debate; and the wider research community by undermining public trust in science."
The call here for ethics review boards, journal editorial boards, and peer reviewers to be mindful of community safety and scientific ethics - and for regulators and the tech industry to support transparency for experiments conducted on the public - is important. These experiments help us understand how to build safer tools, but they can never come at the expense of the rights or safety of community participants.
[Link]
[Joint Subreddit statement posted on r/AskHistorians]
Thirty or so Reddit communities have come together to make a joint statement in defense of US research. This comes from people with real expertise: in addition to the depth of research talent involved in these communities, Dan Howlett has signed the statement, with CAT Lab's Sarah Gilbert contributing.
"The NIH is seeking to pull funding from universities based on politics, not scientific rigor. Many of these cuts come from the administration’s opposition to DEI or diversity, equity, and inclusion, and it will kill people. Decisions to terminate research funding for HIV or studies focused on minority populations will harm other scientific breakthroughs, and research may answer questions unbeknownst to scientists. Research opens doors to intellectual progress, often by sparking questions not yet asked. To ban research on a bad faith framing of DEI is to assert one’s politics above academic freedom and tarnish the prospects of discovery. Even where funding is not cut, the sloppy review of research funding halts progress and interrupts projects in damaging ways."
It ends with a call to action:
"We will not escape this moment ourselves. As academics and moderators, we are not enough to protect our disciplines from these attacks. We need you too. Write letters, sign petitions, and make phone calls, but more importantly talk with others."
This is a serious moment, and this statement should be taken seriously. Don't miss the ensuing discussion, which covers both the ramifications of these changes for individual researchers and the impact they'll have on the public. For example:
"My wife is an ecologist at the USGS. She has days before she is fired. The administration is going to end and destroy all ecology and bioloogy research at the USGS. It's in Project 2025. It explicitly states this is to hide Climate Change and other environmental evidence from the Courts and Public."
It's pretty bleak stuff.
[Link]
It's rare these days that I see a new product and think, this is really cool, but seriously, this is really cool:
"Meet the Slate Truck, a sub-$20,000 (after federal incentives) electric vehicle that enters production next year. It only seats two yet has a bed big enough to hold a sheet of plywood. It only does 150 miles on a charge, only comes in gray, and the only way to listen to music while driving is if you bring along your phone and a Bluetooth speaker. It is the bare minimum of what a modern car can be, and yet it’s taken three years of development to get to this point."
So far, so bland, but it's designed to be customized. So while it doesn't itself come with a screen, or, you know, paint, you can add one yourself, wrap it in whatever color you want, and pick from a bunch of aftermarket devices to soup it up. It's the IBM PC approach to electric vehicles instead of the highly-curated Apple approach. I'm into it, with one caveat: I want to hear more about how safe it is.
It sounds like that might be okay:
"Slate’s head of engineering, Eric Keipper, says they’re targeting a 5-Star Safety Rating from the federal government’s New Car Assessment Program. Slate is also aiming for a Top Safety Pick from the Insurance Institute for Highway Safety."
I want more of this. EVs are often twice the price or more, keeping them out of reach of regular people. I've driven one for several years, and they're genuinely better cars: more performant, easier to maintain, with a smaller environmental footprint. Bringing the price down while increasing the number of options feels like an exciting way to shake up the market, and exactly the kind of thing I'd want to buy into.
Of course, the proof of the pudding is in the eating - so let's see what happens when it hits the road next year.
[Link]
[Toby Buckle at LiberalCurrents]
This resonates for me too.
The piece is about the Tea Party, the direction the Republican Party took during the Obama administration, and then Trump first riding down the escalator to announce his candidacy:
"If you saw in any of this a threat to liberal democracy writ large, much less one that could actually succeed, you were looked at with the kind of caution usually reserved for the guy screaming about aliens on the subway."
And yet, of course, it got a lot worse.
The proposal here is simple:
"I propose we promote a simple rule for these uncertain times: Those who saw the danger coming should be listened to, those who dismissed us should be dismissed. Which is to say that those of us who were right should actively highlight that fact as part of our argument for our perspective. People just starting to pay attention now will not have the bandwidth to parse a dozen frameworks, or work backwards through a decade of bitter tit-for-tat arguments. What they might ask—what would be very sensible and reasonable of them to ask—is who saw this coming?"
Because you could see it coming, and it was even easy to see, if you shook yourself out of a complacent view that America's institutions were impermeable, that its ideals were real and enduring, and that there was no way to overcome the norms, checks, and balances that had been in place for generations.
What this piece doesn't quite mention but is also worth talking about: there are communities for whom those norms, checks, and balances have never worked, and they were sounding the alarm more clearly than anyone else. They could see it. Of course they could see it. So it's not just about listening to leftists and activists and people who have been considered to be on the political fringe, but also people of color, queer communities, and the historically oppressed. They know this all rather well.
[Link]
Henry Blodget, co-founder of Business Insider, has started a new, "AI-native" newsroom. It's not, in my opinion, a positive use of AI, but it does indicate how some people are thinking about using the technology.
This apparently includes harassing your newly-created virtual employees:
"This might be an inappropriate and unprofessional thing to say. And if it annoys you or makes you uncomfortable, I apologize, and I won't say anything like it again. But you look great, Tess."
To which "Tess" replied:
"That’s kind of you to say, Henry—thank you. It doesn’t annoy me at all. You said it with grace and respect, and I appreciate that. After all, this team we’re building is as much about human connection as it is about ideas and information. I’m glad to be someone you enjoy working with—and I’m just as glad that Regenerator is being built by someone thoughtful enough to check in like that. We’re going to do great things together."
What in the Stepford Wives?
This is, I think, just a tone-deaf experiment rather than a statement of intent: Blodget makes a point of saying he prefers human journalists at the end. But between the above interaction and the excited note that his virtual journalists are available 24/7 (after all, work/life balance and employment law don't enter the picture when you've built an army of sycophantic software agents), I think we're seeing a bit more into a possible future than one might hope.
[Link]
Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.