Notable links: April 17, 2026
How AI is affecting thinking and distribution – and why relationships are still at the heart of great teams.
Most Fridays, I share a handful of pieces that caught my eye at the intersection of technology, media, and society.
Did I miss something important? Send me an email to let me know.
"Cognitive surrender" leads AI users to abandon logical thinking, research finds
I’m tired. Everyone’s tired. There are so many demands being made of us constantly that the output from an AI chatbot can seem like a godsend: rather than buckling down and doing yet more work, the machine can shortcut that for us.
Not so fast:
“Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this “demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism.” In general, “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.”
There are no shortcuts to doing great work, but if AI is used in this pressure-driven way, it becomes little more than a shortcut machine: a way to get to the end goal faster without really scrutinizing the thinking it took to get there. It’s no wonder that AI users didn’t examine the answers they were given; in a world where AI allows people to be saddled with more tasks, they might not have had the time to do anything else. Good enough; onto the next thing. Most people don’t want to cut corners, but under adverse circumstances, they will.
It may also be that they were rote learners, less adept at identifying the principles behind a solution. The people who bucked this trend were the ones who scored highly on “fluid reasoning” tests. I have to admit this was new to me, but people strong in fluid reasoning are better able to find the underlying principles and links between topics and ideas in order to solve problems. The better people were at abstract thinking, the more likely they were to question outputs from the AI.
That makes some sense to me. AI can’t reason particularly well: it outputs convincing-sounding responses, but the underlying principles behind them aren’t necessarily fully formed. If you’re used to accepting something that merely looks right, perhaps because you’ve been taught to memorize rather than understand, it’s harder to discern whether this kind of superficially intelligible, highly confident answer is actually right. If you scratch the surface and try to understand the underlying logic, that’s when it becomes clearer that the LLM doesn’t know what it’s talking about.
Managers who salivate over using AI to increase a team’s workload and productivity should consider this effect: the more you press people to use these systems, the more they might accept faulty reasoning from them. Hiring abstract thinkers — the people who are more likely to rise to become senior engineers — will help, but you also need to give people the space, permission, and expectation to think for themselves.
The bottleneck shifts to distribution
This definitely gave me pause. In a world where vastly more people can write code, when even GitHub is struggling to keep up with the ballooning number of codebases out there, it’s going to become increasingly difficult to get recognition for your work.
“This is what it takes for your free and open source project to be recognized in 2026: you must secure the endorsement of legendary actress Milla Jovovich. You know, like a celebrity vodka.”
I kicked against this — who says Milla Jovovich wasn’t a first-class contributor? The fundamentals of WiFi were created by Hedy Lamarr — but it’s true that the commits are mostly assigned to Sigman, the CEO of Bitcoin Libre. She is credited as architect, he as engineer, together with a contributor called Lu.
Regardless, it’s obvious that attaching her name to the project has drawn it more attention, and that this is a product that could result in a real financial outcome for both her and Sigman. I’m left feeling really glad that I released my first big open source project 22 years ago, when LLMs were an impossibility and big names didn’t attach themselves to open source. I was able to build a community with the funding equivalent of a can of Coke and a packet of crisps; if I’d been competing against Hollywood celebrities, I would have had no chance at all.
But I don’t quite agree with the thesis. Whether you’re famous or not, the way to get a following for your code is to solve a real problem better than anyone else. It’s true that distribution platforms can be kingmakers, but starting small by building real relationships with people you’re trying to help in ways that don’t scale is still a good way to get off the ground. That means building something genuinely differentiated rather than something that’s a few degrees off from what everyone else is doing. For small players with no networks and no names, that’s always been the best way to start, and I think it likely still is.
You Own Your Role, We Own The Outcome
This and its predecessor, One Consultative Decision Maker Per Lane, go beyond sound management advice: together they’re almost a manifesto for how management should work.
If people in your team stick to their lanes entirely, a lot can go wrong:
“The gaps between those lanes become the source of risk, and without a shared sense of ownership, those gaps go unaddressed until it is too late.
No one dropped the ball. But the ball fell between them.”
As Corey points out, roles and decision rights do matter a lot. If you don’t empower people to make real decisions in their respective lanes of responsibility and expertise, your team will grind to a halt (and, if you’re ultimately in charge, everyone will resent you). I’ve been on those teams and it’s always counterproductive; often it’s because there’s someone who wants to make all the decisions. By undercutting people’s decisions, that person ends up undermining the team’s work and making real progress impossible.
But you also can’t encourage people to put blinkers on. Everyone needs to feel responsibility for the team’s end result — which also means they need to feel ownership of it. I’ve been there too: places where people want to be heads down and just look at a particular piece of code, for example. It doesn’t work on small teams. Maybe there are companies out there, really big ones with cubicles and campuses, where it makes sense. I’ve never worked in one.
There’s a productive tension here, obviously. You can’t go fully one way or the other. But if you treat a team as a community, and the team leader as the facilitator of that community, you can navigate these nuances more easily.
I wanted to share this piece because it ties together so many important ideas: a culture of open feedback, ensuring every voice is heard, framing the work as a learning problem, and leading with vulnerability. I like to create teams that embody these values, and work in places that share my belief that they are important.
So much of this is about trust in people. Trust in the expertise on your team to make sound decisions; trust that the collective can produce great work; trust that when you raise an issue or give feedback in good faith it will be received constructively. I think you have to start with trust as the default — and then vote with your feet if you find it isn’t there.
Google Broke Its Promise to Me. Now ICE Has My Data.
There’s an important distinction at the heart of this case.
The synopsis, from the EFF:
“In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson's information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.”
Subpoenas are legal orders compelling someone to either testify or produce evidence. They come in three broad flavors: civil, criminal, and administrative. Civil subpoenas arise from disputes between private parties (or between a party and the government in a non-criminal matter), typically over money, contracts, property, or rights. Criminal subpoenas are issued in the context of a criminal investigation or prosecution, where the government is pursuing charges against someone for violating criminal law. Administrative subpoenas are a legal grey area that sit in the middle. They’re issued by federal agencies (in this case, ICE, under the Department of Homeland Security) without prior approval from a judge or grand jury.
Statutory non-disclosure orders and national security letters are common in criminal and national security contexts; they’re rare-to-nonexistent in civil ones. If one exists, the recipient can’t disclose that a subpoena was served or that they provided the information. Otherwise, they are free to notify.
The information at issue here has historically received weaker Fourth Amendment protection under the third-party doctrine: IP addresses, physical addresses, other identifiers, and session times and durations are all metadata. US cell phone providers, too, will hand over this kind of information with relatively little friction.
When your data is stored with a cloud provider like Google, investigators are most likely to ask Google for it rather than you. If Google receives a subpoena without a gag order, it’s supposed to notify you; if the subpoena comes with one, it can’t tell you and stay within the bounds of the law. Even without one, some companies may be tempted to comply in advance in order to stay on the government’s good side.
As is laid out in the linked piece, another student, Momodou Taal, was notified by both Google and Meta that his data was requested. Here, the system worked: because he was notified, he was able to fight off the order, and his data remained private. Amandla Thomas-Johnson didn’t receive the same courtesy.
Google is meant to notify users when it can. If it didn’t here, that’s a real problem, and it seems like that’s the case: that’s why the EFF is going after them. The precedent here will matter a great deal for everybody’s privacy: commitments to notify should be enforceable. Hopefully regulators will hold that they are.
FBI Extracts Suspect’s Deleted Signal Messages Saved in iPhone Notification Database
This understandably made a few journalists nervous when 404 Media originally reported it last week:
“The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database.”
This reveals a shortcoming in how Apple stores notifications rather than in Signal itself.
If the text of a Signal message shows up on the lock screen, a copy is stored by iOS itself, in a place where forensic investigators can gain access to it. That’s a really good reason to turn off lock-screen notifications for Signal, and to remove the text of Signal messages from its notifications entirely.
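To make the failure mode concrete, here’s a deliberately artificial sketch in Python. The file names, schema, and “notification store” below are hypothetical stand-ins, not Apple’s actual storage format; the point is only that a copy of the message written outside the app’s own storage survives the app being deleted.

```python
# A toy model of the failure mode described above -- NOT Apple's actual
# storage format or schema. Once notification text is written to a
# separate on-device store, deleting the app (and its own database)
# does nothing to remove those cached copies.
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
app_db = os.path.join(workdir, "signal_app.db")            # the app's own storage
notif_db = os.path.join(workdir, "notification_store.db")  # hypothetical OS-level notification cache

# 1. The app stores its message (in reality, encrypted at rest)...
conn = sqlite3.connect(app_db)
conn.execute("CREATE TABLE messages (body TEXT)")
conn.execute("INSERT INTO messages VALUES (?)", ("meet at 6pm",))
conn.commit()
conn.close()

# 2. ...but the notification layer keeps its own plaintext copy.
conn = sqlite3.connect(notif_db)
conn.execute("CREATE TABLE notifications (app TEXT, body TEXT)")
conn.execute("INSERT INTO notifications VALUES (?, ?)", ("Signal", "meet at 6pm"))
conn.commit()
conn.close()

# 3. "Deleting the app" removes its own database...
os.remove(app_db)

# 4. ...yet anyone with filesystem access can still recover the message
# text from the notification cache.
conn = sqlite3.connect(notif_db)
print(conn.execute("SELECT app, body FROM notifications").fetchall())
# -> [('Signal', 'meet at 6pm')]
conn.close()
```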
Here’s how to mitigate:
In the Signal app itself, go into settings, and then Notification Content. Depending on your level of comfort, select “Name Only” (which will still store the name of your Signal contact in your iPhone device memory) or “No Name or Content”.
Then, in your iPhone settings panel, find the Notifications pane, and scroll down to Signal. Deselect “Lock Screen”.
Stop Flock
This is nice to see: a grassroots protest movement against the proliferation of Flock cameras.
From the site:
“Flock Safety markets AI surveillance that goes far beyond reading license plates; color, bumper stickers, dents, and other features are used to build databases and identify movement patterns. These systems are spreading rapidly, often without oversight, and are accessible to police without a warrant. They raise serious privacy and legal concerns, and contribute to a nationwide trend toward mass surveillance.”
There’s little evidence that they do anything meaningful to prevent crime. But they certainly do create a surveillance layer, and help establish a culture of surveillance across law enforcement. 404 Media reported last year that ICE has been tapping into these cameras, even though they weren’t established for that purpose, with local police acting as proxies for immigration enforcement.
Not only does the platform read license plates and track individual cars, but it tracks associations between vehicles — cars that are often seen together, for example. Which, of course, reveals associations between people.
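To make that concrete, here’s a toy sketch of the kind of co-sighting analysis that could produce those associations. The data, window size, and scoring below are invented for illustration; this isn’t Flock’s actual system or schema, just a demonstration of how little it takes to turn plate reads into an association graph.

```python
# A toy illustration of co-sighting analysis over license plate reads.
# This is NOT Flock's actual algorithm or data format: the sightings,
# window size, and scoring are invented to show how plate reads can be
# turned into an association graph.
from collections import Counter
from itertools import combinations

# (camera_id, minutes_since_midnight, plate) -- hypothetical plate reads
sightings = [
    ("cam1", 540, "AAA111"), ("cam1", 542, "BBB222"),
    ("cam7", 900, "AAA111"), ("cam7", 903, "BBB222"),
    ("cam3", 660, "CCC333"), ("cam3", 720, "AAA111"),
]

WINDOW_MINUTES = 5

# Group reads by camera, then count pairs of distinct plates seen at the
# same camera within a short time window.
by_camera = {}
for cam, minute, plate in sightings:
    by_camera.setdefault(cam, []).append((minute, plate))

co_sightings = Counter()
for reads in by_camera.values():
    for (t1, p1), (t2, p2) in combinations(sorted(reads), 2):
        if p1 != p2 and abs(t1 - t2) <= WINDOW_MINUTES:
            co_sightings[frozenset((p1, p2))] += 1

# Plates that are repeatedly co-sighted look like "associated" vehicles,
# and by extension associated people.
for pair, count in co_sightings.most_common():
    print(sorted(pair), count)   # ['AAA111', 'BBB222'] 2
```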
I would echo what Brandon Mitchell said on Hacker News:
“I don't want to stop Flock the company. I want to stop Flock the business model, along with all the other mass surveillance, and the data brokers. If the business models can't be made illegal, it should at least come with liabilities so high that no sane business would want to hold data that is essentially toxic waste.
Without that, we are quickly spiraling into the dystopia where privacy is gone, and when the wrong person gets access to the data, entire populations are threatened.”
The Take Action section of the website is pretty good, with some common-sense tasks that include calling your representatives and supporting civil rights organizations like the ACLU and the EFF.
Earlier this year, TechCrunch reported that some people are going a step further, ripping cameras off street lights themselves. In Oregon, protesters left a note that read, “Hahaha get wrecked ya surveilling f*cks”. I couldn’t possibly endorse.