Notable links: March 20, 2026

Agentic engineering and burnout.


Most Fridays, I share a handful of pieces that caught my eye at the intersection of technology, media, and society.

Did I miss something important? Send me an email to let me know.


Agentic Engineering Patterns

Simon Willison’s work-in-progress deep dive into agentic engineering is predictably good.

From the introduction, distinguishing agentic engineering from vibe coding:

“Some people extend that definition to cover any time an LLM is used to produce code at all, but I think that's a mistake. Vibe coding is more useful in its original definition - we need a term to describe unreviewed, prototype-quality LLM-generated code that distinguishes it from code that the author has brought up to a production ready standard.”

I’ve been using the term AI-assisted engineering, but standardizing around agentic seems more precise for the kind of activity we’re talking about.

And from the anti-patterns page:

“Don't file pull requests with code you haven't reviewed yourself.

If you open a PR with hundreds (or thousands) of lines of code that an agent produced for you, and you haven't done the work to ensure that code is functional yourself, you are delegating the actual work to other people.

They could have prompted an agent themselves. What value are you even providing?”

The temptation is to write and push code that you haven’t reviewed personally, but technology leaders need to enforce a human-review process. You are responsible for all code you push, and you are responsible for not wasting your colleagues’ time.

From writing code is cheap now:

“Delivering new code has dropped in price to almost free... but delivering good code remains significantly more expensive than that.”

Simon is building a really great guide to not just the process but the underlying mindsets behind good agentic engineering. It’s worth reading and following.


When Using AI Leads to “Brain Fry”

Interesting research about the interaction between AI use and burnout, studying 1488 (an incidentally unfortunate number) US-based workers. Burnout is real:

“Participants described a “buzzing” feeling or a mental fog with difficulty focusing, slower decision-making, and headaches. This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.

There’s some nuance here, however. We also found when AI is used to replace routine or repetitive tasks, burnout scores—but not mental fatigue scores—are lower. This highlights the subtle-but-important distinction between the types of stress that AI can alleviate, and those that it may worsen.”

So the kind of AI use matters. The researchers found that AI use cases that required increased oversight (coding is one, using AI with sensitive internal data is another) increased the risk of burnout. This was particularly true because the people who used these tools were more likely to take on more work, pushing their total cognitive load beyond their limits. But using it for more straightforward repetitive tasks reduced the risk of burnout.

The high-risk activities cluster around certain teams:

“After marketing, people operations, operations, engineering, finance, and IT were the functions with the highest prevalence of AI brain fry.”

Legal teams, who presumably use AI for evidence review and contract analysis with tools like Harvey, but not for their actual legal analysis, were the least likely to suffer from this problem.

This should inform how managers think about AI use and how to set humane norms internally.


Four things about Yahoo News that may surprise you

I still think Yahoo is undervalued: clearly not a tech darling, it’s quietly been chugging along, running one of the most popular news sites in the world alongside a raft of other services. None of it is pushing the envelope, particularly, but it does seem to be executing very well, and the team behind it is on an explicit mission to revitalize the brand.

This is the right approach, in my opinion — and clearly it’s working:

“Lanzone says Yahoo, owned by private equity firm Apollo Global Management since 2021, has billions in revenue. “It is very profitable,” he told Decoder’s Nilay Patel.

“Having direct deals with publishers to have their content aggregated with us has actually been part of the history of the company going back two-plus decades,” Lanzone says at one point. “We send them traffic and, in many cases, share revenue.””

It’s more recently made that strategy cleaner: it’s not trying to do its own reporting, but instead is surfacing other people’s and providing reach.

The underlying ethos seems to be to point to great content on the web rather than being the originator of it. In a world where other platforms, Google included, are trying to be the all-encompassing destination, it’s a web-first way to look at the world. That revitalization is right on time.


BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI

This looks like a cut-and-dried story about a media company that turned from producing content with human writers to generating it with AI, and suffered the consequences:

“Peretti said BuzzFeed would be using the software to enhance the company’s infamous quizzes by generating personalized responses.

[…] Now, three years after its AI pivot, the writing is on the wall. The company reported a net loss of $57.3 million in 2025 in an earnings report released on Thursday. In an official statement, the company glumly hinted at the possibility of going under sooner rather than later, writing that “there is substantial doubt about the Company’s ability to continue as a going concern.””

The content was underwhelming, and this shift coincided with BuzzFeed shutting down its award-winning news division.

There’s a lot AI can do, but it can’t replace the judgment and taste of human beings. It can’t be a great writer or a great creative. It can take on drudge-work and be a good copilot, like a grammar checker or other supportive tools can be good copilots, but it’s not a replacement for a skilled workforce. (I would also argue that there’s no such thing as unskilled labor: almost every job you can think of benefits from human nuance and judgment.)

But there are a lot of people who see dollar signs and hope that replacing people with predictive engines will help them scale and increase their margins. All that means is that there will be a lot more BuzzFeeds.


Businesses rush to rehire staff after regretted AI-driven cuts

Not a surprise. Careerminds polled 600 HR professionals from organizations that had made layoffs in the last year.

“It found that 32.7% of organisations that conducted AI-led layoffs had already rehired between 25% to 50% of the roles they initially let go.

Another 35.6% said they had already rehired more than half of the roles that they cut.”

Say it with me: AI can’t replace the skill, judgment, creativity, and taste of real people. Replace the word “AI” with “spreadsheet” and the nonsense behind AI-led layoffs becomes even clearer. AI is a potentially very powerful tool, but it’s just a tool, and it works better when more highly skilled people are using it.

Which organizations are beginning to find out:

“According to the findings, more than half of HR leaders said AI required more human insight than anticipated.”

It’s worth saying that around 21% of respondents did report that their layoffs went okay. It’s possible that they’re lying. They could also have been employing people to do very manual, repetitive data work without any degree of insight, which seems like a poor use of a workforce. But generally speaking, the study found hundreds of orgs that saw the potential to save costs and were so blinded by dollar signs that they didn’t go beyond the marketing claims about what AI could actually do:

“What ties all these findings together is that the organisations that struggled the most were making significant, irreversible decisions without the full picture of AI capabilities and what a reduction would do to their workforce.”

These organizations treated people poorly. The real tragedy is that they seem not to have understood the skills and value of their own employees. There’s a deeper problem there than just AI.


Trump is using immigration policy to suppress speech, lawsuit claims

This lawsuit, filed by The Knight First Amendment Institute at Columbia University and Protect Democracy on behalf of the Coalition for Independent Technology Research (CITR), is important:

“The suit accuses the administration of violating the First Amendment with an official policy to deny visas to or deport noncitizens who work on or study social media platforms, fact-checking or other activities the government deems "censorship" of Americans' speech. It argues that amounts to unconstitutional viewpoint discrimination.”

The work conducted by researchers into social media is vital: it helps us build safer communities that allow democratic discourse to take place. Unfortunately, the Trump administration has decided that this safety work is a radical act — and in particular that research into Trump ally Elon Musk’s X is verboten. I would assume that Ellison’s American TikTok will receive the same preferential treatment.

My friend Dr J. Nathan Matias, who runs the amazing Citizens and Technology Lab, is a named declarant. His experiences are laid out in the suit:

“[…] the Censorship Policy has deprived Dr. Matias of significant contributions from his noncitizen collaborators in the United States. Because of the fear that they will be denied reentry to the United States under the Policy, some of Dr. Matias’s U.S.-based noncitizen collaborators have decided not to travel abroad, including to attend meetings with new community partners who have important experiences related to online safety and freedom of expression. As a result, Dr. Matias has been unable to pursue collaborations with those partners, who would have added significant value to his research. Because of the same fear, one of Dr. Matias’s noncitizen collaborators felt compelled to make extensive contingency plans in connection with international travel, requiring more flexibility in their work and less visibility on their projects, which has significantly delayed progress on those projects. Additionally, because of the fear that they will be targeted for detention or deportation under the Policy based on their work, at least one of Dr. Matias’s noncitizen collaborators has decided not to speak to journalists or answer questions from policymakers on topics related to their work. As a result of these chilling effects, Dr. Matias has lost important opportunities to develop new research, obtain expert feedback on research, and bring visibility to his work and the work of his lab.”

This is unacceptable. We should all hope that the lawsuit is successful.


The Last Quiet Thing

This is a useful reframe of our relationship to technology, presented in an arresting way:

“What if the exhaustion everybody feels isn't a moral failure but the completely rational response to being made responsible for an ecosystem of objects that never stop asking?”

I’m less enamored with the author’s suggestion that the work of maintaining these devices could simply be shifted onto an IT department. That’s not the point, and it distracts from a stronger argument about finished products vs continuously-updated devices that want to have a codependent relationship with us. Whether the burden falls on an end user or an IT department, the relationship is still dysfunctional.

It reminds me of Amber Case’s Calm Tech Institute, which aims to promote these kinds of values. The quote it features on its homepage — “What matters is not technology itself, but its relationship to us.” from Xerox PARC chief scientist Mark Weiser — is completely on-point. I hope we have more of these sorts of conversations.