
Open source startup founder, technology leader, mission-driven investor, and engineer. I just want to help.



 

Re-introducing comments

This blog has had kind of a weird relationship with comments since I started it ten years ago. My previous blogs, in contrast, were always intentionally spaces that could be homes for conversations. Over the years, lots of people have asked me to fix this situation.

So, okay! Here’s what I’ve chosen to do:

As of today, you can comment on every blog post. I’ve chosen to use Commento, an open source comments platform. You can leave anonymous comments, authenticate independently, or use a few common SSO providers.

As an indieweb platform, the underlying Known software that powers this site supports webmentions. These haven’t displayed well on my site for a little while, so I’m committing to fixing them by next Monday, October 16. At that point, every webmention that’s been sent will be displayed.
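
If you’re not familiar with how webmentions work, here’s a minimal sketch of the sending side in Python. To be clear, this isn’t Known’s implementation, just an illustration of the W3C protocol: discover the target page’s webmention endpoint (advertised via an HTTP Link header or a rel="webmention" tag), then POST the source and target URLs to it.

```python
# Rough sketch of the Webmention protocol (W3C), not Known's actual implementation.
# Requires the `requests` package. Endpoint discovery here is simplified: it checks
# the HTTP Link header, then a <link>/<a> tag whose rel attribute appears before href.
import re
import requests
from urllib.parse import urljoin

def discover_endpoint(target: str) -> str | None:
    resp = requests.get(target, timeout=10)
    # 1. Link header, e.g. Link: <https://example.com/webmention>; rel="webmention"
    match = re.search(r'<([^>]+)>\s*;\s*rel="?webmention"?', resp.headers.get("Link", ""))
    if match:
        return urljoin(target, match.group(1))
    # 2. <link rel="webmention" href="..."> or <a rel="webmention" href="..."> in the HTML
    match = re.search(
        r'<(?:link|a)[^>]*rel="[^"]*webmention[^"]*"[^>]*href="([^"]*)"',
        resp.text, re.IGNORECASE)
    return urljoin(target, match.group(1)) if match else None

def send_webmention(source: str, target: str) -> int | None:
    """Tell `target` that `source` links to it."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return None
    resp = requests.post(endpoint, data={"source": source, "target": target}, timeout=10)
    return resp.status_code  # 201 or 202 means the receiver accepted the mention
```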


 

Spinning a tech career into writing

I have a lot of admiration for Eliot Peper, who has spun a career in tech into a career in writing science fiction novels rooted in the intersection of technology and society. They’re fun reads, first and foremost, but there’s always an insight into how technology is made, and what that means for the rest of us.

His latest, Foundry, is a kind of spy novel about semiconductors that takes you on a knockabout ride before arriving at a satisfying conclusion that could — if he wanted — be the start of a series that I would happily read. Along the way, small details betray an interest in just about everything. (I particularly appreciated a discussion of how people of partial-Indonesian descent are treated in the Netherlands.) His books are very much in the tradition of pageturners by authors like Michael Crichton and John Grisham. I’ve enjoyed them a lot.

One of the reasons I admire Eliot’s work is that this is absolutely where I want to take my life, too. Writing was always my first love: there was a Sliding Doors decision point where I could have chosen an English / journalism or computer science route. Despite a career in technology that has taken me to some interesting places, it’s a testament to that original love that I still don’t know if I picked the right path.

I ended up going into computers specifically because the nascent web was so perfect for storytelling. My computer science degree has been a useful bedrock for my work in software, but there was far less exploration of computing at its intersection with the humanities (or any kind of humanity at all) than I would have liked. Over the last few years I’ve allowed myself to pursue my original interest, and it’s been rewarding. Lately, I’ve been getting 1:1 mentorship through The Novelry, which has helped me to overcome some imposter syndrome and put a more robust shape to the plot I’m working on. Eventually, I’d like to try for a creative writing MA, once I can demonstrate that I’m more than some computer guy.

I’ve been lucky to have people in my life who have made a living through writing stories. (I wrote about this recently with respect to opening up possibilities for our son.) My childhood friend Clare’s dad was the author and Tolkien biographer Humphrey Carpenter. I remember being enthralled that he could sit and write stories for a living. I was similarly enthralled, years later, when my cousin Sarah became a wildly successful young adult author. (She’s just started blogging again, and it’s quite lovely and worth subscribing to.) They demonstrated that it’s possible. It’s reductive to say that you’ve just got to sit down and do it — there is a craft here, which needs practice and attention — but that is, indeed, the first step, for them and every writer.

Giving myself the permission to just sit and do that has been difficult. Blogging is second nature for me: I can take an open box on the web, pour out my thoughts, and hit publish. An intentional long-form work requires a leap of faith, a great deal more craft and editing, and significantly less of a dopamine rush from people commenting and re-sharing. It’s possible that nobody else will see what I’ve written for years. It’s equally possible that it’s terrible and very few other people will ever see it. But I’ve decided that giving myself permission to sit down and write means giving myself permission to fail at it. In turn, I’ll learn from that failure and try again, hopefully writing something better the next time. I do want it to be a work that other people enjoy, but there’s also value in allowing myself to create without needing an immediate follow-up.

In the meantime, I have huge admiration for people like Sarah, Eliot, and Humphrey, who gave themselves the space and cultivated the dedication to write.

You should check out Eliot’s work and go subscribe to Sarah’s blog.

Now, onto today’s word count.


 

AI summarization and the open web

Arc, my default browser for a year now, recently launched a set of AI-driven features. I’m finding two to be particularly useful — and one of those is problematic in a way I want to discuss. They’re worth considering because, while Arc has a relatively small userbase for now, they’re likely to come to other browsers before too long.

The first is AI-enhanced search. If I hit command-F, the browser will try and find my search term in the page as it normally would. If it can’t, it’ll answer a question about the content of the page using AI.

As an illustration, here’s Arc answering a question based on a Verge article:

Arc summarizing an article on The Verge

The second is AI summaries of links. If you hit shift and hover over a link, it’ll tell you what the page is about. Here’s Arc previewing a link from my website:

Arc previewing a link from my website

This is both useful — I don’t necessarily want to open a new tab to look at a cited source — and potentially really problematic for a lot of the web. This isn’t unique to Arc: the feature is not markedly different from, say, ChatGPT’s browsing capabilities, which are similarly problematic. Here’s ChatGPT answering questions about my website:

ChatGPT answering a question about my website

If you’re getting an automated summary of an information source, you’re extracting the content without thought for how that source sustains itself. For some, that will be display ads. I don’t really care for ad-driven business models, but they exist, and if a significant number of people suddenly start looking at AI summaries instead of an actual page, ad revenues will drop proportionately. For others, it’ll be donations — and AI summaries don’t have any calls to action to contribute. And some, of course, sit behind a paywall. The AI summaries appear to even summarize content that would otherwise be irretrievable without payment.

Here’s Arc summarizing a paywalled article from the Atlantic, for which I don’t have a subscription:

Arc summarizing a paywalled article from The Atlantic

It’s honestly really useful for users, but not super-great for the web ecosystem or the survival of those platforms.

I’m not necessarily suggesting that browsers discontinue these sorts of features. But I do think there needs to be some consideration for platform health and ensuring that the information sources we use on the web can continue to exist. So here are some ideas:

Inline calls to action. Browsers could look for markup in the page that indicates a call to action that a user could take — for example to subscribe or to donate. This could be an ad.

A universal basic paywall. Publications register to receive aggregate payments from browsers that use their content to create summaries. (Itself problematic because it essentially requires every publisher on the web to reveal their identities — unless you use crypto, which has its own issues.)

Allow publishers to set their own summary content. Not every summary needs to be written using AI; metadata in the head could provide a publisher-written summary, giving them control over what is displayed. (A rough sketch of this idea follows below.)

A general small web publisher fund. Rather than direct micropayments, browsers pay into a general fund that small web publishers can withdraw from.

Just accept that this is what the web is now. Last but not least: passive acceptance. It’s not great, particularly when browsers are largely manufactured by tech companies like Google that already make a ton of money extracting value from the web. The drop in direct pageviews could adversely affect smaller publishers in particular. But it’s also early — perhaps it will have a different effect on site visits than I think?
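
To make the publisher-written summary idea concrete, here’s a small, hypothetical sketch of how a browser or extension might prefer publisher-supplied metadata over an AI-generated summary. The `<meta name="summary">` tag is invented for illustration; `og:description` and the standard `description` meta tag are existing conventions a browser could fall back on.

```python
# Hypothetical sketch only: prefer publisher-supplied summary metadata and fall back
# to AI summarization when none exists. The <meta name="summary"> tag is invented for
# illustration; og:description and description are existing conventions.
# Requires `requests` and `beautifulsoup4`.
import requests
from bs4 import BeautifulSoup

def publisher_summary(url: str) -> str | None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for attrs in ({"name": "summary"},              # hypothetical publisher-controlled summary
                  {"property": "og:description"},   # Open Graph description
                  {"name": "description"}):         # plain meta description
        tag = soup.find("meta", attrs=attrs)
        if tag and tag.get("content"):
            return tag["content"].strip()
    return None

def ai_summarize(url: str) -> str:
    # Placeholder for whatever model call the browser would make.
    return "(AI-generated summary would go here)"

def summarize(url: str) -> str:
    return publisher_summary(url) or ai_summarize(url)
```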

But these are my opinions. I’m aware that my lens here is oriented around the perceived needs of publishers on the open web. What do you think should happen? How will the ecosystem adapt?


 

An open rubric for technology assessment

A laptop showing some dashboard

I’ve written and open sourced a rubric for assessing new technologies within your organization. It’s written for use in non-technical organizations in particular, but it might be useful everywhere. The idea is to pose questions that are worth asking when you’re selecting a vendor, or choosing an API or software library to incorporate into your own product.

I originally wrote a version of an assessment template when I was CTO at The 19th. Because they have a well-defined equity mission, I wanted to make sure the vendors of technologies and services being chosen adhered to their values. I’d never seen questions like “has this software been involved in undermining free and fair elections” in a technology assessment before, but it’s an important question to ask.

This new assessment is written from scratch to include similar questions about values, as well as a lightweight risk assessment framework and some ideas to consider regarding lock-in and freedom to move to another vendor.

Some of these questions are hard to answer, but many will be surprisingly easy. The idea is not to undertake a research project: most prompts can be answered with a simple search, and the whole assessment should be completable in under an hour. The most important thing it does is add intention to questions of values, business impact, and how well it solves an important problem for your organization.

It’s an open source project, so I invite contributions, edits, and feedback. Let me know what you think!


 

Why I hate flags

A hand waving a dinky little American flag

In her latest (excellent) book Doppelganger: A Trip Into the Mirror World, the author Naomi Klein makes an offhand comment that, as a leftist, she finds flags make her itchy. I feel the same way, in a way that goes beyond the Stars and Stripes or the Union Jack.

At its worst, a national flag becomes a kind of uniform that you wield to declare loyalty above all else to your nation of origin. For me, it’s a statement of nationalism: of belonging not to the human race but to a particular subset that you hold to be greater than the rest. Rather than a diverse plurality, it’s a uniform that stands for homogeneity; it’s a way of saying, we are all this one thing. The flag, and the anthem alongside it, is about national pride rather than human pride; pride in a set of administrative borders and legislative rules rather than ideals. It’s idolatry.

Back in 2012, the athlete Leo Manzano was roundly criticized for carrying the Mexican flag alongside the American one after winning an Olympic silver medal. People were outraged: how dare you align yourself with two nations? Manzano was honoring his heritage, but the idea that people can be more than one thing and be a part of more than one context and community didn’t sit well with flag worshipers. In 2016, American football players started to kneel for the national anthem to make the point that America didn’t care for its people equally; for many, this was, again, a violation. Respect the anthem! Respect the flag! Stand to attention! Conform!

Once this set of patriotic norms has been established — as it has since the dark days of McCarthyism in the 1950s — it’s easy to cast doubt on people who call a country’s acts into question. “He hates America,” someone might say about someone who questions America’s foreign policy, casting real questions or criticisms into a projection of irrational blind hatred. This is the genuine definition of fascism: the creation of in-groups and out-groups and a demand for complete devotion to the nation from the populace. Not only is it fiercely repressive, but it’s inherently counter-productive. How can you strive to improve a place if questioning it is frowned upon?

Companies use similar tricks. A company’s brand is very much like its flag, and employees are encouraged to display blind devotion to its mission, vision, and strategy. By encouraging blind faith rather than independent, individual questioning, company bosses hope to maintain an obedient workforce. Loyal, valued employees aren’t the ones who start unions or share around salary spreadsheets to document wage inequality. They’re the ones who proudly wear the company gear, decked out in their logos, and are excited to follow the strategy du jour. These companies don’t want employees to question their executives’ ideas; to highlight ethical lapses; to point out harms enacted in the name of profit or a higher share price.

To the extent that companies are able to achieve this, it’s in part because this sort of blind fascism is already a core part of American culture. It’s an extension of what’s already in the water.

Americans are encouraged not to think too hard about what’s happening outside their borders. For some, particularly rural Republicans, that might be the borders of their town, or their state; for others, it might be the nation as a whole. Regardless, a holistic view of the world — we are all connected, we are all human, one nation’s actions affect the peoples of another — is not commonly held. The flag is a tool to that end: it demands that we should be loyal to America (or our state, or our town). Those other people are less important. “What happens in other places doesn’t really affect me, so I don’t pay attention,” I’ve been told, again and again. Yet ask the people in other countries what they think about their waters rising, their air getting dirtier, their democratically-elected governments being removed through coups in order to secure resource rights. We are all connected.

Consider the Meta workers who blindly allied themselves with their employer when it emerged that it had been actively complicit in a genocide in Myanmar. While some employees certainly did call out Mark Zuckerberg and other executives, many more sided with the company. They were loyal to their community no matter what, even in the face of evidence it had allowed atrocities to be committed using the platform they all built together. While Rohingya were dying, people enabling it proudly wore their Facebook T-shirts and worried about proving themselves at their performance reviews.

It would be much harder to dismiss the plight of an entire people if they were considered to be people in the first place. They’re out there over some border that most people have never seen, living in some other place, and are therefore lesser. We don’t see them as being us, in part because of our worship of nationhood, of flags, of anthems. There’s a reason why all of the worst movements in history centered a reverence for those elements.

I remember, as a child growing up in England, hearing the patriotic song Rule Britannia sung over and over again. It goes like this:

Rule, Britannia! rule the waves:
Britons never will be slaves.

That second line is a little rich if you know what, exactly, Britain was doing out on those waves in the 1700s, when the lyrics were written. If we cared about the legacy of our actions, and in this case the impacts that slavery had over generations, we might not continue to sing it. But the desire for blind loyalty through patriotism continues to overwhelm the need to confront and question that history, and it inhibits discussion of those actions. I only learned in the last year that Lloyd’s of London, the oft-cited, very famous insurance market, made its money by being the insurance center for the global slave trade. There is a need for a much greater reckoning, which blind loyalty impedes.

The third verse of the Star Spangled Banner, the American national anthem, goes as follows:

No refuge could save the hireling and slave
From the terror of flight or the gloom of the grave,
And the star-spangled banner in triumph doth wave
O’er the land of the free and the home of the brave.

This is a derogatory reference to Black people in the Revolutionary War who fought with the British because American rule meant living in slavery. Its author described Black people as “a distinct and inferior race, which all experience proves to be the greatest evil that affects a community”.

There is no need in the world to revere this old world fascism. There is no need in the world to perpetuate the myth of national superiority; of the goodness of military might; of pride in homogeneity. We are all one people, and our strength is in our diversity.

One of the greatest things the internet has given us is a post-national connectivity. We can speak with people in other nations as easily as we can with our neighbors down the street. The only real impediments are timezones and language barriers; the latter is being broken down by AI, and the former is greatly aided by asynchronous communication. No visas are required to discuss, collaborate, and share ideas. In a world where most people have cameras and connections, nobody needs to be seen as inhuman. We can see each other; we can converse; we can know each other despite geographic separations in a way that we could never have before. I still believe that the internet can be a great force for peace: the more we learn about each other as humans, the less we can dismiss the lived experiences of others. They become real.

The thing is, we have to do it. We have to overcome the forces that tell us we should only care about the people in our local communities; the ones that say that you should be loyal to a single nation no matter how it conducts itself. We actually have to stand and say that we are welcoming and inclusive, which also means actively reckoning with the past.

And the same goes for people who are allied with their employer. There is no need to work on adverse policies unquestioningly. Organizing, advocating, thinking for yourself and bringing your whole, individual identity to your work community should be encouraged. Plurality, rather than a singular way of being, should be the expected norm.

Flags, patriotism, nationalism, anthems? I see those as anti-human ideas. As Naomi Klein says, they make me itchy. I’d rather we consider our humanity and our connectedness, and ditch the parochial horizons once and for all.


 

The notable list: October 2023

A robot drawing with a glowing orb

This is my monthly roundup of the links and media I found interesting. Do you have suggestions? Let me know!

Apps + Websites

DALL·E 3. Once again, this looks completely like magic. Very high-fidelity images across a bunch of different styles. The implications are enormous.

Photoshop for Web. Insanely good. It blows my mind that this can be done on the web platform now.

Privacy Party. This is really good: a browser extension (for Chrome-based browsers) that goes through your social networks and helps you update your settings to optimize for privacy and security. Really well-executed.

Notion web Clipper - Klippper. I’m a heavy Notion web clipper user, but this is far better for my needs. I was worried I’d need to build it myself. Luckily: no!

Mastodon 4.2. Lots of good new changes here - and in particular a much-needed search overhaul. My private instance is running the latest and I like it a lot.

Notable Articles

AI

How the “Surveillance AI Pipeline” Literally Objectifies Human Beings. “The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found.”

California governor vetoes bill banning robotrucks without safety drivers. The legislation passed with a heavy majority - this veto is a signal that Newsom favors the AI vendors over the Teamsters, who claim the tech is unsafe and that jobs will be lost.

ChatGPT Caught Giving Horrible Advice to Cancer Patients. LLMs are a magic trick; interesting and useful for superficial tasks, but very much not up to, for example, replacing a trained medical professional. The idea that someone would think it’s okay to let one give medical advice is horrifying.

AI data training companies like Scale AI are hiring poets. These poets are being hired to eliminate the possibility of being paid for their own work. But I am kind of tickled by the idea that OpenAI is scraping fan-fiction forums. Not because it’s bad work, but imagine the consequences.

John Grisham, other top US authors sue OpenAI over copyrights. It will be fascinating to see the outcome of this - which, in turn, will set a precedent for how commercial data can be used to train AI (and other software systems) going forward.

Who blocks OpenAI? “The 392 news organizations listed below have instructed OpenAI’s GPTBot to not scan their sites, according to a continual survey of 1,119 online publishers conducted by the homepages.news archive. That amounts to 35.0% of the total.”

Microsoft announces new Copilot Copyright Commitment for customers. “As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

Our Self-Driving Cars Will Save Countless Lives, But They Will Kill Some of You First. “In a way, the people our cars mow down are doing just as much as our highly paid programmers and engineers to create the utopian, safe streets of tomorrow. Each person who falls under our front bumper teaches us something valuable about how humans act in the real world.”

Climate

EVs are a climate solution with a pollution problem: Tire particles. Another reason why the really sustainable solution to pollution from cars is better mass transit.

Revealed: top carbon offset projects may not cut planet-heating emissions. “The vast majority of the environmental projects most frequently used to offset greenhouse gas emissions appear to have fundamental failings suggesting they cannot be relied upon to cut planet-heating emissions, according to a new analysis.”

Earth ‘well outside safe operating space for humanity’, scientists find. “This update finds that six of the nine boundaries are transgressed, suggesting that Earth is now well outside of the safe operating space for humanity.” No biggie.

Why the United States undercounts climate-driven deaths. Another way the effect of the climate crisis is understated: climate deaths are undercounted. Changing this state of affairs is possible but requires effort, training, and resources. In the meantime, many people still don’t understand how serious the crisis actually is.

Culture

Nature TTL Photographer of the Year 2023: Winners Gallery. Every image here is stunning.

‘The scripts were the funniest things I’d ever read’: the stars of Peep Show look back, 20 years later. Before there was Succession, there was Peep Show. A brilliant piece of TV that launched a bunch of careers. If you haven’t seen it, give yourself the gift of checking it out.

The Berkeley Hotel hostage. I know people who worked with Douglas Adams and I’m incredibly envious of them. He seems like someone I would have really enjoyed meeting - and his books (all of them) were a huge part of my developing psyche. This story seems so human, so relatable. Trapped by his success, in a way.

Refusing to Censor Myself. A less-discussed problem with book bans: publishers will self-censor, as they did here by requiring the removal of the word “racism” in the context of internment camps.

Writer Sarah Rose Etter on not making things harder than they need to be. I found this interview fascinating: definitely a writer I look up to, whose work I both enjoy and find intimidatingly raw. And who happens to have a very similar day job to me.

Democracy

FTC Sues Amazon for Illegally Maintaining Monopoly Power. “Amazon’s ongoing pattern of illegal conduct blocks competition, allowing it to wield monopoly power to inflate prices, degrade quality, and stifle innovation for consumers and businesses.” Whatever happens here, it will be meaningful. It’s also nice to see the FTC actually wielding its antitrust powers.

Intuit Pushing Claim That Free Tax-Filing Program Would Harm Black Taxpayers. Intuit has a stranglehold on how taxes are filed in America. For what? Many other countries just have an easy to use tax portal of their own. This is a business that shouldn’t even need to exist.

Migrants tracked with GPS tags say UK feels like ‘an outside prison’. I had no idea Britain was fitting migrants and asylum seekers with ankle bracelets and surveilling them to this level. It seems impossible that this is something people would think is right and just. The dystopian cruelty is mind-boggling.

An endless battle for the rights of the underclass. Every word of this, but particularly: “Cultural warfare was a political ploy designed to keep workers from recognizing our common ground and banding together against corporate abuses and thefts.”

US economy going strong under Biden – Americans don’t believe it. It’s how we measure the economy, stupid.

What Mitt Romney Saw in the Senate. A fascinating read that makes me want to check out the full book, which seems to me like an attempt by Romney to save the Republican Party from Trumpism (as well as, let’s be clear, his own reputation). Wild anecdote after wild anecdote that highlights the cynicism of Washington political life.

Never Remember. The best thing I read on the anniversary of 9/11 by far. It feels cathartic to read. But it’s also so, so sad.

New Elon Musk biography offers fresh details about the billionaire's Ukraine dilemma. If I was building technology to let people watch Netflix and check their email from remote locations, I would also be upset about it being used for drone strikes. But if that’s the case, you shouldn’t be deploying your tech to the military in the first place. Nor should you be making strategic military decisions of your own.

Majority of likely Democratic voters say party should ditch Biden, poll shows. No surprises here. We need more progressive change than we’re getting. But obviously, if it’s Biden v Trump, there’s only one choice.

AOC urges US to apologize for meddling in Latin America: ‘We’re here to reset relationships’. Yes. Absolutely this. And everywhere.

Health

The Anti-Vax Movement Isn’t Going Away. We Must Adapt to It. Depressing. I agree that vaccine denial is not going away, and that we need to find other ways to mitigate outbreaks. But what a sad situation to be in.

Labor

Remote work may help decrease sexual assault and harassment, poll finds. “About 5 percent of women who were working remotely reported instances in that time, compared with 12 percent of in-person women workers. Overall, only 5 percent of remote workers reported instances in the past three years, compared with 9 percent of those who work fully or mostly in person.”

Working mothers reach record high, above pre-pandemic levels. Flexible work from home policies have allowed more mothers with young children to join the workforce than ever before. Yet another reason why these policies are positive for everyone and should not just stick around but be significantly expanded.

Media

Amanda Zamora is stepping down as publisher at The 19th. Amanda is absolutely fearless and I was privileged to work with her. As co-founder of The 19th, she was an absolutely core part of what it became: both a strategist and culture instigator. What she does next will certainly change media; I’ll be cheerleading.

Failing Without Knowing Why: The Tragedy Of Performative Content. Thought-provoking for me: particularly as someone who thinks through ideas by writing. But perhaps that writing doesn’t need to be in public, in front of an audience.

How I approach crafting a blog post. “I don’t think I’ve seen someone walk through their process for writing a blog post, though.” I love this breakdown! Tracy’s structured process shows up in the quality of her posts. I love the thoughtfulness here.

In defense of aggressive small-town newspapers. This: “The prevalence of ‘news deserts’ has apparently led some to think it’s normal for neighborhood news outlets to function as lapdogs rather than watchdogs.” The purpose of journalism is to investigate in the public interest.

In the AI Age, The New York Times Wants Reporters to Tell Readers Who They Are. I think this is the right impulse: people tend to follow and trust individual journalists, not publications. Building out profiles and establishing more personal relationships helps build that trust.

Counting Ghosts. “Web analytics sits in the awkward space between empirical analysis and relationship building, failing at both, distracting from the real job to be done: making connections, in whatever form that means for our project.”

Publisher wants $2,500 to allow academics to post their own manuscript to their own repository – Walled Culture. The open access movement is an important way academics can fight back against predatory publishers for the good of human knowledge everywhere - but the publishers are still out there, grifting.

A New Low: Just 46% Of U.S. Households Subscribe To Traditional Cable TV. I’ve lived in the US for twelve years, and at no point have I even been tempted by traditional cable. Every time I encounter it, I wonder why people want it. It’s a substandard, obsolete product. So this is no surprise.

The Ad Industry Bailed On News. Can An AI Solution Offer A Way Back? Services like this become single points of failure with outsize power over the journalism industry. It’s a bad idea. No one entity should be the arbiter of bias in news or where a buyer should put their money. For one thing, who watches that entity’s own inevitable bias? And if you’re offering AI as a bias-free solution, you’ve already lost.

Zine: How We Illustrate Tech (and AI) at The Markup. Lovely!

White House to send letter to news execs urging outlets to 'ramp up' scrutiny of GOP's Biden impeachment inquiry 'based on lies'. I couldn’t be less of a fan of the current Republican Party but I hate this. The White House should not be sending letters to the media encouraging them to do anything. That’s not the sort of relationship we need our journalistic media to have.

Snoop Dogg can narrate your news articles. Snoop Dogg gimmick aside, this is actually pretty neat, and useful. I’d also like the opposite: sometimes I want to read podcasts. Different contexts demand different media; I wish content itself could be more adaptable.

Non-news sites expose people to more political content than news sites. Why? Two thirds of the political content people consume comes from non-news sites. And most of the news content people read is not overtly political. Instead, it’s mostly coming from entertainment - which has no ethical need to report factually.

Naomi Klein's "Doppelganger". “Fundamentally: Klein is a leftist, Wolf was a liberal. The classic leftist distinction goes: leftists want to abolish a system where 150 white men run the world; liberals want to replace half of those 150 with women, queers and people of color.”

Society

US surgeons are killing themselves at an alarming rate. One decided to speak out. “Somewhere between 300 to 400 physicians a year in the US take their own lives, the equivalent of one medical school graduating class annually.”

Oxford University is the world’s top university for a record eighth year. This presumably means that the Turf Tavern is the best student pub in the world.

Britain’s attitude to refugees shows, once again, that it’s a colonial nation. “Hostile immigration policy stokes racism but the foundation it builds upon itself is racist and maintains a ‘colonial present’. Through dealing with migrants like pests, who deserve to be locked away in a prison barge, the British government continues to ignore the fact that ‘Borders maintain hoarded concentrations of wealth accrued from colonial domination.’”

19th News/SurveyMonkey poll: The State of Our Nation. Lots of interesting insights in this poll, including on nationwide attitudes to gender-affirming care (only 29% of Republicans think their party should focus on it) and gun control (82% of Americans want to restrict access in domestic abuse cases).

Victims of forced sterilization in California prisons entitled to reparations. One thing I learned from this story is that forced sterilization of inmates has still been widespread in the 21st century in America. Ghoulish.

Unconditional cash transfers reduce homelessness. It turns out that if you give homeless people money as assistance, it really helps them. This is something society should do.

Startups

Why Starting Your Investor Updates With “Cash on Hand” Information is a Major Red Flag Right Now. It’s Maybe the Only Thing Worse Than Not Sending Updates at All. I appreciated this succinct discussion on using venture dollars well from Hunter Walk. In particular, this: “Startups spend a $1 to ultimately try and create more than $1 of company. If you do that repeatedly and efficiently we will all make money together.” Too many founders still think of investment as being akin to a grant.

Technology

Meta in Myanmar, Part I: The Setup. “By that point, Meta had been receiving detailed and increasingly desperate warnings about Facebook’s role as an accelerant of genocidal propaganda in Myanmar for six years.” We need more discussion of this - I’m grateful for this four-part series.

Optimizing for Taste. A solid argument against A/B testing. A lot of it comes down to this: “It fosters a culture of decision making without having an opinion, without having to put a stake in the ground. It fosters a culture where making a quick buck trumps a great product experience.” I agree.

Meredith Whittaker reaffirms that Signal would leave UK if forced by privacy bill. Signal on UK privacy law: “We would leave the U.K. or any jurisdiction if it came down to the choice between backdooring our encryption and betraying the people who count on us for privacy, or leaving.” Good.

U.S. Counterintel Buys Access to the Backbone of the Internet to Hunt Foreign Hackers. “The news is yet another example of a government agency turning to the private sector for novel datasets that the public is likely unaware are being collected and then sold.”

Digital Disruption: Measuring the Social and Economic Costs of Internet Shutdowns & Throttling of Access to Twitter. This report found that removing access to Twitter created significant economic and social impacts. Question: are some of these now replicated with the switch to X?

Build Great Software By Repeatedly Encountering It. This is really important, and why we talk about “eating your own dogfood”. If you don’t use what you build, you can’t build anything great.

EV charging infrastructure is a joke – Brad Barrish. Non-Tesla EV charging infrastructure is awful. It’s good that Tesla has opened the standard, but it’s not good that the only really viable charging infrastructure is owned by one company. It needs to be fixed.

The Affordance. I strongly agree with this. “View source” has been an important part of the culture of the web since the beginning. Obfuscating that source or removing the option does damage to its underlying principles and makes the web a worse place. I like the comparison to the enclosure movement, which seems apt.

Online Safety Bill: Crackdown on harmful social media content agreed. This is a horrendous bill that is designed to encourage self-censorship, including around topics like “illegal immigration”, as well as vastly deepen surveillance on internet users. And Britain passing it will likely embolden other nations to try the same.

WordPress blogs can now be followed in the fediverse, including Mastodon. I’d prefer if this was default WordPress functionality - but the big lede is buried here. Hosted WordPress sites are getting fediverse compatibility. That’s a huge deal.

Finishing With Twitter/X. Who at the intersection of tech and politics is still posting on Twitter? And should they be? A good breakdown.

Unity has changed its pricing model, and game developers are pissed off. As with API pricing changes across social media, these tiers disproportionately penalize indie developers. The message is clear: they don’t want or need those customers. In a tighter economy, much of technology is re-organizing around serving bigger, wealthier players.

Silicon Valley's Slaughterhouse. “Andreessen wasn’t advocating for a tech industry that accelerates the development of the human race, or elevates the human condition. He wanted to (and succeeded in creating) a Silicon Valley that builds technology that can, and I quote, “eat markets far larger than the technology industry has historically been able to pursue.””

Google vet wants to turn your hot water heater into a "virtual power plant". I really need this for my home, and I suspect my entire region needs it. This could do a lot of good and be the start of something much bigger using virtual power plants as a platform.

It’s Official: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy. Every modern car brand abuses your personal information. 84% sell your data (including where you go and when). 56% will share it with law enforcement without a warrant. And none of them have demonstrably adequate security.

Tucson's Molly Holzschlag, known as 'the fairy godmother of the web,' dead at 60. Rest in peace, Molly. We’ve lost one of the really good people who made the web better.


 

What Elon Musk's X is getting right

Some dude looking at X on his phone


 

Every company is a community

Silhouettes of people gathering at sunset

There’s a piece in the latest Harvard Business Review which starts with a premise I’d like to challenge:

It’s well-known that firms where strategy and culture align outperform firms where they do not. It follows, then, that if the two aren’t aligned, you most likely need to change your culture.

The rest of the article goes on to describe how storytelling is an integral part of establishing a strong culture in a company — and it absolutely is. A cohesive, supporting culture and the ability to tell strong stories are things every organization needs if it wants to succeed.

What I want to challenge is this idea that if strategy and culture don’t align, it’s the culture that needs changing. To be sure, quite often it does: particularly in situations where not enough time has been spent building a supportive culture to begin with. But culture is made of people, relationships, norms, and stories. The premise above hinges on the idea that if the people in your organization aren’t aligned with an organization’s strategy, you need to change the people. The strategy is paramount. But what if that’s not true? What if the strategy really is at fault, and organizations need to put more trust in their people?

I believe that the strongest organizational cultures are the most equitable ones. Whether you agree with my belief or not, research backs me up: the most productive work cultures are the ones where everyone feels empowered to speak up and be heard, where management genuinely listens to and acts on both the needs and ideas of their workforce. There’s a necessary underlying respect that you can’t simply storytell your way around.

In turn, that respect is built by distributing equity: giving people real ownership, both figurative and literal, in their workplace. There’s a reason burnout is driven by people not feeling like they can affect the choices that impact their work; people want to have control, and to make real progress on meaningful work.

If you don’t have those things, then, sure, your culture needs to be changed. But if you find yourself wishing that everyone would just go along with what you’re telling them to do, perhaps the first change needs to be a little more personal.

Building this level of interpersonal respect necessitates approaching building your workforce like a community. In turn, this means prioritizing strong interpersonal relationships. Can people talk with each other openly? Are they able to bring their whole selves to work? Does management listen and act? Are there rules and norms that foster emotional safety, particularly among people who may feel underrepresented and therefore alone in the organization? Does the organization treat the people who work inside it as fully-realized, three dimensional human beings, or are they fungible line items on a spreadsheet?

Is your company a diverse, happy, healthy community of trusted experts?

Was your strategy co-developed with your community?

Are your community’s needs and ideas represented in your decisions?

Will your community directly experience any upside that is an outcome of their work?

From here, other, related ideas become more obvious. If your community is co-developing your strategy, you want as many diverse ideas as possible. Hiring people from different contexts and backgrounds becomes an integral part of setting strategy. It becomes important to ask, when considering any new hire, whether they’ll bring a new perspective to your community. A homogenous workforce becomes a liability, because you’ll have a narrower set of ideas to work with.

Obviously, this mindset of collaborative inclusion is not commonly employed among business leaders. If it were, we’d see more diverse organizations with happy workforces, rather than the stark, union-busting monocultures we see in tech (and everywhere) today. It runs completely counter to Elon Musk’s “extremely hardcore” work culture and wild firing rampages, for example. Not to mention the screaming fits associated with software leaders like Bill Gates and Steve Jobs.

Often, managers want to be mavericks: the smartest people in the room, who bend reality with their singular vision and bring everyone along for the ride. The truth is, it’s kind of bullshit: a story egotists tell themselves to justify being antisocial. You can’t hypnotize people into working for you through sheer charisma. The only way to scale an organization is to set a really strong internal culture first, and then empower everyone you add to your community to help you build it.

If you’ve hired great people and built a strong community, and those people are telling you that your strategy is off, you should believe them. And then you should shut up and listen to them, work together, and build something better.

If you haven’t hired great people, and you haven’t built a great community, you’ve failed at business-building 101 and need to go back to the drawing board.

Every industry, including tech, comes down to people. Every company is a community. And every community is built on trust, respect, and equity.


 

Unions work

Hands joining together in solidarity

The Writers Guild of America seems to have received everything it asked for:

Delivering on issues that many scribes saw as core to their profession, the deal contains big leaps in AI guardrails, residuals and data transparency for writers — leaps that could be transplanted into the upcoming negotiations between the AMPTP and SAG-AFTRA, which could start in the next week.

This is a great example of how unions can really work for their members. Hollywood is a broadly-unionized industry, as we’ve seen for the last five months, and the result has been real gains in writer equity and compensation in the face of technology changes like streaming and AI.

Of course, at least in America, most industries are not highly-unionized. 10.1% of wage and salary workers were unionized in 2022, down from a peak of about a third, which coincided with income inequality’s lowest point. Generally, unionized workers make 18% more (20% for African Americans, 23% for women, and 34% for Hispanic workers).

Tech is often home to a particular kind of libertarian thinking that tends to be anti-union. But that, thankfully, is changing. In 2004, a third of tech workers were in favor of unionization; twelve years later, it was 59%. These days, prominently recognized tech unions include the Alphabet Workers Union, but firms have also engaged in naked union-busting activity: big tech companies like Apple, later-stage startups like Instacart, and supposedly public interest organizations like Code for America. (It’ll be no surprise that Elon Musk’s Tesla was found guilty of illegal union-busting tactics.)

Regardless, the industry would gain immensely from unionization — and more and more tech workers agree. It’s not so much about wages as recognition and a say in how these companies are run. Last year, Jane Lytvynenko, senior research fellow on the Tech and Social Change Project at the Shorenstein Center, wrote in MIT Technology Review:

[…] Silicon Valley companies don’t see more protests about wages from their white-collar employees—those workers get stock options, good salaries, and free lunch. But such perks do little to address structural discrimination.

My hope is that examples like the WGA’s win will help spread this idea that there should be a counterbalance to corporate power, and that the people who do the work should have influence over how it is organized. If you’ll pardon the pun, tech workers should own the means to push to production. All workers should have a say in how their companies function. And I believe — still crossing my fingers, because there’s a lot of work to do and a lot of gains still to be made — that this future is coming.

In the meantime, congratulations to the WGA! Nice work. Solidarity.


 

Parenting in the age of the internet

A toddler using an iPhone on the floor

I learned to read and write on computers.

Our first home computer, the Sinclair ZX81, had BASIC shortcuts built into the keyboards: you could hit a key combination and words like RUN, THEN, and ELSE would spit out onto the screen. I wrote a lot of early stories using those building blocks.

Our second, the Atari 130XE, had similar BASIC instructions, but also had a much stronger software ecosystem. In one, you would type a rudimentary story, and 8-bit stick figure characters would act it out on screen. “The man walks to the woman”; “The wumpus eats the man.”

We never had a games console in the house, much to my chagrin, although the Atari could take games cartridges, and I once got so far in Joust that the score wrapped back around to 0. But mostly, I used our computers to write stories and play around a little bit with simple computer programming (my mother taught me a little BASIC when I was five).

We walk our son to daycare via the local elementary school. This morning, as we wheeled his empty stroller back past the building, a school bus pulled up outside and a stream of eight-year-olds came tumbling out in front of us. As we stood there and watched them walk one by one into the building, I saw iPhone after iPhone after iPhone clutched in chubby little hands. Instagram; YouTube; texting.

It’s obvious that he’ll get into computers early: he’s the son of someone who learned to write code at the same time as writing English and a cognitive scientist who does research for a big FAANG company. Give him half a chance and he’ll already grab someone’s phone or laptop and find modes none of us knew existed — and he’s barely a year old. The only question is how he’ll get into computers.

I’m adamant with him, as my parents were with me, that he should see a computer as a creation, not a consumption device. At their best, computers are tools that allow children to create things themselves, and learn about the world in the process. At their worst, they’re little more than televisions, albeit with a near-infinite number of channels, that needlessly limit your horizons. For many kids, social media is such a huge part of their lives that being an influencer is their most hoped-for job. No thank you: not for my kid.

But, of course, if we can steer away from streaming media and Instagram’s hollow expectations, there’s a ton of fun to be had. This is one area where I think generative AI could be genuinely joyful: the fun that I had writing stories for those 8-bit stick figures, transposed to a whole universe of visual possibilities. That is, of course, unless using those tools prevents him from learning to draw himself.

He’s entering a very different cultural landscape where computers occupy a very different space. Those early 8-bit machines were, by necessity, all about creation: you often had to type in a BASIC script before you could use any software at all. In contrast, today’s devices are optimized to keep you consuming, and to capture your engagement at all costs. Those iPhones those kids were holding are designed to be addiction machines.

Correspondingly, our role as parents is to teach responsible use. If we are to be good teachers, that also means we have to demonstrate responsible use: something I am notoriously bad at with my own phone. I’ve got every social network installed. I sometimes lose time to TikTok. I’m a slave to my tiny hand-computer in every way I possibly can be. I tell myself that I need to know how it all works because of what I do for a living, but the real truth is, I love it. I don’t need to be on social media; I don’t need to be a part of the iPhone Upgrade Program. I just am.

I think responsible use means dialing up the ratio of creation to consumption for me, too. If I’m to convey that it’s better to be an active part of shaping the world than just being a passive consumer of it, that’s what I have to do. This is true in all things — a core, important lesson is that there isn’t one way to do things, and life is richer if you don’t follow the life templates that are set out for us — but in some ways I feel it most acutely in our relationship to technology.

There will certainly be peer pressure. His friends will have iPhones. I don’t think withholding technology is the right thing to do: consider those kids whose parents never let them have junk food, who then go out and have as much junk food as possible as soon as they can. Instead, if he has an iPhone, he will learn how to make simple iPhone apps. You’d better believe that he’ll learn how to make websites early on (what kind of indieweb advocate would I be otherwise?). He will be writing stories and editing videos and making music. And, sure, he’ll be consuming as part of that — but, in part, as a way to get inspired about making his own things.

These days, creating also means participating in online conversations. As he gets older, we’ll need to have careful discussions about the ideas he encounters. I’m already imagining that first conversation about why Black Lives Matter is an important movement and how to think about right-wing content that seeks to minimize other people. I don’t want our kid to be a lurker who thinks people should be happy with what they get; I want him to feel like the world is his oyster, and that he can help change it for the better. Our devices can be a gateway to bigger ideas, or they can be a path to a constrained walled garden of parochial thought. It all requires guidance and trust.

The computer revolution happened between my birth and his. Realizing that makes me feel as old as dust, but more importantly, it opens up a new set of parental responsibilities. I want to help him be someone who creates and affects the world, not someone who lets the world happen to him. And there’s so much world to see.


 

Trying out Kagi

As an experiment, I’m trying out Kagi as my default search engine, switching from Google (no link required; they’re probably behind you right now).

I like the idea of an ad-free experience: a paid-for engine puts the incentives in the right place. But it’s got to be about more than ideology. Because search is such an important part of my working life (and most knowledge workers’ working lives) it’s important that the results are actually better than Google’s.

For a while, I tried to use DuckDuckGo, which uses Bing’s search engine behind the scenes. It was just fine for most things and markedly worse for a few, so I had to switch back, even though I love its privacy focus.

Kagi uses a mix of third-party engines and its own to provide its results. So far, they seem pretty good, but the proof will be in intensive use.

What I already know I love: they have a StumbleUpon-like site for discovering small websites, and surface blogs in some search results when they’re relevant. That’s something I want from every search engine.

We will see! I’ll give it a month and then report back.


 

Subscribing to the blogs of people I follow on Mastodon

It’s no surprise to anyone that I prefer reading people’s long-form thoughts to tweets or pithy social media posts. Microblogging is interesting for quick, in-the-now status updates, but I find myself craving more nuance and depth.

Luckily, blogging is enjoying a resurgence off the back of movements like the Indieweb (at one end of the spectrum) and platforms like Substack (at the other), and far more people are writing in public on their own sites than they were ten years ago. Hooray! This is great for me, but how do I find all those sites to read?

I figured that the people I’m connected to on Mastodon would probably be the most likely to be writing on their own sites, so I wondered if it was possible to subscribe to all the blogs of the people I followed.

I had a few criteria:

  1. I only wanted to subscribe to blogs. (No feeds of updates from GitHub, for example, or posts in forums.)
  2. I didn’t want to have to authenticate with the Mastodon API to get this done. This felt like a job for a scraper — and Mastodon’s API is designed in such a way that you need to make several API calls to figure out each user’s profile links, which I didn’t want to do.
  3. I wanted to write it in an hour or two on Sunday morning. This wasn’t going to be a sophisticated project. I was going to take my son to the children’s museum in the afternoon, which was a far more important task.

On Mastodon, people can list a small number of external links as part of their profile, with any label they choose. Some people are kind enough to use the label blog, which is fairly determinative, but lots don’t. So I decided that I wanted to take a look at every link people I follow on Mastodon added to their profiles, figure out if it’s a blog I can subscribe to or not, and then add the reasonably-bloggy sites to an OPML file that I could then add to an RSS reader.

Here’s the very quick-and-dirty command line tool I wrote yesterday.

Mastodon helpfully produces a CSV file that lists all the accounts you follow. I decided to use that as an index rather than crawling my instance.
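
For anyone who wants to do something similar, that first step looks roughly like this. It assumes the follow export has an “Account address” column of user@instance handles, which is what recent Mastodon versions produce, though the exact column name may vary:

```python
# Sketch: turn Mastodon's follow-export CSV into profile URLs to scrape.
# Assumes a column of user@instance handles called "Account address"
# (the column name may differ between Mastodon versions).
import csv

def profile_urls(csv_path: str) -> list[str]:
    urls = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            handle = row.get("Account address", "").strip()
            if "@" not in handle:
                continue
            user, instance = handle.split("@", 1)
            urls.append(f"https://{instance}/@{user}")
    return urls
```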

Then it converts those account usernames to URLs and downloads the HTML for each profile. While Mastodon has latterly started using JavaScript to render its UI — which means the actual profile links aren’t there in the HTML to parse — it turns out that it includes profile links as rel=“me” links in the page head, so my script finds and extracts those using the indieweb link-rel parser to create the list of websites to crawl.
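
A rough equivalent of that step, using BeautifulSoup instead of the link-rel parser the script actually uses, looks something like this:

```python
# Sketch of the rel="me" extraction step, using BeautifulSoup rather than the
# indieweb link-rel parser. Requires `requests` and `beautifulsoup4`.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def rel_me_links(profile_url: str) -> list[str]:
    """Return the external links a Mastodon profile advertises via rel="me"."""
    html = requests.get(profile_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for tag in soup.find_all(["link", "a"]):
        rel = tag.get("rel") or []          # BeautifulSoup returns rel as a list of tokens
        if "me" in rel and tag.get("href"):
            links.append(urljoin(profile_url, tag["href"]))
    return links
```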

Once it has the list of websites, it excludes any that don’t look like they’re probably blogs, using some imperfect-but-probably-good-enough heuristics (sketched in code after the list) that include:

  1. Known silo URLs (Facebook, Soundcloud, etc) are excluded.
  2. If the URL contains /article, /product, and so on, it’s probably a link to an individual page rather than a blog.
  3. Long links are probably articles or resources, not blogs.
  4. Pages with long URL query strings are probably search results, not blogs.
  5. Links to other Mastodon profiles (or Pixelfed, Firefish, and so on) disappear.
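Here’s a loose sketch of what heuristics like these can look like in code. The domain lists and length thresholds below are illustrative guesses, not the ones my script actually uses:

```python
# A loose sketch of the filtering heuristics; lists and thresholds are illustrative.
from urllib.parse import urlparse

SILO_DOMAINS = {"facebook.com", "soundcloud.com", "youtube.com", "instagram.com"}
FEDIVERSE_HINTS = ("/@", "/users/")   # Mastodon, Pixelfed, Firefish, and friends
PAGE_HINTS = ("/article", "/product", "/post/")

def probably_a_blog(url: str) -> bool:
    parts = urlparse(url)
    host = parts.netloc.lower().removeprefix("www.")
    if host in SILO_DOMAINS:
        return False                  # known silo, not a personal site
    if any(hint in parts.path for hint in PAGE_HINTS):
        return False                  # looks like an individual page
    if len(url) > 80 or len(parts.query) > 20:
        return False                  # long URLs tend to be articles or search results
    if any(hint in parts.path for hint in FEDIVERSE_HINTS):
        return False                  # another fediverse profile
    return True
```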

The script goes through the remaining list and attempts to find the feed for each page. If it doesn’t find a feed I can subscribe to, it just moves on. Any feeds that look like feeds of comments are discarded. Then, because the first feed listed is usually the best one, the script chooses the first remaining feed in the list for the page.

Once it’s gone through every website, it spits out a CSV and an OPML file.
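For a feel of those last two steps, here’s a hedged sketch. The feed types, the comment-feed check, and the OPML layout are assumptions on my part; a real version would also need to resolve relative feed URLs:

```python
# A sketch of feed discovery plus OPML output; details are illustrative.
import requests
from bs4 import BeautifulSoup
from xml.sax.saxutils import quoteattr

FEED_TYPES = {"application/rss+xml", "application/atom+xml", "application/feed+json"}

def find_feed(site_url: str) -> str | None:
    soup = BeautifulSoup(requests.get(site_url, timeout=10).text, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        if ("alternate" in link.get("rel", [])
                and link.get("type") in FEED_TYPES
                and link.get("href")):
            feeds.append(link["href"])
    # Drop comment feeds, then keep the first remaining one (usually the best).
    feeds = [f for f in feeds if "comment" not in f.lower()]
    return feeds[0] if feeds else None

def write_opml(feeds: dict[str, str], path: str = "mastodon-follows.opml") -> None:
    """feeds maps a site URL to its feed URL."""
    outlines = "\n".join(
        f'    <outline type="rss" text={quoteattr(site)} xmlUrl={quoteattr(feed)}/>'
        for site, feed in feeds.items()
    )
    with open(path, "w") as fh:
        fh.write(
            '<opml version="2.0"><head><title>Mastodon follows</title></head>\n'
            f"  <body>\n{outlines}\n  </body>\n</opml>\n"
        )
```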

After a few runs, I pushed the OPML file into Newsblur, my feed reader of choice. It was able to subscribe to a little over a thousand new feeds. Given that I’d written the script in a little over an hour and that it was using some questionable tactics, I wasn’t sure how high-quality the sites would be, so I organized them all into a new “Mastodon follows” folder that I could unsubscribe from quickly if I needed to.

But actually, it was pretty great! A few erroneous feeds did make it through: a few regional newspapers (I follow a lot of journalists), some updates to self-hosted Git repositories, and some Lemmy feeds. I learned quickly that I don’t care for most Tumblr content — which is usually reposted images — and I found myself wishing I’d excluded it. Finally, I removed some non-English feeds that I simply couldn’t read (although I wish my feed reader had an auto-translate function so that I could).

The upshot is that I’ve got a lot more blogs to read from people I’ve already expressed interest in. Is the script anything close to perfect? Absolutely not. Is it shippable? Not really. But it did what I needed it to, and I’m perfectly happy.

· Posts · Share this post

 

G'mar chatima tova to all who observe.

· Statuses · Share this post

 

Long-term blogging

Tracy Durnell celebrates 20 years of blogging:

A blog is a much nicer place to publish than social media, sparking fewer but more meaningful interactions. Blogging allows writers a more forgiving pace with slower conversation. On their blog, people can be themselves instead of playing to an audience and feeling judged — a place to escape the pressures of one-upmanship and signaling, the noise of the ever-demanding attention economy, and the stress of hustle culture.

It’s a huge achievement, to be sure, and I couldn’t agree more with Tracy’s sentiment here. Congratulations, Tracy!

I’m a little jealous that she can pinpoint an anniversary date. For me, it depends on how you judge: I had a hand-rolled blog of sorts when I went to university in 1998, but was it really a blog? I definitely had a public Livejournal in 2001, but was that a blog? How about the blog I used to keep on Elgg dot net (now a domain squatter, may it rest in peace)? My old domain, benwerd.com, dates back to 2006, and my current one, werd.io, only goes back to 2013. It’s a bit of a messy history, with stops and false starts.

On the other hand, I know people who have posted to the same domain for almost as long as they’ve been online. I don’t know if I can match that sort of dedication - or a commitment to even having a continuous identity for all that time. Am I the same person I was 20+ years ago? A little bit yes, but mostly not really. The idea of joining up my life online on a long-term basis is actually quite daunting.

Tracy links to Mandy Brown’s piece on writers vs talkers, which also deeply resonates: I’m a writer. I hate being drawn into making decisions in ad hoc meetings. I want to write my thoughts down, structure them, and then come to a conclusion after getting feedback and iterating. Perhaps that’s why blogging ideas early appeals to me so much: I can put them out into the world and very quickly engage in conversations that push my thinking along.

Blogging might seem like a solitary activity, but it’s very, very social. Even the name — a pun that splits “weblog” into “we blog” — is about community. Writing for 20 years also means building community for that long.

Here’s to the next 20!

· Posts · Share this post

 

The worst part of writing is writing

I’ve been neck-deep in a long-form first draft for months; at this point I’m many tens of thousands of words in. Every time I look back at my writing from tens of thousands of words ago, it’s a horrible mistake that opens up floodgates of self-questioning. How could I possibly have thought that I could do this? Who on earth would want to read this? Amateur! Go back to whatever it is you do for your day job. (Do you even know? I thought you wrote software? When was the last time you actually wrote software, you hack?)

But I’m determined. The only thing I can say for sure is that, eventually, I will have a manuscript. I have professional mentors who will read and critique it once I’ve iterated on it a few times. Beyond that, I can’t say. Perhaps, if I’m lucky, someone will like it. But perhaps it really is doomed to sit on my hard drive, unloved.

The deeper I get into it, the more I’m comfortable with the idea of failure. I think I started with the idea that I might be intentionally writing something that a lot of people might enjoy, but at this point it’s for me. The more I pour in of myself, and the ideas I have about the world (and the future of technology, because that’s the kind of book this is), the more I feel comfortable with it. Even if nobody loves it, it’ll be representative of me: a genuine work of self-expression hooked onto a plot that I continue to think is really interesting. And the feedback I get will help me learn to write the next one.

It turns out that the thing which most motivates me to write is my sense of humor. If it’s too self-serious, I stall. (Honestly, I expect readers would, too.) On the other hand, if I’m amusing myself, undercutting my serious points with irony or adding notes about things from the real world that I think are ridiculous, I can go forever. That’s probably something worth knowing about myself: I thrive on irreverence. I cut my teeth on Douglas Adams, Terry Pratchett, and Charlie Brooker’s early stuff, so that’s probably not surprising. I could probably use more of that here, too.

Anyway. It’s like pulling teeth, but joyously. A gleeful festival of unpleasant monotony wherein I make myself laugh while disgusting myself with my own ineptitude. And maybe, if I’m really, really lucky, something will even come of it.

· Posts · Share this post

 

AI is not a paradigm shift. But it could be useful

A light painting of the word

It’s been interesting to watch all of the articles celebrating the death of NFTs lately. For years, they were the harbinger of the next big thing, hawked by A-list celebrities. Behind the scenes, some of the biggest tech companies in the world spawned NFT strategies, even as critics noted that valuations were partially driven by money laundering and wash trading.

Cut to 2023, and surprise, surprise: 95% of NFTs are now completely worthless.

If you missed the craze: while most digital data can be infinitely replicated for almost no cost, Non Fungible Tokens, or NFTs, were a way to ensure there was only one of an item using blockchains. NFTs were often attached to digital art — for example, these hideous apes — and because they were both scarce and tradable, for a while each one was going for the equivalent of thousands of dollars. Of course, it couldn’t last, and NFTs turned out to be the digital equivalent of investing in Beanie Babies or tulips (pick your proverbial market collapse).

It’s now controversial to say that crypto isn’t completely useless, but if you look beyond the brazen grift, international crimes, and planet-destroying environmental impact, I do think there are a few things to celebrate about the trend. The crypto community deployed the most widely-used implementation of identity in the browser to date, for one: people who installed software like Metamask could choose to identify themselves to a website with a single click. In some countries, digital currencies also gave citizens an accessible safe haven when their own local currency tumbled. And finally, it introduced a much wider audience to the concept of decentralization, where a large-scale internet system is run co-operatively by all of its users instead of a giant megacorp.

Although the rampant speculation and wildly inflated prices are gone, there are some technical outcomes that will likely be with us for some time. And some of those are positive and useful.

This is exactly how the hype cycle works. A technology breakthrough kicks things off and gets people all excited. The market works itself into a frenzy over the technology, and lots of people imagine that it can do all kinds of amazing things. Those inevitably don’t actually pan out, and people lose hope and interest. But it turns out that the technology is useful for something, and eventually, it finds a mainstream use.

The Gartner Hype Cycle

Crypto is very much in the trough of disillusionment right now; eventually some aspects of the technology (maybe identity in the browser, maybe something else) will find a use.

Meanwhile, AI? AI is right at the top of that hype curve.

There are people out there who believe we’re building a new kind of higher consciousness, and that our goal as humans should be to support and spread that consciousness to the stars. A galaxy full of stochastic parrots is an inherently funny, Douglas Adams-esque idea, but naturally, they’re serious, partially because they feel this idea absolves them of dealing with the truth that there are actual human beings living on a dying planet who need help and assistance right now. In erasing the needs of vulnerable communities, AI supremacy (officially called effective accelerationism) is the new white supremacy (sitting comfortably alongside the old white supremacy, which is still going strong).

There are also people who think AI will replace poets, artists, neurosurgeons, and political leaders. AI systems will farm for us, tend to our children, and imagine whole new societies that we wouldn’t otherwise be capable of envisioning. They will write great literature and invent wholly new, never-ending dramatic entertainment for us to sit and consume.

It’s horseshit. The technology can’t do any of those things well. It’s best thought of as a really advanced version of auto-complete, and everyone who claims it’s something more is trying to sell you something.

Which isn’t to say it’s not useful. I’ve certainly used it as a utility in my writing — not to do the writing itself (it produces mediocre porridge-writing), but to prompt for different angles or approaches. I’ve used it to suggest ways to code a function. And I’ve certainly used it, again and again, as a quick way to autocomplete a line of code or an English sentence.

What’s going to happen is this: in a few years, AI will come crashing down as everyone realizes it’s not going to be an evolution of human consciousness, and some other new technology will take its place. Valuations of AI companies will fall and some will go out of business. Then, some of the actual uses of the technology will become apparent and it’ll be a mainstream, but not dominant, part of the technology landscape.

The hype cycle is well-understood. What surprises me, again and again, is how thoroughly people follow it. Across industries, CEOs are right now thinking, “holy shit, if we don’t jump on AI, we’re going to be completely left behind. This is a paradigm shift.” It’s kind of the equivalent of a bunch of soccer players chasing the ball — It’s over here! No, it’s over here! Let’s run towards it! — which is how three-year-olds play soccer. A more strategic approach (let’s call it thinking for yourself) will be more productive for most businesses.

There will absolutely be uses for AI tools. The important thing is to take a step back and think: what are my needs? What are the needs of my customers or my community? Given the actual demonstrated capabilities of the software, does it help me meet any of them in a reliable way? If I do use it, am I holding true to my values and keeping my customers and community safe? If the answer is yes to all of these things, then great! Otherwise it might be worth taking a step back and letting the dust settle.

Keep me honest: if AI doesn’t enter a trough of disillusionment and just keeps growing and growing exponentially, call me on it. But I think it’s a pretty safe bet that it won’t.

· Posts · Share this post

 

I'm going to keep using Zapier for my link blog

The way my link blog works is like this:

I save an article, website, or book I thought was interesting to a database in Notion using the web clipper, together with a description and a high-level category. (These are Technology, Society, Democracy, and so on.) I also have a checkbox that designates whether the link is something I’d consider business-friendly.

Zapier watches for new links. When it finds one, it publishes it to my website using the micropub protocol. (My website then tries to send a webmention to that site to let it know I’ve linked to them.)
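If you’ve never seen micropub, the publishing step boils down to a single authenticated form POST. This is a minimal sketch rather than my actual setup; the endpoint URL and access token are placeholders:

```python
# A minimal sketch of posting a link via the micropub protocol.
# The endpoint and token are placeholders, not my real configuration.
import requests

MICROPUB_ENDPOINT = "https://example.com/micropub"   # your site's micropub endpoint
ACCESS_TOKEN = "…"                                    # an IndieAuth-issued bearer token

def publish_link(url: str, description: str, category: str) -> requests.Response:
    return requests.post(
        MICROPUB_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={
            "h": "entry",            # create an h-entry post
            "bookmark-of": url,      # the link being shared
            "content": description,
            "category": category,    # e.g. "Technology"
        },
        timeout=10,
    )

resp = publish_link("https://example.org/article", "Worth reading.", "Technology")
print(resp.status_code, resp.headers.get("Location"))
```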

Then, it publishes the link to my Mastodon profile using the top-level category as a hashtag. If the link is to a book, it also adds the bookstodon hashtag.

Following that, it publishes to all my other social networks via Buffer, without the hashtag. (The exception is my Bluesky profile, for which I had to write some custom API code.) If the business-friendly box was checked, that includes publishing to my LinkedIn profile.
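The Bluesky part amounts to logging in with an app password and creating a post record over the AT Protocol’s XRPC endpoints. The sketch below is illustrative rather than my actual code; the handle and app password are placeholders:

```python
# An illustrative sketch of posting to Bluesky via the AT Protocol.
# Handle and app password are placeholders.
from datetime import datetime, timezone
import requests

PDS = "https://bsky.social"

def post_to_bluesky(handle: str, app_password: str, text: str) -> None:
    # Exchange the handle + app password for a session token.
    session = requests.post(
        f"{PDS}/xrpc/com.atproto.server.createSession",
        json={"identifier": handle, "password": app_password},
        timeout=10,
    ).json()
    # Create the post record in the account's repository.
    requests.post(
        f"{PDS}/xrpc/com.atproto.repo.createRecord",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
        json={
            "repo": session["did"],
            "collection": "app.bsky.feed.post",
            "record": {
                "$type": "app.bsky.feed.post",
                "text": text,
                "createdAt": datetime.now(timezone.utc).isoformat(),
            },
        },
        timeout=10,
    ).raise_for_status()
```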

If I’m feeling particularly motivated, I’ll copy and paste the link to my Threads profile, but because there’s no API, it’s a fully manual process. Which means I usually don’t.

Very occasionally, Zapier will pick up a link before the Notion entry has fully saved, which means that links post without a description or a category. Then I either shrug my shoulders and accept that I have some weird posts on my timeline, or I go back and edit or repost each and every one.

Because of this bug, I’ve thought about writing my own code to do all of the above on my server. It would work exactly the way I want it to. It would be cheaper, too: I pay for Zapier every month, and the cost adds up.

But while I could do this, and the up-front cost would certainly be lower, what if something goes wrong? Let’s say LinkedIn changes the way their API works. If I wrote the connection myself, I would need to keep my code up to date every time this happened — and, in turn, stay on top of codebase changes for every single social media platform I used.

And the truth is: I’m tired, friends. I want to be really careful about the amount of code I set myself up to maintain. It might seem like a simple script now, but over time I build up more and more simple scripts and, cumulatively, I end up buried in code.

As I get older, I find myself optimizing that cost more and more. I’d much rather pay something up-front that saves me a ton of time and cognitive overhead, because both of these things are at such an enormous premium for me.

I could also just not post to those social media accounts, or do it fully manually, but there’s something really satisfying about publishing once and syndicating everywhere I’m connected to people. I could save my links straight to something like Buffer, but I also like having my categorized database of everything I’ve shared. And Notion makes it easy to save links across my devices (I’m sometimes on my phone, sometimes on my laptop, sometimes on my desktop).

So I’m keeping Zapier, at least for now. I like keeping my links, and I like sharing them. And, more than anything else, I like not having to maintain the code that does it.

· Posts · Share this post

 

Shana tova to everyone who celebrates!

· Statuses · Share this post

 

An update on Sup, the ActivityPub API

An abstract network

A little while back I shared an idea about an API service that would make it easy to build on top of the fediverse. People went wild about it on Mastodon and Bluesky, and I got lots of positive feedback.

My startup experience tells me that it’s important to validate your idea and understand your customers before you start building a product, lest you spend months or years building the wrong thing. So that’s exactly what I did.

I put out a simple survey that was really just an opener to find people who would be interested in having a conversation with me about it. I bought each person who replied a book certificate (except for one participant who refused it), and listened to why they had been interested enough to answer my questions. If they asked, I told them a little more about my idea.

The people I spoke with ran the gamut from the CEOs of well-funded tech companies to individuals building something in the context of cash-strapped non-profits. I also spoke with a handful of venture capitalists at various firms who had proactively reached out.

A shout-out to Evan Prodromou, one of the fathers of the fediverse, here: he very kindly spent a bunch of time with me keeping me honest and helping to move the project along.

What I discovered was that the people who wanted me to build my full idea were people who really cared about the fediverse, but were not going to be customers. The people who were going to be customers wanted two specific things:

A fast way to make informational bots. Twitter used to be full of informational, automated accounts. Consider accounts that post local weather updates, earthquake reports, and so on. That’s been much harder for people to build on the fediverse (a minimal sketch of one appears below).

Statistics about trends and usage. Aggregate information about how the fediverse is behaving, including about how accounts are responding to individual links and domains.
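To make the first of these concrete: posting a status via Mastodon’s API is a single authenticated call, so an informational bot can be sketched in a few lines. Everything below — the instance, the token, and the weather helper — is a hypothetical placeholder:

```python
# A minimal sketch of an informational fediverse bot.
# Instance, token, and get_weather_summary() are hypothetical placeholders.
import requests

INSTANCE = "https://example.social"   # any Mastodon instance that allows bots
ACCESS_TOKEN = "…"                    # an app token with write:statuses scope

def get_weather_summary() -> str:
    # Placeholder: a real bot would call a weather API here.
    return "Currently 18°C and clear in Oxford."

def post_status(text: str) -> None:
    requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": text, "visibility": "unlisted"},
        timeout=10,
    ).raise_for_status()

post_status(get_weather_summary())
```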

While these signals were very clear, I couldn’t yet validate the core thing I’d proposed to build, which was a full API service with libraries that let people build fully-featured fediverse-compatible software. I also couldn’t yet validate the idea that existing startups would use a service like this to add fediverse compatibility to their products.

But I believe, to reference a way-overused cliché, that this is where the puck is going.

I strongly believe that the fediverse is how new social networks over the next decade will be built. I also have conviction that more people will be interested in building fully-featured fediverse services once Threads federates and Tumblr joins. It’s likely that another large network will also start supporting these protocols.

However, someone financially backing the project would be doing so on the basis of my conviction alone. I couldn’t yet find strong customers for this use case.

I think that’s okay! In the shorter term, I’m very interested in helping people build those bots in particular — it’s a great place to start and a good example of building the smallest, simplest thing.

The original name I came up with, Sup, was taken by another fediverse project. So for now, this idea is called Feddy.

Anyway, I wanted to report back on what I’d found and how I was thinking about the project today. As always, I’d love your feedback and ideas! You can always email me at ben@werd.io.

· Posts · Share this post

 

As social networks begin to fill with AI-generated crap, it occurs to me that the small, independent web will be the last place where you know you'll find content and conversations from real people.

· Statuses · Share this post

 

Bush's legacy

The contemporary New York City skyline

Twenty-two years ago, I sat in the office — actually the bottom two floors of a Victorian home with creaking, carpeted floorboards and an overstuffed kitchen — at Daily Information, the local paper where I worked in Oxford. It was mid-afternoon, and I probably had Dreamweaver open; I can’t remember exactly now. I’d taken a year’s break from my computer science degree because my as-yet-undiagnosed anxiety had gotten the better of me in the wake of the death of a close friend. It was the first job I’d ever had that paid for lunch, and the remains of a wholewheat bread slice with spicy red bean paté sat on a plate beside me. Between that and the array of laser printers, the room smelled of toast and ozone.

My dad showed up and told me what had happened: the attacks of September 11, 2001, the details of which are now part of our indelible cultural consciousness. For the rest of the afternoon, we tried to learn what we could, refreshing website after website on the overloaded ISDN connection. One by one, every news website went down for us under the strain of unprecedented traffic, with the exception of The Guardian. I alternated between that and a fast-moving MetaFilter thread until it was time to go home. I vividly remember sitting at the bus stop, watching the faces of all the people in the cars that drove past, thinking that the world would likely change in ways that we didn’t understand yet.

George W Bush was President of the United States: a man who previously had presided over more executions than any other Governor of the State of Texas in history (roughly one every two weeks). While the attacks themselves were obviously an atrocity, he was, in my eyes, unmistakably an evil, untrustworthy leader, and it wasn’t clear that he wouldn’t start a terrible war in response. That was the fear expressed by most of my friends in England at the time: not who was behind the attacks and why?, but what will America do? I was the only American in my friend group, but I shared the same fear.

Of course, now we all know the story of the next two decades. We invaded Iraq under false pretenses, established a major erosion of civil liberties ironically called the PATRIOT Act which granted unprecedented authorities that live on to this day, and cranked racist anti-Muslim rhetoric up to eleven. All in the name of 2,753 people who didn’t ask for any of it. Even the first responders, much lauded at the time, struggle to get the support they need.

In 2002, my parents moved back to California to look after my Oma, and I joined them for a few months. I had the whole row on my transatlantic flight to myself, which seemed strange until I remembered, mid-flight, that it was September 11, 2002 (in retrospect probably the safest day to fly in history). When I arrived, I saw that the freeways were littered with tiny American flags that had fallen off the cars they had presumably been waving from over the last year. As a metaphor, discarded disposable American flags bought to illustrate a kind of temporary superficial patriotism seemed a little on the nose.

While the roads were littered with flags, the air was still thick with fear. My parents had moved to Turlock, a small town outside of Modesto where the radio stations mostly played country music and almond dust polluted the air. There was still a feeling that the next attack could happen at any time, and if it did, why wouldn’t it be here? The dissonance between the significance of the World Trade Center in New York City and the Save-Mart in Turlock seemed to be lost on them. It could happen anywhere. It was the perfect environment for manufacturing consent for war. What did it matter that Saddam Hussein had precisely nothing to do with the attacks and that the purported weapons of mass destruction were obviously fictional? He was brown too, wasn’t he? And, boy, we needed to get revenge.

Even now, I wonder if I should be writing these opinions. In a way, September 11 has become a sacred event. And, seriously, what gives me the right to be talking about it to begin with?

But the tragedy of that day has touched all of us, everywhere. It has also been used as a cover for harms that continue to this day. The deaths of those innocent people are still used to justify erosions of civil liberties; they are still used to justify racism; they are still used to justify mass surveillance domestically and drone strikes internationally; they are still used to justify draconian foreign policies. If any lessons at all were learned from September 11, I think they were the wrong ones.

There’s an alternate universe where America as a population decided that funding and arming covert operations in foreign nations to support American aims was a bad idea. The late Robin Cook, MP, the former British Foreign Secretary, wrote in the wake of the July 7 bombings in London:

In the absence of anyone else owning up to yesterday's crimes, we will be subjected to a spate of articles analysing the threat of militant Islam. Ironically they will fall in the same week that we recall the tenth anniversary of the massacre at Srebrenica, when the powerful nations of Europe failed to protect 8,000 Muslims from being annihilated in the worst terrorist act in Europe of the past generation.

[…] Bin Laden was, though, a product of a monumental miscalculation by western security agencies. Throughout the 80s he was armed by the CIA and funded by the Saudis to wage jihad against the Russian occupation of Afghanistan. Al-Qaida, literally "the database", was originally the computer file of the thousands of mujahideen who were recruited and trained with help from the CIA to defeat the Russians. Inexplicably, and with disastrous consequences, it never appears to have occurred to Washington that once Russia was out of the way, Bin Laden's organisation would turn its attention to the west.

The CIA, for the record, denies this. But there’s no denying the effect of American foreign policies overall, from Chile (whose US-aided coup was 50 years ago today) to Iran, let alone the disastrous wars in Iraq and Afghanistan. It’s still a mystery to some Americans why the rest of the world isn’t particularly fond of us, but it really shouldn’t be. (And it’s not, as some particularly tone deaf commentators have suggested, jealousy.)

I remember visiting Ground Zero for the first time. By that time, reconstruction was underway, but the holes were clearly visible: conspicuous voids shot through a bustling, diverse city. I think New York City is one of the most amazing places I’ve ever been to: all kinds of people living on top of each other in relative harmony. It’s alive in a way that many places aren’t. Every time I visit I feel enriched by the humanity around me. One of the reasons I live where I do now is to be closer to it.

I think New York City itself is a demonstration of the lesson we should have learned: one that’s more about cross-border co-operation and humanity than isolation and dominance. To put it another way, a lesson that’s more about love than fear. Some conservative politicians talk derisively about “New York values”, but man — if those values were actually shared by the whole nation, America would be a far better place. That was obvious in the way the city came together that day, and it’s been obvious in the way it’s held itself together since.

In contrast, I think the way America as a whole responded to the September 11 attacks directly paved the way to Trump. It enriched a right-wing populist leader and his party; it created divisive foreign policy based on a supremacist foundation; it once again marked people with a certain skin tone and a different religion as being second-class citizens; it promoted nationalism and exceptionalism; it eroded hard-won freedoms for everyone. We can thank Bush for stoking those fires.

True progress towards peace looks like a collaborative world where we consider ourselves to have kinship with everyone of all religions, skin tones, and nationalities, and where every human being’s life has inherent value. It looks like building foreign policy for the benefit of all people, not the people of one nation. It looks like true, vibrant democracy. It doesn’t look like performative flag-waving, drone strikes, religious intolerance, homogeneity, or surveillance campaigns.

Saying so shouldn’t dishonor the memories of everyone who died on that day, or everyone who died as a result of everything that followed. It also doesn’t besmirch our values. One of the greatest things about America is our freedom to hold it to account. That’s what democracy and free expression are all about. And those values — collaboration, inclusion, freedom, representation, multiculturalism, democracy, and most of all, peace — are what we should be working towards.

· Posts · Share this post

 

An AI capitalism primer

A clenched robot fist

Claire Anderson (hi Claire!) asked me to break down the economics of AI. How is it going to make money, and for whom?

In this post I’m not going to talk too much about how the technology works, or the claims of its vendors versus the actual limitations of the products. Baldur Bjarnason has written extensively about that, and Simon Willison writes about building tools with AI; I recommend both of their posts.

The important thing is that when we talk about AI today, we are mostly talking about generative AI. These are products that are capable of generating content: this could be text (for example, ChatGPT), images (e.g. Midjourney), music, video, and so on.

Usually they do so in response to a simple text prompt. For example, in response to the prompt “Write a short limerick about Ben Werdmuller asking ChatGPT to write a short limerick about Ben Werdmuller”, ChatGPT instantly produced:

Ben Werdmuller pondered with glee,
“What would ChatGPT write about me?”
So he posed the request,
In a jest quite obsessed,
And chuckled at layers, level three!

Honestly, it’s pretty clever.

While a limerick isn’t particularly economically useful, you can ask these technologies to write code for you, find hidden patterns in data, highlight potential mistakes in boilerplate legal documents, and so on. (I’m personally aware of companies using it to do each of these things.)

Each of these AI products is powered by a large foundation model: deep learning neural networks that are trained on vast amounts of data. In essence, the neural network is a piece of software that ingests a huge amount of source material and finds patterns in it. Based on those patterns and the sheer amount of data involved, it can statistically decide what the outcome of a prompt should be. Each word of the limerick above is what the model decided was the most probable next piece of the output in response to my prompt.
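A toy example makes the “most probable next piece” idea concrete. This isn’t how a real foundation model is implemented — a real model computes these scores with billions of learned parameters, and the numbers below are made up — but the selection step looks something like this:

```python
# A toy illustration of next-token selection: turn the model's scores ("logits")
# for each candidate token into probabilities, then pick the most likely one.
# The candidate tokens and scores here are invented for illustration.
import math

def softmax(logits: dict[str, float]) -> dict[str, float]:
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for the token following "Ben Werdmuller pondered with"
candidate_logits = {"glee": 4.1, "care": 2.3, "dread": 1.7, "cheese": -0.5}
probabilities = softmax(candidate_logits)
next_token = max(probabilities, key=probabilities.get)
print(probabilities)   # roughly {'glee': 0.79, 'care': 0.13, ...}
print(next_token)      # "glee"
```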

The models are what have been called stochastic parrots: their output is entirely probabilistic. This kind of AI isn’t intelligence and these models have no understanding of what they’re saying. It’s a bit like a magic trick that’s really only possible because of the sheer amount of data that’s wrapped up in the training set.

And here’s the rub: the training set is a not insignificant percentage of everything that’s ever been published by a human. A huge portion of the web is there; it’s also been shown that entire libraries of pirated books have been involved. No royalties or license agreements have been paid for this content. The vast majority of it seems to have been simply scraped. Scraping publicly accessible content is not illegal (and nor should it be); incorporating pirated books and licensed media clearly is.

Clearly if you’re sucking up everything people have published, you’re also sucking up the prejudices and systemic biases that are a part of modern life. Some vendors, like OpenAI, claim to be trying to reduce those biases in their training sets. Others, like Elon Musk’s X.AI, claim that reducing those biases is tantamount to training your model to lie. He claims to be building an “anti-woke” model in response to OpenAI’s “politically correct” bias mitigation, which is pretty on-brand for Musk.

In other words, vendors are competing on the quality, characteristics, and sometimes ideological slant of their models. They’re often closed-source, giving the vendor control over how the model is generated, tweaked, and used.

These models all require a lot of computing power both to be trained and to produce their output. It’s difficult to provide a service that offers generative AI to large numbers of people due to this need: it’s expensive and it draws a lot of power (and correspondingly has a large environmental footprint).

The San Francisco skyline, bathed in murky red light.

Between the closed nature of the models, and the computing power required to run them, it’s not easy to get started in AI without paying an existing vendor. If a tech company wants to add AI to a product, or if a new startup wants to offer an AI-powered product, it’s much more cost effective to piggyback on another vendor’s existing model than to develop or host one of their own. Even Microsoft decided to invest billions of dollars into OpenAI and build a tight partnership with the company rather than build its own capability.

The models learn from their users, so as more people have conversations with ChatGPT, for example, the model gets better and better. These are commonly called network effects: the more people that use the products, the better they get. The result is that they have even more of a moat between themselves and any competitors over time. This is also true if a product just uses a model behind the scenes. So if OpenAI’s technology is built into Microsoft Office — and it is! — its models get better every time someone uses them while they write a document or edit a spreadsheet. Each of those uses sends data straight back to OpenAI’s servers and is paid for through Microsoft’s partnership.

What’s been created is an odd situation where the models are trained on content we’ve all published, and improved with our questions and new content, and then it’s all wrapped up as a product and sold back to us. There’s certainly some proprietary invention and value in the training methodology and APIs that make it all work, but the underlying data being learned from belongs to us, not them. It wouldn’t work — at all — without our labor.

There’s a second valuable data source in the queries and information we send to the model. Vendors can learn what we want and need, and deep data about our businesses and personal lives, through what we share with AI models. It’s all information that can be used by third parties to sell to us more effectively.

Google’s version of generative AI allows it to answer direct questions from its search engine without pointing you to any external web pages in the process. Whereas we used to permit Google to scrape and index our published work because it would provide us with new audiences, it now continues to scrape our work in order to provide a generated answer to user queries. Websites are still presented underneath, but it’s expected that most users won’t click through. Why would you, when you already have your answer? This is the same dynamic as OpenAI’s ChatGPT: answers are provided without credit or access to the underlying sources.

Some independent publishers are fighting back by de-listing their content from Google entirely. As the blogger and storyteller Tracy Durnell wrote:

I didn’t sign up for Google to own the whole Internet. This isn’t a reasonable thing to put in a privacy policy, nor is it a reasonable thing for a company to do. I am not ok with this.

CodePen co-founder Chris Coyier was blunt:

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

For small publishers, the model is intolerably extractive. Technical writer Tom Johnson remarked:

With AI, where’s the reward for content creation? What will motivate individual content creators if they no longer are read, but rather feed their content into a massive AI machine?

Larger publishers agree. The New York Times recently banned the use of its content to train AI models. It had previously dropped out of a coalition led by IAC that was trying to jointly negotiate scraping terms with AI vendors, preferring to arrange its own deals on a case-by-case basis. A month earlier, the Associated Press had made its own deal to license its content to OpenAI, giving it a purported first-mover advantage. The terms of the deal are not public.

Questions about copyright — and specifically the unlicensed use of copyrighted material to produce a commercial product — persist. The Authors Guild has written an open letter asking AI vendors to license its members’ copyrighted work, which is perhaps a quixotic move: rigid licensing and legal action is likely closer to what’s needed to achieve their hoped-for outcome. Perhaps sensing the business risks inherent in using tools that depend on processing copyrighted work to function, Microsoft has promised to legally defend its customers from copyright claims arising from their use of its AI-powered tools.

Meanwhile, a federal court ruled that AI-generated content cannot, itself, be copyrighted. The US Copyright Office is soliciting comments as it re-evaluates relevant law, presumably encompassing the output of AI models and the processes involved in training them. It remains to be seen whether legislation will change to protect publishers or further enable AI vendors.

The ChatGPT homepage

So. Who’s making money from AI? It’s mostly the large vendors who have the ability to create giant models and provide API services around them. Those vendors are either backed by venture capital investment firms who hope to see an exponential return on their investment (OpenAI, Midjourney) or publicly-traded multinational tech companies (Google, Microsoft). OpenAI is actually very far from profitability — it lost $540M last year. To break even, the company will need to gain many more customers for its services while spending comparatively little on content to train its models with.

In the face of criticism, some venture capitalists and AI founders have latterly embraced an ideology called effective accelerationism, or e/acc, which advocates for technical and capitalistic progress at all costs, almost on a religious basis:

Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.

In part, it espouses the idea that we’re on the fringe of building an “artificial general intelligence” that’s as powerful as the human brain — and that we should, because allowing different kinds of consciousness to flourish is a general good. It’s a kooky, extreme idea that serves as marketing for existing AI products. In reality, remember, these models are not actually intelligent, and have no ability to reason. But if we’re serving some higher ideal of furthering consciousness on earth and beyond, matters like copyright law and the impact on the environment seem more trivial. It’s a way of re-framing the conversation away from author rights and away from considering the societal impact on vulnerable communities.

Which brings us to the question of who’s not making money from AI. The answer is people who publish the content and create the information that allow these models to function. Indeed, value is being extracted from these publishers — and the downstream users whose data is being fed into these machines — more than ever before. This, of course, disproportionately affects smaller publishers and underrepresented voices, who need their platforms, audiences, and revenues more than most to survive.

On the internet, the old adage is that if you’re not the customer, you’re the product being sold. When it comes to AI models, we’re all both the customer and the product being sold. We’re providing the raw ingredients and we’re paying for it to be returned to us, laundered for our convenience.

· Posts · Share this post

 

Some newsletter changes

I’m making some experimental updates to my newsletter:

Starting next week, this newsletter will come in several flavors:

Technology, Media, and Society: technology and its impact on the way we live, work, learn, and vote.

Late Stage: personal reflections on living and surviving in the 21st century.

The Outmap: new speculative and contemporary fiction.

Most of Technology, Media, and Society will continue to be posted on this website. I am experimenting with publishing more personal posts and fiction over there.

Prefer to subscribe via RSS? Here’s the feed URL for those posts.

· Posts · Share this post

 

In defense of being unfocused

Literally an unfocused photo of a sunset. Yes, I know it's a little on the nose. Work with me here.

I spent a little time updating my resumé, a process that sits right at the top of the list of things I least like to do in the world. This time around I tried to write with an eye towards focus: what about the work I do might other organizations find valuable? Or, to put it another way: what am I?

I grew up and went to school in the UK. At the time, the A-level system of high school credentials required you to pick a small number of subjects to take at 16. In contrast to the US, where university applications are more universal and you don’t pick a degree major until you’ve actually taken courses for a while, British applicants applied for a major at a particular institution. The majors available to you were a function of the A-level subjects you chose to take. In effect, 16 year olds were asked to pick their career track for the rest of their lives.

I now know that I take a kind of liberal arts approach to product and technology leadership. My interests are in how things work, for sure, but more so who they work for. I care about the mechanics of the internet, but I care more about storytelling. I’m at least as interested in how to build an empathetic, inclusive team as I am in any new technology that comes along. The internet, to me, is made of people, and the thing that excites me more than anything else is connecting and empowering them. I’ll do any work necessary to meet their needs - whether it’s programming, storytelling, research, design, team-building, fundraising, or cleaning the kitchen.

Which means, when I picked my A-levels in 1995, and when I applied for universities two years later, that it was hard to put me in a box.

My high school didn’t even offer computing as a subject, so I arranged to take it as an extra subject in my own time. The standardized tests were so archaic that they included tape drives and punchcards. Meanwhile, my interest in storytelling and literature meant that I studied theater alongside more traditional STEM subjects: something that most British universities rejected outright as being too unfocused.

I have an honors degree in computer science but I don’t consider myself to be a computer scientist. I’ve been a senior engineer in multiple companies, but my skillset is more of a technical generalist: technology is one of the things I bring together in service of a human-centered strategy. I like to bring my whole self to work, which also includes a lot of writing, generative brainstorming, and thinking about who we’re helping and how best to go about it.

Even the term human-centered feels opaque. It just means that I describe my goals and the work I do in terms of its impact on people, and like to figure out who those people are. It’s hard to help people if you don’t know who you’re helping. People who say “this is for everyone!” tend to be inventing solutions for problems and people that they only imagine exist. But there’s no cleanly concise way of saying that without using something that sounds like a buzzword.

So when I’m putting together a resumé, I don’t know exactly what to say that ties together who I am and the way I approach my work in a way that someone else can consume. Am I an entrepreneur? I have been, and loved it; I like to bring that energy to organizations I join. A product lead or an engineering manager or a design thinker? Yes, and I’ve done all those jobs. I think those lines are blurry, though, and a really good product lead has a strong insight into both engineering and design. I’ve also worked on digital transformation for media organizations and invested in startups at an accelerator — two of my favorite things I’ve ever done — and where do I put that?

In the end, I wrote:

I’m a technology and product leader with a focus on mission-driven organizations.

I’ve designed and built software that has been used by social movements, non-profits, and Fortune 500 companies. As part of this work, I’ve built strong technology and product team cultures and worked on overall business strategy as a key part of the C-suite. I’ve taught the fundamentals of building a strong organizational culture, design thinking, product design, and strategy to organizations around the world.

I’m excited to work on meaningful projects that make the world better.

I’ve yet to get feedback on this intro — I guess that’s what this post is, in part — but it feels close in a way that isn’t completely opaque to someone who’s basing their search on a simple job description. It will still turn off a bunch of people who want someone with a more precise career focus than I’ve had, but perhaps those roles are also not a good fit for me.

Perhaps I should be running my own thing again. I promised myself that I would give myself a third run at a startup, and it’s possible that this is the only thing that really fits. At the same time, I’m doing contract work right now, and I love the people and organization I’m working with.

If I think of my various hats as an a la carte menu that people can pick from rather than an all-in-one take-it-or-leave-it deal, this kind of work becomes less daunting. Either way, I do think it’s a strength: even if I’m officially wearing one particular hat, the others inform the work I’m doing. As I mentioned, I think it’s helpful for an engineering lead to have a product brain, and vice versa. It’s not a bad thing for either to understand design. And every lead needs to understand how to build a strong culture.

But how to wrap all of that neatly up in a bow? I’m still working on it.

· Posts · Share this post

 

Press Forward brings much-needed support for local news

A man speaking into a number of microphones.

I was pleased to see this announcement from the MacArthur Foundation:

A coalition of 22 donors today announced Press Forward, a national initiative to strengthen communities and democracy by supporting local news and information with an infusion of more than a half-billion dollars over the next five years. Press Forward will enhance local journalism at an unprecedented level to re-center local news as a force for community cohesion; support new models and solutions that are ready to scale; and close longstanding inequities in journalism coverage and practice.

I think this is huge. As I wrote the other day, I think building a commons of tightly-focused newsrooms is absolutely key:

A wide news commons, comprised of many smaller newsrooms with specific areas of focus, as well as the perspectives of individuals in the community, would improve our democracy at the local level. In doing so, it would make a big difference to how the whole country works. I’d love to see us collectively make it happen.

The new initiative has a few key areas:

Strengthen Local Newsrooms That Have Trust in Local Communities: the announcement suggests they will provide direct philanthropic funding to exactly the kinds of newsrooms I’ve been talking about.

Accelerate the Enabling Environment for News Production and Dissemination: Providing shared infrastructure of all kinds is going to be really important. As a rule, I believe newsrooms should be spending their time and resources on things that make them uniquely viable. The various commodity resources that every newsroom must build — technical tools, legal assistance, revenue experiments, help with people operations, assistance with reaching audiences — should be shared so that everyone can take advantage of improvements and discoveries, in a way that keeps costs low for all.

Close Longstanding Inequalities in Journalism Coverage and Practice: ensuring “the availability of accurate and responsive news and information in historically underserved communities and economically challenged news deserts” is vital here. Again, as I mentioned: direct subscriptions don’t work in communities where few can afford to pay. Philanthropic support can help ensure people’s stories are told — and when they are, local corruption measurably decreases.

Advance Public Policies That Expand Access to Local News and Civic Information: supporting public policies that will protect journalists and improve support for newsrooms.

My hope is that most of the money will go directly to newsrooms, and to the sorts of shared infrastructure that every newsroom needs. I also hope that this shared infrastructure will be open sourced as much as possible, so that any public interest organization can take advantage — thereby increasing the impact of these donations. While public policy support is important, communities need coverage now, particularly in the run-up to the 2024 election.

· Posts · Share this post