An update on Sup, the ActivityPub API

An abstract network

A little while back I shared an idea about an API service that would make it easy to build on top of the fediverse. People went wild about it on Mastodon and Bluesky, and I got lots of positive feedback.

My startup experience tells me that it’s important to validate your idea and understand your customers before you start building a product, lest you spend months or years building the wrong thing. So that’s exactly what I did.

I put out a simple survey that was really just an opener to find people who would be interested in having a conversation with me about it. I bought each person who replied a book certificate (except for one participant who refused it), and listened to why they had been interested enough to answer my questions. If they asked, I told them a little more about my idea.

The people I spoke with ran the gamut from the CEOs of well-funded tech companies to individuals building something in the context of cash-strapped non-profits. I also spoke with a handful of venture capitalists at various firms who had proactively reached out.

A shout-out to Evan Prodromou, one of the fathers of the fediverse, here: he very kindly spent a bunch of time with me keeping me honest and helping to move the project along.

What I discovered was that the people who wanted me to build my full idea were people who really cared about the fediverse, but were not going to be customers. The people who were going to be customers wanted two specific things:

A fast way to make informational bots. Twitter used to be full of informational, automated accounts. Consider accounts containing local weather updates, earthquake reports, and so on. That’s been much harder for people to build on the fediverse.

Statistics about trends and usage. Aggregate information about how the fediverse is behaving, including about how accounts are responding to individual links and domains.

While these signals were very clear, I couldn’t yet validate the core thing I’d proposed to build, which was a full API service with libraries that let people build fully-featured fediverse-compatible software. I also couldn’t yet validate the idea that existing startups would use a service like this to add fediverse compatibility to their products.

But I believe, to reference a way-overused cliché, that this is where the puck is going.

I strongly believe that the fediverse is how new social networks over the next decade will be built. I also have conviction that more people will be interested in building fully-featured fediverse services once Threads federates and Tumblr joins. It’s likely that another large network will also start supporting these protocols.

However, someone financially backing the project would be doing so on the basis of my conviction alone. I couldn’t yet find strong customers for this use case.

I think that’s okay! In the shorter term, I’m very interested in helping people build those bots in particular — it’s a great place to start and a good example of building the smallest, simplest thing.
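To make that concrete, here’s roughly what the “post an update” half of such a bot looks like against Mastodon’s REST API. This is a sketch, not production code: the instance name and token are placeholders, and a real weather or earthquake bot would fetch its text from a live data feed. The endpoint shape follows Mastodon’s documented POST /api/v1/statuses call.

```python
# Minimal sketch of an informational bot post via Mastodon's REST API.
# "mastodon.example" and "YOUR_TOKEN" are placeholders; a real bot would
# build its status text from a weather or earthquake data feed.
import json
import urllib.request

def build_status_request(instance: str, token: str, text: str) -> urllib.request.Request:
    """Build (but don't send) an authenticated status-creation request."""
    return urllib.request.Request(
        url=f"https://{instance}/api/v1/statuses",
        data=json.dumps({"status": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_status_request("mastodon.example", "YOUR_TOKEN", "Forecast: sunny, high of 72")
# urllib.request.urlopen(req) would actually publish the status.
```

Everything above the send is boring, portable HTTP — which is exactly why a hosted service that handles the accounts, keys, and scheduling around it is attractive to bot builders.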

The original name I came up with, Sup, was taken by another fediverse project. So for now, this idea is called Feddy.

Anyway, I wanted to report back on what I’d found and how I was thinking about the project today. As always, I’d love your feedback and ideas! You can always email me at ben@werd.io.

· Posts · Share this post

 

Finishing With Twitter/X

Who at the intersection of tech and politics is still posting on Twitter? And should they be? A good breakdown.

[Link]

· Links · Share this post

 

What Mitt Romney Saw in the Senate

A fascinating read that makes me want to check out the full book, which seems to me like an attempt by Romney to save the Republican Party from Trumpism (as well as, let’s be clear, his own reputation). Wild anecdote after wild anecdote that highlights the cynicism of Washington political life.

[Link]

· Links · Share this post

 

Earth ‘well outside safe operating space for humanity’, scientists find

“This update finds that six of the nine boundaries are transgressed, suggesting that Earth is now well outside of the safe operating space for humanity.” No biggie.

[Link]

· Links · Share this post

 

Unity has changed its pricing model, and game developers are pissed off

As with API pricing changes across social media, these tiers disproportionately penalize indie developers. The message is clear: they don't want or need those customers. In a tighter economy, much of technology is re-organizing around serving bigger, wealthier players.

[Link]

· Links · Share this post

 

White House to send letter to news execs urging outlets to 'ramp up' scrutiny of GOP's Biden impeachment inquiry 'based on lies'

I couldn’t be less of a fan of the current Republican Party but I hate this. The White House should not be sending letters to the media encouraging them to do anything. That’s not the sort of relationship we need our journalistic media to have.

[Link]

· Links · Share this post

 

Why Starting Your Investor Updates With “Cash on Hand” Information is a Major Red Flag Right Now. It’s Maybe the Only Thing Worse Than Not Sending Updates at All.

I appreciated this succinct discussion on using venture dollars well from Hunter Walk. In particular, this: “Startups spend a $1 to ultimately try and create more than $1 of company. If you do that repeatedly and efficiently we will all make money together.” Too many founders still think of investment as being akin to a grant.

[Link]

· Links · Share this post

 

As social networks begin to fill with AI-generated crap, it occurs to me that the small, independent web will be the last place where you know you'll find content and conversations from real people.

· Statuses · Share this post

 

Refusing to Censor Myself

A less-discussed problem with book bans: publishers will self-censor, as they did here by requiring the removal of the word "racism" in the context of internment camps.

[Link]

· Links · Share this post

 

Who blocks OpenAI?

“The 392 news organizations listed below have instructed OpenAI’s GPTBot to not scan their sites, according to a continual survey of 1,119 online publishers conducted by the homepages.news archive. That amounts to 35.0% of the total.”
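For reference, the opt-out these publishers are using is a standard robots.txt rule; per OpenAI’s crawler documentation, GPTBot honors the usual user-agent directive:

```
# robots.txt at the site root: ask OpenAI's crawler not to scan anything
User-agent: GPTBot
Disallow: /
```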

[Link]

· Links · Share this post

 

Never Remember

The best thing I read on the anniversary of 9/11 by far. It feels cathartic to read. But it's also so, so sad.

[Link]

· Links · Share this post

 

Bush's legacy

The contemporary New York City skyline

Twenty-two years ago, I sat in the office — actually the bottom two floors of a Victorian home with creaking, carpeted floorboards and an overstuffed kitchen — at Daily Information, the local paper where I worked in Oxford. It was mid-afternoon, and I probably had Dreamweaver open; I can’t remember exactly now. I’d taken a year’s break from my computer science degree because my as-yet-undiagnosed anxiety had gotten the better of me in the wake of the death of a close friend. It was the first job I’d ever had that paid for lunch, and the remains of a wholewheat bread slice with spicy red bean paté sat on a plate beside me. Between that and the array of laser printers, the room smelled of toast and ozone.

My dad showed up and told me what had happened: the twin air strikes of September 11, 2001, the details of which are now part of our indelible cultural consciousness. For the rest of the afternoon, we tried to learn what we could, refreshing website after website on the overloaded ISDN connection. One by one, every news website went down for us under the strain of unprecedented traffic, with the exception of The Guardian. I alternated between that and a fast-moving MetaFilter thread until it was time to go home. I vividly remember sitting at the bus stop, watching the faces of all the people in the cars that drove past, thinking that the world would likely change in ways that we didn’t understand yet.

George W Bush was President of the United States: a man who previously had presided over more executions than any other Governor of the State of Texas in history (roughly one every two weeks). While the attacks themselves were obviously an atrocity, he was, in my eyes, unmistakably an evil, untrustworthy leader, and it wasn’t clear that he wouldn’t start a terrible war in response. That was the fear expressed by most of my friends in England at the time: not who was behind the attacks and why, but what will America do? I was the only American in my friend group, but I shared the same fear.

Of course, now we all know the story of the next two decades. We invaded Iraq under false pretenses, passed a major erosion of civil liberties ironically named the PATRIOT Act, which granted unprecedented authorities that live on to this day, and cranked racist anti-Muslim rhetoric up to eleven. All in the name of 2,753 people who didn’t ask for any of it. Even the first responders, much lauded at the time, struggle to get the support they need.

In 2002, my parents moved back to California to look after my Oma, and I joined them for a few months. I had the whole row on my transatlantic flight to myself, which seemed strange until I remembered, mid-flight, that it was September 11, 2002 (in retrospect probably the safest day to fly in history). When I arrived, I saw that the freeways were littered with tiny American flags that had fallen off the cars they had presumably been waving from over the last year. As a metaphor, discarded disposable American flags bought to illustrate a kind of temporary superficial patriotism seemed a little on the nose.

While the roads were littered with flags, the air was still thick with fear. My parents had moved to Turlock, a small town outside of Modesto where the radio stations mostly played country music and almond dust polluted the air. There was still a feeling that the next attack could happen at any time, and if it did, why wouldn’t it be here? The dissonance between the significance of the World Trade Center in New York City and the Save-Mart in Turlock seemed to be lost on them. It could happen anywhere. It was the perfect environment for manufacturing consent for war. What did it matter that Saddam Hussein had precisely nothing to do with the attacks and that the purported weapons of mass destruction were obviously fictional? He was brown too, wasn’t he? And, boy, we needed to get revenge.

Even now, I wonder if I should be writing these opinions. In a way, September 11 has become a sacred event. And, seriously, what gives me the right to be talking about it to begin with?

But the tragedy of that day has touched all of us, everywhere. It has also been used as a cover for harms that continue to this day. The deaths of those innocent people are still used to justify erosions of civil liberties; they are still used to justify racism; they are still used to justify mass surveillance domestically and drone strikes internationally; they are still used to justify draconian foreign policies. If any lessons at all were learned from September 11, I think they were the wrong ones.

There’s an alternate universe where America as a population decided that funding and arming covert operations in foreign nations to support American aims was a bad idea. The late Robin Cook, MP, the former British Foreign Secretary, wrote in the wake of the July 7 bombings in London:

In the absence of anyone else owning up to yesterday's crimes, we will be subjected to a spate of articles analysing the threat of militant Islam. Ironically they will fall in the same week that we recall the tenth anniversary of the massacre at Srebrenica, when the powerful nations of Europe failed to protect 8,000 Muslims from being annihilated in the worst terrorist act in Europe of the past generation.

[…] Bin Laden was, though, a product of a monumental miscalculation by western security agencies. Throughout the 80s he was armed by the CIA and funded by the Saudis to wage jihad against the Russian occupation of Afghanistan. Al-Qaida, literally "the database", was originally the computer file of the thousands of mujahideen who were recruited and trained with help from the CIA to defeat the Russians. Inexplicably, and with disastrous consequences, it never appears to have occurred to Washington that once Russia was out of the way, Bin Laden's organisation would turn its attention to the west.

The CIA, for the record, denies this. But there’s no denying the effect of American foreign policies overall, from Chile (whose US-aided coup was 50 years ago today) to Iran, let alone the disastrous wars in Iraq and Afghanistan. It’s still a mystery to some Americans why the rest of the world isn’t particularly fond of us, but it really shouldn’t be. (And it’s not, as some particularly tone-deaf commentators have suggested, jealousy.)

I remember visiting Ground Zero for the first time. By that time, reconstruction was underway, but the holes were clearly visible: conspicuous voids shot through a bustling, diverse city. I think New York City is one of the most amazing places I’ve ever been to: all kinds of people living on top of each other in relative harmony. It’s alive in a way that many places aren’t. Every time I visit I feel enriched by the humanity around me. One of the reasons I live where I do now is to be closer to it.

I think New York City itself is a demonstration of the lesson we should have learned: one that’s more about cross-border co-operation and humanity than isolation and dominance. To put it another way, a lesson that’s more about love than fear. Some conservative politicians talk derisively about “New York values”, but man — if those values were actually shared by the whole nation, America would be a far better place. That was obvious in the way the city came together that day, and it’s been obvious in the way it’s held itself together since.

In contrast, I think the way America as a whole responded to the September 11 attacks directly paved the way to Trump. It enriched a right-wing populist leader and his party; it created divisive foreign policy based on a supremacist foundation; it once again marked people with a certain skin tone and a different religion as being second-class citizens; it promoted nationalism and exceptionalism; it eroded hard-won freedoms for everyone. We can thank Bush for stoking those fires.

True progress towards peace looks like a collaborative world where we consider ourselves to have kinship with everyone of all religions, skin tones, and nationalities, and where every human being’s life has inherent value. It looks like building foreign policy for the benefit of all people, not the people of one nation. It looks like true, vibrant democracy. It doesn’t look like performative flag-waving, drone strikes, religious intolerance, homogeneity, or surveillance campaigns.

Saying so shouldn’t dishonor the memories of everyone who died on that day, or everyone who died as a result of everything that followed. It also doesn’t besmirch our values. One of the greatest things about America is our freedom to hold it to account. That’s what democracy and free expression are all about. And those values — collaboration, inclusion, freedom, representation, multiculturalism, democracy, and most of all, peace — are what we should be working towards.

· Posts · Share this post

 

Writer Sarah Rose Etter on not making things harder than they need to be

I found this interview fascinating: definitely a writer I look up to, whose work I both enjoy and find intimidatingly raw. And who happens to have a very similar day job to me.

[Link]

· Links · Share this post

 

The Anti-Vax Movement Isn’t Going Away. We Must Adapt to It

Depressing. I agree that vaccine denial is not going away, and that we need to find other ways to mitigate outbreaks. But what a sad situation to be in.

[Link]

· Links · Share this post

 

An AI capitalism primer

A clenched robot fist

Claire Anderson (hi Claire!) asked me to break down the economics of AI. How is it going to make money, and for whom?

In this post I’m not going to talk much about how the technology works, or about the claims of its vendors versus the actual limitations of the products. Baldur Bjarnason has written extensively about the latter, while Simon Willison writes about building tools with AI; I recommend both of their posts.

The important thing is that when we talk about AI today, we are mostly talking about generative AI: products that are capable of generating content, whether text (for example, ChatGPT), images (for example, Midjourney), music, video, and so on.

Usually they do so in response to a simple text prompt. For example, in response to the prompt Write a short limerick about Ben Werdmuller asking ChatGPT to write a short limerick about Ben Werdmuller, ChatGPT instantly produced:

Ben Werdmuller pondered with glee,
“What would ChatGPT write about me?”
So he posed the request,
In a jest quite obsessed,
And chuckled at layers, level three!

Honestly, it’s pretty clever.

While a limerick isn’t particularly economically useful, you can ask these technologies to write code for you, find hidden patterns in data, highlight potential mistakes in boilerplate legal documents, and so on. (I’m personally aware of companies using it to do each of these things.)

Each of these AI products is powered by a large foundation model: a deep learning neural network trained on vast amounts of data. In essence, the neural network is a piece of software that ingests a huge amount of source material and finds patterns in it. Based on those patterns and the sheer amount of data involved, it can statistically decide what the outcome of a prompt should be. Each word of the limerick above is what the model decided was the most probable next piece of the output in response to my prompt.
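You can see the shape of that idea in miniature with a toy “model” that just counts which word follows which in a tiny corpus, then emits the statistically likeliest next word. Real models learn billions of weights over far richer context rather than counting word pairs, but the core move — predict the most probable continuation — is the same:

```python
# Toy illustration of next-word prediction: count what followed each word
# in some training text, then emit the statistically likeliest next word.
# Real foundation models learn billions of weights; the principle is the same.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count the words that follow it in the corpus.
follows: dict = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def next_word(prev: str) -> str:
    """Return the most probable next word after `prev`."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # prints "cat": it follows "the" most often above
```

Nothing in this process understands cats or mats; it only reflects the statistics of its training text, which is why scale of data matters so much to the vendors.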

The models are what have been called stochastic parrots: their output is entirely probabilistic. This kind of AI isn’t intelligence and these models have no understanding of what they’re saying. It’s a bit like a magic trick that’s really only possible because of the sheer amount of data that’s wrapped up in the training set.

And here’s the rub: the training set is a not insignificant percentage of everything that’s ever been published by a human. A huge portion of the web is there; it’s also been shown that entire libraries of pirated books have been involved. No royalties or license agreements have been paid for this content. The vast majority of it seems to have been simply scraped. Scraping publicly accessible content is not illegal (nor should it be); incorporating pirated books and licensed media clearly is.

Clearly if you’re sucking up everything people have published, you’re also sucking up the prejudices and systemic biases that are a part of modern life. Some vendors, like OpenAI, claim to be trying to reduce those biases in their training sets. Others, like Elon Musk’s X.AI, claim that reducing those biases is tantamount to training your model to lie. He claims to be building an “anti-woke” model in response to OpenAI’s “politically correct” bias mitigation, which is pretty on-brand for Musk.

In other words, vendors are competing on the quality, characteristics, and sometimes ideological slant of their models. They’re often closed-source, giving the vendor control over how the model is generated, tweaked, and used.

These models all require a lot of computing power both to be trained and to produce their output. It’s difficult to provide a service that offers generative AI to large numbers of people due to this need: it’s expensive and it draws a lot of power (and correspondingly has a large environmental footprint).

The San Francisco skyline, bathed in murky red light.

Between the closed nature of the models, and the computing power required to run them, it’s not easy to get started in AI without paying an existing vendor. If a tech company wants to add AI to a product, or if a new startup wants to offer an AI-powered product, it’s much more cost effective to piggyback on another vendor’s existing model than to develop or host one of their own. Even Microsoft decided to invest billions of dollars into OpenAI and build a tight partnership with the company rather than build its own capability.

The models learn from their users, so as more people have conversations with ChatGPT, for example, the model gets better and better. These are commonly called network effects: the more people that use the products, the better they get. The result is that they have even more of a moat between themselves and any competitors over time. This is also true if a product just uses a model behind the scenes. So if OpenAI’s technology is built into Microsoft Office — and it is! — its models get better every time someone uses them while they write a document or edit a spreadsheet. Each of those uses sends data straight back to OpenAI’s servers and is paid for through Microsoft’s partnership.

What’s been created is an odd situation where the models are trained on content we’ve all published, and improved with our questions and new content, and then it’s all wrapped up as a product and sold back to us. There’s certainly some proprietary invention and value in the training methodology and APIs that make it all work, but the underlying data being learned from belongs to us, not them. It wouldn’t work — at all — without our labor.

There’s a second valuable data source in the queries and information we send to the model. Vendors can learn what we want and need, and deep data about our businesses and personal lives, through what we share with AI models. It’s all information that can be used by third parties to sell to us more effectively.

Google’s version of generative AI allows it to answer direct questions from its search engine without pointing you to any external web pages in the process. Whereas we used to permit Google to scrape and index our published work because it would provide us with new audiences, it now continues to scrape our work in order to provide a generated answer to user queries. Websites are still presented underneath, but it’s expected that most users won’t click through. Why would you, when you already have your answer? This is the same dynamic as OpenAI’s ChatGPT: answers are provided without credit or access to the underlying sources.

Some independent publishers are fighting back by de-listing their content from Google entirely. As the blogger and storyteller Tracy Darnell wrote:

I didn’t sign up for Google to own the whole Internet. This isn’t a reasonable thing to put in a privacy policy, nor is it a reasonable thing for a company to do. I am not ok with this.

CodePen co-founder Chris Coyier was blunt:

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

For small publishers, the model is intolerably extractive. Technical writer Tom Johnson remarked:

With AI, where’s the reward for content creation? What will motivate individual content creators if they no longer are read, but rather feed their content into a massive AI machine?

Larger publishers agree. The New York Times recently banned the use of its content to train AI models. It had previously dropped out of a coalition led by IAC that was trying to jointly negotiate scraping terms with AI vendors, preferring to arrange its own deals on a case-by-case basis. A month earlier, the Associated Press had made its own deal to license its content to OpenAI, giving it a purported first-mover advantage. The terms of the deal are not public.

Questions about copyright — and specifically the unlicensed use of copyrighted material to produce a commercial product — persist. The Authors Guild has written an open letter asking AI companies to license its members’ copyrighted work, which is perhaps a quixotic move: rigid licensing and legal action are likely closer to what’s needed to achieve their hoped-for outcome. Perhaps sensing the business risks inherent in using tools that depend on processing copyrighted work to function, Microsoft has promised to legally defend its customers from copyright claims arising from their use of its AI-powered tools.

Meanwhile, a federal court ruled that AI-generated content cannot, itself, be copyrighted. The US Copyright Office is soliciting comments as it re-evaluates relevant law, presumably encompassing the output of AI models and the processes involved in training them. It remains to be seen whether legislation will change to protect publishers or further enable AI vendors.

The ChatGPT homepage

So. Who’s making money from AI? It’s mostly the large vendors who have the ability to create giant models and provide API services around them. Those vendors are either backed by venture capital investment firms who hope to see an exponential return on their investment (OpenAI, Midjourney) or publicly-traded multinational tech companies (Google, Microsoft). OpenAI is actually very far from profitability — it lost $540M last year. To break even, the company will need to gain many more customers for its services while spending comparatively little on content to train its models with.

In the face of criticism, some venture capitalists and AI founders have latterly embraced an ideology called effective accelerationism, or e/acc, which advocates for technical and capitalistic progress at all costs, almost on a religious basis:

Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.

In part, it espouses the idea that we’re on the fringe of building an “artificial general intelligence” that’s as powerful as the human brain — and that we should, because allowing different kinds of consciousness to flourish is a general good. It’s a kooky, extreme idea that serves as marketing for existing AI products. In reality, remember, these models are not actually intelligent and have no ability to reason. But if we’re serving some higher ideal of furthering consciousness on earth and beyond, matters like copyright law and the impact on the environment seem more trivial. It’s a way of re-framing the conversation away from author rights and away from considering societal impacts on vulnerable communities.

Which brings us to the question of who’s not making money from AI. The answer is people who publish the content and create the information that allow these models to function. Indeed, value is being extracted from these publishers — and the downstream users whose data is being fed into these machines — more than ever before. This, of course, disproportionately affects smaller publishers and underrepresented voices, who need their platforms, audiences, and revenues more than most to survive.

On the internet, the old adage is that if you’re not the customer, you’re the product being sold. When it comes to AI models, we’re all both the customer and the product being sold. We’re providing the raw ingredients and we’re paying for it to be returned to us, laundered for our convenience.

· Posts · Share this post

 

Microsoft announces new Copilot Copyright Commitment for customers

“As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

[Link]

· Links · Share this post

 

Some newsletter changes

I’m making some experimental updates to my newsletter:

Starting next week, this newsletter will come in several flavors:

Technology, Media, and Society: technology and its impact on the way we live, work, learn, and vote.

Late Stage: personal reflections on living and surviving in the 21st century.

The Outmap: new speculative and contemporary fiction.

Most of Technology, Media, and Society will continue to be posted on this website. I am experimenting with publishing more personal posts and fiction over there.

Prefer to subscribe via RSS? Here’s the feed URL for those posts.

· Posts · Share this post

 

Silicon Valley's Slaughterhouse

“Andreessen wasn’t advocating for a tech industry that accelerates the development of the human race, or elevates the human condition. He wanted to (and succeeded in creating) a Silicon Valley that builds technology that can, and I quote, “eat markets far larger than the technology industry has historically been able to pursue.””

[Link]

· Links · Share this post

 

In defense of being unfocused

Literally an unfocused photo of a sunset. Yes, I know it's a little on the nose. Work with me here.

I spent a little time updating my resumé, which is a process that basically sits at the top of all the things I least like to do in the world. This time around I tried to have an eye towards focus: what about the work I do might other organizations find valuable? Or to put it another way: what am I?

I grew up and went to school in the UK. At the time, the A-level system of high school credentials required you to pick a small number of subjects to study at 16. In contrast to the US, where university applications are more universal and you don’t pick a degree major until you’ve actually taken courses for a while, British applicants applied for a specific major at a particular institution. The majors available to you were a function of the A-level subjects you chose to take. In effect, 16-year-olds were asked to pick their career track for the rest of their lives.

I now know that I take a kind of liberal arts approach to product and technology leadership. My interests are in how things work, for sure, but more so who they work for. I care about the mechanics of the internet, but I care more about storytelling. I’m at least as interested in how to build an empathetic, inclusive team as I am in any new technology that comes along. The internet, to me, is made of people, and the thing that excites me more than anything else is connecting and empowering them. I’ll do any work necessary to meet their needs - whether it’s programming, storytelling, research, design, team-building, fundraising, or cleaning the kitchen.

Which means that when I picked my A-levels in 1995, and when I applied to universities two years later, it was hard to put me in a box.

My high school didn’t even offer computing as a subject, so I arranged to take it as an extra subject in my own time. The standardized tests were so archaic that they included tape drives and punchcards. Meanwhile, my interest in storytelling and literature meant that I studied theater alongside more traditional STEM subjects: something that most British universities rejected outright as being too unfocused.

I have an honors degree in computer science but I don’t consider myself to be a computer scientist. I’ve been a senior engineer in multiple companies, but my skillset is more of a technical generalist: technology is one of the things I bring together in service of a human-centered strategy. I like to bring my whole self to work, which also includes a lot of writing, generative brainstorming, and thinking about who we’re helping and how best to go about it.

Even the term human-centered feels opaque. It just means that I describe my goals and the work I do in terms of its impact on people, and like to figure out who those people are. It’s hard to help people if you don’t know who you’re helping. People who say “this is for everyone!” tend to be inventing solutions for problems and people that they only imagine exist. But there’s no cleanly concise way of saying that without using something that sounds like a buzzword.

So when I’m putting together a resumé, I don’t know exactly what to say that ties together who I am and the way I approach my work in a way that someone else can consume. Am I an entrepreneur? I have been, and loved it; I like to bring that energy to organizations I join. A product lead or an engineering manager or a design thinker? Yes, and I’ve done all those jobs. I think those lines are blurry, though, and a really good product lead has a strong insight into both engineering and design. I’ve also worked on digital transformation for media organizations and invested in startups at an accelerator — two of my favorite things I’ve ever done — and where do I put that?

In the end, I wrote:

I’m a technology and product leader with a focus on mission-driven organizations.

I’ve designed and built software that has been used by social movements, non-profits, and Fortune 500 companies. As part of this work, I’ve built strong technology and product team cultures and worked on overall business strategy as a key part of the C-suite. I’ve taught the fundamentals of building a strong organizational culture, design thinking, product design, and strategy to organizations around the world.

I’m excited to work on meaningful projects that make the world better.

I’ve yet to get feedback on this intro — I guess that’s what this post is, in part — but it feels close in a way that isn’t completely opaque to someone who’s basing their search on a simple job description. It will still turn off a bunch of people who want someone with a more precise career focus than I’ve had, but perhaps those roles are also not a good fit for me.

Perhaps I should be running my own thing again. I promised myself that I would give myself a third run at a startup, and it’s possible that this is the only thing that really fits. At the same time, I’m doing contract work, and I love the people and organization I’m working with right now.

If I think of my various hats as an a la carte menu that people can pick from rather than an all-in-one take-it-or-leave-it deal, this kind of work becomes less daunting. Either way, I do think it’s a strength: even if I’m working as one particular facet officially, the others inform the work I’m doing. As I mentioned, I think it’s helpful for an engineering lead to have a product brain, and vice versa. It’s not a bad thing for either to understand design. And every lead needs to understand how to build a strong culture.

But how to wrap all of that neatly up in a bow? I’m still working on it.

· Posts · Share this post

 

New Elon Musk biography offers fresh details about the billionaire's Ukraine dilemma

If I were building technology to let people watch Netflix and check their email from remote locations, I would also be upset about it being used for drone strikes. But in that case, you shouldn't be deploying your tech to the military in the first place. Nor should you be making strategic military decisions of your own.

[Link]

· Links · Share this post

 

Press Forward brings much-needed support for local news

A man speaking into a number of microphones.

I was pleased to see this announcement from the MacArthur Foundation:

A coalition of 22 donors today announced Press Forward, a national initiative to strengthen communities and democracy by supporting local news and information with an infusion of more than a half-billion dollars over the next five years. Press Forward will enhance local journalism at an unprecedented level to re-center local news as a force for community cohesion; support new models and solutions that are ready to scale; and close longstanding inequities in journalism coverage and practice.

I think this is huge. As I wrote the other day, I think building a commons of tightly-focused newsrooms is absolutely key:

A wide news commons, comprised of many smaller newsrooms with specific areas of focus, as well as the perspectives of individuals in the community, would improve our democracy at the local level. In doing so, it would make a big difference to how the whole country works. I’d love to see us collectively make it happen.

The new initiative has a few key areas:

Strengthen Local Newsrooms That Have Trust in Local Communities: the announcement suggests they will provide direct philanthropic funding to exactly the kinds of newsrooms I’ve been talking about.

Accelerate the Enabling Environment for News Production and Dissemination: providing shared infrastructure of all kinds is going to be really important. As a rule, I believe newsrooms should be spending their time and resources on things that make them uniquely viable. The various commodity resources that every newsroom must build — technical tools, legal assistance, revenue experiments, help with people operations, assistance with reaching audiences — should be shared so that everyone can take advantage of improvements and discoveries, in a way that keeps costs low for all.

Close Longstanding Inequalities in Journalism Coverage and Practice: ensuring “the availability of accurate and responsive news and information in historically underserved communities and economically challenged news deserts” is vital here. Again, as I mentioned: direct subscriptions don’t work in communities where few can afford to pay. Philanthropic support can help ensure people’s stories are told — and when they are, local corruption measurably decreases.

Advance Public Policies That Expand Access to Local News and Civic Information:‌ supporting public policies that will protect journalists and improve support for newsrooms.

My hope is that most of the money will go directly to newsrooms, and to the sorts of shared infrastructure that every newsroom needs. I also hope that this shared infrastructure will be open sourced as much as possible, so that any public interest organization can take advantage — thereby increasing the impact of these donations. While public policy support is important, communities need coverage now, particularly in the run-up to the 2024 election.

· Posts · Share this post

 

Majority of likely Democratic voters say party should ditch Biden, poll shows

No surprises here. We need more progressive change than we’re getting. But obviously, if it’s Biden v Trump, there’s only one choice.

[Link]

· Links · Share this post

 

Snoop Dogg can narrate your news articles

Snoop Dogg gimmick aside, this is actually pretty neat, and useful. I'd also like the opposite: sometimes I want to read podcasts. Different contexts demand different media; I wish content itself could be more adaptable.

[Link]

· Links · Share this post
