
Open source startup founder, technology leader, mission-driven investor, and engineer. I just want to help.


benwerd

werd.social/@ben

 

Footage and stories from Gaza are heart-wrenching. The systematic killing of aid workers is just a small part of the atrocities being committed over there. Hamas is not a force for good in the region but almost all of these people are civilians. There's no way to justify this.

There must be a ceasefire. Now.


 

I really hope San Francisco stays an idealistic, progressive city and doesn't succumb to centrism. There are plenty of other places to live for people who want a city run on centrist values. San Francisco is, and has always been, special.


 

72

It’s my mother’s birthday. She would be 72 today.

The week we lost her, I wrote this piece, which I re-read today.

In it, our friend Anita Hurrell remembered her like this:

One time you drove us in the van to the seaside and we ate sandwiches with cucumber in them and I thought they tasted delicious and I felt this strong sense of deep content sitting with Hannah in the back listening to her singing and humming for the whole journey. I have no idea where we went, and in my head it was nowhere in England, but rather part of the big-hearted, loving, funny, relaxed, non-conformist world of your family in my childhood - full of your laughter and your enormous kindness.

[…] I look back and see […] what a true feminist you were, how much of parenting you seemed to do much better than we do these days, how generous and homemade and fun and kind the world you and Oscar [my dad] made was.

I feel so privileged to have had that childhood. To have had a mother like her.

In the piece I read at her memorial, I said:

Before I was born, both my parents were involved in struggles to support affirmative action and tenants’ rights. She described herself as having been radicalized early on, but it’s not particularly that she was radical: she could just see past the social templates that everyone is expected to adhere to, and which perpetuate systemic injustices, and could see how everything should operate to be fairer.

That was true on every level. She wanted her and [her siblings] all to be treated equally, and would make it known if she thought the others were getting a raw deal. She tried her best to treat Hannah and me equally. If someone made a sexist or a homophobic remark around her, she would call it out. If someone was xenophobic, or unthinkingly imperialist, she would bring it up. She was outspoken - always with good humor, but always adamant about what really mattered.

When our son was born, I wrote:

The last time I saw you, just over a year ago, you were in a bed in the same institution, your donated lungs breathing fainter and fainter. I kissed you on the forehead and told you I loved you. You’d told me that what you wanted to hear was us talking amongst ourselves; to know that we’d continue without you. In the end, that’s what happened. But I miss you terribly: I feel the grief of losing you every day, and never more than when my child was born.

[…] In this worse universe that doesn’t have you in it, I’ve been intentionally trying to channel you. I’ve been trying to imagine how you would have shown up with them, and what your advice for me would have been. I’ve been trying to convey that good-humored warmth I always felt. You made me feel safe: physically, yes, but more than that, emotionally. I want to make them feel safe, too: to be who they really are.

That first piece from three days after we lost her:

I want to honor her by furthering what she put into the world. The loving, non-conformist, irreverent, equity-minded spirit that she embodied.

Her values are the right ones, I’m sure of it — her non-conformity, her progressivism, her intellectual curiosity, her fearlessness and silliness (and fearless silliness), her gameness for adventure, her internationalism, her inclusive care and love for everybody and absolute disregard for the expectations other people had for her, or for nonsense tradition, for institutions, or for money.

She is the best person I’ve ever met and could ever hope to meet. I’ve been so, deeply sad, every day, but grief isn’t enough to honor her, and I wish I’d been better at doing that.

I miss you, Ma. I love you. I’m so, so sorry.

Ma, long before I was born

Ma with me as a young boy

Ma making her way down to the beach

Ma in Scotland, making the grimace emoji face


 

I think it would be fun to (co-)organize an East Coast IndieWebCamp this year, mostly because I would like to go to an East Coast IndieWebCamp this year. Perhaps there's scope for an IndieWebCamp NYC in September / October?


 

Over time I'm becoming more and more enamored with the Derek Sivers mindset to posting on the internet.


 

One of the disappointments of my adult life has been realizing that I’m way to the left of a lot of people - that things I think are sensible improvements that we need in order to help people have better lives are often seen as out of touch and overly ideological.


 

Share Openly

You know all those “share to Facebook” / “share to Twitter” links you see all over people’s websites? They’re all out of date.

Social media has evolved over the last year, yet nobody has “share to” links for Mastodon, Bluesky, Threads, etc. There have been a few attempts to create “share to Mastodon” buttons, but they haven’t taken the larger breadth of the new social media landscape into account.

So I’ve built a prototype, which I’ve called ShareOpenly.

At the bottom of every article on my site, you’ll see a “share to social media” button. Here’s the button for this article.

If you click it, you’ll be taken to a page that looks like this one:

Share Openly share screen

You can select one of the pre-set sites in the list, and you’ll be taken to share a post there. For example, if I click on Threads, it will take me to share there:

But if you, for example, have a Mastodon instance, or a Known site, or an indieweb site at a different domain, you can enter that domain in the box, and ShareOpenly will try and find a way to let you share the page with that site.

ShareOpenly will do a few things first:

  1. If it’s on a “well-known” domain — eg, facebook.com — it’ll send you to the share page there.
  2. It checks to see if it can figure out if the site is on a known platform (currently Mastodon, Known, hosted WordPress, micro.blog, and a few others). If so — hooray! — it knows the share URL, and off you go.
  3. It looks for a <link rel="share-url"> tag on the page. The href attribute should be set to the share URL for the site, with template variables {text} and (optionally) {url} present where the share text and URL should go. (If {url} is not present, the URL to share will be appended at the end of the text.) If it’s there — yay! — we forward there, replacing {text} and {url} as appropriate.
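As a rough sketch, the lookup order above might look something like this. (This isn’t ShareOpenly’s actual source; the table contents, helper functions, and names are all hypothetical, and the platform-detection and page-fetching steps are stubbed out.)

```python
from urllib.parse import quote

# Hypothetical lookup table -- ShareOpenly's real list is much longer.
WELL_KNOWN = {
    "facebook.com": "https://www.facebook.com/sharer/sharer.php?u={url}",
}

def detect_platform_share_template(domain):
    """Stub: the real version would probe the site to see whether it's
    running a known platform (Mastodon, Known, hosted WordPress,
    micro.blog, ...) and return that platform's share URL template."""
    return None

def find_share_link_tag(domain):
    """Stub: the real version would fetch the page and return the href
    of a <link rel="share-url"> tag, if one is present."""
    return None

def resolve_share_url(domain, text, url):
    """Follow the three lookup steps to find a share URL, or None."""
    # 1. Well-known domains map straight to a known share page.
    template = WELL_KNOWN.get(domain)
    # 2. Otherwise, check whether the site runs a known platform.
    if template is None:
        template = detect_platform_share_template(domain)
    # 3. Otherwise, look for a <link rel="share-url"> tag on the page.
    if template is None:
        template = find_share_link_tag(domain)
    if template is None:
        return None
    # If {url} isn't in the template, append the URL to the text instead.
    if "{url}" not in template:
        text = f"{text} {url}"
        url = ""
    return template.replace("{text}", quote(text)).replace("{url}", quote(url, safe=""))
```

The fallback at the end mirrors the note in step 3: when a template only declares {text}, the shared URL simply rides along at the end of the text.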

Once you’ve shared to a site, the next time you visit ShareOpenly, it will be in the quick links. For example, I shared to my site at werd.io in the example above, and now here it is in the links:

It’s early days yet — this is just a prototype — but I thought I’d share what I’ve built so far.

If you want to add ShareOpenly to your own site, please do! Just replace the url and text values in this link with your own: https://shareopenly.org/share/?url=url&text=text. You can also just visit the ShareOpenly homepage to share a site directly.
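As a minimal sketch, building that link for one of your own pages just means URL-encoding the two values (the function name and example values here are mine, not part of ShareOpenly):

```python
from urllib.parse import urlencode

def shareopenly_link(url, text):
    """Build a ShareOpenly share link for a given page URL and message."""
    return "https://shareopenly.org/share/?" + urlencode({"url": url, "text": text})

print(shareopenly_link("https://example.com/post", "Worth a read"))
# https://shareopenly.org/share/?url=https%3A%2F%2Fexample.com%2Fpost&text=Worth+a+read
```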

 

Syndicated to Indienews


 

Building engineering

Software developers

I’ve spent most of my career — now well over two decades of it — building things on the web. I’ve worked as a software developer, I’ve founded a couple of my own companies, and I’ve often found myself leading teams of engineers. Right now I’m the director for both engineering and IT (although there are teams of people who write code who aren’t under my wing — newsrooms are complicated).

Over time, a lot of my work seems to have become less about “what shall we build?” or “how shall we build it?”. Those questions are always vitally important, but there are prerequisites that sometimes need to take center stage: things like “what are we here to do?”, “how should we work together?”, and “how do we think about what matters?”

I’ve been sharpening my thinking about the necessary conditions to do good work, and how to achieve them. Here’s a window into how I’m thinking about these ideas across three dimensions: Organizational Context, Team Leadership, and Technology Trends.

Words painted on the street: passion led us here

Organizational Context

There’s an unattributed but often-quoted management strategy cliché that says: culture eats strategy for breakfast. I’m a believer. Culture contains the fundamental building blocks of how an organization acts as a community: its values, beliefs, attitudes, norms, processes, and rules. Without a strong one, you cannot succeed — regardless of what your strategy looks like. Conversely, a great strategy, by definition, is one that incorporates building a great, intentional culture. Without one, your team is more likely to burn out and leave, you’re much less likely to build something high-quality, and you’re unlikely to foster new ideas.

Software engineering, at least in the places I’ve practiced it, is all about innovation. The focus is rarely on maintaining the present, although some degree of maintenance is always necessary. Instead, it’s about building the future: figuring out what we’ll need our platform to look like in two to five years and finding ways to get there. It’s a creative pursuit as much as it is about rigor and craft, and it’s about values and taste as much as it is about business necessity.

This dynamic is well-served by some organizational cultures and actively undermined by others. The trick is figuring out which you’re in, and finding ways to either embrace the former or build a buffer zone for the latter.

One popular way of looking at organizational culture is the Competing Values Framework, which defines four distinct overall culture types — all four of which are usually present to different degrees inside an organization.

  • Adhocracy: an organic, unbureaucratic way of organizing work that challenges the status quo, formal titles, and hierarchy in favor of a focus on risk-taking and innovating at speed.
  • Clan: a family-like culture that, again, is relatively unbureaucratic, without much structure, where rules tend to manifest as social norms rather than edicts or rigid process.
  • Hierarchy: where an emphasis is placed on top-down control from upper management in order to create predictability and lower risk. Roles are clearly defined, rules are codified, and even internal communication tends to be stratified.
  • Market: a culture optimized around competition, both with competitors and internally. Measurable results are central, but the workplace can easily become toxic because everyone is trying to better themselves vs their peers.

As with many frameworks, the reality is not actually as cut and dried as this. Instead, I think these categories are best thought of as facets of an organizational culture. In some organizations, hierarchy and market focus may have a heavier emphasis; in others, innovation and collaboration.

I vastly prefer working within organizations that look like the first two environments — adhocracies and clans — and I’d hazard to say that almost every single engineer, designer, and product manager I’ve ever worked with feels the same way. Hierarchical systems are inherently creatively stifling: innovation can’t take place in an environment with predominantly top-down control. The same goes for hyper-competitive environments: while the competition might be motivating for some in the short term, it’s really hard to collaborate effectively and build on each others’ ideas if everyone is trying to get ahead of each other.

Hierarchies in particular definitionally strip your authority in favor of top-down direction, forcing you to negotiate through layers of politics to make any kind of change. Most good engineers are collaborators, not cogs, with ideas, expertise, and creativity that should be embraced. But hierarchy demands cog-like behavior, and creates institutional fiefdoms that tend towards bureaucracy, inhibiting any really new work from being done if it hasn’t been rubber-stamped. These aren’t great places for a creative person to work.

As Robin Rendle put it recently:

This is the most obvious thing to say in the world, but: the hard work should never be the bureaucracy, it should be designing things and solving technical problems. If the hard work ain’t the hard work, ya gotta bounce. Don’t kill yourself trying to tell people that.

That isn’t to say that every team in an organization should work the same way or strive for the same culture. It might be that a legal, compliance, or safety team needs to work in a more rigid way as a system of control, or that a sales team needs to be intensely market-oriented. Or those things might not be true at all! My point is that it’s a mistake for engineers to assume that because they work best in a particular kind of environment, everyone should work that way. Every organization comprises a mix of culture types, and every team needs to work in a way that allows them to do their best work.

This may seem obvious, but we often talk about a single team’s working style setting the cultural norms for a whole organization. For example, it’s common for an organization to be described as engineering-led or sales-led. To be clear, this is a false choice: there should simply be people-led organizations that are inclusive of different interdisciplinary needs and styles of working.

For that to be a reality, top-level leadership in particular needs to acknowledge that not every team works the same way. For my purposes, this means acknowledging that engineering needs a particular kind of culture in order to thrive (and is important enough to have its own culture and be deserving of autonomy).

A prerequisite to this is understanding the potential of a technology team (or any team) in the first place. That’s less likely to happen in organizations where technology is treated as back-office, paint-by-numbers work. If an organization can’t see the importance of a team’s work, and if it inherently does not respect the effort and expertise inherent in those roles, it’s going to be very difficult for that team to do good work.

My bias is to lean heavily on storytelling and listening as tools for fostering understanding: finding ways to explain why the work of product and engineering is important in the context of the whole organization, and how that expertise can be leveraged in order to benefit everybody. It’s okay to not understand what an engineering team has the potential to do from the outset, but if organizational leaders continue to not understand, that’s on me. The way to get there is through being transparent about what we’re doing, how we’re thinking, and which challenges we expect to encounter.

It really matters. Mutual understanding begets mutual respect.

There needs to be an explicit understanding between teams, mutual respect between parties that encompasses their expertise and different ways of working, and loose protocols for how everyone is going to communicate with each other that are compatible with their different styles of working.

I shouldn’t presume to tell a team from another discipline what they need to do their job, just as they shouldn’t presume to tell me. I should treat another team as the expert in its discipline, and they should treat my team as the expert in mine.

Throughout all this and despite our differences, we’re all in the same boat. We all need to be pulling in the same direction, united around a single, motivating mission (why we’re all here), vision (the world we’re trying to create), and strategy (what we’ll try to do next to make it a reality).

The role of upper management is to set the direction, foster a culture that supports everyone, and help to build those protocols (all while not running out of money). One role of team leaders is to navigate those protocols and act as a buffer where there is friction.

Silhouettes of people walking down a hill. One is in front

Team Leadership

Vulnerable, open leaders make it safe for everyone to take risks and show up to work as they are.

So far I’ve written a lot about how engineering teams need organizational support that starts with a compatible culture that is founded on respect. But even in an environment that is un-hierarchical, transparent, informal, respectful, and open, with clear organizational goals and a defining mission, there’s more work to be done in order to create an environment where engineers can do their best work.

As I wrote last year:

The truth is that while some of the tools of the trade are drawn from math and discrete logic, software is fundamentally a people business, and the only way to succeed is to build teams based on great, collaborative communication, human empathy, true support, and mutual respect.

Leaders need to be stewards of those values. I believe — strongly — that this is best achieved through servant leadership:

[Servant leadership] aims to foster an inclusive environment that enables everyone in the organization to thrive as their authentic self. Whereas traditional leadership focuses on the success of the company or organization, servant leadership puts employees first to grow the organization through their commitment and engagement. When implemented correctly, servant leadership can help foster trust, accountability, growth, and inclusion in the workplace.

Each of these are important; I would also add safety. A blame-free environment where everyone can speak openly, be themselves, make mistakes, and not feel like they have to put on a mask to work is one where people can take risks and therefore innovate more effectively.

When you’re facilitating a brainstorming exercise, you might intentionally throw in a few out-there suggestions to make participants feel comfortable taking risks with their own contributions. Similarly, one of the roles of a leader is to push the envelope, and maybe risk looking a bit silly, in order to allow other people to feel more comfortable taking risks with their work — and when they do, to cheerlead them, support them, and help them feel comfortable even if their ideas don’t work out.

In a hierarchical team, the leader might ask whether team members are adhering to their standards. In a supportive team, the leader might primarily ask how well they are supporting their team. It’s not that you never ask whether someone is underperforming; it’s more to do with the center of gravity of assessment. Supportive teams put the employees first.

Fostering that sort of team culture heavily depends on how a manager shows up day to day. A manager who isn’t vulnerable, doesn’t reveal much of themselves, and requires homogeneity is — probably unintentionally — fostering a hierarchical culture where masking is the norm rather than a supportive one where people are free to be themselves.

The same sorts of fractal dynamics that affect inter-team collaboration apply to inter-personal collaboration, too. Everyone is different and has different working and communication styles, and homogeneity should never be the goal.

You can tell a lot from a team’s approach to feedback. If it is given in one direction — from managers down — then you likely have a hierarchical culture where team members may be less able to speak up and share their ideas. (The same is true if feedback is sometimes given to managers but rarely acted on.) I’ve observed that the most successful teams have clear, open, 360-degree feedback loops, where everyone’s feedback is directly sought out and incorporated — from team members to managers, between team members, and from manager to manager.

Another observable difference in team cultures can be seen through the kinds of norms that are enforced. To the extent that there are hard and fast rules on a team, they should be grounded in a purpose that supports forward motion, rather than to provide comfort to leadership or simply to enforce sameness.

As illustration, here are two contrasting examples of norms I’ve often seen enforced on teams:

  • Source code is written to adhere to common style guidelines, and is peer reviewed.
  • Cameras should be turned on during video calls.

Common coding style rules are a social contract that lower the cognitive load of working with code that someone else on the team has written, removing important roadblocks to everyone’s work; peer review is a really great channel for feedback, learning, and preventing bugs. Meanwhile, enforcing that cameras should always be on during video calls only serves to make some people less comfortable on the call.

Ultimately, success here is measured in what you ship, how happy your team is, whether they recommend working at your organization to their friends, and how long people stick around.

A robot and a person holding hands

Technology Trends

It’s important for an engineering team to not just have a competence in working with technology but to have strong opinions about it, its implications, and how it intersects with the lives of the people it touches. They should strive to be experts in those issues, learning as much as they can from relevant publications, scholars, and practitioners.

It would be ludicrous to examine the use of AI but not study its ethical issues. Not only is there a moral hazard in not understanding the subject holistically, but by leaving out topics like bias, intellectual property violations, and hallucinations as you investigate bringing AI into your work, you actually create liability for your organization. It’s both an ethical duty and good due diligence.

Similarly, imagine studying blockchain a few years ago but not covering its environmental impact or its potential for use in money laundering. Leadership might have been excited by the potential for financial growth, but by not examining the human impacts of the technology, you would have missed substantial risks that might have created real business headwinds later on.

Or imagine relying on developing code as a core function of your organization and not staying on top of new techniques, approaches, exploits, and technologies to build with. Your team would effectively be stuck in time without any real way to progress and stay relevant, creating a risk that your product would suffer over time.

Or, come to that, imagine working in a fast-moving field like technology and not forming a strong, informed opinion about how it will change that is rooted in learning, experimentation, and active collaboration with experts and other organizations.

This is another area where an open, collaborative, inclusive culture can be helpful. Giving space to team members who want to share their knowledge and ideas about a subject, and entrusting them to cover it from their perspective, allows topics to be explored through the lens of a variety of diverse lived experiences. And by practicing and championing the idea of inclusion as a core team value, you encourage team members to actively go and speak to diverse experts and gather a variety of viewpoints. The gene pool of ideas widens as you investigate a subject, and your resulting products and strategies will be stronger for it.

In a hierarchical culture where strategy is set from the top down, this kind of broad, inclusive learning might not be as effective, or it might not be present at all. Servant leadership helps ensure that everyone has the space to learn and grow with respect to topics they may not have mastered yet, and that their perspectives are championed. Without it, you simply have access to fewer ideas from fewer perspectives, and you’re wildly limited as a result.

Those same open feedback channels that create well-functioning, communicative teams can also serve as a way for team members to learn from each other. The principles of openness, inclusion, respect, appetite for risk, and collaboration can serve as guiding lights as teams navigate new technologies and help their respective organizations get to grips with them. Leaders have a role in fostering learning and knowledge-sharing on a team, and ensuring that it is a first-class activity alongside writing and architecting code.

Stenciled letters on a wall: Live, Work, Create.

Overall

A lot of the things that are important to get right with engineering aren’t really about engineering at all. The best teams have a robust, intentional culture that champions openness, inclusivity, and continuous learning — which requires a lot of relationship-building, both internally and with the organization in which the team sits. These teams can make progress on meaningful work, and make their members feel valued, heard, and empowered to contribute.

At a team leadership level, servant leadership is a vital part of fostering a culture of innovation and adaptability. By prioritizing the well-being and development of the people on their teams, leaders are making an investment that leads to higher performance, more nuanced strategy, more resilience, and lower churn.

At an organizational leadership level, a clear strategic direction and a focus on inclusivity help to provide the leeway to get this work done. I don’t know if you can succeed without those things; I certainly know that you can’t create a satisfying place for engineers and other creative people to work.

The most interesting and successful organizations have an externally-focused human mission and an internal focus on treating their humans well. That’s the only way to build technology well: to empower the people who are doing it, with a focus on empathy and inclusion, and a mission that galvanizes its community to work together. And, perhaps most importantly to me, that’s the only way to build a team that I want to work on.

That’s how I’ve been thinking about it. I’d love to read your reflections and to learn from you.


 

While I respect that some people find comfort in tradition and institutions, I can’t agree. Those things are how we maintain the status quo - and there’s so much work to do.


 

Exploring AI, safely

I’ve been thinking about the risks and ethical issues around AI in the following buckets:

  • Source impacts: the ecosystem impact of generative models on the people who created the information they were trained on.
  • Truth and bias: the tendency of generative models to give the appearance of objectivity and truthfulness despite their well-documented biases and tendency to hallucinate.
  • Privacy and vendor trust: because the most-used AI models are provided as cloud services, users can end up sending copious amounts of sensitive information to service providers with unknown chains of custody or security stances.
  • Legal fallout: if an organization adopts an AI service today, what are the implications for it if some of the suits in progress against OpenAI et al succeed?

At the same time, I’m hearing an increasing number of reports of AI being useful for various tasks, and I’ve been following Simon Willison’s exploratory work with interest.

My personal conclusions for the above buckets, such as they are, break down like this:

  • Source impacts: AI will, undoubtedly, make it harder for lots of people across disciplines and industries to make a living. This is already in progress, and continues a trend that was started by the internet itself (ask a professional photographer).
  • Truth and bias: There is no way to force an LLM to tell the truth or declare its bias, and attempts to build less-biased AI models have been controversial at best. Our best hope is probably well-curated source materials and, most of all, really great training and awareness for end-users. I also would never let generative AI produce content that saw the light of day outside of an organization (eg to write articles or to act as a support agent); it feels a bit safer as an internal tool that helps humans do their jobs.
  • Privacy and vendor trust: I’m inclined to try and use models on local machines and cloud services that follow a well-documented and controllable trust model, particularly in an organizational context. There’s a whole set of trade-offs here, of course, and self-hosted servers are not necessarily safer. But I think the future of AI in sensitive contexts (which is most contexts) needs to be on-device or on home servers. That doesn’t mean it will be, but I do think that’s a safer approach.
  • Legal fallout: I’m not a lawyer and I don’t know. Some but not all vendors have promised users legal indemnity. I assume that the cases will impact vendors more than downstream users — and maybe (hopefully?) change the way training material is obtained and structured to be more beneficial to authors — but I also don’t know that for sure. The answer feels like “wait and see”.

My biggest personal conclusion is, I don’t know! I’m trying not to be a blanket naysayer: I’ve been a natural early adopter my whole life, and I don’t plan to stop now. I recently wrote about how I’m using ChatGPT as a motivational writing partner. The older I get, the more problems I see with just about every technology, and I’d like to hold onto the excitement I felt about new tech when I was younger. On the other hand, the problems I see are really big problems, and ignoring those outright doesn’t feel useful either.

So it’s about taking a nimble but nuanced approach: pay attention to both the use cases and the issues around AI, keep looking at organizational needs, the kinds of organic “shadow IT” uses that are popping up as people need them, and figure out where a comfortable line is between ethics, privacy / legal needs, and utility.

At work, I’m going to need to determine an organizational stance on AI, jointly with various other stakeholders. That’s something that I’d like to share in public once we’re ready to roll it out. This post is very much not that — this space is always personal. But, as always, I wanted to share how I’m thinking about exploring.

I’d be curious to hear your thoughts.


 

It’s snowing again. That’s it, I’m moving to Spain.


 

Startup pitch: Fediverse VIP

An illustrative sketch of a new service

Here’s my pitch for a fediverse product for organizations.

Think of it as WordPress VIP for the fediverse: a way for organizations to safely build a presence on the fediverse while preserving their brand, keeping their employees safe, and measuring their engagement.

We’ve established that the fediverse is large and growing: Threads has around 130M monthly users, Flipboard has 100M, Mastodon has a couple of million, and there’s a very long tail. And the network is growing, with more existing services and new entrants joining all the time. It is the future of the social web.

But the options for organizations to join are not fully aligned with organizations’ needs:

  • Flipboard is a good solution for publications to share articles directly, but not for individuals to interact as first-class fediverse citizens.
  • Threads allows anyone to have an independent profile, but there’s no good organizational way to keep track of them all.
  • Mastodon allows you to establish communities, but you need to work with a hosting provider or install it yourself.
  • There’s no really great way to know that a profile really does belong to an organization. For example, on Threads, verification is at the ID level, and costs an individual $11.99 a month.
  • There’s no way to style profiles to match your brand, or to enforce brand guidelines.
  • There’s no analytics.
  • There are no brand or individual safety features like allowing safety teams to co-pilot an account if it’s suffering abuse.
  • There’s no shared inbox to manage support requests or other inquiries that come in via social media.

Fediverse VIP is a managed service that allows any brand to create individual fediverse profiles for its employees and shared ones for its publications, on its own domain, using its own brand styles, with abuse prevention and individual safety features, and with full analytics reporting.

For example, if the New York Times were hypothetically to sign up for Fediverse VIP, each of its reporters could have an account @reporter.name@newyorktimes.com, letting everyone know that this is a real New York Times account. Click through to a profile and it will look like the New York Times, with custom links that lead directly to NYT content. On the back end, multiple users can contribute, edit, and schedule posts for shared accounts.
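The domain-level trust this relies on isn’t magic: Mastodon-compatible servers resolve handles via WebFinger (RFC 7033), so a handle on an organization’s own domain only works if a server at that domain answers for it. Here’s a minimal sketch of how the discovery URL is built (the handle is just the illustrative example above, not a live account):

```python
# Sketch: building the WebFinger discovery URL (RFC 7033) that
# fediverse servers use to resolve a handle. A handle on an
# organization's domain only resolves if that domain answers here,
# which is what makes domain-based handles self-verifying.
from urllib.parse import urlencode

def webfinger_url(handle: str) -> str:
    """Turn a handle like '@user@example.com' into its WebFinger lookup URL."""
    user, domain = handle.lstrip("@").rsplit("@", 1)
    return f"https://{domain}/.well-known/webfinger?" + urlencode(
        {"resource": f"acct:{user}@{domain}"}
    )

print(webfinger_url("@reporter.name@newyorktimes.com"))
```

A request to that URL on the organization’s real domain, answered with an ActivityPub actor, is the whole proof of affiliation; no third-party verification fee is involved.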

Each Fediverse VIP instance has its own analytics, so you can learn more about the content you’ve published and how it performed — and build reports that instance administrators can share with their managers. And in the unfortunate event that an account suffers abuse, a member of staff can co-pilot the account and field incoming messages, or a third-party service can be brought in to help ensure everybody is safe. There are full, shared blocklists at both the individual and domain level, of course. And highly available support and training is included.

Finally, components, libraries, and APIs are made available so that social features — including “share to fediverse” — can be deeply integrated with a brand’s existing site.

Fediverse VIP is an annual subscription, tiered according to the number of followers an instance receives. Its first market would be media companies that are having trouble maintaining a presence, and retaining both trust and audience attention, in the midst of rapid change in the social media landscape.

The venture would be structured as a Delaware Public Benefit Corporation, and would raise traditional venture funding in order to become the way organizations maintain an institutional presence on the open social web. As part of its mission, it would seek to devote resources to make the open social web as big and as successful as possible.

This isn’t a deck; it’s more of a first-draft sketch. But I think there might be something here?

Obvious disclaimers: this is a sketch / idea, not a solicitation. Also, the New York Times is just an example and had nothing to do with this idea.

· Posts · Share this post

 

So everyone in tech understands that when the AI readjustment happens your stocks are going through the floor, right?

· Statuses · Share this post

 

Some personal updates

I write a lot about the intersection of technology and society here, and lately a lot about AI, but over the last year I’ve written a little less about what I’ve been up to. So, this post is an update about some of that. This isn’t everything, by any means — 2023 was, frankly, a hard year for lots of reasons, which included not a small amount of personal loss and trauma — but I wanted to share some broad strokes.

We’re now based in the Greater Philadelphia area, rather than San Francisco. There have been all kinds of life changes: it’s the ‘burbs, which is weird, but I’m writing this on a train to New York City, which is now easily within reach. I grew up in Oxford and could easily go to London for a day trip; now I have the same relationship with NYC. We haven’t yet brought the baby to the city, but that’s coming. (He’s not a baby anymore: we have a delightful toddler whose favorite things, somehow, are reading books and brushing his teeth.)

I joined ProPublica as Senior Director of Technology after working with the team as an advisor on contract for a while. ProPublica publishes vital American journalism: you might remember the story about Supreme Court Justices with billionaire friends that broke last year, or the story about Peter Thiel’s $5 billion tax-free IRA. You might also have come across Nonprofit Explorer and other “news apps”. Our technology philosophy is very compatible, and it’s a lovely team. I’m hoping we can revive The Nerd Blog.

I work mostly remotely and spend a lot of my time at my desk looking like this:

The author, alone, in a Google Meet room

(Guess the books! Yes, that’s also an issue of .net — specifically, one from decades ago that showcased Elgg.)

My website is still powered by Known, and I still intend to invest time and resources into that platform. I’ve also finally accepted — between having a toddler, a demanding job, an ongoing project (more on that in a second), and other commitments — that I’m not going to be making a ton of contributions to the codebase myself anytime soon. But there’s a pot of money in the Open Collective, and I’m eager to support open source developers in adding functionality to the platform. The first stop has been adding ActivityPub support to make Known compatible with the fediverse. The next stop will be improving the import / export functionality so that it (1) functions as expected and (2) is in line with other platforms.

I’ve been struggling with writing a book. I’ve had the benefit of really great 1:1 coaching through The Novelry, and was making great progress until I realized I needed to revise a major element. It’s been a slog since then: I have printouts of my first draft covered in Sharpie all over my office. My fear of being terrible at this increases with every sideways glance at the unfinished manuscript (which seems, somehow, to be staring back at me). I’m certain that as soon as I send it out into the world I’ll be ridiculed. But I’m determined to get it to the finish line, revise it, send it out, and do it again.

As painful as writing the draft has been, I also love the act of it. Writing has always been my first love, far before computers. Don’t get me wrong: I don’t claim any sort of literary excellence, in the same way that I enjoy making dinner for everyone but would never call myself a chef. I’ve got huge respect for anyone who’s gone down this road and actually succeeded (hi, Sarah, you are radically inspiring to me). It’s a craft that deserves care, attention, and practice, and stretching these muscles is as desperately uncomfortable as it is liberating. I find the whole process of it meditative and freeing, and also simultaneously like pulling every fingernail from my body.

So, uh, we’ll see if the end result is any good.

I’ve been helping a few different organizations with their work (pro bono): two non-profits that are getting off the ground, a startup, and a venture fund. Each of them is doing something really good, and I’m excited to see them emerge into the world.

Also, my universe has been rocked by this recipe for scrambled eggs. So there’s that, too.

What’s up with you?

· Posts · Share this post

 

Platforms are selling your work to AI vendors with impunity. They need to stop.

Some WordPress source code

404 Media reports that Automattic is planning to sell its data to Midjourney and OpenAI for training generative models:

The exact types of data from each platform going to each company are not spelled out in documentation we’ve reviewed, but internal communications reviewed by 404 Media make clear that deals between Automattic, the platforms’ parent company, and OpenAI and Midjourney are imminent.

Various arms of Automattic made subsequent clarifications. Specifically, it seems like premium versions of WordPress’s online platform, like the WordPress VIP service that powers sites for major newsrooms, will not sell user data to AI platforms.

This feels like a direct example of my point about how the relationship between platforms and users has been redefined. It appears that free versions of hosted Automattic platforms will sell user data by default, while premium versions will not.

Reddit announced a similar deal last week, and in total has made deals worth $203M for its content. WordPress powers over 40% of the web, which, given these numbers, could lead to a significant payday for the company. Much of that is on the self-hosted open source project rather than sites powered by Automattic, but that number gets fuzzier once you consider the Jetpack and Akismet plugins.

From a platform’s perspective, AI companies might look like a godsend. They have an open license to tens or hundreds of millions of users’ content, often going back years — and suddenly, thanks to AI vendors’ need for legal, structured content to train on, the real market value of that content has shot up. It wouldn’t surprise me to see new social platforms emerge whose underlying data models are designed specifically to sell to AI vendors. Finally, “selling data” is the business model it was always purported to be.

It’s probably no surprise that publishers are a little less keen, although there have been well-publicized deals with Axel Springer and the Associated Press. The deals OpenAI is offering to news companies for their content tend to top out at $5M each, for one thing. But social platforms don’t trade on the content themselves: they’re scalable businesses because they’re building conduits for other peoples’ posts. Their core value is the software and an enormous, engaged user-base. In contrast, publishers’ core value really is the articles, art, audio, images, and video they produce; the hard-reported journalism, the unscalable art, and the slow-burning communities that emerge around those things. Publishing doesn’t scale. The rights to that work should not be given away easily. The incentives between platforms and AI vendors are more or less aligned; the incentives between publishers and AI vendors are not.

I don’t think bloggers and social video producers should give those rights away easily either. They might not be publishing companies with large bodies of work, but the integrity of what they produce still matters.

For WordPress users, it’s kind of a bait-and-switch.

While writers may be using the free, hosted version of a publishing platform like WordPress, they retain the moral right of authorship:

As defined by the Berne Convention for the Protection of Literary and Artistic Works, an international agreement governing copyright law, moral rights are the rights “to claim authorship of the work and to object to any distortion, mutilation or other modification of, or other derogatory action in relation to, the said work, which would be prejudicial to his honor or reputation.”

The hosted version of WordPress contains this sentence about ownership in its TOS:

We don’t own your content, and you retain all ownership rights you have in the content you post to your website.

A reasonable person could therefore infer that their content would not be licensed for an AI vendor. And yet, that seems to be on the cards.

So now what?

If every platform is more and more likely to sell user data to AI platforms over time, the only way to object is to start to use self-hosted indieweb platforms.

But every public website can also be scraped directly by AI vendors, in some cases even when it uses the Robots Exclusion Protocol, which site owners have relied on for decades to keep search engine bots away from content they don’t want indexed. A large platform can sue for violation of content licenses, but individual publishers are unlikely to have the means — unless they band together and form a collective organization that can fight on their behalf.
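For what it’s worth, the first line of electronic defense is still robots.txt. Some AI crawlers document user-agent strings they say they honor (OpenAI’s GPTBot and Common Crawl’s CCBot, for example), though compliance is entirely voluntary. A sketch:

```text
# robots.txt: asks documented AI crawlers not to fetch this site.
# Honoring these rules is voluntary on the crawler's part.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

A scraper that ignores this can only be blocked server-side, which is where the arms race begins.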

If every public website is more and more likely to be scraped by AI vendors over time, the only way to object is to thwart the scrapers. That can be done electronically, but that’s an arms race between open source platforms and well-funded AI vendors. Joining together and organizing collectively is perhaps more effective; organizing for regulations that can actually hold vendors to account would be more effective still.

It’s time for publishers, writers, artists, musicians, and everyone who publishes cultural work for a living (or for themselves) to start working together and pushing back. The rights of the indie website are every bit as important as the rights of organizations like the New York Times that do have the funds to sue. And really, truly, it’s time for legislators to take notice of the untrustworthy, exploitative actions of these vendors and their platform accomplices.

· Posts · Share this post

 

I don't want to live in a nationalist country.

I don't want my son to grow up in a world where nationalism is rising.

I don't want the future to be dictated by nationalism.

· Statuses · Share this post

 

ASCAP for AI

A musician playing an electric organ

Hunter Walk writes:

The checks being cut to ‘owners’ of training data are creating a huge barrier to entry for challengers. If Google, OpenAI, and other large tech companies can establish a high enough cost, they implicitly prevent future competition. Not very Open.

It’s fair to say that I’ve been very critical of AI vendors and of how training data has been gathered without much regard for the well-being of individual creators. But I also agree with Hunter that establishing mandatory payments for training content creates a barrier to entry that benefits the incumbents. If you need to pay millions of dollars to create an AI model, you won’t disincentivize generative AI overall, but you will create a situation where only people with millions of dollars can build one. In this situation, the winners are likely Google and Microsoft (the latter via OpenAI), with newcomers unable to break in.

To counteract this anticompetitive situation, Hunter previously suggested a safe harbor scheme:

AI Safe Harbor would also exempt all startups and researchers who have not released public base models yet and/or have fewer than, for example, 100,000 queries/prompts per day. Those folks are just plain ‘safe’ so long as they are acting in good faith.

I would add that they cannot be making revenue above a certain safe threshold, and that they cannot be operating a hosted service (or provide models that are used for a hosted service) with over 100,000 registered users. This way early-stage startups and researchers alike are protected while they experiment with their data.

After that cliff, I think AI model vendors could pay a fee to an ASCAP-like copyright organization that distributes revenue to organizations that have made their content available for training.

If you’re not familiar with ASCAP and BMI, here’s broadly how they work: when a musician joins as a member, the organization tracks when their music is used. That might be in live performances, on the radio, on television, and so on. Those users of the music — production companies, radio stations, etc — pay license fees to the organization, and the organization pays the musicians. The music users get the legal right to use the music, and the musicians get paid.

The model could apply rather directly to AI. Here, rather than one-off deals with the likes of the New York Times, vendors would pay the licensing organization, and all content creators would be compensated based on which material actually made it into a training corpus. The organization would provide tools to make it easy for AI vendors and content creators alike to provide content, report its use in AI models, and audit the composition of existing models.

I’d suggest that model owners pay on a sliding scale dependent on both usage and total revenue. One component increases proportionally with the number of queries performed, assessed at the model level; the other follows pricing tiers tied to a vendor’s total gross revenue, assessed at the end-user level. So, for example, if Microsoft used OpenAI to provide a feature in Bing, OpenAI would pay a fee based on the queries people actually made in Bing, and Microsoft would pay a fee based on its total corporate revenue. Research use would always be free for non-profits and accredited institutions, as long as it was for research or internal use only.
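As a toy illustration of that two-part structure, here’s a sketch with entirely invented rates and tiers; nothing in it is a real or proposed price schedule:

```python
# Toy sketch of the two-part licensing fee described above.
# All rates and tiers are invented for illustration only.

def model_fee(queries: int, rate_per_1k: float = 0.05) -> float:
    """Usage component: grows with queries made against the model."""
    return queries / 1000 * rate_per_1k

def vendor_fee(gross_revenue: float) -> float:
    """Revenue component: flat tiers tied to the end-user vendor's gross revenue."""
    if gross_revenue < 1_000_000:
        return 0.0            # small vendors pay nothing
    if gross_revenue < 100_000_000:
        return 50_000.0
    return 500_000.0

# In the Bing example: OpenAI pays on queries, Microsoft pays on revenue.
openai_owes = model_fee(10_000_000)            # fee for 10M Bing queries
microsoft_owes = vendor_fee(200_000_000_000)   # fee at Microsoft-scale revenue
```

The point of splitting the fee this way is that the usage component tracks how much value a model actually delivers, while the revenue tiers keep the scheme cheap for small entrants and meaningful for giants.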

This model runs the risk of becoming a significant revenue stream for online community platforms, which tend to assert rights over the content that people publish to them. In this case, for example, rather than Facebook users receiving royalties for content published to Facebook that was used in an AI model, Facebook itself could take the funds. So there would need to be one more rule: even if a platform like Facebook asserts rights over the content that is published to it, it would need to demonstrate a best effort to return at least 60% of royalties to users whose work was used in AI training data.

Net result:

  • Incumbents don’t enjoy a barrier to entry from copyright payments: new entrants can build with impunity.
  • AI vendors and their users are indemnified from copyright claims against their models.
  • AI vendors don’t have to make individual deals with publishers and content creators.
  • Independent creators are financially incentivized to produce great creative and informational work — including individual creatives like artists and writers who might not otherwise have found a way to financially support their work.
  • The model shifts from one where AI vendors scrape content with no regard to the rights of the creator to one where creators give explicit consent to be included.

The AI horse has left the stable. I don’t think shutting it all down is an option, however vocal critics like myself and others might be. What we’re left with, then, is questions about how to create a healthy ecosystem, how to properly compensate creators, and how to ensure that the rights of an author are respected. This, I think, is one way forward.

· Posts · Share this post

 

Stop what you're doing and watch Breaking the News

Stills from the documentary, Breaking the News

Breaking the News, the documentary about The 19th, aired on PBS last night and is available to watch for free on YouTube for the next 90 days.

It’s both a film about the news industry and about startups: a team’s journey to show that journalism can and should be published with a more representative lens. It’s also not a puff piece: real, genuine struggles are covered here, which speak to larger conversations about race and gender that everyone needs to be having.

I worked with The 19th for a period that mostly sits directly after this film. My chin — yes, just my chin — shows up for a fraction of a second, but otherwise I’m not in it. My association with it is not why I’m recommending that you watch it.

The 19th is not a perfect workplace, in part because no such workplace exists. It has struggles like any other organization. But there was a thoughtfulness about culture and how work gets done that I’ve rarely seen elsewhere. Some of those policies were developed in direct response to workplace cultures that are prevalent in newsrooms, including narrow leadership demographics, hierarchical communication, a focus on work product rather than work process, and lack of goal-setting.

My experience was privileged, in part because of my position in the senior leadership team, but for me it was a breath of fresh air. There aren’t many places where I’ve felt calmer at work. Some of that is because of the early conversations and hard work that were captured on film here.

From the synopsis:

Who decides which stories get told? A scrappy group of women and LGBTQ+ journalists buck the white male-dominated status quo, banding together to launch The 19th*, a digital news startup aiming to combat misinformation. A story of an America in flux, and the voices often left out of the narrative, the documentary Breaking the News shows change doesn’t come easy.

You can watch the whole documentary for free here. And if you haven’t yet, go subscribe to The 19th over on its website.

· Posts · Share this post

 

Social, I love you, but you’re bringing me down

A big thumbs-down made of people

This weekend I realized that I’m kind of burned out: agitated, stressed about nothing in particular, and peculiarly sleepless. It took a little introspection to figure out what was really going on.

Here’s what I finally decided: I really need to pull back from using social media in particular as much as I do.

A few things brought me here:

  1. The sheer volume of social media sites is intense
  2. Our relationship with social media has been redefined
  3. I want to re-focus on my actual goals

I’d like to talk about them in turn. Some of you might be feeling something similar.

The sheer volume of social media sites is intense

It used to be that I posted and read on Twitter. That’s where my community was; that’s where I kept up to date with what was happening.

Well, we all know what happened there.

In its place, I find myself spending more time on:

  1. Mastodon
  2. Threads
  3. Bluesky
  4. LinkedIn (really!)
  5. Facebook (I know)
  6. Instagram

The backchannel that Twitter offered has become rather more diffuse. Mastodon, Threads, and Bluesky offer pretty much the same thing as each other, with a different set of people. LinkedIn is more professional; I’m unlikely to post anything political there, and I’m a bit more mindful of polluting the feed. My Facebook community is mostly people I miss hanging out with, so I’ll usually post sillier or less professionally relevant stuff there. And Instagram, until recently, was mostly photos of our toddler.

I haven’t been spending a ton of time interacting on any of them; it’s common for almost a full day to go between posts. Regardless, there’s something about moving from app to app to app that feels exhausting. I realized I was experiencing a kind of FOMO — am I missing something important?! — that became an addiction.

Each dopamine hit, each context switch, each draw on my attention pushes me further to the right on the stress curve. Everyone’s different, but this kind of intense data-flood — the informational equivalent of empty calories, no less — makes me feel awful.

Ugh. First step: remove every app from my phone. Second step: drastically restrict how I can access them on the web.

Our relationship with social media has been redefined

At this point we’re all familiar with the adage that if you’re not the customer, you’re the product being sold.

It never quite captured the true dynamic, but it was a pithy way to emphasize that we were being profiled in order to optimize ad sales in our direction. Of course, there was never anything to say that we weren’t being profiled or that our data wasn’t being traded even if we were the ostensible customer, but it seemed obvious that data mining for ad sales was more likely to happen on an ad-supported site.

With the advent of generative AI, or more precisely the generative AI bubble, this dynamic can be drawn more starkly. Everything we post can be ingested by a social media platform as training data for its AI engines. Prediction engines are trained on our words, our actions, our images, our audio, and then re-sold. We really are the product now.

I can accept that for posts where I share links to other resources, or for rapid-fire, off-the-cuff remarks. Where I absolutely draw the line is allowing an engine to be trained on my child. Just as I’m not inclined to allow him to be fingerprinted or added to a DNA database, I’m not interested in having him tracked or modeled. I know this is likely an inevitability, but if it happens, it will happen despite me. I will not be the person who willingly uploads him as training data.

So, when I’m uploading images, you might see a picture of a snowy day, or a funny sign somewhere. You won’t see anything important, or anything representative of what life actually looks like. It’s time to establish an arms-length distance.

There’s something else here, too: while the platforms are certainly profiling and learning from us, they’re still giving us more of what we pause and spend our attention on. In an election year, with two major, ongoing wars, I’m finding that to be particularly stressful.

It’s not that I don’t want to know what’s going on. I read the news; I follow in-depth journalism; I read blogs and opinion pieces on these subjects. Those things aren’t harmful. What is harmful is the endless push for us to align into propaganda broadcasters ourselves, and to accept broad strokes over nuanced discussion and real reflection. This was a problem with Twitter, and it’s a problem with all of today’s platforms.

The short form of microblogging encourages us to be reductive about impossibly important topics that real people are losing their lives over right now. It’s like sports fans yelling about who their preferred team is. In contrast, long-form content — blogging, newsletters, platforms like Medium — leaves space to explore and truly debate. Whereas short-form is too low-resolution to capture the fidelity of the truth, long-form at least has the potential to be more representative of reality.

It’s great for jokes. Less so for war.

I want to re-focus on my actual goals

What do I actually want to achieve?

Well, I’ve got a family that I would like to support and show up for well.

I’ve got a demanding job doing something really important, that I want to make sure I show up well for.

I’ve also got a first draft of a majority of a novel printed out and sitting on my coffee table with pen edits all over it. I’d really like to finish it. It’s taken far longer than I intended or hoped for.

And I want to spend time organizing my thoughts for both my job and my creative work, which also means writing in this space and getting feedback from all of you.

Social media has the weird effect of making you feel like you’ve achieved something — made a post, perhaps received some feedback — without actually having done anything at all. It sits somewhere between marketing and procrastination: a way to lose time into a black hole without anything to really show for it.

So I want to move my center of gravity all the way back to writing for myself. I’ll write here; I’ll continue to write my longer work on paper; I’ll share it when it’s appropriate.

Posting in a space I control isn’t just about the principle anymore. It’s a kind of self-preservation. I want to preserve my attention and my autonomy. I accept that I’m addicted, and I would like to curb that addiction. We all only have so much time to spend; we only have one face to maintain ownership of. Independence is the most productive, least invasive way forward.

 

IndieNews

· Posts · Share this post

 

It's kind of impressive to see Ghost become a real open source alternative to WordPress. Many people said it couldn't be done, but by focusing on a certain kind of independent creator (adjacent to both Medium and Substack), they've done it. It's a pretty amazing feat.

· Statuses · Share this post

 

A creative process

The silhouette of someone walking above the cloudline.

Over on Threads, Amanda Zamora asks:

I'm plotting away on Agencia Media and some personal writing/reporting this weekend (over a glass of 🍷 and many open tabs). One of the things I love most about building something new is the chance to design for intended outcomes — how to structure time and energy? What helps quiet chaos? Bring focus and creativity? Inspired by Ben Werdmuller’s recent callout about new Mac setups, I want to know about the ways you've built (or rebuilt) your way of working! Apps, workflows, rituals, name 'em 👇

A thing I’ve had to re-learn about building and creating is the importance of boredom in the way I think. I know that some people thrive when moving from thing to thing to thing at high speed, but I need time to reflect and toss ideas around in my head without an imposing deadline: the freedom to be creative without consequence.

The best way I’ve found to do that is to walk.

The work I’m proudest of was done in a context where I could walk for hours on end. When I was building Elgg, I would set off around Oxford, sometimes literally walking from one end of the city to the other and back again. When I was building Known and working for Matter, I roamed the East Bay, sometimes walking from Berkeley to the tip of Oakland, or up through Tilden Park. I generally didn’t listen to music or audiobooks; I was alone with my thoughts and the sounds of the city. It helped me to figure out my priorities and consider what I was going to do next. When I came up with something new, it was more often than not in the midst of one of those walks.

When you’re deep into building something that’s your own, and that’s the entirety of what you’re doing (i.e., you don’t have another day job), you can structure your time however you’d like. Aside from the possible guilt of not keeping a traditional office day, there’s no reason to keep one. Particularly at the beginning stages, I found that using the morning as unstructured reflective time led to better, more creative decision-making.

Again, this is me: everyone is different, and your mileage may vary. I do best when I have a lot of unstructured time; for some people, more structure is necessary. I think the key is to figure out what makes you happy and less stressed, and to get out from behind a screen. But also, walking really does boost creativity, so there’s that.

I recognize there’s a certain privilege inherent here: not everyone lives somewhere walkable, and not everyone feels safe when they’re walking out in the world. The (somewhat) good news is that indoor walking works just as well, if you can afford a low-end treadmill.

So what happens when you get back from a walk with a head full of ideas?

It’s probably no surprise that my other creativity hack is to journal: I want to get those unstructured thoughts, particularly the “what ifs” and “I wishes”, out on the page, together with the most important question, which is “why”. Writing long-form in this way puts me into a more contemplative state, much the same way that writing a blog post like this one helps me refine how I think about a topic. Putting a narrative arc to the thought gives it context and helps me refine what’s actually useful.

The through line here is an embrace of structurelessness; partly that’s just my personality, but partly it’s an avoidance of adhering to someone else’s template. If I’m writing items on a to-do list straight away, I’m subject to the design decisions of the to-do list software’s author. If I’m filling in a business model canvas, I’m thinking about the world in the way the canvas authors want me to. I can, and should, do all those things, but I always want to start with a blank page first. A template is someone else’s; a blank page is mine.

Nobody gets to see those thoughts until I’ve gone over them again and turned them into a written prototype. In the same way that authors should never show someone their first draft, letting someone into an idea too early can deflate it with premature criticism. That isn’t to say that understanding your hypotheses and doing research to validate them isn’t important — but I’ve found that I need to keep up the emotional momentum behind an idea if I’m going to see it through, and to do that, I need to maintain the illusion that it’s a really good idea just long enough to give it shape.

Of course, when it has shape, I try to get all the expert feedback I can. Everyone needs an editor, and asking the right questions early and learning fast is an obvious accelerant.

So I guess my creative process boils down to:

  • Embrace boredom and unstructured, open space to think creatively
  • Capture those creative thoughts in an untemplated way, through narrative writing
  • Identify my hypotheses and figure out what needs to be researched to back up the idea
  • Ask experts and do that research as needed in order to create a second, more validated draft
  • Get holistic feedback from trusted collaborators on that second draft
  • Iterate 1-2 times
  • Build the smallest, fastest thing I can based on the idea

There are no particular apps involved and no special frameworks. Really, it’s just about giving myself some space to be creative. And maybe that’s the only advice I can give to anyone building something new: give yourself space.

· Posts · Share this post

 

A reminder that the whole point of open source, federated technologies is that there doesn't have to be one winner. It's not a market where every vendor is trying to be a monopoly. It's about building a bigger, collaborative pie.

· Statuses · Share this post

 

Three variations on Omelas

The Ones Who Walk Away From Omelas, by Ursula K. Le Guin:

They all know it is there, all the people of Omelas. Some of them have come to see it, others are content merely to know it is there. They all know that it has to be there. Some of them understand why, and some do not, but they all understand that their happiness, the beauty of their city, the tenderness of their friendships, the health of their children, the wisdom of their scholars, the skill of their makers, even the abundance of their harvest and the kindly weathers of their skies, depend wholly on this child’s abominable misery.

The Ones Who Stay and Fight, by N.K. Jemisin:

But this is no awkward dystopia, where all are forced to conform. Adults who refuse to give up their childhood joys wear wings, too, though theirs tend to be more abstractly constructed. (Some are invisible.) And those who follow faiths which forbid the emulation of beasts, or those who simply do not want wings, need not wear them. They are all honored for this choice, as much as the soarers and flutterers themselves—for without contrasts, how does one appreciate the different forms that joy can take?

Why Don’t We Just Kill the Kid in the Omelas Hole, by Isabel J. Kim:

So they broke into the hole in the ground, and they killed the kid, and all the lights went out in Omelas: click, click, click. And the pipes burst and there was a sewage leak and the newscasters said there was a typhoon on the way, so they (a different “they,” these were the “they” in charge, the “they” who lived in the nice houses in Omelas [okay, every house in Omelas was a nice house, but these were Nice Houses]) got another kid and put it in the hole.

· Posts · Share this post

 

The four phases

A fictional mainframe

This post is part of February’s IndieWeb Carnival, in which Manuel Moreale prompts us to think about the various facets of digital relationships.

Our relationship to digital technology has been through a few different phases.

One: the census

In the first, computers were the realm of government and big business: vast databases that might be about us, but that we could never own or interrogate ourselves. Companies like IBM manufactured room-sized (and then cabinet-sized) machines that took a team of specialized technicians to operate. They were rare and a symbol of top-down power.

Punch cards were invented in the 1880s, and were machine-sortable even then, although not by anything we would recognize as a computer today. In the 1930s, a company called Dehomag, which was a 90%-owned subsidiary of IBM, used its punch card census technology to help the German Nazi party ethnically identify and sort the population. (Thomas Watson, IBM’s CEO at the time, even came to Germany to oversee the operation.)

ENIAC, the first general-purpose digital computer, was initially put to work determining the feasibility of the H-bomb. Other mainframe computers were used by the US Navy for codebreaking, and by the US Census Bureau. By the sixties and seventies, though, they were commonplace in larger corporate offices and in universities for non-military, non-governmental applications.

Two: the desk

Personal computers decentralized computing power and put it in everybody’s hands. There was no overarching, always-on communications network for them to connect to, so every computer had its own copy of software that ran locally on it. There was no phoning home; no surveillance of our data; there were no ad-supported models. If you were lucky enough to have the not-insignificant sum of money needed to buy a computer, you could have one in your home. If you were lucky enough to have money left over for software, you could even do things with it.

The government and large institutions didn’t have a monopoly on computing power; theoretically, anyone could have it. Anyone could write a program, too, and (if you had yet more money to buy a modem) distribute it on bulletin board systems and online services. Your hardware was yours; your software was yours; once you’d paid your money, your relationship with the vendor was over.

For a while, you had a few options to connect with other people:

  • Prodigy, an online service operated as a joint venture between CBS, IBM, and Sears
  • CompuServe, which was owned and run by H&R Block
  • America Online, which was originally a way for Atari 2600 owners to download new games and store high scores
  • Independent bulletin boards, which were usually a single computer connected to a handful of direct phone lines for modems to connect to, run by an enthusiast

(My first after-school job was as a BBS system operator for Daily Information, a local information and classifieds sheet in my hometown.)

In 1992, in addition to bulletin board systems and online services, the internet was made commercially available. Whereas BBSes, AOL, and the like were distinct walled gardens, any service connected to the internet could reach any other. It changed everything. (In 1995, my BBS job expanded to running what became one of the first classifieds websites.)

But for a while, the decentralized, private nature of personal computing remained. For most private individuals, connecting to the internet was like visiting a PO box: you'd dial in, upload and download any pending email, browse any websites you needed to, and then log off again. There was no way to constantly monitor people, because internet users spent 23 hours of the day disconnected from the network.

Three: the cloud

Broadband, the iPhone, and wifi changed everything. Before broadband, most people needed to dial in over their phone line to go online. Before the iPhone, cell data was slow, expensive, and metered, with very little bandwidth to go around. Before wifi, a computer needed to be physically connected with a cable to go online.

With broadband and wifi, computers could be connected to the internet 24/7. With the iPhone, everyone had a computer in their pocket that was permanently connected and could be constantly sending data back to online services — including your location and who was in your address book.

It was incredibly convenient and changed the world in hundreds of ways. The web in particular is a modern marvel; the iPhone is a feat of design and engineering. But what we lost was the decentralized self-ownership of our digital worlds. More than that, we lost an ability to be private that we’d had since the beginning of human civilization. It used to be that nobody needed to know where you were or what you were thinking about; that fundamental truth has gone the way of the dinosaur.

Almost immediately, our relationship to software changed in a few key ways:

  • We could access all of our data from anywhere, on any device.
  • Instead of buying a software package once, we were asked to subscribe to it.
  • Instead of being downloaded and installed, the bulk of the software ran in a server farm somewhere.
  • Every facet of our data was stored in one of these server farms.
  • More data was produced about us as we used our devices — or even as we walked through our cities, shopped at stores, and met with other people — than we created intentionally ourselves.

While computing became infinitely easier to use and the internet became a force that changed global society in ways that I still believe are a net positive, surveilling us also became infinitely easier. Companies wanted to know exactly what we were likely to buy; politicians wanted to know how we might vote; law enforcement wanted to know if we were dangerous. All paid online services to build profiles about us that could be used to sell advertising, could be mined by the right buyer, and could even be used to influence elections.

Four: the farm

Our relationship is now changing again.

Whereas in the cloud era we were surveilled in order to be profiled, our data is now being gathered for another set of reasons. We’re used to online services ingesting our words and actions in order to predict our behaviors and influence us in certain directions. We’re used to Target, for example, wanting to know if we’re pregnant so it can be the first to sell us baby gear. We’re not used to those services ingesting our words and actions in order to learn how to be us.

In our new relationship, software isn’t just set up to surveil us and report on us; it’s also set up to do our work. GitHub Copilot learns from software we write so that it can write software automatically. Midjourney builds stunning illustrations and near-photorealistic images. Facebook is learning from the text and photos we upload so it can create its own text and realistic imagery (unlike many models, it trains on data it actually has a license to use). Far more than being profiled, our modes of human expression are now being farmed for the benefit of people who hope to no longer have to hire us for our unique skills.

In the first era, technology was here to catalogue us.

In the second, it was here to empower us.

In the third, it was here to observe us.

In the fourth, it is here to replace us.

We had a very brief window, somewhere between the inception of the Homebrew Computer Club and the introduction of the iPhone, where digital technology heralded distributed empowerment. Even then, empowerment was hardly evenly distributed, and any return to decentralization must be far more equitable than it ever was. But we find ourselves in a world where our true relationship is with power.

Of course, it’s a matter of degrees, and everything is a spectrum: there are plenty of services that don’t use your data to train generative AI models, and there are plenty that don’t surveil you at all. There are also lots of applications and organizations that are actively designed to protect us from being watched and subjugated. New regulations are being proposed all the time that would guarantee our right to privacy and our right to not be included in training data.

Those might seem like technical decisions, but they’re really about preserving our ownership and autonomy, and returning those things to us when they’ve already been lost. They’re human, democratic decisions that seek to enforce a relationship where we’re in charge. They’re becoming more and more important every day.

· Posts · Share this post

 

I’m genuinely thinking about starting a new blog about my experiences of fatherhood. It would live on a new domain rather than being part of my usual tech journaling. Too much?

· Statuses · Share this post