
Blog aspirationally, not opportunistically

1 min read

When you find yourself writing a 3,000-word essay about engineering management on your personal website, you might want to step back and take another look at your goals.

And if you find that this isn’t quite what you want to be talking or writing about, it might be time to take some more risks.

I mean, I stand by everything in the post. And on one level it’s important.

But also: let’s go make things and have fun and be creative and let go of our inhibitions a bit.

Perhaps write about hopes and dreams rather than work and administration.

Less business. More human.

Let’s go.

· Asides · Share this post


Building engineering


I’ve spent most of my career — now well over two decades of it — building things on the web. I’ve worked as a software developer, I’ve founded a couple of my own companies, and I’ve often found myself leading teams of engineers. Right now I’m the director for both engineering and IT (although there are teams of people who write code who aren’t under my wing — newsrooms are complicated).

Over time, a lot of my work seems to have become less about “what shall we build?” or “how shall we build it?”. Those questions are always vitally important, but there are prerequisites that sometimes need to take center stage: things like “what are we here to do?”, “how should we work together?”, and “how do we think about what matters?”

I’ve been sharpening my thinking about the necessary conditions to do good work, and how to achieve them. Here’s a window into how I’m thinking about these ideas across three dimensions: Organizational Context, Team Leadership, and Technology Trends.


Organizational Context

There’s an unattributed but often-quoted management strategy cliché that says: culture eats strategy for breakfast. I’m a believer. Culture contains the fundamental building blocks of how an organization acts as a community: its values, beliefs, attitudes, norms, processes, and rules. Without a strong one, you cannot succeed — regardless of what your strategy looks like. Conversely, a great strategy, by definition, is one that incorporates building a great, intentional culture. Without one, your team is more likely to burn out and leave, you’re much less likely to build something high-quality, and you’re unlikely to foster new ideas.

Software engineering, at least in the places I’ve practiced it, is all about innovation. The focus is rarely on maintaining the present, although some degree of maintenance is always necessary. Instead, it’s about building the future: figuring out what we’ll need our platform to look like in two to five years and finding ways to get there. It’s a creative pursuit as much as it is about rigor and craft, and it’s about values and taste as much as it is about business necessity.

This dynamic is well-served by some organizational cultures and actively undermined by others. The trick is figuring out which you’re in, and finding ways to either embrace the former or build a buffer zone for the latter.

One popular way of looking at organizational culture is the Competing Values Framework, which defines four distinct overall culture types — all four of which are usually present to different degrees inside an organization.

  • Adhocracy: an organic, unbureaucratic way of organizing work that challenges the status quo, formal titles, and hierarchy in favor of a focus on risk-taking and innovating at speed.
  • Clan: a family-like culture that, again, is relatively unbureaucratic, without much structure, where rules tend to manifest as social norms rather than edicts or rigid process.
  • Hierarchy: where an emphasis is placed on top-down control from upper management in order to create predictability and lower risk. Roles are clearly defined, rules are codified, and even internal communication tends to be stratified.
  • Market: a culture optimized around competition, both with competitors and internally. Measurable results are central, but the workplace can easily become toxic because everyone is trying to better themselves vs their peers.

As with many frameworks, the reality isn’t as cut and dried as this. Instead, I think these categories are best thought of as facets of an organizational culture. In some organizations, hierarchy and market focus carry a heavier emphasis; in others, innovation and collaboration.

I vastly prefer working within organizations that look like the first two environments — adhocracies and clans — and I’d hazard to say that almost every single engineer, designer, and product manager I’ve ever worked with feels the same way. Hierarchical systems are inherently creatively stifling: innovation can’t take place in an environment with predominantly top-down control. The same goes for hyper-competitive environments: while the competition might be motivating for some in the short term, it’s really hard to collaborate effectively and build on each other’s ideas if everyone is trying to get ahead of everyone else.

Hierarchies in particular definitionally strip your authority in favor of top-down direction, forcing you to negotiate through layers of politics to make any kind of change. Most good engineers are collaborators, not cogs, with ideas, expertise, and creativity that should be embraced. But hierarchy demands cog-like behavior, and creates institutional fiefdoms that tend towards bureaucracy, inhibiting any really new work from being done if it hasn’t been rubber-stamped. These aren’t great places for a creative person to work.

As Robin Rendle put it recently:

This is the most obvious thing to say in the world, but: the hard work should never be the bureaucracy, it should be designing things and solving technical problems. If the hard work ain’t the hard work, ya gotta bounce. Don’t kill yourself trying to tell people that.

That isn’t to say that every team in an organization should work the same way or strive for the same culture. It might be that a legal, compliance, or safety team needs to work in a more rigid way as a system of control, or that a sales team needs to be intensely market-oriented. Or those things might not be true at all! My point is that it’s a mistake for engineers to assume that because they work best in a particular kind of environment, everyone should work that way. Every organization is composed of a mix of culture types, and every team needs to work in a way that allows it to do its best work.

This may seem obvious, but we often talk about a single team’s working style setting the cultural norms for a whole organization. For example, it’s common for an organization to be described as engineering-led or sales-led. To be clear, this is a false choice: there should simply be people-led organizations that are inclusive of different interdisciplinary needs and styles of working.

For that to be a reality, top-level leadership in particular needs to acknowledge that not every team works the same way. For my purposes, this means acknowledging that engineering needs a particular kind of culture in order to thrive (and is important enough to have its own culture and be deserving of autonomy).

A prerequisite to this is understanding the potential of a technology team (or any team) in the first place. That’s less likely to happen in organizations where technology is treated as back-office, paint-by-numbers work. If an organization can’t see the importance of a team’s work, and doesn’t respect the effort and expertise inherent in those roles, it’s going to be very difficult for that team to do good work.

My bias is to lean heavily on storytelling and listening as tools for fostering understanding: finding ways to explain why the work of product and engineering is important in the context of the whole organization, and how that expertise can be leveraged in order to benefit everybody. It’s okay to not understand what an engineering team has the potential to do from the outset, but if organizational leaders continue to not understand, that’s on me. The way to get there is through being transparent about what we’re doing, how we’re thinking, and which challenges we expect to encounter.

It really matters. Mutual understanding begets mutual respect.

There needs to be an explicit understanding between teams, mutual respect between parties that encompasses their expertise and different ways of working, and loose protocols for how everyone is going to communicate with each other that are compatible with their different styles of working.

I shouldn’t presume to tell a team from another discipline what they need to do their job, just as they shouldn’t presume to tell me. I should treat another team as the expert in its discipline, and they should treat my team as the expert in mine.

Throughout all this and despite our differences, we’re all in the same boat. We all need to be pulling in the same direction, united around a single, motivating mission (why we’re all here), vision (what world we’re here to try to create), and strategy (what we’re going to try to do next to make it a reality).

The role of upper management is to set the direction, foster a culture that supports everyone, and help to build those protocols (all while not running out of money). One role of team leaders is to navigate those protocols and act as a buffer where there is friction.


Team Leadership

Vulnerable, open leaders make it safe for everyone to take risks and show up to work as they are.

So far I’ve written a lot about how engineering teams need organizational support that starts with a compatible culture that is founded on respect. But even in an environment that is un-hierarchical, transparent, informal, respectful, and open, with clear organizational goals and a defining mission, there’s more work to be done in order to create an environment where engineers can do their best work.

As I wrote last year:

The truth is that while some of the tools of the trade are drawn from math and discrete logic, software is fundamentally a people business, and the only way to succeed is to build teams based on great, collaborative communication, human empathy, true support, and mutual respect.

Leaders need to be stewards of those values. I believe — strongly — that this is best achieved through servant leadership:

[Servant leadership] aims to foster an inclusive environment that enables everyone in the organization to thrive as their authentic self. Whereas traditional leadership focuses on the success of the company or organization, servant leadership puts employees first to grow the organization through their commitment and engagement. When implemented correctly, servant leadership can help foster trust, accountability, growth, and inclusion in the workplace.

Each of these is important; I would also add safety. A blame-free environment where everyone can speak openly, be themselves, make mistakes, and not feel like they have to put on a mask to work is one where people can take risks and therefore innovate more effectively.

When you’re facilitating a brainstorming exercise, you might intentionally throw in a few out-there suggestions to make participants feel comfortable taking risks with their own contributions. Similarly, one of the roles of a leader is to push the envelope, and maybe risk looking a bit silly, in order to allow other people to feel more comfortable taking risks with their work — and when they do, to cheerlead them, support them, and help them feel comfortable even if their ideas don’t work out.

In a hierarchical team, the leader might ask if team members are adhering to their standards. In a supportive team, the leader might primarily ask how they are doing at supporting their team. It’s not that you don’t ever ask if someone isn’t performing; it’s more to do with the center of gravity of assessment. Supportive teams put the employees first.

Fostering that sort of team culture heavily depends on how a manager shows up day to day. A manager who isn’t vulnerable, doesn’t reveal much of themselves, and requires homogeneity is — probably unintentionally — fostering a hierarchical culture where masking is the norm rather than a supportive one where people are free to be themselves.

The same fractal dynamics that affect inter-team collaboration apply to interpersonal collaboration, too. Everyone is different and has different working and communication styles, and homogeneity should never be the goal.

You can tell a lot by a team’s approach to feedback. If it is given in one direction — from managers down — then you likely have a hierarchical culture where team members may be less able to speak up and share their ideas. (The same is true if feedback is sometimes given to managers but rarely acted on.) I’ve observed that the most successful teams have clear, open, 360-degree feedback loops, where everyone’s feedback is directly sought out and incorporated — from team members to managers, between team members, and from manager to manager.

Another observable difference in team cultures can be seen through the kinds of norms that are enforced. To the extent that there are hard and fast rules on a team, they should be grounded in a purpose that supports forward motion, rather than to provide comfort to leadership or simply to enforce sameness.

As illustration, here are two contrasting examples of norms I’ve often seen enforced on teams:

  • Source code is written to adhere to common style guidelines, and is peer reviewed.
  • Cameras should be turned on during video calls.

Common coding style rules are a social contract that lowers the cognitive load of working with code that someone else on the team has written, removing real roadblocks to everyone’s work; peer review is a great channel for feedback, learning, and preventing bugs. Meanwhile, enforcing that cameras always be on during video calls only serves to make some people less comfortable on the call.

Ultimately, success here is measured in what you ship, how happy your team is, whether they’d recommend working at your organization to their friends, and how long people stick around.


Technology Trends

It’s important for an engineering team not just to be competent with technology, but to have strong opinions about it, its implications, and how it intersects with the lives of the people it touches. They should strive to be experts in those issues, learning as much as they can from relevant publications, scholars, and practitioners.

It would be ludicrous to examine the use of AI but not study its ethical issues. Not only is there a moral hazard in not understanding the subject holistically, but by leaving out topics like bias, intellectual property violations, and hallucinations as you investigate bringing AI into your work, you actually create liability for your organization. It’s both an ethical duty and good due diligence.

Similarly, imagine studying blockchain a few years ago but not covering its environmental impact or its potential for use in money laundering. Leadership might have been excited by the potential for financial growth, but by not examining the human impacts of the technology, you would have missed substantial risks that might have created real business headwinds later on.

Or imagine relying on developing code as a core function of your organization and not staying on top of new techniques, approaches, exploits, and technologies to build with. Your team would effectively be stuck in time without any real way to progress and stay relevant, creating a risk that your product would suffer over time.

Or, come to that, imagine working in a fast-moving field like technology and not forming a strong, informed opinion about how it will change that is rooted in learning, experimentation, and active collaboration with experts and other organizations.

This is another area where an open, collaborative, inclusive culture helps. Giving space to team members who want to share their knowledge and ideas about a subject, and entrusting them to cover it from their own perspective, allows topics to be examined through the lens of diverse lived experiences. And by practicing and championing inclusion as a core team value, you encourage team members to actively seek out diverse experts and gather a variety of viewpoints. The gene pool of ideas widens as you investigate a subject, and your resulting products and strategies will be stronger for it.

In a hierarchical culture where strategy is set from the top down, this kind of broad, inclusive learning might be less effective, or absent entirely: you simply have access to fewer ideas from fewer perspectives, and you’re badly limited as a result. Servant leadership, by contrast, helps ensure that everyone has the space to learn and grow in areas they haven’t mastered yet, and that their perspectives are championed.

Those same open feedback channels that create well-functioning, communicative teams can also serve as a way for team members to learn from each other. The principles of openness, inclusion, respect, tolerance for risk, and collaboration can serve as guiding lights as teams navigate new technologies and help their organizations get to grips with them. Leaders have a role in fostering learning and knowledge-sharing on a team, and in ensuring that it is a first-class activity alongside writing and architecting code.



A lot of what’s important to get right in engineering isn’t really about engineering at all. The best teams have a robust, intentional culture that champions openness, inclusivity, and continuous learning — which requires a lot of relationship-building, both internally and with the organization the team sits within. These teams can make progress on meaningful work, and they make their members feel valued, heard, and empowered to contribute.

At a team leadership level, servant leadership is a vital part of fostering a culture of innovation and adaptability. By prioritizing the well-being and development of the people on their teams, leaders are making an investment that leads to higher performance, more nuanced strategy, more resilience, and lower churn.

At an organizational leadership level, a clear strategic direction and a focus on inclusivity help to provide the leeway to get this work done. I don’t know if you can succeed without those things; I certainly know that you can’t create a satisfying place for engineers and other creative people to work.

The most interesting and successful organizations have an externally-focused human mission and an internal focus on treating their humans well. That’s the only way to build technology well: to empower the people who are doing it, with a focus on empathy and inclusion, and a mission that galvanizes its community to work together. And, perhaps most importantly to me, that’s the only way to build a team that I want to work on.

That’s how I’ve been thinking about it. I’d love to read your reflections and to learn from you.

· Posts · Share this post


Threads has entered the fediverse

"We’re taking a phased approach to Threads’ fediverse integration to ensure we can continue to build responsibly and get valuable feedback from our users and the fediverse community."

It's really great to see Meta do this and communicate well about it. However you see the company, it's a big step for one of the tech giants to embrace the open social web in this way.

In the future, this is how every new social platform will be built - so take note, both of the detail and of their overall approach.

· Links · Share this post


Big Journalism’s hopeless myopia

"One way you know that it’s business as usual for journalists is that so many have remained on Twitter, a platform whose owner has taken right-wing trollery to extremes lately. He loudly supports people who want to install a fascist government in the United States, and it’s clear enough that he would support fascism if and when it arrives."

"[...] If fascism arrives, a lot of these journalists will be fine. After all, they’re helping to create the conditions for a new Trump presidency. But a lot more will not be fine — and even the ones that are in favor under a Trump government will eventually realize that their safety and livelihoods are at the whim of the extreme right-wing cultists who’ll be in control."

· Links · Share this post


The Intercept charts a new legal strategy for digital publishers suing OpenAI

A detail I hadn't noticed: while the New York Times OpenAI lawsuit rested on copyright infringement, the Intercept, Raw Story, and AlterNet are claiming a DMCA violation.

"A study released this month by Patronus AI, a startup launched by former Meta researchers, found that GPT-4 reproduced copyrighted content at the highest rate among popular LLMs. When asked to finish a passage of a copyrighted novel, GPT-4 reproduced the text verbatim 60% of the time. The new lawsuits similarly allege that ChatGPT reproduces journalistic works near-verbatim when prompted."

· Links · Share this post


AI Is Threatening My Tech and Lifestyle Content Mill

"Sure, our articles maintain a rigid SEO template that creatively resembles the kitchen at a poorly run Quiznos, and granted, all our story ideas are gleaned from better-written magazine articles from seven months ago (that we’re totally not plagiarizing), but imagine if AI wrote those articles? So much would be lost."


· Links · Share this post


Building vs using the web

2 min read

One thing that becomes clear when you move outside of open web groups and a certain kind of tech company is the difference between trying to build the web as a platform and trying to use the web as a platform.

In the former mental model, you’re experimenting to try and figure out how to push the envelope on a common platform. What doesn’t exist yet on the web that would be cool or useful? How can we preserve its openness and decentralization? How can the commons be richer for everyone? It’s ultimately an ideological endeavor: the web is great and we should keep building it in everyone’s interest, whether through protocols and extensions or through amazing public interest sites.

In the latter, you’re taking what exists and figuring out how to get the most use out of it. How can we harness this? Which web capabilities allow us to meet our goals more easily? Where are the opportunities? It’s not in any way an ideological endeavor: instead, it’s a pragmatic one. It’s business. You’re taking a resource and getting the most use out of it that you can.

Of course, it happens to be the case that the public resource continues to exist and is vibrant because of the first group of people I described. But it’s also okay to just use the web. The web is for everyone.

· Asides · Share this post


ASCII art elicits harmful responses from 5 major AI chatbots

"Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art."

So many LLM exploits come down to finding ways to convince an engine to disregard its own programming. It's straight out of 1980s science fiction, like teaching an android to lie. To be successful, you have to understand how LLMs "think", and then exploit that.

This one in particular is so much fun. By telling the model to interpret an ASCII representation of a word and keep its meaning in memory without saying it out loud, an attacker can bypass front-line harm mitigations. It's like a magic spell.

· Links · Share this post


The edges are more interesting

1 min read

If AI makes it easier to create generic, middle-of-the-road content, the way forward for human beings is to create content that is out there on the edges, breaking ground that probabilistic algorithms could never reach.

Which, honestly, I wish more people would do anyway. The middle of the road has nothing new to say.

· Asides · Share this post


“I've Rediscovered A Mode Of Expression That Was Important To Me As A Kid”: A Talk with Jordan Mechner

A lovely interview with the creator of Karateka and Prince of Persia. (Karateka in particular was a formative game for me.)

"If you'd asked me at age 12, I’d probably have said that my dream job would be comics artist or animator." Me too. So much of this resonates.

I'm really excited to read his new book, about his family's history as migrants during WWII and beyond. I strongly suspect that it, too, will resonate.

· Links · Share this post


The weird world of altruistic YouTube

This is such an interesting trend:

"It seems like a pretty well-worn path at this point. Start a YouTube channel with some compelling videos, and when you amass enough views/revenue, use that money to entice strangers into helping you make more videos that get more revenue."

MrBeast is the most well-known, but there are lots of them. I feel pretty uncynical about it: although there's definitely something icky about profiting from people's misfortune, there's also real good often being done.

· Links · Share this post


Right-wing comments on Microsoft Start

1 min read

My posts are syndicated to Microsoft Start as part of the Creator Program. It’s been interesting to see which ones find an audience there and which ones don’t: politics seems to be more interesting to the community there than tech commentary, which stands to reason, as it’s a more universal topic.

What’s noticeable, though, is that the only comments I see over there are wildly right-wing. The Microsoft Start readers who seem driven to weigh in tell me that climate change isn’t real, that the police are right to infiltrate protest movements, and that DEI initiatives are wrong.

This skew doesn’t match the population overall, so I wonder what’s happening there. Are there people looking for content on these topics to comment on in order to squash those topics? Does Microsoft Start itself somehow skew right-wing? Or is something else going on?

· Asides · Share this post


Four things about

"We're selling ourselves out by letting Facebook own a new social network and not putting that energy into building something that preserves our choice."

I am worried that this might turn out to be correct.

· Links · Share this post


While I respect that some people find comfort in tradition and institutions, I can’t agree. Those things are how we maintain the status quo - and there’s so much work to do.

· Statuses · Share this post


FBI sent several informants to Standing Rock protests, court documents show

"Up to 10 informants managed by the FBI were embedded in anti-pipeline resistance camps near the Standing Rock Sioux Indian Reservation at the height of mass protests against the Dakota Access pipeline in 2016."

This seems obvious: there are informants in any major protest movement, and there have been for as long as there have been protest movements. It's not great, and it's a fundamentally undemocratic way for a government to conduct itself, but it doesn't represent a change from the status quo.

This article talks about surveillance, but of course, there may have been situations where informants and plants actually set out to undermine the protest. This, too, would not have been a change.

· Links · Share this post


Seeking share URLs for every platform

1 min read

I wish there were a conclusive list of “share-to” URLs. For example, here’s the URL you can use to build a “share to Threads” button:

Here’s the equivalent URL for Reddit:

Every Known site has a URL like:


Every Mastodon instance has a URL like:


Does have a share URL? How about WordPress installations? Ghost? Bluesky? And platforms like Lemmy, etc?

I’m on a mission to collect them all.
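As an illustration of what a collected list could power, here's a minimal share-link builder. The endpoint patterns below (Reddit's `/submit`, Threads' `/intent/post`, and the per-instance Mastodon `/share`) are my assumptions based on common practice, not taken from this post; verify them against each platform's own documentation before using them:

```python
from urllib.parse import urlencode

# Assumed share-endpoint patterns; verify against each platform's docs.
SHARE_PATTERNS = {
    "reddit": "https://www.reddit.com/submit?{query}",
    "threads": "https://www.threads.net/intent/post?{query}",
}

def share_url(platform: str, url: str, title: str = "") -> str:
    """Build a share link for a centralized platform."""
    if platform == "reddit":
        # Reddit takes the link and title as separate parameters.
        query = urlencode({"url": url, "title": title})
    elif platform == "threads":
        # Threads takes a single prefilled text field.
        query = urlencode({"text": f"{title} {url}".strip()})
    else:
        raise ValueError(f"unknown platform: {platform}")
    return SHARE_PATTERNS[platform].format(query=query)

def mastodon_share_url(instance: str, url: str, title: str = "") -> str:
    """Mastodon is per-instance: each instance exposes its own /share page."""
    query = urlencode({"text": f"{title} {url}".strip()})
    return f"https://{instance}/share?{query}"
```

A decentralized platform like Mastodon needs its own helper because there is no single hostname to hard-code; the user has to tell you which instance they're on.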

· Asides · Share this post


FCC scraps old speed benchmark, says broadband should be at least 100Mbps

"The Federal Communications Commission today voted to raise its Internet speed benchmark for the first time since January 2015, concluding that modern broadband service should provide at least 100Mbps download speeds and 20Mbps upload speeds."

Finally. The previous 25Mbps down, 3Mbps up benchmark was pathetic - and even that is above many people's actual connections in practice.

The new standard should pull other FCC regulations up with it, which is a welcome change:

"With a higher speed standard, the FCC is more likely to conclude that broadband providers aren't moving toward universal deployment fast enough and to take regulatory actions in response. During the Trump era, FCC Chairman Ajit Pai's Republican majority ruled that 25Mbps download and 3Mbps upload speeds should still count as "advanced telecommunications capability," and concluded that the telecom industry was doing enough to extend advanced telecom service to all Americans."

· Links · Share this post


CEO of Data Privacy Company Founded Dozens of People-Search Firms

Something I've long suspected is often the case: the founder of a data privacy firm also ran dozens of the people-search services that his firm charges a fee to remove people from.

"Onerep’s “Protect” service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites."

· Links · Share this post


Coming back to Obsidian

It is useful, after all.

1 min read

After some to-ing and fro-ing, I finally cracked how to make Obsidian useful.

I’d previously been trying to work in the open and update my thoughts for a public website there — but, of course, that’s what my personal site is for! So it didn’t click, because I was already saving notes to a space that people could read.

I’ve started keeping daily notes in a private vault, linking to people, products, and concepts as it makes sense, but not bothering to actually create resources at the other end of those links until there’s something that needs to live there. Backlinks are on so I can always see what’s referencing a particular resource.

And it’s clicked. I’m finding it particularly useful to keep track of features and products that aren’t part of my daily workstream but still are something I need to remember the status of (and when I last interacted with them). Suddenly what felt obtuse and overcomplicated seems easy and incredibly useful. I get it!

· Asides · Share this post


A smackdown over programmatic ads and why reader revenue is crucial

"There’s a reason that some 2,900 newspapers have closed since 2005, and that reason is the ad revenues publishers were hoping for to support what were initially free websites never materialized."

What's left: paywalls and patronage.

I've become much more bullish about patronage than paywalls for journalism content, and working for two non-profit newsrooms with exactly that model has only solidified that opinion. The Guardian is an illustration of how well it can work - as are ProPublica and The 19th.

What the decline of programmatic ad revenue does make me wonder is: what's going to happen to the platforms that are sustained the same way?

· Links · Share this post


Former Treasury Secretary Mnuchin is putting together an investor group to buy TikTok

"Former Treasury Secretary Steven Mnuchin is building an investor group to acquire ByteDance’s TikTok, as a bipartisan piece of legislation winding its way through Congress threatens its continued existence in the U.S."

Come on. This is brazen.

Whatever you think of TikTok, I'm not excited about the idea that the US can force a sale of an internet service because it's under the control of another company. It seems to me that this undermines the effectiveness of the internet itself: the idea that anyone can reach anyone.

"There’s no way that the Chinese would ever let a U.S. company own something like this in China," Mnuchin said. Sure - they have the Great Firewall. We don't. We're supposed to be something different.

· Links · Share this post


Seeking a first-class Fediverse platform

A place to read, to discuss, to share.

2 min read

Subsequent conversations have convinced me that the assertions I made about the Fediverse for media organizations were right. There’s a huge need, a huge opportunity, and the underlying technology is there.

The thing that’s a bit missing is a first-class Fediverse platform. Mastodon itself has become a bottleneck. Its design decisions are all reasonable in their own right, but there’s a need for something that goes beyond copying existing siloed services like Twitter. (Pixelfed, similarly, apes Instagram; Lemmy apes Reddit.) What does a Fediverse service look like that’s been designed from the ground up to meet a user need rather than copy something that already exists? And what if that user need is a first-class reader experience with the ability to comment and share interesting stuff with your friends?

I’m not bullish on squeezing long-form content into a microblogging platform, whether on Mastodon or X. Long-form content isn’t best consumed as part of a fast-moving stream of short updates. But the fact that both have those features — and that people are syndicating full-length articles straight to the Fediverse despite the poor UX — points to an interesting deer path to pave.

What if we had a great experience that ties together both short-form discussion and re-sharing and long-form reading, in a way that better showcases both kinds of content and realizes that the way we consume both is different? What if it had a beautiful, commercial-level design? And what if it remained tied to the open social web at its core, and pushed the capabilities of the protocols forward as it released new features and discovered new user needs?
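For what it’s worth, the protocol side already supports this split: the ActivityStreams vocabulary underneath the Fediverse defines both a short `Note` type and a long-form `Article` type. A minimal Python sketch of how a reader-first client might route the two differently (the object fields come from the vocabulary; the routing logic is entirely hypothetical):

```python
# Minimal ActivityStreams objects (fields per the ActivityStreams 2.0 vocabulary).
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": "Short update: shipped the reader prototype.",
}

article = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Article",
    "name": "Seeking a first-class Fediverse platform",
    "content": "<p>Long-form body, potentially thousands of words.</p>",
}

def render_mode(obj):
    """Hypothetical client logic: send long-form objects to a calm reader
    view instead of squeezing them into the fast-moving stream."""
    return "reader" if obj.get("type") == "Article" else "stream"

print(render_mode(note), render_mode(article))
```

The point of the sketch is that the “poor UX” is a client rendering decision, not a protocol limitation: the type information needed to showcase each kind of content differently is already on the wire.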

If I had a year and funding, this is what I’d be working on.

· Asides · Share this post


TikTok and the Fediverse

"The House bill, then, is an acknowledgment that algorithmic curation of feeds is a powerful feature that can have a major influence on individuals and society. It at least makes the point that allowing a foreign company, under its own government’s influence, to have some level of control of the algorithm, is a potential danger for domestic security."

I'm honestly troubled by the TikTok legislation. I think Evan has a partial solution here: decoupling platforms from curation algorithms seems important.

I think there's also a lot to be said for not allowing any platform to get this big, regardless of national origin. If any company is big enough for its curation algorithm to influence national security, isn't that a problem? We saw Facebook influence multiple elections in worrying ways. I'd rather see lots of smaller platforms, linked with common protocols. And I'd support legislation designed to help prevent a small number of platforms from dominating our media consumption.

· Links · Share this post


EU Parliament passes AI Act in world’s first attempt at regulating the technology

Europe once again leads the way by passing meaningful AI regulation. Banned unacceptable-risk uses of AI include facial recognition, social scoring, and emotion recognition at schools and workplaces.

"The use of real-time facial recognition systems by law enforcement is permitted “in exhaustively listed and narrowly defined situations,” when the geographic area and the length of deployment are constrained."

I'm all in favor of these changes, but it's a little bit sad that this sort of regulation is always left up to the EU. American regulators appear to be sleeping.

· Links · Share this post


Exploring AI, safely

I’ve been thinking about the risks and ethical issues around AI in the following buckets:

  • Source impacts: the ecosystem impact of generative models on the people who created the information they were trained on.
  • Truth and bias: the tendency of generative models to give the appearance of objectivity and truthfulness despite their well-documented biases and tendency to hallucinate.
  • Privacy and vendor trust: because the most-used AI models are provided as cloud services, users can end up sending copious amounts of sensitive information to service providers with unknown chain of custody or security stances.
  • Legal fallout: if an organization adopts an AI service today, what are the implications for it if some of the suits in progress against OpenAI et al. succeed?

At the same time, I’m hearing an increasing number of reports of AI being useful for various tasks, and I’ve been following Simon Willison’s exploratory work with interest.

My personal conclusions for the above buckets, such as they are, break down like this:

  • Source impacts: AI will, undoubtedly, make it harder for lots of people across disciplines and industries to make a living. This is already in progress, and continues a trend that was started by the internet itself (ask a professional photographer).
  • Truth and bias: There is no way to force an LLM to tell the truth or declare its bias, and attempts to build less-biased AI models have been controversial at best. Our best hope is probably well-curated source materials and, most of all, really great training and awareness for end-users. I also wouldn’t let generative AI produce content that sees the light of day outside of an organization (e.g. to write articles or to act as a support agent); it feels a bit safer as an internal tool that helps humans do their jobs.
  • Privacy and vendor trust: I’m inclined to try and use models on local machines and cloud services that follow a well-documented and controllable trust model, particularly in an organizational context. There’s a whole set of trade-offs here, of course, and self-hosted servers are not necessarily safer. But I think the future of AI in sensitive contexts (which is most contexts) needs to be on-device or on home servers. That doesn’t mean it will be, but I do think that’s a safer approach.
  • Legal fallout: I’m not a lawyer and I don’t know. Some but not all vendors have promised users legal indemnity. I assume that the cases will impact vendors more than downstream users — and maybe (hopefully?) change the way training material is obtained and structured to be more beneficial to authors — but I also don’t know that for sure. The answer feels like “wait and see”.
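To make the on-device point concrete: a locally hosted model is typically reached over a loopback-only HTTP API, so prompts never leave the machine. A hedged Python sketch that only builds the request (the endpoint and payload shape follow an Ollama-style local runner, and the model name is a placeholder; actually sending the request is left out):

```python
import json
from urllib.parse import urlparse

# Assumed local endpoint (Ollama-style). Nothing sent here leaves the machine.
LOCAL_ENDPOINT = "http://127.0.0.1:11434/api/generate"

def build_request(prompt, model="llama3"):
    """Build a request body for a locally hosted model, refusing any host
    that isn't loopback. The trust model is the whole point: sensitive text
    stays on-device instead of going to a third-party cloud service."""
    host = urlparse(LOCAL_ENDPOINT).hostname
    assert host in ("127.0.0.1", "localhost"), "refusing to send data off-device"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return LOCAL_ENDPOINT, body

url, body = build_request("Summarize this internal memo for me.")
```

A real deployment would add the HTTP call and error handling, but the guard on the hostname captures the trade-off I’m describing: the privacy property comes from where the model runs, not from the vendor’s terms of service.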

My biggest personal conclusion is, I don’t know! I’m trying not to be a blanket naysayer: I’ve been a natural early adopter my whole life, and I don’t plan to stop now. I recently wrote about how I’m using ChatGPT as a motivational writing partner. The older I get, the more problems I see with just about every technology, and I’d like to hold onto the excitement I felt about new tech when I was younger. On the other hand, the problems I see are really big problems, and ignoring those outright doesn’t feel useful either.

So it’s about taking a nimble but nuanced approach: pay attention to both the use cases and the issues around AI; keep looking at organizational needs and the kinds of organic “shadow IT” uses that pop up as people need them; and figure out where a comfortable line sits between ethics, privacy and legal needs, and utility.

At work, I’m going to need to determine an organizational stance on AI, jointly with various other stakeholders. That’s something that I’d like to share in public once we’re ready to roll it out. This post is very much not that — this space is always personal. But, as always, I wanted to share how I’m thinking about exploring.

I’d be curious to hear your thoughts.

· Posts · Share this post