 

Peter Capaldi says posh actors are smooth, confident and tedious

[Vanessa Thorpe in The Guardian]

“Art is about reaching out. So I think it’s wrong to allow one strata of society to have the most access.”

This is an older article, but it resonated with me so much that I wanted to share it immediately.

This is so important, and a sign of what we've lost:

“I went [to art school] because the government of the day paid for me to go and I didn’t have to pay them back. There was a thrusting society then, a society that tried to improve itself. Yes, of course, it cost money. But so what? It allowed people from any kind of background to learn about Shakespeare, or Vermeer.”

A culture where only the rich are afforded the space, training, and platform to make art is missing the voices that make it special.

The same goes for other spaces: newsrooms where only the wealthy can serve as journalists cannot accurately represent the people who depend on them. Technology without class diversity is myopic. Above all else, a culture of rich people is boring as hell.

Art school - like all schooling - should be free and available to everyone. It's tragic that it's not. We all lose out, regardless of our background.

[Link]

· Links · Share this post

 

Fighting bots is fighting humans

[Molly White]

"I fear that media outlets and other websites, in attempting to "protect" their material from AI scrapers, will go too far in the anti-human direction."

I've been struggling with this.

I'm not in favor of the 404 Media approach, which is to stick an auth wall in front of your content, forcing everyone to register before they can load your article. That isn't a great experience for anyone, and I don't think it's sustainable for a publisher in the long run.

At the same time, I think it's fair to try and prevent some bot access at the moment. Adding AI agents to your robots.txt - although, as recent news has shown, perhaps not as effective a move as it might be - seems like the right call to me.
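As a sketch of what that looks like in practice: a site's robots.txt can single out AI crawlers by their published user-agent tokens. The tokens below (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google's AI training) are the ones these vendors have documented as far as I'm aware, but names change, so check each vendor's own documentation before relying on this.

```
# Ask known AI training crawlers to stay out,
# while leaving ordinary browsing and search untouched.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else is welcome.
User-agent: *
Allow: /
```

And, as the recent news referenced above shows, this is a request rather than an enforcement mechanism: a crawler that chooses to ignore it can.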

An AI agent clearly isn't a human. But for ad hoc queries - where an agent is retrieving content from a website in direct response to a user's request - it is acting on behalf of one. Is it a browser, then? Maybe? If it is, we should just let it through.

It's accessing articles as training data that I really take issue with (as well as the subterfuge of not always advertising what it is when it accesses a site). In these cases, content is copied into a corpus in a manner that's outside of its licensing, without the author's knowledge. That sucks - not because I'm in favor of DRM, but because often the people whose work is being taken are living on a shoestring, and the software is run by very large corporations who will make a fortune.

But yes: I don't think auth walls, CAPTCHAs, paywalls, or any added friction between content and audience are a good idea. These things make the web worse for everybody.

Molly's post is in response to an original by Manu Moreale, which is also worth reading.

[Link]

· Links · Share this post

 

AP to launch sister organization to fundraise for state, local news

"Governed by an independent board of directors, the 501(c)3 charitable organization will help AP sustain, augment and grow journalism and services for the industry, as well as help fund other entities that share a commitment to state and local news."

Fascinating! And much needed.

I'm curious to learn how this fits into other fundraising efforts, like the $500M Press Forward initiative for local news that was announced last year.

I do also have a question about whether all this centralized philanthropy is sustainable. What happens to these newsrooms if the foundation dollars go away? Are they incentivized to find their own business and fundraising models, or does this create a kind of dependence that might be harmful in the long run?

My hope, of course, is that these efforts are the shot in the arm that journalism needs, and that the newsrooms which receive this funding will be sustainable and enduring. It's certainly lovely to see the support.

[Link]

· Links · Share this post

 

Some polite words regarding the British General Election on July 4

Bring out the champagne.

4 min read

Apropos of nothing, here's some lettuce

On July 4th I’ll be on the beautiful Oregon coast, and I plan to have a bottle of champagne handy. Not so much because of the American Independence Day — although there’s nothing wrong with celebrating that, and I’m sure I will — but because of the British election happening on the same day.

It’s been a long fourteen years of the worst government imaginable: a Conservative Party that brought about the formidable economic and social own-goal of Brexit, an intellectual blunderbuss to the foot followed by several subsequent very practical blunderbusses to the crotch, followed by a succession of the most ineffectual, rotten-souled Prime Ministers in British history, one of whom famously had less staying power than a literal salad. It was brought into being by a coalition aided by Nick Clegg (who has since made a career of putting a shiny face on terrible things), and then pitifully trumped along in a meandering path fueled by middling opposition, middle-England small-island nationalism, and the distant, smarmy memory of Tony Blair and the Iraq War. (Here I mean lowercase T trump, which means fart, rather than uppercase T Trump, which means Trump.)

I’m not particularly excited about Keir Starmer’s Labour. It seems to be a sort of 21st century riff on John Major’s Conservative Party of the mid-nineties, presumably in an effort to reach old-school Conservative voters who are sick of the Asda own-brand lunacy of the modern incarnation of their party, knowing that actual left-wing voters have nowhere else to turn. So this isn’t me hoping for major change from him; I expect very little to actually happen. But I am absolutely psyched for the Tories to have their well-heeled posteriors handed to them and their nannies with a fork and knife, finally. It’s been a long time coming.

If it sounds like it’s personal: yes, it’s personal. I’m a European citizen who grew up in the UK and left for the US to look after a parent, assuming I’d just go back afterwards. It didn’t even occur to me that David Cameron would hold a ham-fisted referendum on European membership, and it didn’t seem to occur to him that he’d lose it and the country would vote to leave. (Ham-fisted, of course, is the way he likes it.) I took it very personally; I still take it very personally; if this post feels like I’m being unusually effluviant, please know that I am holding myself back.

I’m under no illusions of any major change, even outside of Keir Starmer’s Primark blandness. All these runts will get cushy jobs as chairmen of boards and minty after-dinner speakers. Britain is effed to infinity, and there’s only so much play you can even have within that framework, particularly considering that nobody seems to want to shift the Overton window even slightly leftwards. Heaven forbid you protect the poor and vulnerable and strive to build an inclusive society within a lasting peace. Still, the catharsis of seeing those cordyceps zombie-suits roundly voted away from the nominal seat of power, even if their ilk will continue to be the effective ruling class for evermore, will give me some superficial glee. So, champagne.

Oh, and I’m excited to see Nigel Farage get his, too.

Now, back to technology and stuff.

· Asides · Share this post

 

Law enforcement is spying on thousands of Americans’ mail, records show

[Drew Harwell at the Washington Post]

"Postal inspectors say they fulfill [requests from law enforcement to share information from letters and packages] only when mail monitoring can help find a fugitive or investigate a crime. But a decade’s worth of records, provided exclusively to The Washington Post in response to a congressional probe, show Postal Service officials have received more than 60,000 requests from federal agents and police officers since 2015, and that they rarely say no."

I wish this were surprising. Something similar seems to have gone on in every trusted facet of American life: from cell phone providers to online library platforms to license plate readers on the roads. It's all part of an Overton window shift into pervasive surveillance that has been ongoing for decades.

Senator Ron Wyden is right to be blunt:

“These new statistics show that thousands of Americans are subjected to warrantless surveillance each year, and that the Postal Inspection Service rubber stamps practically all of the requests they receive.”

We shouldn't accept it. And yet, by and large, we do.

[Link]

· Links · Share this post

 

The Future of Fashion Commerce Is a Designer's AI Bot Saying You Look Great and Your Personal AI Bot Sifting Through the Bullshit

[Hunter Walk]

"The best commerce platforms will be constantly grooming you, priming you, shaping you to buy. The combination of short-term and long-term value that leads to the optimal financial outcome for the business."

I think this is inevitably correct: the web will devolve into a battle between different entities who are all trying to persuade you to take different actions. That's already been true for decades, but it's been ambient until now; generative AI gives it the ability to literally argue with us. Which means we're going to need our own bots to argue back.

Hunter's analogy of a bot that's supposedly in your corner calling bullshit on all the bots trying to sell things to you is a good one. Except, who will build the bot that's in your corner? Why will it definitely be so? Who will profit from it?

What a spiral this will be.

[Link]

· Links · Share this post

 

Why does moral progress feel preachy and annoying?

[Daniel Kelly and Evan Westra in Aeon]

"Many genuinely good arguments for moral change will be initially experienced as annoying. Moreover, the emotional responses that people feel in these situations are not typically produced by psychological processes that are closely tracking argument structure or responding directly to moral reasons."

This is a useful breakdown of why arguments for social progress encounter so much friction, and why the first emotional response may be to roll our eyes. It's all about our norm psychologies - and some people have stronger reactions than others.

As the authors make clear here, people who are already outside of the mainstream culture for one reason or another (immigration, belonging to a minority or vulnerable group, and so on) already feel friction from the prevailing norms being misaligned with their own psychology. If that isn't the case, change is that much harder.

But naming it is at least part of the battle:

"Knowing this fact about yourself should lead you to pause the next time you reflexively roll your eyes upon encountering some new, annoying norm and the changes its advocates are asking you to make. That irritation is not your bullshit detector going off."

Talking about these effects, and understanding their origins, helps everyone better understand their reactions and get to better outcomes. Social change is both necessary and likely to happen regardless of our reactions. It's always better to be a person who celebrates progressive change rather than someone who creates friction in the face of it.

[Link]

· Links · Share this post

 

Systems: What does a board of directors do?

[Anil Dash]

"I realize that most people who've never been in the boardroom have a lot of questions (and often, anxieties) about what happens on a board, so I wanted to share a very subjective view of what I've seen and learned over the years."

This is great, and jibes with my experiences both being on boards and supporting them as a part of various organizations.

The most functional boards I've seen do what Anil describes here: they're pre-briefed and are ready to have a substantive discussion in a way that pushes the organization forward. Board meetings have a heavy reporting component, for sure, but the discussion and working sessions are always the most meaningful component.

This is also often true, and a challenge:

"I believe in the structure of a board (usually along with some separate advisors) to help an organization reach its fullest potential, in much the same way as I believe in governments having separate branches with separate forms of accountability and appointment. In practice, having nearly all-powerful executives select the membership of the organization that's meant to hold them accountable tends to fail just as badly in business or non-profits as it does in governments."

The board meetings I've attended that are the most robust and open to discussion and genuine debate have also been the ones attached to the most successful companies. I don't think it's quite causation, but rather two things that come from a particularly pragmatic attitude towards running a business: one where outside perspectives and differences of opinion are a strength, not a threat.

[Link]

· Links · Share this post

 

Don't let them tell you what to think

A protest

Last year I wrote a little about how I hope AI will be used, using the GPS navigation in my car as an analogy:

I like my GPS. I use it pretty much every time I drive. But it’s not going to make the final decision about which way I go.

Perhaps it seems obvious, but I’d like to extend that analogy to news, media, and influencers.

We all need journalism — and particularly investigative journalism — to inform us and help us make better decisions. We need to take in sources, form opinions based on them, and vote accordingly as a baseline. But democratic participation doesn’t start and end with voting: we also need to know how to use our voices, spend our money, organize our communities, and, in areas we feel particularly strongly about, protest.

I do think we all need to use our voices. I’m wary when people are silent: whether this is their intention or not, silence is acquiescence to the status quo. If our government is doing something harmful on our behalf and we don’t speak out about it, or an atrocity is taking place somewhere and we choose not to speak up, our lack of action is an endorsement. Change only happens when people speak up.

But this only makes sense when we make up our own minds. If our opinions simply copy what's popular, or what a particular news outlet has to say, then we're not exercising our democratic rights at all. We're handing over that power to someone else. When we let someone else make up our mind for us, using our voice is just amplifying theirs.

When people complain that we're not all watching the same newscasts anymore, that's the world they want to recreate: one where we're all getting the same narrow band of information and forming opinions in the same way. That's not democracy; that's homogeneity. It's worth remembering whose voices were heard in that world. How diverse was it? Who was really represented?

Similarly, while there is certainly disinformation put out in the world that’s designed to coerce people to exercise their democratic rights in a particular direction (often towards fascism), some people have also used the words “misinformation” and “disinformation” (or “fake news”) to describe reporting that they simply don’t like.

This is the playbook of Trumpworld. When all of journalism is painted as biased and “fake news” — as Trump has taken pains to do — supporters are left with the officially-endorsed channels like Fox News, OANN, and Newsmax. They receive a narrow band of information that becomes the basis of their opinion-making. For example, during Trump’s presidency and beyond, these channels frequently pushed narratives that undermined trust in mainstream media, labeled critical reports as conspiracies, and even presented alternative facts about significant events like the COVID-19 pandemic and the 2020 election results. This systematic discrediting of journalism fosters an echo chamber that isolates its audience from opposing viewpoints and critical analysis.

But there’s a streak of this in Democrat-land, too: a subset of the community that’s sometimes been described as “blue MAGA” for its use of similar rhetoric. Here, any voice that criticizes Biden is also described as fake news, or even a Putin plot. For instance, when progressive commentators or journalists critique Biden’s policies on immigration or healthcare, they are sometimes met with accusations of undermining the Democratic agenda or aiding Republican narratives. This phenomenon isn't as pervasive as Trumpworld’s approach, but it highlights a discomfort with internal criticism within certain Democratic circles. While I’d clearly prefer a Democratic America to one run by Trump, this dismissal of uncomfortable sources as being fake because we don’t like them is no less undemocratic.

And, of course, the same goes for people who learn how to vote and what to think from their places of worship. In some religious communities, congregants are encouraged to vote in line with specific doctrinal beliefs, which can limit their exposure to broader societal issues and alternative viewpoints. It’s a hell of a waste of a free mind and a democratic bill of rights.

We need to consume information from a variety of sources, be critically aware of the biases and origins of those sources so that we can properly evaluate and contextualize them, and then make up our own minds, regardless of whether our conclusions are popular or not.

Making up our own minds has gotten a bad name lately through people who “do their own research” and end up promoting ivermectin for covid, believing that vaccines cause autism, or that climate change isn’t real. I’m not arguing for abandoning critical reasoning or scientific fact here; quite the opposite. The antidote to this kind of quackery is stronger critical thinking and source evaluation, not — as some have argued — restricting our information diet to a few approved sources.

New voices and sources matter. The world changes. Lots of things that were wildly unpopular and sneered at in the past are now part of ordinary life. For example:

  • Abolition
  • Women’s suffrage
  • Access to birth control
  • Interracial marriage
  • Marriage equality
  • The 40-hour workweek

Each of these things was hard-won by people who were very much outside the mainstream, until they weren't. Consider what it would have meant to be silent while each of those struggles for basic rights was underway, or what it might say about a person if they stayed silent because doing otherwise would affect their job prospects or earning potential. These ideas weren't popular to begin with, but they were right.

Even the internet was dismissed as a weird fad in the nineties. The mainstream press didn’t think it would catch on; people inside newsrooms had to fight to establish the first news websites. Memorably, one British magazine called it “the new name for ham radio” — just a few years before it took over the world.

What matters is not adherence to the values of a tribe. We aren't better people if we demonstrate that our values are the same as an accepted set. The world isn't like supporting a sports team, where you put on a red or a blue jersey and sing the same songs in the stands. It's nuanced, and each of us can and should have our own perspectives, informed by our lived experiences and those of the people around us, and by a set of diverse, freely-reported information sources.

For the avoidance of doubt, my values are vehemently anti-war, pro-immigration, and fiercely on the side of diversity, equity, and inclusion. I believe in the right to choose. I believe that trans women are women and trans men are men. I believe that too-small government leads to big corporate power, and too-big government leads to authoritarianism, so a continual balance must be found. I believe that universal healthcare is a fundamental human right. I believe guns must be controlled. I roll my eyes when people complain about socialism in America, because usually what they mean when they use that word is what I'd consider to be basic infrastructure. I think there needs to be a ceasefire in Gaza and in Ukraine. I dislike patriotism because I think it encourages people to care more about people who are geographically close to them than about everyone else. I believe Ayn Rand's "morality of self-interest" is an excuse to act without compassion. I like startups and believe in the right to start and run a business - and that businesses can be the vehicle for great change. I think climate change is not just real but also behind many of the geopolitical decisions we're seeing play out today. I believe that the civil rights marches and movements of the 2020s are the signs of really exciting progressive change. I believe Trump must not become President. I believe a progressive world is a better world.

And I believe in talking about those things and why I believe them. Loudly. Even when it’s uncomfortable. There is no media outlet I’m aware of that publishes based on that exact set of values. You might nod your head in agreement with some of them and be angered by others.

The news I read and the information I gather is my GPS. I appreciate the signal, and it will certainly inform my actions and beliefs. I’m still going to find my own way.

· Posts · Share this post

 

I Will Piledrive You If You Mention AI Again

[Nikhil Suresh at Ludicity]

"This entire class of person is, to put it simply, abhorrent to right-thinking people. They're an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they've learned their lesson, a prison I'm fundraising for."

I enjoyed this very much.

Here's the thing, though: I don't think what Nikhil wants will happen.

I mean, don't get me wrong: it probably should. The author is a leader in his field, and his exasperation at the hype train is well-earned.

But it's not people like Nikhil who actually make the decisions, or invest in the companies, or make the whole industry (or industries) tick over. Again: it should be.

What happens again and again is that people who see that they can make money out of a particularly hyped technology leap onto the bandwagon, and then market the bandwagon within an inch of everybody's lives. Stuff that shouldn't be widespread becomes widespread.

And here we are again with AI.

This is exactly right:

"Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain."

And this:

"It did not end up being the crazy productivity booster that I thought it would be, because programming is designing and these tools aren't good enough (yet) to assist me with this seriously."

There is work that will be improved with AI, but it's not something that most industries will have to stop everything and leap on top of. The human use cases must come first with any technology: if you have a problem that AI can solve, by all means, use AI. But if you don't, hopping on the hype train is just going to burn you a lot of money and slow your actual core business down.

[Link]

· Links · Share this post

 

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

[Dave Maass and Cooper Quintin at EFF]

"When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats."

As the EFF points out, it's often vulnerable software - and even when it's not, it violates the security principle of only collecting the information you need. Information security and data strategies are not core law enforcement skillsets, and the software they buy is often oversold.

As the EFF explains:

"That partially explains why, more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023."

The use of these tactics seems uncontrolled - perhaps this is one area where legislation could help.

[Link]

· Links · Share this post

 

Social-Media Influencers Aren’t Getting Rich—They’re Barely Getting By

[Sarah E. Needleman and Ann-Marie Alcántara at the Wall Street Journal]

"Earning a decent, reliable income as a social-media creator is a slog—and it’s getting harder. Platforms are doling out less money for popular posts and brands are being pickier about what they want out of sponsorship deals."

For many kids, becoming an influencer is the new becoming a sports star: in enormous numbers, it's what they want to be. And if you dare to say that it's not a real job, you're likely to be drowned out by complaints and contradictions.

But it isn't, and this article makes it clear:

"Last year, 48% of creator-earners made $15,000 or less, according to NeoReach, an influencer marketing agency. Only 13% made more than $100,000."

Of course, some people really did shoot to fame and have been doing really well. But there aren't many MrBeasts or Charli D'Amelios in this world, and the lure of fame has trapped less lucky would-be influencers in cycles of debt and mental illness.

That's despite sometimes-enormous followings: hundreds of thousands to millions of people, and hundreds of millions of views a month. The economics of the platforms are such that, even at those numbers, you can barely scrape by.

I like the advice that, instead, you should cultivate a genuine expertise and use social media to promote offsite services you provide around that. It might be that a following can land you a better job, or help you build up a consultancy. Trying to make money from ads and brand sponsorships is a losing game - and thousands of people are losing big.

[Link]

· Links · Share this post

 

Sharing Openly About ShareOpenly

[Alan Levine at CogDogBlog]

"ShareOpenly breaks the door even wider than sharing to Mastodon, and I intend to be using it to update some of my examples listed above. Thanks Ben for demonstrative and elegant means of sharing."

Thank you, Alan, for sharing!

There's more to come on ShareOpenly - more platforms to add, and some tweaks to the CSS so that the whole thing works better on older devices or smaller phone screens. It's a simple tool, but I'm pleased with how people have reacted to it, and how it's been carried forward.

There are no terms to sign and there's nothing to sign up for; adding a modern "share this" button to your site is as easy as following a few very simple instructions.
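To sketch just how simple the integration is: a ShareOpenly button is only a link to shareopenly.org carrying the page's address and some text. The parameter names here ("url" and "text") are my reading of the published instructions - treat them as assumptions and check shareopenly.org for the current format.

```python
from urllib.parse import urlencode

def share_link(page_url: str, message: str) -> str:
    """Build a ShareOpenly share URL for a page.

    Assumes the "url" and "text" query parameters described in the
    ShareOpenly instructions; verify against shareopenly.org.
    """
    query = urlencode({"url": page_url, "text": message})
    return f"https://shareopenly.org/share/?{query}"

print(share_link("https://example.com/post", "Worth a read"))
# https://shareopenly.org/share/?url=https%3A%2F%2Fexample.com%2Fpost&text=Worth+a+read
```

A site would simply point its "share this" button at that URL; ShareOpenly then asks the visitor where they'd like to share.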

[Link]

· Links · Share this post

 

Progress on the book

1 min read

A sound shook Frances fully awake. Her dreams faded quickly into the cold air, her sleeping memories of San Francisco collapsing into the smell of stone and moss and rot.

There was someone in the house.

And so begins The Source, at least as the draft stands today.

What follows is an adventure that touches on accelerationism, climate change, capital, and the guilt of culpability.

I’m getting there.

· Asides · Share this post

 

Succor borne every minute

[Michael Atleson at the FTC Division of Advertising Practices]

"Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods."

The FTC gets involved in the obviously rife practice of overselling the capabilities of AI services. These are solid guidelines, and hopefully the precursor to more meaningful action when vendors inevitably cross the line.

While these points are all important, for me the most pertinent is the last:

"Don’t violate consumer privacy rights. These avatars and bots can collect or infer a lot of intensely personal information. Indeed, some companies are marketing as a feature the ability of such AI services to know everything about us. It’s imperative that companies are honest and transparent about the collection and use of this information and that they don’t surreptitiously change privacy policies or relevant terms of service."

It's often unclear how much extra data is being gathered behind the scenes when AI features are added. This is where battles will be fought and lines will be drawn, particularly in enterprises and well-regulated industries.

[Link]

· Links · Share this post

 

United Airlines seat ads: How to opt out of targeted advertising

[Michael Grothaus at FastCompany]

"United Airlines announced that it is bringing personalized advertising to the seatback entertainment screens on its flights. The move is aimed at increasing the airline’s revenue by leveraging the data that it has on its passengers."

Just another reason why friends don't let friends fly United. We should all be reducing our air travel overall anyway, given the climate crisis, and in a world where we all fly less, shouldn't we choose a better experience?

This sounds like the absolute worst:

"United believes its advertising network will be appealing to brands because “there is the potential for 3.5 hours of attention per traveler, based on average flight time.”"

Passengers from California, Colorado, Connecticut, Virginia, and Utah can opt out of having their private information used to show targeted ads to them for the duration of what sounds like an agonizing flight. Passengers from other US States are out of luck - at least until their legislatures also pass reasonable privacy legislation.

Other airlines are removing seat-back entertainment to reduce fuel consumption: planes with seat-back screens generally burn more. So on top of the baseline climate impact of the air travel industry, there's a real additional climate cost here. United is making a revenue decision with all kinds of negative impacts, and it shouldn't be rewarded for it.

[Link]

· Links · Share this post

 

Perplexity AI Is Lying about Their User Agent

[Robb Knight]

Perplexity AI doesn't use its advertised browser string or IP range to load content from third-party websites:

"So they're using headless browsers to scrape content, ignoring robots.txt, and not sending their user agent string. I can't even block their IP ranges because it appears these headless browsers are not on their IP ranges."

On one level, I understand why this is happening, as anyone who's ever written a scraper (or scraper mitigations) might: the crawler that gathers training data likely does send the correct user agent string, but on-demand retrievals likely don't, to avoid being blocked. That's not a good excuse at all, but I bet that's what's going on.

This is another example of the core issue with robots.txt: it's a handshake agreement at best. There are no legal or technical restrictions imposed by it; we all just hope that bots do the right thing. Some of them do, but a lot of them don't.
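To make the handshake nature of robots.txt concrete, here's a minimal sketch using Python's standard urllib.robotparser ("ExampleBot" is a made-up crawler name): the parser will tell a well-behaved client what the file asks, but nothing compels a client to consult it at all.

```python
from urllib.robotparser import RobotFileParser

# A tiny robots.txt that asks a hypothetical crawler to stay out,
# while allowing everyone else.
rules = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved client checks before fetching...
print(parser.can_fetch("ExampleBot", "https://example.com/article"))
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))

# ...but the check is purely voluntary: a scraper that never calls
# can_fetch(), or ignores its answer, fetches the page anyway.
```

That voluntariness is the whole problem: compliance lives entirely on the client side.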

The only real way to restrict these services is through legal rules that create meaningful consequences for these companies. Until then, there will be no sure-fire way to prevent your content from being accessed by an AI agent.

[Link]

· Links · Share this post

 

Pentagon ran secret anti-vax campaign to incite fear of China vaccines

[Chris Bing and Joel Schechtman at Reuters]

"The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk."

Reading this, it certainly seems indefensible, although unfortunately not out of line with other US foreign policy efforts. Innocent people died because of this US military operation.

It's a reflection of the simple idea, which seems to have governed US foreign policy for almost a century, that foreign lives matter less in the quest for dominance over our perceived rivals.

Even if you do care about America more than anywhere else, this will have hurt at home, too. The internet being what it is, it would also make sense that these influence campaigns made their way back to the US and affected vaccine uptake on domestic soil.

The whole thing feels like the military equivalent of a feature built by a novice product manager: someone had a goal that they needed to hit, and this was how they decided to get there. But don't get me wrong: I don't think this was an anomaly or someone running amok. This was policy.

[Link]

· Links · Share this post

 

On being human and "creative"

[Heather Bryant]

"What generative AI creates is not any one person's creative expression. Generative AI is only possible because of the work that has been taken from others. It simply would not exist without the millions of data points that the models are based upon. Those data points were taken without permission, consent, compensation or even notification because the logistics of doing so would have made it logistically improbable and financially impossible."

This is a wonderful piece from Heather Bryant that explores the humanity - the effort, the emotion, the lived experience, the community, the unique combination of things - behind real-world art that is created by people, and the theft of those things that generative AI represents.

It's the definition of superficiality, and as Heather says here, living in a world made by people, rooted in experiences and relationships and reflecting actual human thought, is what I hope for. Generative AI is a technical accomplishment, for sure, but it is not a humanist accomplishment. There are no shortcuts to the human experience. And wanting a shortcut to human experience in itself devalues being human.

[Link]

· Links · Share this post

 

Escaping the 9-5

The silhouette of a man holding his arms out, representing freedom

Imagine a life where you dictate your own schedule, free from the confines of a traditional job.

That’s a thought experiment I’ve been playing with lately: what would it look like if this was my last ever job? How might I optimize my lifestyle for freedom?

By that I don’t mean that it would be the last time I needed to earn money. I work in non-profit news; nobody does this because they want to become rich beyond their wildest dreams. Even tech salaries feel distant from this vantage point. To be clear, I’m doing this work because it’s important, and I have no plans to leave.

Regardless, I think it’s an important thought experiment. What if this was the last time I worked a job with regular hours and a boss and a hierarchy? What would it look like to have a lifestyle that was less bound to working norms, so that I could choose how to spend my day, or my week, or my year?

This desire to seek a lifestyle less bound by traditional working norms is shaped by two big influences:

  • My working life in startups, which was very much self-driven
  • My own parents, who had their own publishing startup for a key part of my childhood

My parents’ ability to dictate their schedules and norms meant that I was able to have childhood experiences — in particular, trips to mainland Europe and the US — that would have been much harder otherwise. (These things didn’t need all that much money; they needed time.) That lifestyle did something else important, too: it showed me that it was attainable, and that a person doesn’t need a 9-5 to live. That perspective, in turn, allowed me to become a founder and build new things.

I would like to do the same for our son. Honestly, selfishly, I would also like to do it for me.

What are the roads to more independence when you aren’t independently wealthy?

Here are some options I’ve considered:

Startups

The first potential path to independence is through entrepreneurship.

I’ve founded two startups in my life. The first one was bootstrapped for the first couple of years before raising a round from British investors; the second was kicked off with a small amount ($50K) of accelerator seed money.

My life has changed since then. In particular, my capital needs have shot up. There’s a child and daycare and a mortgage in the picture, which is radically different from my life as a twenty-something prepared to live on Pot Noodles and scrape by with little money. A working life of open source, mission-driven startups, and non-profit news means that my savings are meager and wouldn’t support a new venture. A friends and family round is out of the question for me, as it is for anyone who doesn’t come from wealth.

Building a startup means working hard on it while holding down my day job, until it reaches the point where it has enough traction to raise a seed round. The barrier for that traction is rising steadily; it probably needs to be making tens of thousands of dollars a month for a seed investor to find it interesting. Still, that isn’t insurmountable — particularly with a co-founder. I have more product, engineering, and organizational growth skills than ever before, and I believe that I could do it.

But also: at the point where it’s making tens of thousands of dollars a month, assuming a low running cost, that’s more than enough to sustain me! It doesn’t need to be a high-growth startup. It could be a small business that is content to do quite well. A Zebra, perhaps. The disadvantage is that the upside is limited: it’s unlikely to make me wealthy beyond my wildest dreams. But what if that isn’t the goal? If the goal is freedom, a modest income is wonderful.

Consulting or Coaching

I have coaching training, and I’ve previously coached founders across a portfolio of mission-driven startups. In many ways, my roles as a CTO / Head of Engineering / Director of Technology have been largely coaching-based too: effective 1:1s and frameworks for feedback are the lifeblood of building a team.

I’ve also got strong product design and design thinking training, and have run workshops and design sprints with many teams. I understand product fundamentals, how to instill product thinking in a team, and can shepherd a product (and product team) from insight to launch.

And I’m technical. I can architect software and write code; I can advise teams about how to think about new technologies like AI, or how to build their own software. I’ve done this in many different contexts, many, many times.

So I think I can offer a lot. The challenge with consulting of any kind, though, is that it’s essentially a freelance job: you’re working from contract to contract, or from session to session, which means that you’re constantly having to sell yourself for the next thing, at least until your reputation has reached the point where people are asking for you.

Perhaps a retainer model would work: enough people subscribing to receive your attention and you have a steady income. Too many, though, and you can’t support them all. Too few, and you need to be in sales mode all the time. Still, it seems attractive from the provider end; the question, of course, is whether any customers would actually go for that. My guess is probably not — at least until you have enough glowing referrals.

Selling Products

In a way, this seems like the most attractive option: sell a finite product that doesn’t require your direct involvement, so that you can spend your time building the next product to sell, until you have a portfolio of products that sell without you and generate a reasonable income.

There are plenty of influencers who peddle “passive income”. My strong belief is that they're all scammers, and that what they're actually selling is the dream of financial independence. Still, there are clearly people who sell things on the internet, and some of them do quite well.

These include:

  • Books: Yay for books! Of course, the idea that you’ll make an income from books alone is a pipe dream. Even bestselling published authors often don’t leave their jobs until they’ve had a few successes in a row. More books are being published than ever, and it’s harder than ever to break out. Full disclosure: I am writing a book! But I don’t expect it to cover my costs. I’m doing it because there’s a story I want to tell. (And then I’ll do it again, because there are more stories to tell.)
  • Courses: Do people really make a lot of money from these? I mean, maybe. It feels like courses mostly fall into the same category as books: something you do because you want to share some knowledge or potentially demonstrate some expertise, but not something you do as a money-making venture in its own right.
  • Apps: Hmm. This was a great idea in 2008. Some software really does support independent developers, but my suspicion is that the apps that do best are actually services, which fit better into my “startup / small business” description above.

A Portfolio

I think this is the real answer: it isn’t just one thing. Likely, a repeatable income is cobbled together from threads of at least some of the above elements: building a service, offering coaching or consulting, and selling individual products.

One danger here is that attention is spread too thinly: because multiple threads are required, you necessarily have less time to spend on each. Consequently, the quality of each element may suffer.

This approach no longer puts all your eggs in one basket, which means there’s (in theory) more tolerance for one thread failing. But it also means you’re spinning plates, trying to keep them all going. Because there’s less time for each, and attention is split, there’s a real chance of all of them failing.

Still, overall, it feels like the most resilient approach, with the most room for experimentation. It’s by no means the least work, but minimizing work isn’t the goal: that would be maximizing freedom, which isn’t the same thing.

What do you think? Have you made this leap? Did it work for you? I’d love to learn more.

· Posts · Share this post

 

The Encyclopedia Project, or How to Know in the Age of AI

[Janet Vertesi at Public Books]

"Our lives are consumed with the consumption of content, but we no longer know the truth when we see it. And when we don’t know how to weigh different truths, or to coordinate among different real-world experiences to look behind the veil, there is either cacophony or a single victor: a loudest voice that wins."

This is a piece about information, trust, and the effect that AI is already having on knowledge.

When people said that books were more trustworthy than the internet, we scoffed; I scoffed. Books were not infallible; the stamp of a traditional publisher was not a sign that the information was correct or trustworthy. The web allowed more diverse voices to be heard. It allowed more people to share information. It was good.

The flood of automated content means that this is no longer the case. Our search engines can't be trusted; YouTube is certainly full of the worst automated dreck. I propose that we reclaim the phrase “pink slime” to encompass this nonsense: stuff that's been generated by a computer at scale in order to get attention.

So, yeah, I totally sympathize with the urge to buy a real-world encyclopedia again. Projects like Wikipedia must be preserved at all costs. But we have to consider whether all this will result in the effective end of a web where humans publish and share information. And if that's the case, what's next?

[Link]

· Links · Share this post

 

Innovation depends on inclusion


A few weeks ago I wrote about how solving the challenges facing the news industry requires fundamentally changing newsroom culture. While newsrooms have depended on referrals from social media and search engines to find audiences and make an impact, both of those segments are in flux, and audiences are therefore declining. The only way to succeed is to experiment and try new things — and, therefore, to have a culture where experimentation and trying new things are supported.

While the article was focused on journalism, the same changes are required for any organization to succeed in the face of rapid technological change. Building an open culture of experimentation is just as important for technology and manufacturing companies as it is for news: every organization experiences challenges in the face of major change.

Okay, but how?

Building a great culture is non-negotiable. The question, of course, is how you build it.

There are a few versions of this question to consider. For me, the most interesting are:

  1. How do you build a great culture from scratch in a new organization?
  2. How do you build a great culture in an established organization that has not yet invested in building one?
  3. How do you build a great culture in an established organization that has an entrenched bad culture?

Of course, to consider this, you have to have a firm opinion of what constitutes a good or bad culture. I strongly believe it relates to building an open, nurturing culture of experimentation, which I have previously written about in depth:

The best teams have a robust, intentional culture that champions openness, inclusivity, and continuous learning — which requires a lot of relationship-building both internally and with the organization in which it sits. These teams can make progress on meaningful work, and make their members feel valued, heard, and empowered to contribute.

One indicator

I believe the litmus test of such cultures is inclusivity.

Consider this hypothetical scenario: the individual contributors in an organization complain to management that underrepresented members of the team are not able to be heard in meetings and that their ideas are always overlooked.

The managers could react in a few different ways:

  1. Dismiss the complaints outright.
  2. Try to make the complaints go away as quickly as possible so everyone can get back to work.
  3. Listen deeply to the complaints and to the people affected, then work with the whole organization to get real training and build better processes in order to ensure everyone can participate and is heard.

Only the third option represents an open, inclusive organization. The first is obviously dismissive; the second is arguably even worse, as it allows managers to delude themselves that they’re doing something while actively trying to do the bare minimum. (They might privately roll their eyes at having to do it to begin with.) In the third scenario, managers stop and listen to the people affected and work with them in order to effect real change.

Now consider: what happens if nobody brings that complaint to begin with?

In a truly inclusive organization, nobody has to bring that complaint, because managers are constantly assessing the well-being of their teams, and likely receiving continuous, honest feedback. This doesn’t happen by default: the culture of the organization has to be well-considered to ensure that a focus on inclusivity is a cherished value, and that everyone feels emotionally safe to contribute without needing to put on a work persona or mask away aspects of their identities.

This has certain prerequisites. In particular, it’s impossible for an organization with a top-down leadership style to be inclusive, by definition. Even if upper management is truly representative of the demographics and backgrounds of the wider organization and its customers (which is never true), top-down leadership misses the perspectives and ideas of people lower down the hierarchy. Gestures like “ideas boxes” are performative at best. If they wouldn’t be out of place in your organization, its culture is probably top-down.

Organizations can foster inclusivity by implementing regular feedback mechanisms, providing training on both inclusivity and management, promoting transparent communication, and establishing clear systems and boundaries which allow managers to say “yes” more often.

The received wisdom is that rules are barriers to innovation. But it turns out that establishing the right kind of structure helps innovation thrive.

The tyranny of structurelessness

News often does have a top-down culture, inherited from the editorial cultures of old-school newspapers. It’s not alone: finance, law, and many other legacy industries also suffer from this problem. This is a giant headwind for any kind of real innovation, because every new idea essentially has to receive royal assent. There’s no leeway for experimentation, trying stuff, or getting things wrong — and managers are more likely to take credit for any successes. If something doesn’t fit into the manager’s worldview, the “no”s come freely. But, of course, that worldview is derived from their own experiences, backgrounds, and contexts, rather than the lived experiences of other people.

Structureless organizations, where culture has been under-invested in, tend to have these characteristics. If it’s not the managers dictating what happens, it’s the loudest people in the room, who tend to be the people who come from relative privilege. Without structure to ensure inclusivity, inevitably you’ll lose out on valuable perspectives and ideas.

It just so happens that the structures that establish inclusive practices also form the backbone of intentional cultures for everyone. It’s not just people from vulnerable communities who aren’t necessarily heard; by creating structures that intentionally lift those voices up, we lift up everybody and ensure everyone gets an equitable say.

Ensuring that all voices collaborate on the strategy of the organization and are able to define the work makes for better work, because a wider set of ideas and perspectives are considered — particularly those that managers might otherwise be blind to.

Inclusivity should never be considered a nice-to-have: in addition to being the morally correct path, it’s the key to unlocking an innovative culture that has the power to save existing industries and establish new ones. The people who roll their eyes at it are doomed to live out the status quo. Ultimately, inevitably, they will be left behind.

· Posts · Share this post

 

Microsoft Refused to Fix Flaw Years Before SolarWinds Hack

[Renee Dudley at ProPublica]

"Former [Microsoft] employee says software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others."

This is a damning story about profit over principles: Microsoft failed to close a major security flaw that left the government (alongside other customers) vulnerable because it wanted to win their business. This directly paved the way for the SolarWinds hack.

This doesn't seem to have been covert, or merely subtext, at Microsoft:

"Morowczynski told Harris that his approach could also undermine the company’s chances of getting one of the largest government computing contracts in U.S. history, which would be formally announced the next year. Internally, Nadella had made clear that Microsoft needed a piece of this multibillion-dollar deal with the Pentagon if it wanted to have a future in selling cloud services, Harris and other former employees said."

But publicly it said something very different:

"From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds."

It will be interesting to see what the fallout of this disclosure is, and whether Microsoft and other companies might be forced to behave differently in the future. This story represents business as usual, and without external pressure, it's likely that nothing will change.

[Link]

· Links · Share this post

 

Calm Company Fund is taking a break

[Calm Company Fund]

"Inhale. Exhale. Find the space between… Calm Company Fund is going on sabbatical and taking a break from investing in new companies and raising new funds. Here’s why."

Calm Company Fund's model seems interesting. It's a revenue-based investor that makes a return based on its portfolio companies' earnings, but still uses a traditional VC model to derive its operating budget. That means it makes a very small percentage of funds committed from Limited Partners, rather than sharing in the success of its portfolio (at least until much later, when the companies begin to earn out).

That would make sense in a world where the funds committed were enormous, but revenue-based investment tends to raise smaller fund sizes. So Calm Company Fund had enough money to pay for basically one person - and although the portfolio was growing, the staff size couldn't scale up to cope.

So what does an alternative look like? I imagine that it might look like taking a larger percentage of incoming revenue as if it were an LP itself. Or maybe this kind of funding simply doesn't work with a hands-on firm, and the models that attract larger institutional investors are inherently more viable (even if that isn't always reflected in their fund returns).

I want something like this to exist, but the truth is that it might live in the realm of boring old business loans, and venture capital likely exists precisely because of the risks involved in those sorts of companies.

[Link]

· Links · Share this post

 

These Wrongly Arrested Black Men Say a California Bill Would Let Police Misuse Face Recognition

[The Markup]

"Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole reason for a search or arrest. Instead it would require corroborating indicators."

Even with these mitigations, face recognition will lead to wrongful arrests: so-called "corroborating indicators" don't address the fact that the technology is racially biased and unreliable, and in fact may provide justification for using it.

And the stories of this technology being used are intensely bad miscarriages of justice:

“Other than a photo lineup, the detective did no other investigation. So it’s easy to say that it’s the officer’s fault, that he did a poor job or no investigation. But he relied on (face recognition), believing it must be right. That’s the automation bias [that] has been referenced in these sessions.”

"Believing it must be right" is one of core social problems widespread AI is introducing. Many people think of computers as being coldly logical deterministic thinkers. Instead, there's always the underlying biases of the people who built the systems and, in the case of AI, in the vast amounts of public data used to train them. False positives are bad in any scenario; in law enforcement, it can destroy or even end lives.

[Link]

· Links · Share this post