We need solidarity across creative industries

I strongly believe in this:

"Artists and writers must have solidarity across creative industries: if you wouldn’t feel comfortable with your own work being replaced by algogen, then you shouldn’t use generated content of other creative mediums."

On top of being an ethical affront across the board, I don't believe AI can ever create the kind of art I find most valuable: subversive, provocative, envelope-pushing. It's fundamentally limited by its technical shortcomings. It will always be, in the most literal sense, average.

But all art is valuable and all artists are valuable. They've been in a vulnerable position for a long time; these kinds of products and policies punch down on people who already struggle to make a living, and yet who help us figure out what it means to be human.



Top consultancy undermining climate change fight: whistleblowers

Management consultants are to blame, for sure, but so are the politicians who took the bait. We know there's big oil and gas money pushing against real solutions to climate change; anyone in that space needs to be vigilant against it.

One aspect of this might, perhaps, have been to not allow the talks to take place in one of the world's largest oil-producing nations. But here we are.

None of this is to say that McKinsey is off the hook for this kind of behavior. If this is happening, it's right to name and shame them. It's just: there are a lot of other people who should take some blame, too.



‘It is a beast that needs to be tamed’: leading novelists on how AI could rewrite the future

This runs the gamut, but generally sits where I am: AI itself is not the threat. How it might be used in service of a profit motive is the threat.

Harry Josephine Giles worries about the digital enclosure movement - the privatization of aspects of life that were once public - and I agree. That isn't limited to AI; it's the broader direction of travel at the intersection of business and society.

Nick Harkaway: "In the end, this is a sideshow. The sectors where these systems will really have an impact are those for which they’re perfectly suited, like drug development and biotech, where they will act as accelerators, compounding the post-Covid moonshot environment and ambushing us with radical possibilities over time. I don’t look at this moment and wonder if writers will still exist in 2050. I ask myself what real new things I’ll need words and ideas for as they pass me in the street."



Matt Mullenweg on Tumblr's downsizing

This is a great post from Matt: in response to a leak, he re-posted the full leaked content and added transparent context. Exactly how it should be done.

I wish, like many, that this wasn't the reality for Tumblr. But the likely truth is that it was too rooted in another era of the web, and too neglected by its previous owners. Automattic is a great company that made sense as an acquirer, and they spent $100M to try and turn it around. That they ultimately couldn't is not an indictment of them.

Kudos, too, for not letting the team go, and instead finding other places for them in the org - again, exactly how it should be done, even if it almost never is.



Meet Nightshade—A Tool Empowering Artists to Fight Back Against AI

While empowering artists is obviously a good thing, this feels like an unwinnable arms race to me. Sure, Nightshade can produce incorrect results in image generators, but this will be mitigated, leading to another tool, leading to another mitigation, and so on.

For now, this may be a productive kind of activism that draws attention to the plight of artists at the hands of AI. Ultimately, though, real agreements will need to be reached.



Advertisers Don’t Want Sites Like Jezebel to Exist

I just don't think advertising is an appropriate way to support this kind of journalism - or, potentially, any kind. This story is more evidence of that, but it's also worth noting that the private equity firm that owns G/O Media has not been a good steward.

Non-profits and worker-owned co-operatives aren't just more aligned ways to run this kind of organization, but I strongly suspect they last longer, too.

There is, of course, always the possibility that advertising is an excuse, and the owners didn't want to support a feminist publication.



We're sorry we created the Torment Nexus

"Speaking as a science fiction writer, I'd like to offer a heartfelt apology for my part in the silicon valley oligarchy's rise to power. And I'd like to examine the toxic role of science fiction in providing justifications for the craziness."



How Will Journalists Survive Digital Media’s Decline? Forget Scale.

On models for journalism:

"I wonder if the big problem is that we focused on scale when we should have been focused on nailing down the audience. If we focused on millions when we should have focused on building ourselves a liveable wage. And if we put too much of an emphasis on global at the cost of local."

Yes! This! Exactly! News was seduced by the exponential VC model that should have been limited to certain kinds of hardware and software. And in the process - as well as through some legacy ivory tower thinking - it chose not to dig deep and figure out exactly who it was serving.

I still say modern newsrooms should use the word "community" instead of "audience". It's a two-way relationship. And building relationships does not scale.



Mark Zuckerberg ignored teen and user safety warnings from Meta executives

Over time, I think it's becoming more and more likely that Zuckerberg will step down. I strongly suspect he'll be replaced by Adam Mosseri, whose Instagram and Threads products have been doing very well for Meta (in contrast to Zuckerberg's metaverse shenanigans).

In any event, if he really did veto proposals to protect teens' mental health, it's a pretty damning indictment of his leadership.

Now that the internet's growth is at the other end of the S-curve and we're societally more comfortable with technology and its implications, I think we're likely to see more 2000s-era CEOs replaced with people who have a more nuanced, less exponential-growth-led approach.



Court rules automakers can record and intercept owner text messages

At least in Washington State, car manufacturers may record and intercept the text messages of drivers who have connected their devices to their cars via Bluetooth or cable.

That data can then be resold or provided to law enforcement without a warrant.



Experts don't trust tech CEOs on AI, survey shows

"Some critics of Big Tech have argued that leading AI companies like Google, Microsoft and Microsoft-funded OpenAI support regulation as a way to lock out upstart challengers who'd have a harder time meeting government requirements."

Okay, but what about regulation that allows people to create new AI startups AND protects the public interest?



Why It's Never Been Harder to Make a Living as a Writer

A fascinating discussion of how authorship has changed, and what publishing houses now really demand of new authors.

In the old days, an author was someone who created a work. Today, they have to be a brand.

But it also turns out that unionization has a big part to play: many writers moonlight in the entertainment industry, where they can get healthcare and other benefits, all due to the WGA.



Former Kotaku writers are launching Aftermath, a new video game site

I'm really hopeful for this new generation of worker-owned media outlets. It's a promising model, and obviously hugely empowering.

What will be disempowering is if they start to disappear. So let's support them with our full voices. If you're into video games, check this out, and maybe consider supporting them?



AI companies have all kinds of arguments against paying for copyrighted content

The technology depends on ingesting copyrighted work, and the business models depend on not paying for it.

But the fact that the business models only work without payment doesn't give the technology the right to operate this way. It's not the same as a person reading a book: it's a software system training itself on commercial content - and besides, that person would have had to pay for the book.



At 1,500 stories per day, Mail Online is UK's most prolific news website

These numbers are amazing to me. The Daily Mail publishes around 1,640 articles every weekday. BBC News, in contrast, despite its many local newsrooms, publishes "only" around 226 in total.

The Daily Mail, in other words, is a content farm. It's also the largest news publisher on TikTok and one of the largest on the web.

It's famous in Britain for its center-right stance and slightly upmarket tabloid positioning. I wonder whether that reputation translates in the US and beyond?



Accessing go links across tailnets

Golinks seem like a small thing, but they might be the thing that finally pushes me over the edge into running my own tailnet.

I like Will's solution here to running multiple otherwise-conflicting golinks servers.

The whole thing seems powerful and I suppose I should just dive in.



Sam Bankman-Fried found guilty on all seven counts

Unsurprising. Which is the word I will also use for his inevitable early release.



First Committee Approves New Resolution on Lethal Autonomous Weapons, as Speaker Warns ‘An Algorithm Must Not Be in Full Control of Decisions Involving Killing’ | UN Press

"An algorithm must not be in full control of decisions that involve killing or harming humans, Egypt’s representative said after voting in favour of the resolution. The principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapons system involved, he added."

Quite a reflection of our times that this is a real concern. And it is.



We need to focus on the AI harms that already exist

"AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real."

This is it: we can focus on hypothetical futures, but software is causing real harm in the here and now, and attention paid to science-fiction outcomes is attention drawn away from fixing those harms.



Last Chance to fix eIDAS

This kind of legislation is fundamentally against the public interest and, I believe, should always be opposed:

"These changes radically expand the capability of EU governments to surveil their citizens by ensuring cryptographic keys under government control can be used to intercept encrypted web traffic across the EU. Any EU member state has the ability to designate cryptographic keys for distribution in web browsers and browsers are forbidden from revoking trust in these keys without government permission."

If this passes, the government of any EU member will have the power to silently intercept and read encrypted web traffic. It undermines the right to privacy and creates a chilling effect for activists and other targeted, vulnerable groups.



Confessions of a Venture Capital-Backed Startup Founder

"In the past few years, causality inverted: Start-ups and entire markets were manufactured from whole cloth to meet the demand of overcapitalized venture funds searching for a home run."

I am certain I'll found another startup, and I'm certain it will be a revenue-based business that isn't designed to raise investment. Perhaps ironically, it may be more valuable as a result.



Open Society and Other Funders Launch New Initiative to Ensure AI Advances the Public Interest

This is the kind of AI declaration I prefer.

“As we know from social media, the failure to regulate technological change can lead to harms that range from children’s safety to the erosion of democracy. With AI, the scale and intensity of potential harm is even greater—from racially based ‘risk scoring’ tools that needlessly keep people in prison to deepfake videos that further erode trust in democracy and future harms like economic upheaval and job loss. But if we act now, we can build accountability, promote opportunity, and deliver greater prosperity for all.”

These are all organizations that already do good work; it's good to see them apply pressure on AI companies in the public interest.



The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

For me, this paragraph was the takeaway:

"We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks."

In other words, the onus will be on AI developers to police themselves. We will see how that works out in practice.



Technical Standards Bodies are Regulators

This is an interesting point of view, but I don't think I fully buy it: while these bodies set technical standards, they have no ability to actually enforce them.

Consider the situation with Internet Explorer back when it virtually owned the web (and, to a lesser extent, the situation with Chrome today). Standards could be set, directions could be established, but there was no one to stop Microsoft from going its own way.



Pushing for a lower dev estimate is like negotiating better weather with a meteorologist

I've lived this, and I'd go so far as to say it's a sure sign of a dysfunctional team: non-technical leadership pushing for lower estimates based on its own business hopes rather than accepting the ones given by its in-house experts.

"Pushing for a lower estimate is like negotiating better weather with the meteorologist" covers it nicely.
