‘It is a beast that needs to be tamed’: leading novelists on how AI could rewrite the future

The responses run the gamut, but generally sit where I am: AI itself is not the threat. How it might be used in service of a profit motive is the threat.

Harry Josephine Giles worries about the digital enclosure movement - making private aspects of life that were once public - and I agree. That isn't limited to AI; it's where we seem to be at the intersection of business and society.

Nick Harkaway: "In the end, this is a sideshow. The sectors where these systems will really have an impact are those for which they’re perfectly suited, like drug development and biotech, where they will act as accelerators, compounding the post-Covid moonshot environment and ambushing us with radical possibilities over time. I don’t look at this moment and wonder if writers will still exist in 2050. I ask myself what real new things I’ll need words and ideas for as they pass me in the street."

[Link]


Meet Nightshade—A Tool Empowering Artists to Fight Back Against AI

While empowering artists is obviously a good thing, this feels like an unwinnable arms race to me. Sure, Nightshade can poison training data so that image generators produce incorrect results, but the poisoning will be mitigated, leading to another tool, leading to another mitigation, and so on.

For now, this may be a productive kind of activism that draws attention to the plight of artists at the hands of AI. Ultimately, though, real agreements will need to be reached.
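To make the arms-race dynamic concrete, here's a minimal toy sketch of training-data poisoning in general - emphatically not Nightshade's actual technique, which adds imperceptible perturbations to images to corrupt text-to-image models. It's just an illustration of how a small poisoned fraction skews what a model learns; all the names and numbers are made up for the demo:

    # Toy illustration of data poisoning (NOT Nightshade's method).
    # An attacker flips labels on a fraction of training samples and a
    # simple nearest-centroid classifier degrades accordingly.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n=1000):
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # clean ground truth
        return X, y

    def train_centroid(X, y):
        # "Training" here is just computing one prototype per class.
        return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

    def accuracy(model, X, y):
        c0, c1 = model
        pred = (np.linalg.norm(X - c1, axis=1) <
                np.linalg.norm(X - c0, axis=1)).astype(int)
        return (pred == y).mean()

    X_train, y_train = make_data()
    X_test, y_test = make_data()

    for poison_frac in [0.0, 0.1, 0.3, 0.5]:
        y_poisoned = y_train.copy()
        idx = rng.choice(len(y_train), int(poison_frac * len(y_train)),
                         replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's move
        model = train_centroid(X_train, y_poisoned)
        print(f"poisoned {poison_frac:.0%}: "
              f"test accuracy {accuracy(model, X_test, y_test):.2f}")

The defender's countermove is to filter suspicious samples before training, which prompts a new perturbation technique, and so on.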

[Link]


Experts don't trust tech CEOs on AI, survey shows

"Some critics of Big Tech have argued that leading AI companies like Google, Microsoft and Microsoft-funded OpenAI support regulation as a way to lock out upstart challengers who'd have a harder time meeting government requirements."

Okay, but what about regulation that allows people to create new AI startups AND protects the public interest?

[Link]


AI companies have all kinds of arguments against paying for copyrighted content

The technology depends on ingesting copyrighted work, and the business models depend on not paying for it.

But the fact that the models only work if no payment is involved doesn't give the technology the right to operate this way. It's not the same as a person reading a book: it's a software system training itself on commercial work at scale - and besides, that person would have had to pay for the book.

[Link]


First Committee Approves New Resolution on Lethal Autonomous Weapons, as Speaker Warns ‘An Algorithm Must Not Be in Full Control of Decisions Involving Killing’ | UN Press

"An algorithm must not be in full control of decisions that involve killing or harming humans, Egypt’s representative said after voting in favour of the resolution. The principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapons system involved, he added."

It's quite a reflection of our times that this is a real concern. And it is.

[Link]


We need to focus on the AI harms that already exist

"AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real."

This is it: we can focus on hypothetical futures, but software is causing real harm in the here and now, and attention paid to science-fiction outcomes draws energy away from fixing those harms.

[Link]


Open Society and Other Funders Launch New Initiative to Ensure AI Advances the Public Interest

This is the kind of AI declaration I prefer.

“As we know from social media, the failure to regulate technological change can lead to harms that range from children’s safety to the erosion of democracy. With AI, the scale and intensity of potential harm is even greater—from racially based ‘risk scoring’ tools that needlessly keep people in prison to deepfake videos that further erode trust in democracy and future harms like economic upheaval and job loss. But if we act now, we can build accountability, promote opportunity, and deliver greater prosperity for all.”

These are all organizations that already do good work; it's heartening to see them apply pressure to AI companies in the public interest.

[Link]


The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

For me, this paragraph was the takeaway:

"We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks."

In other words, the onus will be on AI developers to police themselves. We will see how that works out in practice.

[Link]


Notes, Links, and Weeknotes (23 October 2023) – Baldur Bjarnason

Baldur Bjarnason talks frankly about the cost of writing critically about AI:

"It’s honestly been brutal and it’ll probably take me a few years to recover financially from having published a moderately successful book on “AI” because it doesn’t have any of the opportunity multipliers that other topics have."

I worry about the same thing. I've noticed that AI-critical pieces lead to unsubscribes from my newsletter, and that the most lucrative job vacancies relate to AI in some way.

I'm not sure I regret my criticism, though.

[Link]


I'm banned for life from advertising on Meta. Because I teach Python.

Reuven Lerner was banned from advertising on Meta products for life because he offers Python and Pandas training - and the company's automated system thought he was dealing in live snakes and bears.

And then he lost the appeal because that, too, was automated.

This is almost Douglas Adams-esque in its boneheadedness, but it's also a look into an auto-bureaucratic future where there is no real recourse, even when the models themselves are at fault.

[Link]


The Repressive Power of Artificial Intelligence

"Advances in AI are amplifying a crisis for human rights online. While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression. Automated systems have enabled governments to conduct more precise and subtle forms of online censorship."

[Link]


AGI Researchers Stop Quoting White Supremacists Challenge (Impossible)

White supremacist rhetoric is endemic in AI research. An interesting (and complex) point is also made here about preprint servers and how they allow companies to whitewash embarrassing mistakes.

[Link]


Bing Is Generating Images of SpongeBob Doing 9/11

To be fair, you could draw a picture of this in Photoshop, too. But I suspect a few brands might have something to say about Microsoft hosting this tool.

[Link]


Predictive Policing Software Terrible At Predicting Crimes

"Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found." In fact, much less than 1% of the time.

[Link]


Critics Furious Microsoft Is Training AI by Sucking Up Water During Drought

"Microsoft's data centers in West Des Moines, Iowa guzzled massive amounts of water last year to keep cool while training OpenAI's ChatGPT-4. [...] This happened in the midst of a more than three-year drought, further taxing a stressed water system that's been so dry this summer that nature lovers couldn't even paddle canoes in local rivers."

[Link]


How the “Surveillance AI Pipeline” Literally Objectifies Human Beings

"The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found.”

[Link]


DALL·E 3

Once again, this looks like pure magic: very high-fidelity images across a wide range of styles. The implications are enormous.

[Link]


California governor vetoes bill banning robotrucks without safety drivers

The legislation passed with a heavy majority, so the veto is a signal that Newsom favors the AI vendors over the Teamsters, who argue that the tech is unsafe and that jobs will be lost.

[Link]


ChatGPT Caught Giving Horrible Advice to Cancer Patients

LLMs are a magic trick: interesting and useful for superficial tasks, but very much not up to, for example, replacing a trained medical professional. The idea that someone would think it's okay to let one give medical advice is horrifying.

[Link]


AI data training companies like Scale AI are hiring poets

These poets are being hired, in effect, to eliminate the market for their own paid work. But I am kind of tickled by the idea that OpenAI is scraping fan-fiction forums. Not because it's bad work; just imagine the consequences.

[Link]


John Grisham, other top US authors sue OpenAI over copyrights

"A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and "Game of Thrones" novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work.”

[Link]


Who blocks OpenAI?

“The 392 news organizations listed below have instructed OpenAI’s GPTBot to not scan their sites, according to a continual survey of 1,119 online publishers conducted by the homepages.news archive. That amounts to 35.0% of the total.”
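The arithmetic holds up: 392 of 1,119 publishers is 35.0%. For reference, opting out is a two-line addition to a site's robots.txt file, which OpenAI says GPTBot honors:

    User-agent: GPTBot
    Disallow: /

Of course, that only affects future crawls; it says nothing about what has already been ingested.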

[Link]


Our Self-Driving Cars Will Save Countless Lives, But They Will Kill Some of You First

“In a way, the people our cars mow down are doing just as much as our highly paid programmers and engineers to create the utopian, safe streets of tomorrow. Each person who falls under our front bumper teaches us something valuable about how humans act in the real world.”

[Link]


US Copyright Office wants to hear what people think about AI and copyright

I certainly have some thoughts to share. Imagine if an AI agent could create copyrighted works at scale with no human involvement: it would enable an incredible intellectual property land grab.

[Link]
