AGI Researchers Stop Quoting White Supremacists Challenge (Impossible)

White supremacist rhetoric is endemic in AI research. An interesting (and complex) point is also made here about preprint servers and how they allow companies to whitewash embarrassing mistakes.

[Link]

· Links · Share this post

 

Bing Is Generating Images of SpongeBob Doing 9/11

To be fair, you could draw a picture of this in Photoshop, too. But I suspect a few brands might have a few things to say about Microsoft hosting this tool.

[Link]

· Links · Share this post

 

Predictive Policing Software Terrible At Predicting Crimes

"Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found." In fact, the predictions matched less than 1% of the time.

[Link]

· Links · Share this post

 

Critics Furious Microsoft Is Training AI by Sucking Up Water During Drought

"Microsoft's data centers in West Des Moines, Iowa guzzled massive amounts of water last year to keep cool while training OpenAI's ChatGPT-4. [...] This happened in the midst of a more than three-year drought, further taxing a stressed water system that's been so dry this summer that nature lovers couldn't even paddle canoes in local rivers."

[Link]

· Links · Share this post

 

How the “Surveillance AI Pipeline” Literally Objectifies Human Beings

"The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found."

[Link]

· Links · Share this post

 

DALL·E 3

Once again, this looks like magic: very high-fidelity images across a bunch of different styles. The implications are enormous.

[Link]

· Links · Share this post

 

California governor vetoes bill banning robotrucks without safety drivers

The legislation passed with a heavy majority, so this veto is a signal that Newsom favors the AI vendors over the Teamsters, who argue that the technology is unsafe and that jobs will be lost.

[Link]

· Links · Share this post

 

ChatGPT Caught Giving Horrible Advice to Cancer Patients

LLMs are a magic trick: interesting and useful for superficial tasks, but very much not up to, for example, replacing a trained medical professional. The idea that someone would think it's okay to let one give medical advice is horrifying.

[Link]

· Links · Share this post

 

AI data training companies like Scale AI are hiring poets

These poets are, in effect, being hired to train away the market for their own work. But I am kind of tickled by the idea that OpenAI is scraping fan-fiction forums - not because it's bad work, but imagine the consequences.

[Link]

· Links · Share this post

 

John Grisham, other top US authors sue OpenAI over copyrights

"A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and "Game of Thrones" novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work."

[Link]

· Links · Share this post

 

Who blocks OpenAI?

“The 392 news organizations listed below have instructed OpenAI’s GPTBot to not scan their sites, according to a continual survey of 1,119 online publishers conducted by the homepages.news archive. That amounts to 35.0% of the total.”

[Link]

· Links · Share this post

 

Our Self-Driving Cars Will Save Countless Lives, But They Will Kill Some of You First

“In a way, the people our cars mow down are doing just as much as our highly paid programmers and engineers to create the utopian, safe streets of tomorrow. Each person who falls under our front bumper teaches us something valuable about how humans act in the real world.”

[Link]

· Links · Share this post

 

US Copyright Office wants to hear what people think about AI and copyright

I certainly have some thoughts that I will share. Imagine if an AI agent could create copyrighted works at scale with no human involvement: it would enable an incredible intellectual property land grab.

[Link]

· Links · Share this post

 

The A.I. Surveillance Tool DHS Uses to Detect ‘Sentiment and Emotion’

Customs and Border Protection is using sentiment analysis on inbound and outbound travelers who "may threaten public safety, national security, or lawful trade and travel". That's dystopian enough in itself, but there's no way they could limit the trawl to those people, and claims made about what the software can do are dubious at best.

[Link]

· Links · Share this post

 

This AI Watches Millions Of Cars And Tells Cops If You’re Driving Like A Criminal

A good rule of thumb is that if technology makes something feasible, someone will do it regardless of the ethics. Here, AI makes it easy to perform warrantless surveillance at scale - so someone has turned it into a product and police are buying it.

[Link]

· Links · Share this post

 

New York Times considers legal action against OpenAI as copyright tensions swirl

Whether it comes to fruition as the NYT vs OpenAI or as another publisher vs another LLM vendor, there will be a court case like this, and it will set important precedent for the industry. My money's on the publishers.

[Link]

· Links · Share this post

 

School district uses ChatGPT to help remove library books

Probably inevitable, but it nonetheless made my jaw drop. What an incredibly wrong-headed use of an LLM.

[Link]

· Links · Share this post

 

New York Times: Don't use our content to train AI systems

The NYT's new terms disallow use of its content to develop any new software application, including machine learning and AI systems. It's a shame that this has to be explicit, rather than a blanket right afforded to publishers by default, but it's a sensible clause that many more publishers will be including.

[Link]

· Links · Share this post

 

We need a Weizenbaum test for AI

“Weizenbaum’s questions, though they seem simple—Is it good? Do we need it?—are difficult ones for computer science to answer. They could be asked of any proposed technology, but the speed, scope, and stakes of innovation in AI make their consideration more urgent.”

[Link]

· Links · Share this post

 

Google says AI systems should be able to mine publishers’ work unless companies opt out

I strongly disagree with this stance. Allowing your work to be mined by AI models should be opt-in only - otherwise there is no possible way for a publisher or author to apply a license or grant rights.

[Link]

· Links · Share this post

 

AI language models are rife with political biases

Different AI models have different political biases. Google's tend to be more socially conservative - possibly in part because they were trained on books rather than the wider internet. Regardless of the cause, this is proof, again, that AI models are not objective.

[Link]

· Links · Share this post

 

In every reported case where police mistakenly arrested someone using facial recognition, that person has been Black

Black faces are overrepresented in databases used to train AI for law enforcement - and some facial recognition software used in this context fails 96% of the time. This practice is an accelerant for already deeply harmful inequities. Time to ban it.

[Link]

· Links · Share this post

 

Catching up on the weird world of LLMs

This is a really comprehensive history and overview of LLMs. Simon has been bringing the goods, and this talk is no exception.

[Link]

· Links · Share this post