Skip to main content
 

· Links · Share this post

 

Google says AI systems should be able to mine publishers’ work unless companies opt out

I strongly disagree with this stance. Allowing your work to be mined by AI models should be opt-in only - otherwise there is no possible way for a publisher or author to apply a license or grant rights.

[Link]

· Links · Share this post

 

AI language models are rife with political biases

Different AI models have different political biases. Google's tend to be more socially conservative - possibly in part because they were trained on books rather than the wider internet. Regardless of the cause, this is proof, again, that AI models are not objective.

[Link]

· Links · Share this post

 

In every reported case where police mistakenly arrested someone using facial recognition, that person has been Black

Black faces are overrepresented in databases used to train AI for law enforcement - and some facial recognition software used in this context fails 96% of the time. This practice is an accelerant for already deeply harmful inequities. Time to ban it.

[Link]

· Links · Share this post

 

Catching up on the weird world of LLMs

This is a really comprehensive history and overview of LLMs. Simon has been bringing the goods, and this talk is no exception.

[Link]

· Links · Share this post

 

Media Startups Draw Less Backing, But AI Is A Bright Spot

I don’t know that it’s fair to count AI startups as media startups. Given the (justified) labor disputes going on right now, I’d offer that they’re closer to anti-media, and I’m not sure that I’d think of them as a bright spot. There’s plenty of room for AI to assist creatives, but of course the real money is in replacing them or devaluing their work.

[Link]

· Links · Share this post

 

AP strikes deal with OpenAI

This caught my eye: an example of OpenAI licensing content from a publisher in order to make its models better. Other publishers should now know that they can make similar deals rather than letting their work be scraped up for free.

[Link]

· Links · Share this post

 

The AI Dividend

I respect Bruce Schneier a great deal, but I hate this proposal. For one thing, what about people outside the US whose data was used? On the internet, the public is global. Wherever the tools are used, the rights infringed by AI tools are everyone's, from everywhere. Paying at the point of use rather than at the point of scraping cannot be the way.

[Link]

· Links · Share this post

 

OpenAI and Microsoft Sued for $3 Billion Over Alleged ChatGPT 'Privacy Violations'

It's important that lawsuits like this center on the use, not the act of scraping itself - the latter does need to be protected. One to watch.

[Link]

· Links · Share this post

 

Google Says It'll Scrape Everything You Post Online for AI

I think this is a legal challenge waiting to happen. While people who publish publicly online have a reasonable expectation that anyone can read their content, they don't have a similar expectation about content being modeled and analyzed. There's no de facto license to do this.

[Link]

· Links · Share this post

 

Language Is a Poor Heuristic for Intelligence

““Language skill indicates intelligence,” and its logical inverse, “lack of language skill indicates non-intelligence,” is a common heuristic with a long history. It is also a terrible one, inaccurate in a way that ruinously injures disabled people. Now, with recent advances in computing technology, we’re watching this heuristic fail in ways that will harm almost everyone.”

[Link]

· Links · Share this post

 

How Easy Is It to Fool A.I.-Detection Tools?

“To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.”

[Link]

· Links · Share this post

 

AI is killing the old web, and the new web struggles to be born

“AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage.”

[Link]

· Links · Share this post

 

A prayer wheel for capitalism

“Auto-generating text based on other people’s discoveries and then automatically summarising that text by finding commonalities with existing text creates a loop of mechanised nonsense. It’s a prayer wheel for capitalism.”

[Link]

· Links · Share this post

 

Google, one of AI’s biggest backers, warns own staff about chatbots

“Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk. Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.”

[Link]

· Links · Share this post

 

Researchers discover that ChatGPT prefers repeating 25 jokes over and over

“When tested, "Over 90% of 1,008 generated jokes were the same 25 jokes.”” We have a lot in common.

[Link]

· Links · Share this post

 

Moderation Strike: Stack Overflow, Inc. cannot consistently ignore, mistreat, and malign its volunteers

“The new policy, establishing that AI-generated content is de facto allowed on the network, is harmful in both what it allows on the platform and in how it was implemented.”

[Link]

· Links · Share this post

 

Tech Elite's AI Ideologies Have Racist Foundations, Say AI Ethicists

“More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P Torres point out, these ideologies have deeply racist foundations.

[Link]

· Links · Share this post

 

Guy Who Sucks At Being A Person Sees Huge Potential In AI

“Just yesterday, I asked an AI program to write an entire sci-fi novel for me, and [as someone who will die an empty shell of a man who wasted his life doing nothing for the world and, perhaps, should never have been born] I was super impressed.”

[Link]

· Links · Share this post

 

‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight

““Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote in her Instagram post. “This robot causes harm.””

[Link]

· Links · Share this post

 

Generative AI: What You Need To Know

“A free resource that will help you develop an AI-bullshit detector.”

[Link]

· Links · Share this post

 

Google Unveils Plan to Demolish the Journalism Industry Using AI

“If Google's AI is going to mulch up original work and provide a distilled version of it to users at scale, without ever connecting them to the original work, how will publishers continue to monetize their work?”

[Link]

· Links · Share this post

 

Indirect Prompt Injection via YouTube Transcripts

“ChatGPT (via Plugins) can access YouTube transcripts. Which is pretty neat. However, as expected (and predicted by many researches) all these quickly built tools and integrations introduce Indirect Prompt Injection vulnerabilities.” Neat demo!

[Link]

· Links · Share this post

 

ChatGPT is not ‘artificial intelligence.’ It’s theft.

“Rather than pointing to some future utopia (or robots vs. humans dystopia), what we face in dealing with programs like ChatGPT is the further relentless corrosiveness of late-stage capitalism, in which authorship is of no value. All that matters is content.”

[Link]

· Links · Share this post

 

Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing

“Moreover, researchers have also discovered that it’s probably mathematically impossible to secure the training data for a large language model like GPT-4 or PaLM 2. This was outlined in a research paper that Google themselves tried to censor, an act that eventually led the Google-employed author, El Mahdi El Mhamdi, to leave the company. The paper has now been updated to say what the authors wanted it to say all along, and it’s a doozy.”

[Link]

· Links · Share this post

Email me: ben@werd.io

Signal me: benwerd.01

Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.