Skip to main content
 

AP strikes deal with OpenAI

This caught my eye: an example of OpenAI licensing content from a publisher in order to make its models better. Other publishers should now know that they can make similar deals rather than letting their work be scraped up for free.

[Link]

· Links · Share this post

 

The AI Dividend

I respect Bruce Schneier a great deal, but I hate this proposal. For one thing, what about people outside the US whose data was used? On the internet, the public is global. Wherever the tools are used, the rights infringed by AI tools are everyone's, from everywhere. Paying at the point of use rather than at the point of scraping cannot be the way.

[Link]

· Links · Share this post

 

OpenAI and Microsoft Sued for $3 Billion Over Alleged ChatGPT 'Privacy Violations'

It's important that lawsuits like this center on the use, not the act of scraping itself - the latter does need to be protected. One to watch.

[Link]

· Links · Share this post

 

Google Says It'll Scrape Everything You Post Online for AI

I think this is a legal challenge waiting to happen. While people who publish publicly online have a reasonable expectation that anyone can read their content, they don't have a similar expectation about content being modeled and analyzed. There's no de facto license to do this.

[Link]

· Links · Share this post

 

Language Is a Poor Heuristic for Intelligence

““Language skill indicates intelligence,” and its logical inverse, “lack of language skill indicates non-intelligence,” is a common heuristic with a long history. It is also a terrible one, inaccurate in a way that ruinously injures disabled people. Now, with recent advances in computing technology, we’re watching this heuristic fail in ways that will harm almost everyone.”

[Link]

· Links · Share this post

 

How Easy Is It to Fool A.I.-Detection Tools?

“To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.”

[Link]

· Links · Share this post

 

AI is killing the old web, and the new web struggles to be born

“AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage.”

[Link]

· Links · Share this post

 

A prayer wheel for capitalism

“Auto-generating text based on other people’s discoveries and then automatically summarising that text by finding commonalities with existing text creates a loop of mechanised nonsense. It’s a prayer wheel for capitalism.”

[Link]

· Links · Share this post

 

Google, one of AI’s biggest backers, warns own staff about chatbots

“Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk. Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.”

[Link]

· Links · Share this post

 

Researchers discover that ChatGPT prefers repeating 25 jokes over and over

“When tested, "Over 90% of 1,008 generated jokes were the same 25 jokes.”” We have a lot in common.

[Link]

· Links · Share this post

 

Moderation Strike: Stack Overflow, Inc. cannot consistently ignore, mistreat, and malign its volunteers

“The new policy, establishing that AI-generated content is de facto allowed on the network, is harmful in both what it allows on the platform and in how it was implemented.”

[Link]

· Links · Share this post

 

Tech Elite's AI Ideologies Have Racist Foundations, Say AI Ethicists

“More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P Torres point out, these ideologies have deeply racist foundations.

[Link]

· Links · Share this post

 

Guy Who Sucks At Being A Person Sees Huge Potential In AI

“Just yesterday, I asked an AI program to write an entire sci-fi novel for me, and [as someone who will die an empty shell of a man who wasted his life doing nothing for the world and, perhaps, should never have been born] I was super impressed.”

[Link]

· Links · Share this post

 

‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight

““Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote in her Instagram post. “This robot causes harm.””

[Link]

· Links · Share this post

 

Generative AI: What You Need To Know

“A free resource that will help you develop an AI-bullshit detector.”

[Link]

· Links · Share this post

 

Google Unveils Plan to Demolish the Journalism Industry Using AI

“If Google's AI is going to mulch up original work and provide a distilled version of it to users at scale, without ever connecting them to the original work, how will publishers continue to monetize their work?”

[Link]

· Links · Share this post

 

Indirect Prompt Injection via YouTube Transcripts

“ChatGPT (via Plugins) can access YouTube transcripts. Which is pretty neat. However, as expected (and predicted by many researches) all these quickly built tools and integrations introduce Indirect Prompt Injection vulnerabilities.” Neat demo!

[Link]

· Links · Share this post

 

ChatGPT is not ‘artificial intelligence.’ It’s theft.

“Rather than pointing to some future utopia (or robots vs. humans dystopia), what we face in dealing with programs like ChatGPT is the further relentless corrosiveness of late-stage capitalism, in which authorship is of no value. All that matters is content.”

[Link]

· Links · Share this post

 

Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing

“Moreover, researchers have also discovered that it’s probably mathematically impossible to secure the training data for a large language model like GPT-4 or PaLM 2. This was outlined in a research paper that Google themselves tried to censor, an act that eventually led the Google-employed author, El Mahdi El Mhamdi, to leave the company. The paper has now been updated to say what the authors wanted it to say all along, and it’s a doozy.”

[Link]

· Links · Share this post

 

OpenAI's ChatGPT Powered by Human Contractors Paid $15 Per Hour

“OpenAI, the startup behind ChatGPT, has been paying droves of U.S. contractors to assist it with the necessary task of data labelling—the process of training ChatGPT’s software to better respond to user requests. The compensation for this pivotal task? A scintillating $15 per hour.”

[Link]

· Links · Share this post

 

Schools Spend Millions on Evolv's Flawed AI Gun Detection

“As school shootings proliferate across the country — there were 46 school shootings in 2022, more than in any year since at least 1999 — educators are increasingly turning to dodgy vendors who market misleading and ineffective technology.”

[Link]

· Links · Share this post

 

Will A.I. Become the New McKinsey?

“The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”

[Link]

· Links · Share this post

 

Google "We Have No Moat, And Neither Does OpenAI"

“Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us.”

[Link]

· Links · Share this post

 

Economists Warn That AI Like ChatGPT Will Increase Inequality

“Most empirical studies find that AI technology will not reduce overall employment. However, it is likely to reduce the relative amount of income going to low-skilled labour, which will increase inequality across society. Moreover, AI-induced productivity growth would cause employment redistribution and trade restructuring, which would tend to further increase inequality both within countries and between them.”

[Link]

· Links · Share this post

 

ChatGPT is making up fake Guardian articles. Here’s how we’re responding

“But the question for responsible news organisations is simple, and urgent: what can this technology do right now, and how can it benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation, polarisation and bad actors.”

[Link]

· Links · Share this post