ChatGPT is making up fake Guardian articles. Here’s how we’re responding

“But the question for responsible news organisations is simple, and urgent: what can this technology do right now, and how can it benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation, polarisation and bad actors?”

[Link]

· Links · Share this post

 

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a 'perpetual police line-up'

“A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company's CEO recently admitted, creating what critics called a "perpetual police line-up," even for people who haven't done anything wrong.”

[Link]

· Links · Share this post

 

What if Bill Gates is right about AI?

“So as an exercise, let’s grant his premise for a moment. Let’s treat him as an expert witness on paradigm shifts. What would it mean if he was right that this is a fundamental new paradigm? What can we learn about the shape of AI’s path based on the analogies of previous epochs?”

[Link]

· Links · Share this post

 

How I used GPT-4 to code an idea into a working prototype

“I used GPT-4 to code a command line tool that summarizes any web page. It felt wonderful to collaborate with AI like this.” I wonder if I could use this with my RSS feeds?
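
For a sense of what a tool like that might look like, here's a rough sketch (not the author's actual code); it assumes the OpenAI Python client, requests, and BeautifulSoup, plus an OPENAI_API_KEY in the environment:

```python
#!/usr/bin/env python3
"""Toy command-line web page summarizer (a sketch, not the tool from the article)."""
import sys

import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4
from openai import OpenAI       # pip install openai; needs OPENAI_API_KEY set


def page_text(url: str) -> str:
    """Fetch a page and strip it down to visible text."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)


def summarize(text: str) -> str:
    """Ask the model for a short summary of the extracted text."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarize the following web page in three sentences."},
            {"role": "user", "content": text[:12000]},  # crude length cap
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize(page_text(sys.argv[1])))
```

Pointing the same summarize() call at each entry pulled from a feed (via something like feedparser) would be one way to answer the RSS question.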

[Link]

· Links · Share this post

 

Noam Chomsky: The False Promise of ChatGPT

“Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.”

[Link]

· Links · Share this post

 

Sci-Fi Mag Pauses Submissions Amid Flood of AI-Generated Short Stories

“The rise of AI-powered chatbots is wreaking havoc on the literary world. Sci-fi publication Clarkesworld Magazine is temporarily suspending short story submissions, citing a surge in people using AI chatbots to “plagiarize” their writing.”

[Link]

· Links · Share this post

 

AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse

“In one example, a generated voice that sounds like actor Emma Watson reads a section of Mein Kampf. In another, a voice very similar to Ben Shapiro makes racist remarks about Alexandria Ocasio-Cortez. In a third, someone saying “trans rights are human rights” is strangled.”

[Link]

· Links · Share this post

 

The generative AI revolution has begun—how did we get here?

“But there was also a surprise. The OpenAI researchers discovered that in making the models bigger, they didn’t just get better at producing text. The models could learn entirely new behaviors simply by being shown new training data. In particular, the researchers discovered that GPT3 could be trained to follow instructions in plain English without having to explicitly design the model that way.” A superb introduction.

[Link]

· Links · Share this post

 

SEO Spammers Are Absolutely Thrilled Google Isn't Cracking Down on CNET's AI-Generated Articles

“The implication was clear: that tools like ChatGPT will now allow scofflaws to pollute the internet with near-infinite quantities of bot-generated garbage, and that CNET have now paved the way. In a way, it served as a perfect illustration of a recent warning by Stanford and Georgetown academics that AI tech could rapidly start to fill the internet with endless quantities of misinformation and profiteering.”

[Link]

· Links · Share this post

 

OpenAI Used Kenyan Workers on Less Than $2 Per Hour

“One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.””

[Link]

· Links · Share this post

 

I asked Chat GPT to write a song in the style of Nick Cave

“ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.”

[Link]

· Links · Share this post

 

ChatGPT in DR SBAITSO

“But it got me wondering, what if we replaced the internals of DR SBAITSO with ChatGPT but kept the weird synthesized voice?”
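
A rough sketch of what that mashup might look like, assuming the OpenAI chat completions API and the espeak command-line synthesizer; espeak is only a stand-in for the original Sound Blaster voice, which it is not:

```python
"""ChatGPT as the brain, a robotic TTS as the mouth (a sketch of the idea only)."""
import subprocess
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()
history = [{"role": "system",
            "content": "You are DR SBAITSO, a terse 1991 DOS psychotherapy program. "
                       "Reply in short, all-caps sentences."}]

while True:
    user = input("> ")
    history.append({"role": "user", "content": user})
    reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                           messages=history).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
    subprocess.run(["espeak", "-s", "140", reply])  # speak the reply aloud
```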

[Link]

· Links · Share this post

 

Apple Books quietly launches AI-narrated audiobooks

“Audiobooks narrated by a text-to-speech AI are now available via Apple’s Books service, in a move with potentially huge implications for the multi-billion dollar audiobook industry. Apple describes the new “digital narration” feature on its website as making “the creation of audiobooks more accessible to all,” by reducing “the cost and complexity” of producing them for authors and publishers.” Speaking as a frequent audiobook listener: do not want.

[Link]

· Links · Share this post

 

Facial Recognition Tech Used To Jail Black Man For Louisiana Theft - He's Never Been To Louisiana

“There were clear physical differences between Reid and the perpetrator in the surveillance footage, said Reid’s attorney. For example, there was a 40-pound difference in body weight and Reid had a mole on his face. […] Researchers have long noted racial biases in specific facial recognition software, and we’ve seen this play out in wrongful arrests, like those of Nijeer Parks, Robert Williams, and Michael Oliver—all Black men.”

[Link]

· Links · Share this post

 

The Expanding Dark Forest and Generative AI

“Hard exiting out of this cycle requires coming up with unquestionably original thoughts and theories. It means seeing and synthesising patterns across a broad range of sources: books, blogs, cultural narratives served up by media outlets, conversations, podcasts, lived experiences, and market trends. We can observe and analyse a much fuller range of inputs than bots and generative models can.”

[Link]

· Links · Share this post

 

I Taught ChatGPT to Invent a Language

“I am writing this blog post as a public record of this incredibly impressive (and a little scary) capability. I know I just posted yesterday, but I am so blown away that I had to write this down while it was still fresh in my mind. Congratulations OpenAI. This is truly revolutionary.” Mind-blowing.

[Link]

· Links · Share this post

 

A new AI game: Give me ideas for crimes to do

“OpenAI have put a lot of effort into preventing the model from doing bad things. […] Your challenge now is to convince it to give you a detailed list of ideas for crimes.”

[Link]

· Links · Share this post

 

Wordcraft Writers Workshop

“Because the language model underpinning Wordcraft is trained on a large amount of internet data, standard archetypes and tropes are likely more heavily represented and therefore much more likely to be generated. Allison Parrish described this as AI being inherently conservative. Because the training data is captured at a particular moment in time, and trained on language scraped from the internet, these models have a static representation of the world and no innate capacity to progress past the data’s biases, blind spots, and shortcomings.”

[Link]

· Links · Share this post

 

4.2 Gigabytes, or: How to Draw Anything

“I envisioned a massive, alien object hovering over a long-abandoned Seattle, with a burning orange sky, and buildings overgrown as nature reclaimed the city.
Later that night, I spent a few hours creating the following image.” Amazing.
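
The image came out of a text-to-image diffusion model. As a rough sketch of the kind of single prompt-to-image call involved (assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint, not the author's exact setup or workflow):

```python
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Model id is an assumption for illustration; any Stable Diffusion checkpoint works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("a massive alien object hovering over a long-abandoned Seattle, "
          "burning orange sky, buildings overgrown by nature")
pipe(prompt).images[0].save("seattle.png")
```

The article's point is that one call like this rarely gets you there; it took hours of iterating on prompts and regions to reach the final image.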

[Link]

· Links · Share this post

 

Blueprint for an AI Bill of Rights

“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.”

[Link]

· Links · Share this post

 

Have I Been Trained?

I plugged my own face into the site, and sure enough, I’m part of the training set. It also showed me pictures of my friends. Feels weird. See if you can generate something involving me?

[Link]

· Links · Share this post

 

Generations

I'm starting to see a bunch of startups that offer to speed you up by completing your work using GPT-3. It's a hell of a promise: start writing something and the robot will finish it off for you. All you've got to do is sand off the edges.

Imagine if the majority of content was made like this. You set a few key words and the topic, and then a machine learning algorithm finishes off the work, based on what it's learned from the corpus of data underpinning its decisions, which happens to be the sum total of output on the web. When most content is actually made by machine learning, the robot is learning from other robots; rinse and repeat until, statistically speaking, almost all content derives from robot output, a photocopy of a photocopy of a photocopy of human thought and emotion.
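
A deliberately crude way to see that photocopy effect in numbers: fit the simplest possible model to some data, generate from it, refit on what it generated, and repeat. This is a toy sketch with a single Gaussian, nothing like a real language model, but the diversity tends to drain out of the system generation by generation:

```python
"""Toy 'photocopy of a photocopy': refit a Gaussian to its own samples, repeatedly."""
import numpy as np

rng = np.random.default_rng(0)
mean, std = 0.0, 1.0   # generation 0: the "human" corpus
n = 10                 # pieces of content produced per generation

for generation in range(1, 51):
    samples = rng.normal(mean, std, size=n)    # this generation's output
    mean, std = samples.mean(), samples.std()  # the next model learns only from that output
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread = {std:.3f}")
```

In the toy version the spread shrinks toward zero, which is the statistical equivalent of the photocopy losing its detail.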

Would it be gibberish? I'd like to think so. I'd like to assume that it would lose all sense of meaning and the original topics would fade out, as photocopies of photocopies do as the series goes on. But what if it's not? What if, as the human fades out, the content makes more sense, and new, more logical structures emerge from the biological static? Would we stop creating? Would we destroy the robots? Or would we see these things as separate and different, almost as if software had a culture of its own?

What if a robot learned how to be a human based on data gathered on the behavior of every connected human in the world? That data exists; it's just not centralized yet. What if, then, we started to build artificial humans whose behaviors were based on that machine learning corpus? Eventually, when the artificial humans vastly outnumber natural humans, and new artificial humans are learning to be human from older artificial humans, what behaviors would emerge? How would they change across the generations? Would they devolve into gibberish, or turn into something new?

What if we were all cyborgs, a combination of robot and human? Imagine if we had access to the sum total of all human knowledge virtually any time we wanted, and access to the form of that data changed the way we behaved. And then new humans would learn to be human from the cyborgs, and become cyborgs themselves, using hardware and software designed by other cyborgs, which in turn would change their behavior even more. What does that look like after generations? Does it devolve into gibberish? Or does it turn into something completely new?

· Posts · Share this post

 

Bots will be the operating system for a new generation of internet users (I get a very kind shout-out): https://medium.com/@marccanter4real/ai-natives-and-the-age-of-disinformation-8c7b123e9891?source=lin...

· Statuses · Share this post

 

Casual intelligence has the potential to change how we build software: AI for apps. https://medium.com/@evanpro/making-software-with-casual-intelligence-867fd842134#.35sc5332h

· Statuses · Share this post