This is the kind of AI declaration I prefer.
“As we know from social media, the failure to regulate technological change can lead to harms that range from children’s safety to the erosion of democracy. With AI, the scale and intensity of potential harm is even greater—from racially based ‘risk scoring’ tools that needlessly keep people in prison to deepfake videos that further erode trust in democracy and future harms like economic upheaval and job loss. But if we act now, we can build accountability, promote opportunity, and deliver greater prosperity for all.”
These are all organizations that already do good work; it's good to see them apply pressure on AI companies in the public interest. #AI
[Link]
For me, this paragraph was the takeaway:
"We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks."
In other words, the onus will be on AI developers to police themselves. We will see how that works out in practice. #AI
[Link]
Baldur Bjarnason talks frankly about the cost of writing critically about AI:
"It’s honestly been brutal and it’ll probably take me a few years to recover financially from having published a moderately successful book on “AI” because it doesn’t have any of the opportunity multipliers that other topics have."
I worry about the same thing. I've noticed that AI-critical pieces lead to unsubscribes from my newsletter, and that the most lucrative job vacancies relate to AI in some way.
I'm not sure I regret my criticism, though. #AI
[Link]
Reuven Lerner was banned for life from advertising on Meta products because he offers Python and Pandas training; the company's automated system concluded he was dealing in live snakes and bears.
And then he lost the appeal because that, too, was automated.
This is almost Douglas Adams-esque in its boneheadedness, but it's also a look into an auto-bureaucratic future where there is no real recourse, even when the models themselves are at fault. #AI
[Link]
"Advances in AI are amplifying a crisis for human rights online. While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression. Automated systems have enabled governments to conduct more precise and subtle forms of online censorship." #AI
[Link]
"Microsoft's data centers in West Des Moines, Iowa guzzled massive amounts of water last year to keep cool while training OpenAI's ChatGPT-4. [...] This happened in the midst of a more than three-year drought, further taxing a stressed water system that's been so dry this summer that nature lovers couldn't even paddle canoes in local rivers." #AI
[Link]
"A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and "Game of Thrones" novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work." #AI
[Link]
Customs and Border Protection is using sentiment analysis on inbound and outbound travelers who "may threaten public safety, national security, or lawful trade and travel". That's dystopian enough in itself, but there's no way they could limit the trawl to those people, and claims made about what the software can do are dubious at best. #AI
[Link]
The NYT's new terms disallow use of its content to develop any new software application, including machine learning and AI systems. It's a shame that this has to be made explicit, rather than being a blanket right afforded to publishers by default, but it's a sensible clause that many more publishers will be including. #AI
[Link]
Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.