“The vast majority of computer vision research leads to technology that surveils human beings,” a new preprint study that analyzed more than 20,000 computer vision papers and 11,000 patents spanning three decades has found. #AI
· Links
The legislation passed with a heavy majority, so this veto is a signal that Newsom favors the AI vendors over the Teamsters' concerns. The Teamsters, for their part, claim the tech is unsafe and that jobs will be lost. #AI
· Links
LLMs are a magic trick: interesting and useful for superficial tasks, but very much not up to replacing, for example, a trained medical professional. The idea that someone would think it's okay to let one give medical advice is horrifying. #AI
· Links
These poets are being hired to eliminate the possibility of being paid for their own work. But I am kind of tickled by the idea that OpenAI is scraping fan-fiction forums. Not because it’s bad work, but imagine the consequences. #AI
· Links
“A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and ‘Game of Thrones’ novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work.” #AI
· Links
“The 392 news organizations listed below have instructed OpenAI’s GPTBot to not scan their sites, according to a continual survey of 1,119 online publishers conducted by the homepages.news archive. That amounts to 35.0% of the total.” #AI
· Links
“In a way, the people our cars mow down are doing just as much as our highly paid programmers and engineers to create the utopian, safe streets of tomorrow. Each person who falls under our front bumper teaches us something valuable about how humans act in the real world.” #AI
· Links
I certainly have some thoughts that I will share. Imagine if you could allow an AI agent to create copyrighted works at scale with no human involvement. It would allow for an incredible intellectual property land grab. #AI
· Links
Customs and Border Protection is using sentiment analysis on inbound and outbound travelers who "may threaten public safety, national security, or lawful trade and travel". That's dystopian enough in itself, but there's no way they could limit the trawl to those people, and claims made about what the software can do are dubious at best. #AI
· Links
A good rule of thumb is that if technology makes something feasible, someone will do it regardless of the ethics. Here, AI makes it easy to perform warrantless surveillance at scale - so someone has turned it into a product and police are buying it. #AI
· Links
Whether this comes to fruition with the NYT vs OpenAI or another publisher vs another LLM vendor, there will be a court case like this, and it will set important precedent for the industry. My money's on the publishers. #AI
· Links
Probably inevitable, but it nonetheless made my jaw drop. What an incredibly wrong-headed use of an LLM. #AI
· Links
The NYT's new terms disallow use of its content to develop any new software application, including machine learning and AI systems. It's a shame that this has to be explicit, rather than a blanket right afforded to publishers by default, but it's a sensible clause that many more will be including. #AI
· Links
“Weizenbaum’s questions, though they seem simple—Is it good? Do we need it?—are difficult ones for computer science to answer. They could be asked of any proposed technology, but the speed, scope, and stakes of innovation in AI make their consideration more urgent.” #AI
· Links
I strongly disagree with this stance. Allowing your work to be mined by AI models should be opt-in only - otherwise there is no possible way for a publisher or author to apply a license or grant rights. #AI
· Links
Different AI models have different political biases. Google's tend to be more socially conservative - possibly in part because they were trained on books rather than the wider internet. Regardless of the cause, this is proof, again, that AI models are not objective. #AI
· Links
Black faces are overrepresented in databases used to train AI for law enforcement - and some facial recognition software used in this context fails 96% of the time. This practice is an accelerant for already deeply harmful inequities. Time to ban it. #AI
· Links
This is a really comprehensive history and overview of LLMs. Simon has been bringing the goods, and this talk is no exception. #AI
· Links
I don’t know that it’s fair to count AI startups as media startups. Given the (justified) labor disputes going on right now, I’d offer that they’re closer to anti-media, and I’m not sure that I’d think of them as a bright spot. There’s plenty of room for AI to assist creatives, but of course the real money is in replacing them or devaluing their work. #AI
· Links
This caught my eye: an example of OpenAI licensing content from a publisher in order to make its models better. Other publishers should now know that they can make similar deals rather than letting their work be scraped up for free. #AI
· Links
I respect Bruce Schneier a great deal, but I hate this proposal. For one thing, what about people outside the US whose data was used? On the internet, the public is global. Wherever the tools are used, the rights infringed by AI tools are everyone's, from everywhere. Paying at the point of use rather than at the point of scraping cannot be the way. #AI
· Links
It's important that lawsuits like this center on the use, not the act of scraping itself - the latter does need to be protected. One to watch. #AI
· Links
I think this is a legal challenge waiting to happen. While people who publish publicly online have a reasonable expectation that anyone can read their content, they don't have a similar expectation about content being modeled and analyzed. There's no de facto license to do this. #AI
· Links