Customs and Border Protection is using sentiment analysis on inbound and outbound travelers who "may threaten public safety, national security, or lawful trade and travel". That's dystopian enough in itself, but there's no way they could limit the trawl to those people, and claims made about what the software can do are dubious at best. #AI
[Link]
The NYT's new terms disallow use of its content to develop any new software application, including machine learning and AI systems. It's a shame that this has to be explicit, rather than a blanket right afforded to publishers by default, but it's a sensible clause that many more will be including. #AI
[Link]
I don’t know that it’s fair to count AI startups as media startups. Given the (justified) labor disputes going on right now, I’d offer that they’re closer to anti-media, and I’m not sure that I’d think of them as a bright spot. There’s plenty of room for AI to assist creatives, but of course the real money is in replacing them or devaluing their work. #AI
[Link]
I respect Bruce Schneier a great deal, but I hate this proposal. For one thing, what about people outside the US whose data was used? On the internet, the public is global. Wherever the tools are used, the rights infringed by AI tools are everyone's, from everywhere. Paying at the point of use rather than at the point of scraping cannot be the way. #AI
[Link]
“‘Language skill indicates intelligence,’ and its logical inverse, ‘lack of language skill indicates non-intelligence,’ is a common heuristic with a long history. It is also a terrible one, inaccurate in a way that ruinously injures disabled people. Now, with recent advances in computing technology, we’re watching this heuristic fail in ways that will harm almost everyone.” #AI
[Link]
“AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage.” #AI
[Link]
Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.