"ShareOpenly breaks the door even wider than sharing to Mastodon, and I intend to be using it to update some of my examples listed above. Thanks Ben for demonstrative and elegant means of sharing."
Thank you, Alan, for sharing!
There's more to come on ShareOpenly - more platforms to add, and some tweaks to the CSS so that the whole thing works better on older devices or smaller phone screens. It's a simple tool, but I'm pleased with how people have reacted to it, and how it's been carried forward.
There are no terms to sign and there's nothing to sign up for; adding a modern "share this" button to your site is as easy as following a few very simple instructions.
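To make that concrete, here's a minimal sketch of the kind of share link involved. The endpoint and parameter names (https://shareopenly.org/share/ taking `url` and `text`) are my reading of ShareOpenly's instructions, so check the site for the canonical version:

```python
from urllib.parse import urlencode

def shareopenly_link(url: str, text: str = "") -> str:
    """Build a ShareOpenly share link for a page.

    The endpoint and parameter names here are assumed - see
    shareopenly.org for the canonical instructions.
    """
    params = {"url": url}
    if text:
        params["text"] = text
    return "https://shareopenly.org/share/?" + urlencode(params)

# A "share this" button is then just an anchor pointing at this URL:
print(shareopenly_link("https://example.com/my-post", "Worth a read:"))
```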
[Link]
[Michael Atleson at the FTC Division of Advertising Practices]
"Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods."
The FTC weighs in on the obviously rife practice of overselling the capabilities of AI services. These are solid guidelines, and hopefully the precursor to more meaningful action when vendors inevitably cross the line.
While these points are all important, for me the most pertinent is the last:
"Don’t violate consumer privacy rights. These avatars and bots can collect or infer a lot of intensely personal information. Indeed, some companies are marketing as a feature the ability of such AI services to know everything about us. It’s imperative that companies are honest and transparent about the collection and use of this information and that they don’t surreptitiously change privacy policies or relevant terms of service."
It's often unclear how much extra data is being gathered behind the scenes when AI features are added. This is where battles will be fought and lines will be drawn, particularly in enterprises and well-regulated industries.
[Link]
[Michael Grothaus at FastCompany]
"United Airlines announced that it is bringing personalized advertising to the seatback entertainment screens on its flights. The move is aimed at increasing the airline’s revenue by leveraging the data that it has on its passengers."
Just another reason why friends don't let friends fly United. We should all be reducing our air travel overall anyway, given the climate crisis, and in a world where we all fly less, shouldn't we choose a better experience?
This sounds like the absolute worst:
"United believes its advertising network will be appealing to brands because “there is the potential for 3.5 hours of attention per traveler, based on average flight time.”"
Passengers from California, Colorado, Connecticut, Virginia, and Utah can opt out of having their private information used to target ads at them for the duration of what sounds like an agonizing flight. Passengers from other US states are out of luck - at least until their legislatures also pass reasonable privacy legislation.
Other airlines are removing seat-back entertainment to save fuel, so on top of the baseline climate impact of the air travel industry, there's a real additional climate implication here. Planes with seat-back entertainment generally burn more fuel; United is making a revenue decision with all kinds of negative impacts, and it shouldn't be rewarded for it.
[Link]
Perplexity AI doesn't use its advertised user agent string or IP range to load content from third-party websites:
"So they're using headless browsers to scrape content, ignoring robots.txt, and not sending their user agent string. I can't even block their IP ranges because it appears these headless browsers are not on their IP ranges."
On one level, I understand why this is happening, as anyone who's ever written a scraper (or scraper mitigations) might: the crawler that gathers training data likely does send the correct user agent string, but on-demand retrievals likely don't, to avoid being blocked. That's not a good excuse at all, but I bet that's what's going on.
This is another example of the core issue with robots.txt: it's a handshake agreement at best. There are no legal or technical restrictions imposed by it; we all just hope that bots do the right thing. Some of them do, but a lot of them don't.
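To illustrate just how voluntary that handshake is: the whole mechanism is a check that a well-behaved crawler performs on itself. Here's a minimal sketch using Python's standard library (the bot name and URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A polite crawler checks robots.txt before fetching a page.
# Nothing enforces this: a scraper can simply skip the check.
robots = RobotFileParser("https://example.com/robots.txt")
robots.read()

page = "https://example.com/some-article"
if robots.can_fetch("ExampleBot", page):
    print("robots.txt permits fetching", page)
else:
    # Honoring the disallow rule is entirely up to the client.
    print("robots.txt disallows fetching", page)
```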
The only real way to restrict these services is through legal rules that create meaningful consequences for these companies. Until then, there will be no sure-fire way to prevent your content from being accessed by an AI agent.
[Link]
[Chris Bing and Joel Schechtman at Reuters]
"The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk."
Reading this, it certainly seems indefensible, although unfortunately not out of line with other US foreign policy efforts. Innocent people died because of this US military operation.
It's a reflection of the simple idea, which seems to have governed US foreign policy for almost a century, that foreign lives matter less in the quest for dominance over our perceived rivals.
Even if you do care about America more than anywhere else, this will have hurt at home, too. The internet being what it is, it also would make sense that these influence campaigns made their way back to the US and affected vaccine uptake on domestic soil.
The whole thing feels like the military equivalent of a feature built by a novice product manager: someone had a goal that they needed to hit, and this was how they decided to get there. But don't get me wrong: I don't think this was an anomaly or someone running amok. This was policy.
[Link]
"What generative AI creates is not any one person's creative expression. Generative AI is only possible because of the work that has been taken from others. It simply would not exist without the millions of data points that the models are based upon. Those data points were taken without permission, consent, compensation or even notification because the logistics of doing so would have made it logistically improbable and financially impossible."
This is a wonderful piece from Heather Bryant that explores the humanity - the effort, the emotion, the lived experience, the community, the unique combination of things - behind real-world art that is created by people, and the theft of those things that generative AI represents.
It's the definition of superficiality, and as Heather says here, living in a world made by people, rooted in experiences and relationships and reflecting actual human thought, is what I hope for. Generative AI is a technical accomplishment, for sure, but it is not a humanist accomplishment. There are no shortcuts to the human experience. And wanting a shortcut to human experience in itself devalues being human.
[Link]
[Janet Vertesi at Public Books]
"Our lives are consumed with the consumption of content, but we no longer know the truth when we see it. And when we don’t know how to weigh different truths, or to coordinate among different real-world experiences to look behind the veil, there is either cacophony or a single victor: a loudest voice that wins."
This is a piece about information, trust, and the effect that AI is already having on knowledge.
When people said that books were more trustworthy than the internet, we scoffed; I scoffed. Books were not infallible; the stamp of a traditional publisher was not a sign that the information was correct or trustworthy. The web allowed more diverse voices to be heard. It allowed more people to share information. It was good.
The flood of automated content means that this is no longer the case. Our search engines can't be trusted; YouTube is certainly full of the worst automated dreck. I propose that we reclaim the phrase pink slime to encompass this nonsense: stuff that's been generated by a computer at scale in order to get attention.
So, yeah, I totally sympathize with the urge to buy a real-world encyclopedia again. Projects like Wikipedia must be preserved at all costs. But we have to consider whether all this will result in the effective end of a web where humans publish and share information. And if that's the case, what's next?
[Link]
"Former [Microsoft] employee says software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others."
This is a damning story about profit over principles: Microsoft failed to close a major security flaw that left the government (alongside other customers) vulnerable because it wanted to win their business. This directly paved the way for the SolarWinds hack.
This doesn't seem to have been covert, or even subtext, at Microsoft:
"Morowczynski told Harris that his approach could also undermine the company’s chances of getting one of the largest government computing contracts in U.S. history, which would be formally announced the next year. Internally, Nadella had made clear that Microsoft needed a piece of this multibillion-dollar deal with the Pentagon if it wanted to have a future in selling cloud services, Harris and other former employees said."
But publicly it said something very different:
"From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds."
It will be interesting to see what the fallout of this disclosure is, and whether Microsoft and other companies might be forced to behave differently in the future. This story represents business as usual, and without external pressure, it's likely that nothing will change.
[Link]
"Inhale. Exhale. Find the space between… Calm Company Fund is going on sabbatical and taking a break from investing in new companies and raising new funds. Here’s why."
Calm Company Fund's model seems interesting. It's a revenue-based investor that makes a return based on its portfolio companies' earnings, but still uses a traditional VC model to derive its operating budget. That means it earns a management fee - a very small percentage of the funds committed by its Limited Partners - rather than sharing in the success of its portfolio (at least until much later, when the companies begin to earn out).
That would make sense in a world where the funds committed were enormous, but revenue-based investment tends to raise smaller fund sizes. So Calm Company Fund had enough money to pay for basically one person - and although the portfolio was growing, the staff size couldn't scale up to cope.
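To put illustrative numbers on it (these aren't Calm Company Fund's actual terms): a standard 2% annual management fee on a $10 million fund works out to $200,000 a year - roughly one salary plus overhead - while the same fee on a $500 million traditional fund supports a whole team. Revenue-based funds tend to sit at the small end of that range.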
So what does an alternative look like? I imagine that it might look like taking a larger percentage of incoming revenue as if it were an LP itself. Or maybe this kind of funding simply doesn't work with a hands-on firm, and the models that attract larger institutional investors are inherently more viable (even if that isn't always reflected in their fund returns).
I want something like this to exist, but the truth is that it might live in the realm of boring old business loans; venture capital arguably exists precisely because of the risks involved in these sorts of companies.
[Link]
"Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole reason for a search or arrest. Instead it would require corroborating indicators."
Even with mitigations, it will lead to wrongful arrests: so-called "corroborating indicators" don't address the fact that the technology is racially biased and unreliable, and may in fact provide cover for using it.
And the stories of this technology being used are intensely bad miscarriages of justice:
“Other than a photo lineup, the detective did no other investigation. So it’s easy to say that it’s the officer’s fault, that he did a poor job or no investigation. But he relied on (face recognition), believing it must be right. That’s the automation bias [that] has been referenced in these sessions.”
"Believing it must be right" is one of core social problems widespread AI is introducing. Many people think of computers as being coldly logical deterministic thinkers. Instead, there's always the underlying biases of the people who built the systems and, in the case of AI, in the vast amounts of public data used to train them. False positives are bad in any scenario; in law enforcement, it can destroy or even end lives.
[Link]
"Justice Samuel Alito spoke candidly about the ideological battle between the left and the right — discussing the difficulty of living “peacefully” with ideological opponents in the face of “fundamental” differences that “can’t be compromised.” He endorsed what his interlocutor described as a necessary fight to “return our country to a place of godliness.” And Alito offered a blunt assessment of how America’s polarization will ultimately be resolved: “One side or the other is going to win.”"
If what's at stake in the upcoming election wasn't previously clear, this makes it so. This is a Supreme Court justice, talking openly, on tape, about undermining the rights of people in favor of a Biblical worldview.
It's easy to see this sort of rhetoric as the dying gasps of the 20th century, trying to claw back regressive values that we've mostly moved away from. But to do so is to discount the danger; we have to take this seriously.
It's a little bit heartening to hear that Chief Justice Roberts - also a big-C Conservative - felt differently and held a commitment to the Constitution and the workings of the Court. But in the light of a far-right majority composed of Alito, Clarence Thomas, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett, it's not heartening enough.
[Link]
"Open Rights Group has published its six priorities for digital rights that the next UK government should focus on."
These are things every government should guarantee. I'm particularly interested in point number 3:
"Predictive policing systems that use artificial intelligence (AI) to ‘predict’ criminal behaviour undermine our right to be presumed innocent and exacerbate discrimination and inequality in our criminal justice system. The next government should ban dangerous uses of AI in policing."
It's such a science fiction idea - so obviously flawed that Philip K. Dick wrote a story about how bad it is, and there's a famous movie based on it - and yet police forces around the world are trying it.
I'd hope for more than an Open Rights Group recommendation: it should be banned, everywhere, as an obvious human rights violation.
The other things on the list are table stakes. Without those guarantees, real democratic freedom is impossible.
[Link]
"The findings suggest the return to office movement has been a poorly-executed failure, but one particular figure stands out - a quarter of executives and a fifth of HR professionals hoped RTO mandates would result in staff leaving."
Unsurprising but also immoral: these respondents believed that subsequent layoffs were undertaken because too few people quit in the wake of return to office policies.
This quote from the company that conducted the survey seems obviously true to me:
"The mental and emotional burdens workers face today are real, and the companies who seek employee feedback with the intent to listen and improve are the ones who will win."
It's still amazing to me that so many organizational cultures are incapable of following through with this.
[Link]
"There’s an ocean of problems with journalism, but the idea that there’s just too damn much woke progressivism is utter delusion. U.S. journalism generally tilts center right on the political spectrum."
This is a story about the founder of Politico creating a "teaching hospital for journalists" that appears to be in opposition to "wokeness". But it's also about much of the state of incumbent journalism, which is still grappling with the wave of much-needed social change that is inspiring movements around the world.
"In the wake of Black Lives Matter and COVID there was some fleeting recommendations to the ivy league establishment media that we could perhaps take a slightly more well-rounded, inclusive approach to journalism. In response, the trust fund lords in charge of these establishment outlets lost their [...] minds, started crying incessantly about young journalists “needing safe spaces,” and decided to double down on all their worst impulses, having learned less than nothing along the way."
Exactly. Asinine efforts like anti-woke journalism schools aren't what we need; we need better intersectional representation inside newsrooms, we need better representation of the real stories that need to be told across the country and across the world, and we need to dismantle institutional systems that have acted as gatekeepers for generations.
All power to the outlets, independent journalists, and foundations that are truly trying to push for something better. The status quo is not - and has not been - worth preserving.
[Link]
"Remember! If you only signed up to hear when this feature is available, or you're wondering what ActivityPub even is: This probably is not the newsletter for you. This is a behind-the-scenes, engineering-heavy, somewhat-deranged build log by the team who are working on it."
And I love it.
Ghost's newsletter / blog about building ActivityPub support into its platform is completely lovely, and the kind of transparent development I've always been into. Here it's done with great humor. Also, they really seem to be into pugs, and that's cool, too.
In this week's entry, the team investigates using existing ActivityPub libraries and frameworks rather than building the whole thing from scratch - and does so with no small amount of humility.
And they're building a front-end to allow bloggers to consume content from other people who publish long-form content onto the web using ActivityPub. I'm excited to see it take shape.
[Link]
"After 17 of using Twitter daily and 24 years of using Google daily neither really works anymore. And particular with the collapse of the social spaces many of us grew up with, I feel called back to earlier forms of the Internet, like blogs, and in particular, starting a link blog."
Yay for link blogs! I've been finding this particularly rewarding. You're reading a post from mine right now.
Kellan wrote his own software to do this, based on links stored in Pinboard. Mine is based on Notion: I write an entry in markdown, which then seeds integrations that convert the bookmark into an HTML post on my website and various text posts for social media.
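The rendering step at the heart of a pipeline like that is tiny. Here's a minimal sketch using the python-markdown package; the Notion integration and the social-media syndication are the bespoke parts, and aren't shown:

```python
import markdown  # pip install markdown

def bookmark_to_html(title: str, url: str, commentary_md: str) -> str:
    """Render a link-blog entry's markdown commentary as an HTML post."""
    body = markdown.markdown(commentary_md)
    return f'<article><h2><a href="{url}">{title}</a></h2>\n{body}\n</article>'

print(bookmark_to_html(
    "A lovely piece",
    "https://example.com/original",
    "Yay for link blogs! I've been finding this *particularly* rewarding.",
))
```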
Simon Willison has noted that adding markdown support has meant he writes longer entries; that's been true for me, too. It's really convenient.
Most of all: I love learning from the people I connect with, follow, and subscribe to. Particularly in a world where search engines are falling apart as a way to discover new writers and sources, link blogs are incredibly useful. It's lovely to find another one.
[Link]
"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems."
I understand why they're making this push, but I don't know that it's the right PR move for some of the wealthiest corporations in the world to push back on independent artists. I wish they were actually reaching out and finding stronger ways to support the people who make creative work.
The net impression I'm left with is not support of user freedom, but bullying. Left out of the equation is the actual scope of fair use, which is painted here as a principle under attack by artists, rather than by the large companies seeking to use people's work for free to build products that will make them billions of dollars.
The whole thing is disingenuous and disappointing, and is likely to backfire. It's particularly sad to see Apple participate in this mess. So much for bicycles of the mind.
[Link]
“Tweet: Thrilled to have represented [VC firm] in our work with [company]. Huge outcome!”
"Translation: A partner we fired actually did the deal, and no one serviced it well until it was clear it was going to be a homerun. Then we all fought over it and rewrote history in a ‘congrats’ blogpost that never mentioned the original GP."
Hunter didn't come to play.
[Link]
"An important step toward a more interoperable “fediverse” — the broader network of decentralized social media apps like Mastodon, Bluesky and others — has been achieved."
Bridgy has always been a useful product; Bridgy Fed is an easy way for folks on the fediverse and on Bluesky to be able to interact with each other. I've opted in and I expect many other people to do the same.
Ideally it wouldn't be an opt-in - I think this kind of bridge is incredibly useful in its own right. I know it's been fraught on the Mastodon side because of Bluesky's provenance and former relationship to both Twitter and Jack Dorsey. I personally don't see the issue at all: the more the merrier.
Ryan Barrett is brilliant: I really appreciate his ability to quietly add value by creating user-first technology solutions that speak for themselves.
[Link]
Microsoft's Recall software seems like a horrible idea:
"Surprise! It turns out that the unencrypted database and the stored images may contain your user credentials and passwords. And other stuff. Got a porn habit? Congratulations, anyone with access to your user account can see what you've been seeing. Use a password manager like 1Password? Sorry, your 1Password passwords are probably visible via Recall, now."
Worse, it's going to be built into Windows 11 for all compatible hardware, in a way that will make it hard or impossible to disable. This doesn't make sense to me: which privacy-conscious CIO (just for example, one working in a well-regulated industry where privacy is a legal requirement) would allow this to roll out? This is yet another reason for Windows 10 to remain the most popular version.
It also seems like nobody at Microsoft (or nobody at Microsoft with power) has considered the potentially serious social implications of what they're building:
"Victims of domestic abuse are at risk of their abuser trawling their PC for any signs that they're looking for help. Anyone who's fallen for a scam that gave criminals access to their PC is also completely at risk."
I'm increasingly concerned about what Apple will be rolling out on Monday. We're hearing quite believable rumors that it'll be AI-based, but is it going to be Apple's take on the same thing? That, too, has the potential to be a disaster.
Once again, I can't believe that the only way to get away from this stuff will be to run Linux on the desktop.
[Link]
"The pervasive nature of modern technology makes surveillance easier than ever before, while each successive generation of the public is accustomed to the privacy status quo of their youth."
The key, as Bruce Schneier argues here, is not to compare with our own baselines, but to take a step back and consider what a healthy ecosystem would look like in its own right.
The underlying story here is that Microsoft caught state-backed hackers using its generative AI tools to help with their attacks, and people were less worried about the attacks themselves than about how Microsoft found out about them. It's a reasonable worry, and I thought the same thing: if Microsoft found this, then they're likely more aware of the contextual uses of their platform than we might assume.
This is certainly less private than computing was twenty or thirty years ago. But it's not a major iteration on where we were five years ago, and without intervention we're likely to see more erosion of user privacy over the next five years.
So what should our standards for privacy be overall? How should we expect a company like Microsoft to treat our potentially sensitive data? Should we pay more for more security, or should it just be a blanket expectation? These are all valid questions - although I also have ready, opinionated answers.
Perhaps the more important question is: who has the right to decide the answers, and how will those decisions be enforced? For now, that's still open.
[Link]
"Eghbariah’s paper for the Columbia Law Review, or CLR, was published on its website in the early hours of Monday morning. The journal’s board of directors responded by pulling the entire website offline. [...] According to Eghbariah, he worked with editors at the Columbia Law Review for over five months on the 100-plus-page text."
Regardless of your perspective on the ongoing crisis in Israel and Palestine, this seems like a remarkable action: removing a heavily-reviewed, 100+ page legal analysis because it discusses the Nakba, the mass-displacement of Palestinians during the 1948 Palestine war.
The right thing to do would be to publish it - as the editors tried to do - and allow legal discussion to ensue. Instead, the board of directors chose to simply pull the plug on the website.
As one Columbia professor put it:
“When Columbia Law Professor Herbert Wechsler published his important article questioning the underlying justification for Brown v. Board of Education in 1959 it was regarded by many as blasphemous, but is now regarded as canonical. This is what legal scholarship should do at its best, challenge us to think hard about hard things, even when it is uncomfortable doing so.”
If nothing else, this is a reflection of how sensitive these issues are in the current era, whose voices are allowed to be heard, and the conflicts between different ideologies, even on university campuses.
[Link]
This is a lovely piece about Tony Stubblebine, who, as it rightly says, is doing an excellent job as the new CEO of Medium.
"Under Stubblebine’s direction, Medium, a site known for its many pivots, is finally being strategic about what it wants and where it’s headed. Last year, it launched a Mastodon server for premium users, and in March it demonetized AI-generated content on its platform. It is solidly on the side of team human and is finally starting to see that pay off."
I worked at Medium in 2016-2017, and I've known Tony since 2007. I genuinely like Ev, too, but I think Tony was a fantastic choice of leader, and that's really been borne out in his choices over the last few years. I was particularly happy when Medium launched its own Mastodon instance to check out the network and help give it some clout in certain circles.
"It’s hard not to want to root for Medium. The assumption for more than a decade has been that the way the internet has to work will be determined by what makes the most money for a handful of companies. They wanted us to post content, then they wanted us to share content, then they wanted us to watch it endlessly, and now they want us to use their AI, which will create a bubble we’ll live in forever."
I agree.
[Link]
This is an interesting business model: UK broadcasters are trading unused ad space for equity in digital media startups, turning them into venture-scale investors.
"The move comes as broadcasters continue to face a tough economic downturn where corporate clients have slashed spending on advertising – which is traditionally seen as a bellwether of the economic climate."
The thing about venture investing is that it doesn't have a short time horizon: exits could easily be a decade away. So this is either a deliberately long game or a really short-sighted move on behalf of the broadcasters, who might not be prepared to hold a basket of illiquid assets for that long. Of course, they could presumably sell the equity, but that pressure on the secondary market would have the potential to drive the startups' share prices down. Really, the broadcasters need to hold onto their portfolios.
I'm very curious to see how this plays out. It's definitely an innovative way to use an otherwise illiquid asset (unsold ad space). I want these broadcasters to survive, and I like the ecosystem-building aspect of this, so I hope it all works out for everyone involved.
[Link]
Eric Yuan has a really bizarre vision of what the future should look like:
"Today for this session, ideally, I do not need to join. I can send a digital version of myself to join so I can go to the beach. Or I do not need to check my emails; the digital version of myself can read most of the emails. Maybe one or two emails will tell me, “Eric, it’s hard for the digital version to reply. Can you do that?” Again, today we all spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails and replying to some text messages, still very busy. How [do we] leverage AI, how do we leverage Zoom Workplace, to fully automate that kind of work? That’s something that is very important for us."
The solution to having too many meetings that you don't really need to attend, and too many emails that are informational only, is to not have the meetings and emails. It's not to let AI do it for you, which in effect creates a world where our avatars are doing a bunch of makework drudgery for no reason.
Instead of building better business cultures and reinventing our work rhythms to adapt to information overload and an abundance of busywork, the vision here is to let the busywork happen between AI. It's an office full of ghosts, speaking to each other on our behalf, going to standup meetings with each other just because.
I mean, I get it. Meetings are Zoom's business. But count me out.
[Link]