
AP to launch sister organization to fundraise for state, local news

"Governed by an independent board of directors, the 501(c)3 charitable organization will help AP sustain, augment and grow journalism and services for the industry, as well as help fund other entities that share a commitment to state and local news."

Fascinating! And much needed.

I'm curious to learn how this fits into other fundraising efforts, like the $500M Press Forward initiative for local news that was announced last year.

I do also have a question about whether all this centralized philanthropy is sustainable. What happens to these newsrooms if the foundation dollars go away? Are they incentivized to find their own business and fundraising models, or does this create a kind of dependence that might be harmful in the long run?

My hope, of course, is that these efforts are the shot in the arm that journalism needs, and that the newsrooms which receive this funding will be sustainable and enduring. It's certainly lovely to see the support.

[Link]

· Links · Share this post

 

Law enforcement is spying on thousands of Americans’ mail, records show

[Drew Harwell at the Washington Post]

"Postal inspectors say they fulfill [requests from law enforcement to share information from letters and packages] only when mail monitoring can help find a fugitive or investigate a crime. But a decade’s worth of records, provided exclusively to The Washington Post in response to a congressional probe, show Postal Service officials have received more than 60,000 requests from federal agents and police officers since 2015, and that they rarely say no."

I wish this were surprising. Something similar seems to have gone on in every trusted facet of American life: from cell phone providers to online library platforms to license plate readers on the roads. It's all part of an Overton window shift toward pervasive surveillance that has been ongoing for decades.

Senator Ron Wyden is right to be blunt:

“These new statistics show that thousands of Americans are subjected to warrantless surveillance each year, and that the Postal Inspection Service rubber stamps practically all of the requests they receive.”

We shouldn't accept it. And yet, by and large, we do.

[Link]


 

The Future of Fashion Commerce Is a Designer's AI Bot Saying You Look Great and Your Personal AI Bot Sifting Through the Bullshit

[Hunter Walk]

"The best commerce platforms will be constantly grooming you, priming you, shaping you to buy. The combination of short-term and long-term value that leads to the optimal financial outcome for the business."

I think this is inevitably correct: the web will devolve into a battle between different entities who are all trying to persuade you to take different actions. That's already been true for decades, but it's been ambient until now; generative AI gives it the ability to literally argue with us. Which means we're going to need our own bots to argue back.

Hunter's analogy of a bot that's supposedly in your corner calling bullshit on all the bots trying to sell things to you is a good one. Except, who will build the bot that's in your corner? Why will it definitely be so? Who will profit from it?

What a spiral this will be.

[Link]


 

Why does moral progress feel preachy and annoying?

[Daniel Kelly and Evan Westra in Aeon]

"Many genuinely good arguments for moral change will be initially experienced as annoying. Moreover, the emotional responses that people feel in these situations are not typically produced by psychological processes that are closely tracking argument structure or responding directly to moral reasons."

This is a useful breakdown of why arguments for social progress encounter so much friction, and why the first emotional response may be to roll our eyes. It's all about our norm psychologies - and some people have stronger reactions than others.

As the authors make clear here, people who are already outside of the mainstream culture for one reason or another (immigration, belonging to a minority or vulnerable group, and so on) already feel friction from the prevailing norms being misaligned with their own psychology. For those whose psychology is aligned with the prevailing norms, change is that much harder.

But naming it is at least part of the battle:

"Knowing this fact about yourself should lead you to pause the next time you reflexively roll your eyes upon encountering some new, annoying norm and the changes its advocates are asking you to make. That irritation is not your bullshit detector going off."

Talking about these effects, and understanding their origins, helps everyone better understand their reactions and get to better outcomes. Social change is both necessary and likely to happen regardless of our reactions. It's always better to be a person who celebrates progressive change rather than someone who creates friction in the face of it.

[Link]


 

Systems: What does a board of directors do?

[Anil Dash]

"I realize that most people who've never been in the boardroom have a lot of questions (and often, anxieties) about what happens on a board, so I wanted to share a very subjective view of what I've seen and learned over the years."

This is great, and jibes with my experiences both being on boards and supporting them as a part of various organizations.

The most functional boards I've seen do what Anil describes here: they're pre-briefed and ready to have a substantive discussion in a way that pushes the organization forward. Board meetings have a heavy reporting component, for sure, but the discussion and working sessions are always the most meaningful part.

This is also often true, and a challenge:

"I believe in the structure of a board (usually along with some separate advisors) to help an organization reach its fullest potential, in much the same way as I believe in governments having separate branches with separate forms of accountability and appointment. In practice, having nearly all-powerful executives select the membership of the organization that's meant to hold them accountable tends to fail just as badly in business or non-profits as it does in governments."

The board meetings I've attended that are the most robust and open to discussion and genuine debate have also been the ones attached to the most successful companies. I don't think it's quite causation, but rather two things that come from a particularly pragmatic attitude towards running a business: one where outside perspectives and differences of opinion are a strength, not a threat.

[Link]


 

I Will Piledrive You If You Mention AI Again

[Nikhil Suresh at Ludicity]

"This entire class of person is, to put it simply, abhorrent to right-thinking people. They're an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they've learned their lesson, a prison I'm fundraising for."

I enjoyed this very much.

Here's the thing, though: I don't think what Nikhil wants will happen.

I mean, don't get me wrong: it probably should. The author is a leader in his field, and his exasperation at the hype train is well-earned.

But it's not people like Nikhil who actually make the decisions, or invest in the companies, or make the whole industry (or industries) tick over. Again: it should be.

What happens again and again is that people who see that they can make money out of a particularly hyped technology leap onto the bandwagon, and then market the bandwagon within an inch of everybody's lives. Stuff that shouldn't be widespread becomes widespread.

And here we are again with AI.

This is exactly right:

"Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain."

And this:

"It did not end up being the crazy productivity booster that I thought it would be, because programming is designing and these tools aren't good enough (yet) to assist me with this seriously."

There is work that will be improved with AI, but it's not something that most industries will have to stop everything and leap on top of. The human use cases must come first with any technology: if you have a problem that AI can solve, by all means, use AI. But if you don't, hopping on the hype train is just going to burn you a lot of money and slow your actual core business down.

[Link]


 

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

[Dave Maass and Cooper Quintin at EFF]

"When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats."

As the EFF points out, ALPR systems often run on vulnerable software - and even when they don't, indiscriminate collection violates the security principle of gathering only the information you need. Information security and data strategy are not core law enforcement skillsets, and the software these agencies buy is often oversold.

As the EFF explains:

"That partially explains why, more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023."

The use of these tactics seems uncontrolled - perhaps this is one area where legislation could help.

[Link]


 

Social-Media Influencers Aren’t Getting Rich—They’re Barely Getting By

[Sarah E. Needleman and Ann-Marie Alcántara at the Wall Street Journal]

"Earning a decent, reliable income as a social-media creator is a slog—and it’s getting harder. Platforms are doling out less money for popular posts and brands are being pickier about what they want out of sponsorship deals."

For many kids, "influencer" has become the new "sports star": in enormous numbers, it's what they want to be. And if you dare to say that it's not a real job, you're likely to be drowned out by complaints and contradictions.

But it isn't, and this article makes it clear:

"Last year, 48% of creator-earners made $15,000 or less, according to NeoReach, an influencer marketing agency. Only 13% made more than $100,000."

Of course, some people really did shoot to fame and are doing very well. But there aren't many MrBeasts or Charli D'Amelios in this world, and the lure of fame has trapped less lucky would-be influencers in cycles of debt and mental illness.

This is despite having sometimes enormous followings: hundreds of thousands to millions of people, with hundreds of millions of views a month. The economics of the platforms are such that even at those numbers, you can barely scrape by.

I like the advice that, instead, you should cultivate a genuine expertise and use social media to promote offsite services you provide around that. It might be that a following can land you a better job, or help you build up a consultancy. Trying to make money from ads and brand sponsorships is a losing game - and thousands of people are losing big.

[Link]


 

Sharing Openly About ShareOpenly

[Alan Levine at CogDogBlog]

"ShareOpenly breaks the door even wider than sharing to Mastodon, and I intend to be using it to update some of my examples listed above. Thanks Ben for demonstrative and elegant means of sharing."

Thank you, Alan, for sharing!

There's more to come on ShareOpenly - more platforms to add, and some tweaks to the CSS so that the whole thing works better on older devices or smaller phone screens. It's a simple tool, but I'm pleased with how people have reacted to it, and how it's been carried forward.

There are no terms to sign and there's nothing to sign up for; adding a modern "share this" button to your site is as easy as following a few very simple instructions.
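For what it's worth, the underlying mechanism is just a URL: the sharer passes the page address and optional text as query parameters. Here's a minimal Python sketch, assuming the shareopenly.org endpoint accepts `url` and `text` parameters - check ShareOpenly's own instructions for the canonical format:

```python
from urllib.parse import urlencode

def shareopenly_url(page_url: str, text: str = "") -> str:
    """Build a ShareOpenly share link for a page.

    Assumes the service accepts `url` and `text` query parameters
    at /share/ - confirm against ShareOpenly's own instructions.
    """
    return "https://shareopenly.org/share/?" + urlencode({"url": page_url, "text": text})

# A site would typically put this URL behind a "share this" icon or anchor tag.
link = shareopenly_url("https://example.com/post", "A post worth sharing")
```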

[Link]


 

Succor borne every minute

[Michael Atleson at the FTC Division of Advertising Practices]

"Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods."

The FTC gets involved in the obviously rife practice of overselling the capabilities of AI services. These are solid guidelines, and hopefully the precursor to more meaningful action when vendors inevitably cross the line.

While these points are all important, for me the most pertinent is the last:

"Don’t violate consumer privacy rights. These avatars and bots can collect or infer a lot of intensely personal information. Indeed, some companies are marketing as a feature the ability of such AI services to know everything about us. It’s imperative that companies are honest and transparent about the collection and use of this information and that they don’t surreptitiously change privacy policies or relevant terms of service."

It's often unclear how much extra data is being gathered behind the scenes when AI features are added. This is where battles will be fought and lines will be drawn, particularly in enterprises and well-regulated industries.

[Link]


 

United Airlines seat ads: How to opt out of targeted advertising

[Michael Grothaus at FastCompany]

"United Airlines announced that it is bringing personalized advertising to the seatback entertainment screens on its flights. The move is aimed at increasing the airline’s revenue by leveraging the data that it has on its passengers."

Just another reason why friends don't let friends fly United. We should all be reducing our air travel overall anyway, given the climate crisis, and in a world where we all fly less, shouldn't we choose a better experience?

This sounds like the absolute worst:

"United believes its advertising network will be appealing to brands because “there is the potential for 3.5 hours of attention per traveler, based on average flight time.”"

Passengers from California, Colorado, Connecticut, Virginia, and Utah can opt out of having their private information used to show them targeted ads for the duration of what sounds like an agonizing flight. Passengers from other US states are out of luck - at least until their legislatures also pass reasonable privacy legislation.

Other airlines are removing seat-back entertainment to cut weight and save fuel, so on top of the baseline climate impact of the air travel industry, there's a real additional climate implication here. Planes with seat-back entertainment, in general, burn more fuel; United is making a revenue decision with all kinds of negative impacts, and it shouldn't be rewarded for it.

[Link]


 

Perplexity AI Is Lying about Their User Agent

[Robb Knight]

Perplexity AI doesn't use its advertised browser string or IP range to load content from third-party websites:

"So they're using headless browsers to scrape content, ignoring robots.txt, and not sending their user agent string. I can't even block their IP ranges because it appears these headless browsers are not on their IP ranges."

On one level, I understand why this is happening, as anyone who's ever written a scraper (or scraper mitigations) might: the crawler used for training the model likely does send the correct user agent string, but on-demand page fetches likely don't, to avoid being blocked. That's not a good excuse at all, but I bet that's what's going on.

This is another example of the core issue with robots.txt: it's a handshake agreement at best. There are no legal or technical restrictions imposed by it; we all just hope that bots do the right thing. Some of them do, but a lot of them don't.
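That handshake nature is visible even in Python's standard library, where checking robots.txt is an explicit, voluntary step keyed off a user agent string the client itself supplies. A sketch with hypothetical rules:

```python
from urllib.robotparser import RobotFileParser

# robots.txt is advisory: a crawler must *choose* to consult it.
# Hypothetical rules disallowing one named bot from an entire site:
rp = RobotFileParser()
rp.parse([
    "User-agent: PerplexityBot",
    "Disallow: /",
])

# A compliant crawler checks before fetching and backs off...
assert not rp.can_fetch("PerplexityBot", "https://example.com/article")
# ...but the same request under a generic browser string is allowed,
# because no rule matches it - and nothing verifies the string anyway.
assert rp.can_fetch("Mozilla/5.0", "https://example.com/article")
```

Nothing in the protocol stops a client from skipping the check entirely, which is exactly the behavior Robb documents.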

The only real way to restrict these services is through legal rules that create meaningful consequences for these companies. Until then, there will be no sure-fire way to prevent your content from being accessed by an AI agent.

[Link]


 

Pentagon ran secret anti-vax campaign to incite fear of China vaccines

[Chris Bing and Joel Schechtman at Reuters]

"The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk."

Reading this, it certainly seems indefensible, although unfortunately not out of line with other US foreign policy efforts. Innocent people died because of this US military operation.

It's a reflection of the simple idea, which seems to have governed US foreign policy for almost a century, that foreign lives matter less in the quest for dominance over our perceived rivals.

Even if you do care about America more than anywhere else, this will have hurt at home, too. The internet being what it is, it would also make sense that these influence campaigns made their way back to the US and affected vaccine uptake on domestic soil.

The whole thing feels like the military equivalent of a feature built by a novice product manager: someone had a goal that they needed to hit, and this was how they decided to get there. But don't get me wrong: I don't think this was an anomaly or someone running amok. This was policy.

[Link]


 

On being human and "creative"

[Heather Bryant]

"What generative AI creates is not any one person's creative expression. Generative AI is only possible because of the work that has been taken from others. It simply would not exist without the millions of data points that the models are based upon. Those data points were taken without permission, consent, compensation or even notification because the logistics of doing so would have made it logistically improbable and financially impossible."

This is a wonderful piece from Heather Bryant that explores the humanity - the effort, the emotion, the lived experience, the community, the unique combination of things - behind real-world art that is created by people, and the theft of those things that generative AI represents.

It's the definition of superficiality, and as Heather says here, living in a world made by people, rooted in experiences and relationships and reflecting actual human thought, is what I hope for. Generative AI is a technical accomplishment, for sure, but it is not a humanist accomplishment. There are no shortcuts to the human experience. And wanting a shortcut to human experience in itself devalues being human.

[Link]


 

The Encyclopedia Project, or How to Know in the Age of AI

[Janet Vertesi at Public Books]

"Our lives are consumed with the consumption of content, but we no longer know the truth when we see it. And when we don’t know how to weigh different truths, or to coordinate among different real-world experiences to look behind the veil, there is either cacophony or a single victor: a loudest voice that wins."

This is a piece about information, trust, and the effect that AI is already having on knowledge.

When people said that books were more trustworthy than the internet, we scoffed; I scoffed. Books were not infallible; the stamp of a traditional publisher was not a sign that the information was correct or trustworthy. The web allowed more diverse voices to be heard. It allowed more people to share information. It was good.

The flood of automated content means that this is no longer the case. Our search engines can't be trusted; YouTube is certainly full of the worst automated dreck. I propose that we reclaim the phrase "pink slime" to encompass this nonsense: content that's been generated by a computer at scale in order to get attention.

So, yeah, I totally sympathize with the urge to buy a real-world encyclopedia again. Projects like Wikipedia must be preserved at all costs. But we have to consider whether all this will result in the effective end of a web where humans publish and share information. And if that's the case, what's next?

[Link]


 

Microsoft Refused to Fix Flaw Years Before SolarWinds Hack

[Renee Dudley at ProPublica]

"Former [Microsoft] employee says software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others."

This is a damning story about profit over principles: Microsoft failed to close a major security flaw that left the government (alongside other customers) vulnerable because it wanted to win their business. This directly paved the way for the SolarWinds hack.

This doesn't seem to have been covert or subtext at Microsoft:

"Morowczynski told Harris that his approach could also undermine the company’s chances of getting one of the largest government computing contracts in U.S. history, which would be formally announced the next year. Internally, Nadella had made clear that Microsoft needed a piece of this multibillion-dollar deal with the Pentagon if it wanted to have a future in selling cloud services, Harris and other former employees said."

But publicly it said something very different:

"From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds."

It will be interesting to see what the fallout of this disclosure is, and whether Microsoft and other companies might be forced to behave differently in the future. This story represents business as usual; without external pressure, it's likely that nothing will change.

[Link]


 

Calm Company Fund is taking a break

[Calm Company Fund]

"Inhale. Exhale. Find the space between… Calm Company Fund is going on sabbatical and taking a break from investing in new companies and raising new funds. Here’s why."

Calm Company Fund's model seems interesting: it's a revenue-based investor that makes its return from its portfolio companies' earnings, but it still uses a traditional VC structure to derive its operating budget. That means it runs on a very small percentage of the funds committed by its Limited Partners, rather than sharing in the success of its portfolio (at least until much later, when the companies begin to earn out).

That would make sense in a world where the funds committed were enormous, but revenue-based investment tends to raise smaller fund sizes. So Calm Company Fund had enough money to pay for basically one person - and although the portfolio was growing, the staff size couldn't scale up to cope.
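Some rough, purely illustrative arithmetic (these are not Calm Company Fund's actual figures) shows how the management-fee model pinches at small fund sizes:

```python
# Illustrative numbers only - not Calm Company Fund's actual figures.
fund_size = 10_000_000       # a small revenue-based fund
mgmt_fee_rate = 0.02         # a typical ~2% annual management fee
annual_budget = fund_size * mgmt_fee_rate   # roughly one salary plus overhead

portfolio_companies = 40     # grows every year the fund keeps investing
budget_per_company = annual_budget / portfolio_companies

print(f"Operating budget: ${annual_budget:,.0f}/year")
print(f"Support capacity: ${budget_per_company:,.0f}/company/year")
```

The fee stays flat while the portfolio - and the support it needs - keeps growing.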

So what does an alternative look like? I imagine that it might look like taking a larger percentage of incoming revenue as if it were an LP itself. Or maybe this kind of funding simply doesn't work with a hands-on firm, and the models that attract larger institutional investors are inherently more viable (even if that isn't always reflected in their fund returns).

I want something like this to exist, but the truth is that it might live in the realm of boring old business loans - and venture capital likely exists precisely because of the risks involved in those sorts of companies.

[Link]


 

These Wrongly Arrested Black Men Say a California Bill Would Let Police Misuse Face Recognition

[The Markup]

"Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole reason for a search or arrest. Instead it would require corroborating indicators."

Even with mitigations, face recognition will lead to wrongful arrests: so-called "corroborating indicators" don't address the fact that the technology is racially biased and unreliable, and may in fact provide justification for using it.

And the stories of this technology being used are intensely bad miscarriages of justice:

“Other than a photo lineup, the detective did no other investigation. So it’s easy to say that it’s the officer’s fault, that he did a poor job or no investigation. But he relied on (face recognition), believing it must be right. That’s the automation bias that’s been referenced in these sessions.”

"Believing it must be right" is one of the core social problems widespread AI is introducing. Many people think of computers as coldly logical, deterministic thinkers. In reality, systems carry the underlying biases of the people who built them and, in the case of AI, of the vast amounts of public data used to train them. False positives are bad in any scenario; in law enforcement, they can destroy or even end lives.

[Link]


 

Justice Alito Caught on Tape Discussing How Battle for America ‘Can’t Be Compromised’

"Justice Samuel Alito spoke candidly about the ideological battle between the left and the right — discussing the difficulty of living “peacefully” with ideological opponents in the face of “fundamental” differences that “can’t be compromised.” He endorsed what his interlocutor described as a necessary fight to “return our country to a place of godliness.” And Alito offered a blunt assessment of how America’s polarization will ultimately be resolved: “One side or the other is going to win.”"

If what's at stake in the upcoming election wasn't previously clear, this makes it so. This is a Supreme Court justice, talking openly, on tape, about undermining the rights of people in favor of a Biblical worldview.

It's easy to see this sort of rhetoric as the dying gasps of the 20th century trying to claw back regressive values that we've mostly moved away from. But to do so is to discount it; we have to take this seriously.

It's a little bit heartening to hear that Chief Justice Roberts - also a big-C Conservative - felt differently and held to a commitment to the Constitution and the workings of the Court. But in light of a far-right majority composed of Alito, Clarence Thomas, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett, it's not heartening enough.

[Link]


 

ORG publishes digital rights priorities for next government

"Open Rights Group has published its six priorities for digital rights that the next UK government should focus on."

These are things every government should provide. I'm particularly interested in point number 3:

"Predictive policing systems that use artificial intelligence (AI) to ‘predict’ criminal behaviour undermine our right to be presumed innocent and exacerbate discrimination and inequality in our criminal justice system. The next government should ban dangerous uses of AI in policing."

It's such a science fiction idea - so obviously flawed that Philip K. Dick wrote a story about it, and there's a famous movie about how bad it is - and yet police forces around the world are trying it.

I'd hope for more than an Open Rights Group recommendation: predictive policing should be banned, everywhere, as an obvious human rights violation.

The other things on the list are table stakes. Without those guarantees, real democratic freedom is impossible.

[Link]


 

Study finds 1/4 of bosses hoped RTO would make staff quit

"The findings suggest the return to office movement has been a poorly-executed failure, but one particular figure stands out - a quarter of executives and a fifth of HR professionals hoped RTO mandates would result in staff leaving."

Unsurprising but also immoral: these respondents believed that subsequent layoffs were undertaken because too few people quit in the wake of return to office policies.

This quote from the company that conducted the survey seems obviously true to me:

"The mental and emotional burdens workers face today are real, and the companies who seek employee feedback with the intent to listen and improve are the ones who will win."

It's still amazing to me that so many organizational cultures are incapable of following through with this.

[Link]


 

Former Politico Owner Launches New Journalism Finishing School To Try And Fix All The ‘Wokeness’

"There’s an ocean of problems with journalism, but the idea that there’s just too damn much woke progressivism is utter delusion. U.S. journalism generally tilts center right on the political spectrum."

This is a story about the founder of Politico creating a "teaching hospital for journalists" that appears to be in opposition to "wokeness". But it's also about much of the state of incumbent journalism, which is still grappling with the wave of much-needed social change that is inspiring movements around the world.

"In the wake of Black Lives Matter and COVID there was some fleeting recommendations to the ivy league establishment media that we could perhaps take a slightly more well-rounded, inclusive approach to journalism. In response, the trust fund lords in charge of these establishment outlets lost their [...] minds, started crying incessantly about young journalists “needing safe spaces,” and decided to double down on all their worst impulses, having learned less than nothing along the way."

Exactly. Asinine efforts like anti-woke journalism schools aren't what we need; we need better intersectional representation inside newsrooms, we need better representation of the real stories that need to be told across the country and across the world, and we need to dismantle institutional systems that have acted as gatekeepers for generations.

All power to the outlets, independent journalists, and foundations that are truly trying to push for something better. The status quo is not - and has not been - worth preserving.

[Link]


 

What if we worked together

"Remember! If you only signed up to hear when this feature is available, or you're wondering what ActivityPub even is: This probably is not the newsletter for you. This is a behind-the-scenes, engineering-heavy, somewhat-deranged build log by the team who are working on it."

And I love it.

Ghost's newsletter / blog about building ActivityPub support into its platform is completely lovely, and the kind of transparent development I've always been into. Here it's done with great humor. Also, they really seem to be into pugs, and that's cool, too.

In this week's entry the team is investigating using existing ActivityPub libraries and frameworks rather than building the whole thing from scratch themselves - and doing it with no small amount of humility.

And they're building a front-end to allow bloggers to consume content from other people who publish long-form writing to the web using ActivityPub. I'm excited to see it take shape.

[Link]


 

A Link Blog in the Year 2024

"After 17 [years] of using Twitter daily and 24 years of using Google daily neither really works anymore. And particular[ly] with the collapse of the social spaces many of us grew up with, I feel called back to earlier forms of the Internet, like blogs, and in particular, starting a link blog."

Yay for link blogs! I've been finding this particularly rewarding. You're reading a post from mine right now.

Kellan wrote his own software to do this, based on links stored in Pinboard. Mine is based on Notion: I write an entry in markdown, which then seeds integrations that convert the bookmark into an HTML post on my website and various text posts for social media.
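As a purely hypothetical sketch (the function name and structure here are invented, not the actual Notion integration described above), the final rendering step of a pipeline like this might look something like:

```python
import html

def render_link_post(title: str, url: str, commentary_md: str) -> str:
    """Render a bookmark plus markdown commentary as a minimal HTML link post.

    A hypothetical sketch of a pipeline's last step - a real setup would go
    through Notion's API and a full markdown converter. Here, blank-line-
    separated blocks simply become paragraphs.
    """
    paragraphs = "".join(
        f"<p>{html.escape(block.strip())}</p>"
        for block in commentary_md.split("\n\n")
        if block.strip()
    )
    heading = f'<h2><a href="{html.escape(url, quote=True)}">{html.escape(title)}</a></h2>'
    return f'<article class="link-post">{heading}{paragraphs}</article>'

post = render_link_post(
    "A Link Blog in the Year 2024",
    "https://example.com/linkblog",
    "Yay for link blogs!\n\nThey make the web feel personal again.",
)
```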

Simon Willison has noted that adding markdown support has meant he writes longer entries; that's been true for me, too. It's really convenient.

Most of all: I love learning from the people I connect with, follow, and subscribe to. Particularly in a world where search engines are falling apart as a way to really discover new writers and sources, link blogs are incredibly useful. It's lovely to find another one.

[Link]


 

AI Lobbying Group Launches Campaign Defending Tech

"Chamber of Progress, a tech industry coalition whose members include Amazon, Apple and Meta, is launching a campaign to defend the legality of using copyrighted works to train artificial intelligence systems."

I understand why they're making this push, but I don't know that it's the right PR move for some of the wealthiest corporations in the world to push back on independent artists. I wish they were actually reaching out and finding stronger ways to support the people who make creative work.

The net impression I'm left with is not support of user freedom, but bullying. Left out of the equation is the scope of fair use, which is painted here as being under attack by the artists rather than by large companies seeking to use people's work for free to build products that will make them billions of dollars.

The whole thing is disingenuous and disappointing, and is likely to backfire. It's particularly sad to see Apple participate in this mess. So much for bicycles of the mind.

[Link]
