The EFF we need now

Why the next era of digital civil liberties requires a tighter mission, a bolder strategy, and a clearer view of how power works.

[Image: Surveillance cameras watching a Black Lives Matter protest. Photo by EFF, released under a CC Attribution 2.0 Generic license.]

I used to walk past a nondescript grey office building at the intersection of 2nd and Folsom in San Francisco. It’s the kind of corporate architecture that litters every city but nobody really loves: featureless grey walls stretch to the sky, interrupted by the occasional column of sliding windows. At its feet, a brick courtyard. Trees line the sidewalk but somehow only serve to highlight the lifelessness of the construction.

I walked past it every day, making my way through the San Francisco gloom to catch a BART train home. I used to think nothing of it.

The internet backbone — the high-speed fiber-optic cables and routers that carry data across countries and continents — is where the internet's heavy lifting happens. Despite the internet’s theoretically decentralized design, this is vital infrastructure, operated by major commercial, governmental, and educational organizations. It's where ISPs plug in to get the connectivity they provide to the rest of us: almost all internet traffic flows along it.

In that nondescript grey building, across the street from a liquor store, a device in room 641A was intercepting the traffic from the internet backbone and sending it straight to the NSA. As it turned out, it was just one of many secret rooms set up to spy on global internet traffic.

This was a clear civil liberties violation: the NSA and AT&T were actively collaborating on a wide-scale, illegal program to wiretap and mine the communications of American citizens. The Electronic Frontier Foundation filed a class action lawsuit, Hepting v. AT&T, on behalf of AT&T customers. Ultimately, it was dismissed after a Bush-era Act of Congress gave American communications companies retroactive immunity for conducting mass surveillance. But filing the case exposed the scale of warrantless surveillance to the public: a pivotal moment that fundamentally changed how Americans understood the government’s abuse of their trust and privacy.

That was back in 2006. The internet is orders of magnitude larger now, and far more influential. However you feel about him, it’s hard to argue that President Trump’s second administration would be in place without the ways social media has reshaped our public discourse. And the first few months of that administration were very much of the internet: it hired meme-pilled software engineers and inserted them into government institutions, exposing federal databases to proprietary LLMs and threatening the privacy of every American. LLMs themselves have become load-bearing in our economy, while their vendors seek to connect them to data about our businesses, our personal lives, and our civil societies.

Technology is now so embedded in daily life, so entangled with power, that digital civil liberties are civil liberties. It’s a world where police reports are written by AI using software designed to avoid public accountability; where networked public safety sensors are being used to surveil activists; where technology is increasingly used to ratchet up authoritarian agendas. We need the EFF more than ever.

So what would it look like for EFF to meet this moment head-on? Understanding that the way technology intersects with society has evolved, and knowing that its executive director Cindy Cohn is stepping down, I thought I’d explore — hypothetically — how I would lead it into its next era.

A focused mission

If I were leading the EFF into its next chapter, the first thing I’d do is narrow its mission to the core civil liberties most under threat today. This isn’t because I believe the organization should do less, but because the threats themselves have evolved. The EFF’s foundational battles were fought against wiretaps, data retention laws, corporate surveillance, and NSA overreach. Those battles still matter. But the nature of power has shifted.

In 2006, surveillance required government action and corporate complicity: a secret room, a communications company that could be compelled to cooperate, a classified program. Today, surveillance is a product you can buy off the shelf. It’s no longer a back-room conspiracy; it's a business model.

Axon sells AI-generated police reports designed to evade transparency. When asked how to identify which reports were AI-written, the company admitted there's “no filter” for that. Flock Safety operates nationwide license plate readers used to track protest activity; more than 50 agencies searched the network hundreds of times targeting activists in 2024. Clearview AI scrapes billions of faces. Palantir automates deportations. The surveillance market is booming, and government agencies are eager customers.

Foundation model companies — OpenAI, Anthropic, Google, Meta — are positioning themselves as the infrastructure layer beneath all of it, building AI systems into every consequential decision about our lives while the Trump administration experiments with connecting federal databases directly to these proprietary systems.

The result is a world where private companies hold the levers of surveillance. The most intrusive forms of data extraction don’t need secret rooms in AT&T buildings anymore; all you need is API access or a subscription to a data broker. The systems that shape public life — from policing to immigration to healthcare to elections — are increasingly governed by software owned by a handful of corporations, which only they can inspect or question. And governments have learned to exploit this privatized surveillance infrastructure to sidestep constitutional protections that would limit them if they built these systems themselves.

EFF's stated mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. I don’t think that goes far enough. In this era, much technology has become a tool for the powerful to erode the civil rights of the vulnerable in service of their own wealth and power. To rise to meet this new threat, the EFF’s mission should be sharpened to confront it directly and to explain exactly how the organization operates.

Perhaps something like: To defend civil liberties in the digital age through law, code, and advocacy.

These three strands, in turn, could be explained as follows:

  • Law: Strategic litigation to establish precedents that constrain how technology can be used against people's rights, from forcing transparency in algorithmic decision-making to protecting encryption from backdoors to defeating mandates that expand surveillance.
  • Code: Technical infrastructure for accountability and freedom. That means tools that let people protect themselves (like Privacy Badger and Certbot), systems that let researchers audit AI in government (such as investigative frameworks for exposing products like Axon’s Draft One), and support for decentralized protocols that resist centralized control. (A sketch of the first category follows this list.)
  • Advocacy: Public campaigns, coalition building, and policy development to reshape how technology intersects with power. This can include storytelling about systems designed to be invisible, movement-building as a force multiplier against abuses of technology, and building strong partnerships with adjacent organizations and grassroots communities to make sure the EFF’s work is effective and rooted in real needs.
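
To make the “code” strand concrete: Privacy Badger’s documented approach is heuristic rather than list-based, blocking a third-party domain once it has been observed tracking across three or more sites. Here’s a minimal sketch of that idea in Python. It’s an illustration of the heuristic only, not EFF’s actual implementation (the real tool is a browser extension), and all the names in it are mine.

    from collections import defaultdict

    TRACKING_THRESHOLD = 3  # Privacy Badger's documented threshold

    # Maps each third-party domain to the set of first-party sites
    # on which it has been observed loading resources.
    sites_seen = defaultdict(set)

    def observe(first_party: str, third_party: str) -> None:
        """Record that third_party appeared on a page served by first_party."""
        sites_seen[third_party].add(first_party)

    def should_block(third_party: str) -> bool:
        """Block once the domain has been seen tracking across enough sites."""
        return len(sites_seen[third_party]) >= TRACKING_THRESHOLD

    # Example: a tracker seen on three unrelated sites gets blocked.
    for site in ("news.example", "shop.example", "blog.example"):
        observe(site, "tracker.example")
    assert should_block("tracker.example")

The design choice matters: a heuristic like this needs no central blocklist to maintain, which keeps the tool independent of any gatekeeper deciding what counts as a tracker.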

A unified threat model

To make that mission real, the EFF needs a coherent, modern understanding of how power operates today: a threat model that doesn’t carve the world into separate issues like “privacy,” “free speech,” “innovation,” or “AI,” but sees all of them as manifestations of the same underlying problem. This model should anchor everything the organization does, across law, code, and advocacy.

As I’ve tried to describe above, I think the current threat can be summarized as follows:

Technology is increasingly being used as a force multiplier for institutional power, at the expense of the civil rights and freedoms of people around the world.

It needs to be more than just a headline, of course. Translating it into action means articulating a detailed, unified threat model that explains where power is accumulating, how technology is being used to accumulate it, which rights are most at risk, and where these risks are most urgent.

It should be continually developed through three core mechanisms:

  • Analysis: the EFF employs experts across law, code, and activism, each of whom should be expected to stay on top of current threats and articulate how they have changed.
  • Research: it should be in constant communication with researchers and experts at similar organizations, research centers, and universities, and should draw on that expertise and new research.
  • Community: it should also be in constant communication with the people most affected by oppressive policies and the misuse of technology. It’s one thing to have an academic understanding of the current landscape; it’s quite another to know the people most affected, understand their lives holistically, and surface effects that would be invisible from a purely analytical vantage point. Qualitative research is vital.

This model should be updated at least annually and published openly. It should be clear to any new reader how the organization actively uses it to make decisions. Most importantly, it should tie every strand of EFF’s work back to the same core question: how is technology being used to undermine rights and freedoms, and how can we stop it?

By grounding its litigation, technical research, and public advocacy in a shared analytic framework, the EFF can move from a collection of related projects to a strategically unified organization: one that understands the full connected system of repression through technology.

This also gives the EFF a mandate to be more selective as threats evolve. Not every tech policy debate requires EFF's voice. Patent reform, while important, doesn't fit today’s threat model as directly as accountability for AI in policing.

For example, it’s worth re-evaluating the EFF’s position on AI training data in light of the current threat model. The organization has argued against licensing requirements for training, framing them as barriers to innovation. But if the threat is centralized AI power, the biggest risk isn’t to other AI companies that might find it harder to compete with licensing restrictions in place. It’s to the people whose data and work the models are trained on, in service of increasingly centralized power. With a clearer threat model, it becomes more obvious that any policies and activities need to center them.

Unified priorities

From the unified threat model, you can derive priorities that cut across law, code, and advocacy. 

Based on the model I outlined above, one might derive the following strategic priorities:

  • Building accountability infrastructure for automated systems in government. When AI makes consequential decisions about core elements of civic life like policing, immigration, benefits, or housing, those systems must be transparent and challengeable. The Axon investigation model needs to scale: mandatory disclosure requirements, public registries, standardized audit processes, enforcement with actual consequences.
  • Preventing centralization of AI capabilities. Foundation models are becoming infrastructure for power. This requires both fighting for decentralized alternatives and imposing constraints on concentrated control, including the data practices and exclusive partnerships that entrench monopolies.
  • Defending the infrastructure of dissent. Surveillance of organizing attacks the First Amendment directly. When Flock Safety can track activists across 50+ agencies, democratic organizing is under threat. Protect encryption, anonymity, secure communications, and the right to organize without surveillance. (I’ve used dissent rather than speech deliberately; it ties more directly to the specific threats in the current model and demands a more focused response than broader First Amendment advocacy.)

From there, you’d break each priority down into activities across law, code, and advocacy.

To make this more concrete, let’s take the infrastructure of dissent as an example, and examine some of the EFF’s existing activities through this lens.

  • Law: the EFF and the ACLU jointly filed a lawsuit to prevent the San Jose Police Department from conducting warrantless searches of “the stored records of millions of drivers’ private habits, movements, and associations”. The EFF has also filed amicus briefs defending the right to criticize government on social media, and helped defeat a California bill requiring age verification (which would have created infrastructure for tracking who accesses what content). And it filed a FOIA lawsuit to determine whether “officials crossed the line from communicating with tech companies into unconstitutional coercion” when websites and apps designed to report on ICE activities were removed.
  • Code: The EFF already works on tools like Privacy Badger and Certbot, which protect web users in two different ways: Privacy Badger prevents spying in the browser, while Certbot makes it easier for web services to encrypt their traffic. With a dissent focus, it could also build and support tools actively designed to keep protesters safe in the field (see the sketch after this list).
  • Advocacy: The EFF publishes work like Surveillance Self-Defense, a guide to protecting yourself from surveillance using secure software and smart personal digital security practices, which already includes safety information about attending protests and a protest pocket guide. A dissent focus could lead it to provide more guidance on keeping attendees safe when organizing a protest, or on publishing dissenting material online in ways that are both safe for the author and resistant to censorship.
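
For a flavor of what field-safety tooling could look like, here’s a hypothetical sketch in Python that strips location and device metadata (EXIF) from a photo before it’s shared, the kind of protection protest-safety guides recommend. It assumes the Pillow imaging library, and it’s an illustration of the category, not an existing EFF tool.

    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-encode only the pixel data, dropping EXIF tags such as
        GPS coordinates, timestamps, and camera serial numbers."""
        img = Image.open(src_path)
        clean = Image.new(img.mode, img.size)  # same size, no metadata
        clean.putdata(list(img.getdata()))     # copy pixels only
        clean.save(dst_path)

    # Hypothetical usage: clean a photo before posting it publicly.
    strip_metadata("protest.jpg", "protest_clean.jpg")

A real field tool would wrap this kind of logic in something a non-technical protester can actually use under stress, which is exactly the design work an organization like the EFF is positioned to do well.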

I could walk through similar breakdowns for the other priorities, showing how building accountability for automated systems requires litigation forcing disclosure, investigative frameworks for auditing, and campaigns for transparency laws; or how preventing AI centralization requires challenging monopolistic practices, supporting open alternatives, and advocating for interoperability. But the pattern should be clear: each priority integrates law, code, and advocacy, all anchored in the same threat model.

Of course, each of those should be shaped through the analysis, research, and community threads I identified earlier. A tool or a piece of advocacy needs to be rooted in the real needs of real communities and produced in partnership with them. Open source and participatory models of engagement are crucial ways not just to multiply the impact of the organization’s work, but to ensure that it doesn’t become blinkered by its own internal culture.

I don’t know what I don’t know

Of course, none of this is easy. I don’t run the EFF. Legal work, building open source communities, and growing coalitions all take time, effort, and resources. There are also practical realities I haven’t fully addressed: I’ve deprioritized protecting innovation in the mission, and doing so could jeopardize funding from some big-dollar donors. The people who know how to run the EFF best are the people who already do.

But I do think this moment demands focus and a more confrontational mission, and sometimes an outside perspective can help sharpen that.

In 2006, the NSA tapped the internet backbone in rooms across the country like the one at 2nd and Folsom. The EFF filed a lawsuit, exposed the program, and changed how Americans understood surveillance. Today’s threat is more diffuse: not one grey building but thousands of companies selling surveillance products; not one secret program but a thriving marketplace. The cumulative risk is not just the immediate threat to civil liberties, but that these technologies will establish a new normal, ratcheting surveillance and oppression ever upward.

Strategic resistance is more critical than ever. The EFF has so much in its favor: legal expertise, technical capability, public trust, and thirty-five years of hard-won precedent. What it needs now is the strategic clarity to deploy those tools where they'll matter most: not reacting to every threat, but taking a systems-thinking approach to how power operates through technology, and fighting that system directly.

It’s a truly great, historically important organization. I think it can meet the moment. And we need it to.