Lifelogging under fascism

How self-tracking became self-incrimination

The quantified self movement encouraged everybody to measure their life through sensors and self-tracking. The term was coined by WIRED in 2007, and the movement really came into its own by 2011, with its own conferences, blogs, meetups, and so on. When the Apple Watch hit the market in 2015, with its slick fitness rings and activity tracking, it had the benefit of learning from a whole spectrum of wearable sensors that enthusiasts had been prototyping for years.

People interested in quantified self used wearable tech to record data about their physical activity, biometrics, mood, health, and sleep. This was all stored in one place for easy access and, theoretically at least, experimentation. The idea was that by measuring your life, you could more easily make changes to become healthier and more productive. It was all about optimization.

While quantified self involved quantitative data that was typically private and optimization-focused, lifelogging was its qualitative equivalent, typically public and sharing-focused. People would check into every location they visited with apps like Foursquare (later Swarm), take photos of their day, and so on. Some people wore a camera, or installed one in their homes, and recorded or even live-broadcast everything they did. Many people took their quantified self data and published it online; the core ethos was one of living life in public.

Beyond hitting my fitness goals every day on my Apple Watch, I was never really into the deeper quantified self movement. But I was a heavy Foursquare / Swarm user. I have many thousands of check-ins across multiple countries. I can still go back and look at my Swarm map to see how many US states I’ve visited (47). It still shows where I used to hang out when I lived in Scotland, and restaurants I ate at in other countries around the world. I also used it to learn about new places to go; if someone whose taste I trusted kept going back to the same cocktail bar, I knew I wanted to try it too.

But both movements feel different in 2025. We live in a world where warrantless surveillance is well-documented, where ICE agents are smashing their way into people’s cars in order to seize and deport people with no due process, and where threats of using the military on US soil are becoming clearer (with the National Guard already deployed against protestors in cities like LA). Outside of the US, Britain just detained 466 people who were protesting the war on Gaza. Hard-won freedoms are under attack.

At the same time, widespread surveillance is becoming easier and more prolific. In the US, legislation like the CLOUD Act allows a user’s data to be obtained by law enforcement without their involvement, and often without notification. (Service providers might, at their own discretion, notify you if your data is the subject of a civil subpoena; they won’t if it’s the subject of a criminal subpoena.) In the UK, the government has been hard at work trying to establish backdoors into encrypted data, making its intention to surveil the population clear. In both countries, agencies can and often do purchase data straight from service providers, bypassing the court system entirely.

In this environment, lifelogging and many quantified self activities backed by cloud services amount to creating more surveillance for you to be tracked by. The old anti-privacy cry of “I have nothing to hide!”, which was always nonsense, rings even more hollow when your political alignment or liking an Instagram post can put you at risk of surveillance and worse. Those thousands of location check-ins that once felt like harmless social sharing now represent a detailed surveillance profile that law enforcement can potentially access without my knowledge.

As soon as Trump was sworn in for a second term, I stopped sharing my location with tools like Swarm. On the other hand, I’ve continued to use my Apple Watch as a fitness band, measuring my activity every day, and have so far maintained a year-long streak of hitting my fitness goals. Being healthier feels particularly important as an older father with a young son. And because Apple Fitness data is encrypted both in transit and at rest, and remains fully under my control, I feel reasonably secure in continuing to use it.

Which makes me wonder about sousveillance. (I am fun at parties.)

Whereas surveillance is close observation that is usually conducted by an authority like an employer, the police, or the government, sousveillance is conducted by members of the public on an authority. People filming police using their iPhones is an example of sousveillance. It is a necessary part of equiveillance, where a person can counter potentially incriminating evidence gathered by surveillance with evidence they’ve gathered themselves using sousveillance.

While lifelogging in public just gives authorities more data to mine, what if it could instead maintain a record for use in the event of an unlawful detention? For example, we know that police often misuse their mandated body cameras, turning them off when force is used and deleting footage. If we all had our own footage that lay outside of the police’s control, those abuses would lose much of their power.

Imagine a device that lifelogged and stored quantified self data using strong, key-based encryption. You could access the information yourself for your own purposes, but it would remain fully encrypted and behind multi-factor login protection. If you needed to, you could release footage to predefined trusted parties: family members, a trusted member of the community, organizations like the ACLU, or trusted newsrooms as a tip. If you didn’t get a chance to do that before being detained or otherwise incapacitated, a set number of trusted people (three or five, perhaps) could agree to release your footage by taking action on their own devices together, a bit like needing multiple people to launch a nuclear missile. Every time footage was released, that action would be indelibly logged, as a guard against abuse.
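That “several trusted people must agree” mechanism isn’t hand-waving: threshold cryptography is a well-studied way to build it. Here’s a minimal sketch, in Python, of Shamir’s Secret Sharing, one standard scheme for splitting a decryption key into five shares such that any three can reconstruct it while two or fewer reveal nothing. The function names and parameters are illustrative, not a real product’s API.

```python
# Illustrative sketch: 3-of-5 threshold release via Shamir's Secret Sharing.
# A real device would share an encryption key (encoded as integers, possibly
# in chunks); here the "key" is just a random number below the field prime.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic happens mod PRIME

def split_secret(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them can recover it."""
    assert 0 <= secret < PRIME and 1 < k <= n
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the polynomial's constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# A 3-of-5 policy: give one share to each of five trusted people.
# Two or fewer shares reveal nothing about the key.
key = secrets.randbelow(PRIME)
shares = split_secret(key, k=3, n=5)
assert recover_secret(shares[:3]) == key  # any three shares suffice
assert recover_secret(shares[2:]) == key  # ...any three at all
```

Each trusted person’s share would live on their own device; the “turn your keys together” ceremony is just three of them submitting their shares.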

The device would hold a multi-day charge and sync data to cloud storage via a connection to a phone (safely, because the data would remain fully encrypted). There would be a companion app to easily log qualitative data like location check-ins. (The latter could perhaps be shareable with a group of trusted friends using something like the Signal Protocol.) It would be based on open protocols, with an open source design, so that any manufacturer could make one and the security of the system could be fully audited.
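The “safely, because the data would remain fully encrypted” clause is the crux: the cloud should only ever hold ciphertext, with keys that never leave the user’s devices. A minimal sketch of that encrypt-before-sync step, assuming Python and the widely used cryptography package; the record fields are invented for illustration:

```python
# Illustrative sketch: encrypt each record on-device before cloud sync,
# so the storage provider only ever sees opaque ciphertext.
# Requires the third-party `cryptography` package.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, record: dict) -> bytes:
    """AES-256-GCM authenticated encryption with a fresh nonce per record."""
    nonce = os.urandom(12)  # standard 96-bit GCM nonce
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(record).encode(), None)
    return nonce + ciphertext  # the nonce isn't secret; ship it with the blob

def decrypt_record(key: bytes, blob: bytes) -> dict:
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(AESGCM(key).decrypt(nonce, ciphertext, None))

key = AESGCM.generate_key(bit_length=256)  # stays on the user's devices
blob = encrypt_record(key, {"type": "checkin", "venue": "a cocktail bar",
                            "lat": 55.95, "lon": -3.19})
# `blob` is what syncs to the cloud; without `key`, it reveals nothing
# beyond its size and timing.
assert decrypt_record(key, blob)["type"] == "checkin"
```

Subpoenaing the cloud provider then yields only ciphertext, and the decryption key could itself be the secret split among trusted parties in the sketch above.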

The data would be usable in all the ways quantified self and lifelogging data has always been useful to a person: as a way to optimize your health, remember where you’ve been, and even learn from your friends. But it would also be a black box that could provide provable, indelible information if you were unlawfully detained or if someone took action against you.
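The “provable, indelible” release log could start with something as simple as a hash chain, where each entry commits to everything before it. A local chain alone isn’t enough (the device would also need to publish its latest head hash to those same trusted parties, so the log can’t be quietly rewritten wholesale), but here’s an illustrative sketch:

```python
# Illustrative sketch: a tamper-evident release log as a hash chain.
# Any edit to a past entry breaks every hash that follows it.
import hashlib
import json
import time

def append_entry(log: list[dict], event: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash in order; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if body["prev"] != prev or hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "footage released to ACLU contact")
append_entry(log, "footage released as a tip to a newsroom")
assert verify_chain(log)

log[0]["event"] = "nothing was released"  # an attempted cover-up...
assert not verify_chain(log)              # ...is immediately detectable
```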

We have a choice when we build technology. We can take the easy way out and build something optimized for growth that doesn’t necessarily keep users safe — which, in turn, can be actively used against them in the hands of an adverse regime. Or we can build it to be truly user-first, with their well-being in mind. Many of the user experience details will be the same. The difference is in intention, care for the user, and ultimately, the values of the developer. We’ve entered an age where protecting users is more important than ever before, and isn’t an abstract idea; for many people, particularly from the most vulnerable communities, it can be life or death.