You’re being watched (and maybe you’re okay with it)


You’re being watched online

I know, because I’m one of the ones watching you.

My day job is in web analytics, which is a euphemistic way of saying “spying on people on the internet.” I work in the private sector rather than for the government, writing code to track users’ actions on companies’ websites. This technology, like everything on the internet, has improved dramatically over the last few years. Sites used to almost exclusively use cookies — small text files, saved to your computer’s hard drive — to store information about you; these could be cleared by deleting the file, and you would be “forgotten.”

While most sites still use cookies to one extent or another, the increased network speeds of the last few years have made it ever more feasible to send your information to an external server that you, the user, have no access to and no ability to clear. The same information is retrieved from that server when you return to the site and can be used to select content and ads the company thinks will be relevant to you. The end goal is “conversion,” which is industry-speak for whatever action it is that the company wants you to perform: buying a product, watching a video, clicking an ad, opening an account, voting for a candidate.
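To make the cookie-versus-server distinction concrete, here’s a toy sketch (every name here is invented; real trackers persist this in databases keyed by account IDs or device fingerprints) of why clearing your cookies doesn’t clear the server’s copy:

```python
# Toy illustration: local cookies vs. a server-side profile.
# Everything here is hypothetical; real systems re-link you via
# databases, login identity, and device fingerprints.

server_profiles = {}  # the tracker's store; users can't touch this

def record_visit(user_id, cookies, action):
    """Log an action both locally (cookie) and server-side."""
    cookies.setdefault("history", []).append(action)
    server_profiles.setdefault(user_id, []).append(action)

cookies = {}
record_visit("user-42", cookies, "viewed: dog food")
record_visit("user-42", cookies, "clicked: pet ad")

cookies.clear()  # the user "clears their cookies"

print(cookies)                     # the local trace is gone
print(server_profiles["user-42"])  # ...but the server remembers everything
```

The only copy the user ever controlled was the local one; the server-side record is what gets used to pick content and ads on the next visit.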

[image caption: You can, of course, beat the facial recognition algorithms by not having a face.]

I’m sure none of this comes as a surprise to most of you, though you might still be surprised by the sheer extent of the surveillance. There are products that watch every action you take on a site, for example, to the extent that they can play your session back for the site owner to watch. And more and more surveillance is piggybacking off of the omnipresent technology in your life — listening to everything that’s spoken in range of your smartphone; watching you from your Xbox One; running facial recognition on every photo you upload.

Should I be worried?

Well, first of all, it depends on what you’re likely to be worried about.

My biggest hipster “in before it was cool” claim is that I’ve never assumed I had privacy on the internet. When I was an “edgy”, recently ex-Mormon fifteen-year-old with a boy-band-style crush on basically all of Rammstein, I restrained myself from spending too much time Googling them because, I mean, what would the people at Google think of me? But my privacy fears have always been mostly about personal embarrassment. I have no truer self than the opinions that I write on social media at 3 AM and then delete before posting; the idea of someone seeing that, seeing me in all my half-baked, armchair-philosopher glory, is horrifying.

If you’re like me, I have good news for you: for all the data-gathering, the likelihood that anyone will actually look at your individual information is next to nothing. You’re one pixel in a pointillist mural, meaningless on your own but useful in the context of the big picture. You’re more likely to suffer at the hands of an under-observed algorithm run amok than to have another human being side-eye your weird porn searches.


On the other end of the spectrum, if you’re knowingly using the internet for something illegal, like buying drugs or sex, you probably don’t have anything to learn from me. You know how to avoid surveillance (as much as is possible). You should be worried and you know it, so you’re probably reading this from behind your VPN or something.

But most people seem to be somewhere in the middle. They’re uncomfortable with the surveillance but often seem hard-pressed to come up with specific, tangible fears. This is changing as more information becomes available about how your data is used, but in order to make my point more clearly, I’m going to distinguish between public and private surveillance. They have similar methods but different goals and, in my opinion, different threats.

The goal of private surveillance is to make money. They track your actions and gather data about you so that they can target you more specifically for products and services they think you’ll want. Or they watch you so that they can sell the information to the people who do want to sell things to you — in either case, your data might be the most valuable economic contribution you don’t know you’re producing.

The goal of public — that is, governmental — surveillance is generally public order. They want to identify threats both to the public and to “civil society” and stop them. This can be benign, like the CDC watching Twitter mentions to try to detect a flu outbreak early, or it can be Orwellian as shit, like when a failing authoritarian state uses phone records to identify participants in a protest.

I might write another post specifically about public surveillance, but it’s not really my area of expertise, so I’m going to mostly leave it alone and focus on the private sector. I’ll also be conflating surveillance and advertising to some extent. That’s because in the private sector, surveillance is for advertising, to make it as effective as possible. You don’t really have one without the other.

Private surveillance is a mixed bag

Private surveillance is some sketchy shit; look no further than the Facebook/Cambridge Analytica mess. Studies have also shown that companies are able to engineer the behaviors of some of their customers, to a greater or lesser extent, based on information learned from surveillance. I read a very interesting book last year, Addiction by Design, that discusses this specifically in the context of casinos.

Most people know that companies are watching them, but we haven’t stormed the gates at Adobe yet. We put up with it, and there are a lot of reasons why. One of them is that the whole mobile ecosystem (and “traditional” web, to a slightly lesser extent) is set up to make it as difficult as possible for you to make informed choices about your privacy. Every app asks for a bunch of permissions to access data on your phone, but they’re completely opaque about how they use that data or what they need it for. Desktop applications mostly have an EULA that contains those details, even if they are buried in legalese; phone apps are just a complete black box.

So it’s pretty fucked up. However, there’s something that often gets lost in this conversation: the positives. Another reason we put up with surveillance is that we benefit from it. Private surveillance makes internet services free, or much cheaper than they would be otherwise, like with MoviePass. Your ability to store accounts and passwords so you don’t have to log in every time you visit a site relies on the same technology as surveillance, and that’s hugely convenient.

Recommendation and prediction algorithms have become extremely sophisticated, too, and will only become more so. With technology that currently exists, if they wanted to, Facebook could:

[image caption: This probably isn’t your dog, but this post is really long and I’m trying to keep people from totally checking out]
  • Figure out that you own a dog, because you post too many photos of it (it’s okay, we all do)
  • Estimate your income, based on your listed job, location, and employment history
  • Estimate how likely you are to splurge on pet supplies, based on things like your demographic data and likely how often you post about your dog
  • Estimate how important certain values are to you, like “natural”ness, based on things like your known political affiliations
  • Then, sell the data to pet food companies so that they can target you with advertising…

…and the thing is, as scary as all that sounds, it would probably recommend exactly the right dog food, exactly the one that you would pick for yourself given all the options. These algorithms aren’t perfect — I’m a vegetarian who lives in Los Angeles but I’m staunchly pro-GMO, which seems to confuse them — but they’re getting better all the time and they actually serve a useful function in a free market.
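A toy version of the kind of scoring pipeline that bullet list describes might look like this (every feature name and weight here is invented for illustration; real ad platforms use machine-learned models over thousands of signals):

```python
# Hypothetical "pet food ad" targeting score, 0-100.
# Features and weights are made up for illustration.

def pet_food_score(profile):
    score = 0
    if profile.get("dog_photos_per_week", 0) > 3:
        score += 40  # clearly owns (and loves) a dog
    if profile.get("estimated_income", 0) > 60_000:
        score += 30  # can probably afford to splurge
    if "natural-living" in profile.get("interests", []):
        score += 30  # responds to "natural"-branded products
    return score

user = {
    "dog_photos_per_week": 7,
    "estimated_income": 75_000,
    "interests": ["natural-living", "hiking"],
}
print(pet_food_score(user))  # 100: a prime target for the pet food ad
```

The unsettling part isn’t the arithmetic, which is trivial; it’s that the inputs are inferred about you rather than volunteered by you.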

So, should you be worried? Basically, it depends on what you’re worried about. But one thing you should remember as you consider these questions is that surveillance, particularly in the private sector, isn’t a monolithically negative thing. Everyone would draw the line between useful and intrusive surveillance in a different place, but you can’t forget that there is a useful side of the spectrum.

I am worried. What can I do?

Let’s say you are worried about surveillance. There are a few different ways you could try to protect yourself. In a nutshell, they are:

Personal habits

This is what my hypothetical internet drug-buyer from earlier uses. There are tools out there that would allow you to stay more or less off the radar: web browsers like Tor, chat applications like WhatsApp, and email services like ProtonMail are (theoretically) secure against both public and private surveillance. There are also lighter-duty options, like the browser extension Ghostery, which lets you block trackers (though it doesn’t, of course, stop Chrome itself from tracking you, which it does, believe me).

One of the problems with this approach is that, really, you’re only as private as your least-private friend. Maybe you don’t have a Facebook account, but your friend posts photos of a party that you were at. Or your friend has your contact information and downloads an app that accesses all of their contacts. Or, as in the Cambridge Analytica example, your friend takes a “what brand of laundry detergent pod are you?” personality test (they were Tide, of course — everyone is Tide) and that app harvests data about everyone they’re connected to, including you.

So it’s not great. You can get pretty good privacy (lol pgp) by policing your own habits, but even if you’re willing to accept a truly massive amount of privacy-focused overhead on everything you do online, you’re not going to be invisible.

Company self-regulation

So, what if you change the market incentives to make companies value privacy more? In theory, if enough users were able to band together and #DeleteFacebook until it promised to get better about handling privacy, you might see some positive change. But I’m really skeptical that this would work, for a couple of reasons:

[image caption: “The Amorphous Threat of the Panopticon”, the latest single by hot new indie band Fisting Foucault]

1. It would be incredibly hard to change the incentive structure like this. People want free shit and don’t care about or understand privacy. The “user data for $$$” model is the reason that most things online are free, and study after study has shown that people are just awful at weighing their choices here; when one side is “totally free music, news, etc.” and the other is “the amorphous threat of the panopticon” — well, we’ve proven over and over what even intelligent, informed people choose. I literally do this for a job and I still can’t make myself care.

2. The technology just makes it too easy to silently, invisibly gather terrifying amounts of data, and there are strong incentives for it to stay that way. Even if we managed to put enough collective pressure on Facebook to get them to change their policies, any Android app developer can still request access to all your contacts on your phone (or your GPS data, etc.) with just a few lines of code. There are reasons why this is the case; software development is always a tug-of-war between velocity (“move fast and break things”) and security. The more secure a technology is, the higher the barrier to entry for building cool shit with it; the less cool shit is built on it, the lower its adoption rate; you end up with a kind of race to the bottom on security standards.

Legal regulations

So if you can’t manage your own privacy, and you probably can’t pressure everyone into self-policing, what about new legislation around privacy? Well, that probably won’t work either. Even the most well-intentioned lawmakers would probably bungle this stuff. Surveillance sits at a really thorny intersection of philosophy, practical concerns, and technical know-how. Politicians are notoriously tech-illiterate, and writing genuinely useful regulations would be particularly difficult for something like this.

If the regulations were too strict, they would be massively disruptive to how the internet currently works: punitive to companies that use (fairly benign) advertising to avoid putting up a paywall, and damaging to web developers’ ability to understand how people use their sites. Meanwhile, if the regulations were too lenient, nothing would change.

Then there’s the political feasibility of even passing regulations in the first place; since lawmakers wouldn’t understand the issues, they would rely on outside experts to guide them, and most of those would probably be industry lobbyists who want to keep things unregulated (not to mention the GOP’s knee-jerk “every regulation is a handcuff on the invisible hand” position).

So it’s impossible to fix?

I mean… yeah, maybe.

As with so many other things, the problems here go to the very root of capitalism, American democracy, and psychology. Companies have a strong economic incentive to gather data and use it in sketchy ways. Politicians are incentivized by those companies (through our broken-ass political system) to allow it — and even if they wanted to, they probably couldn’t effect meaningful, positive change. And all of us with our caveman brains are terrible at perceiving surveillance as a direct threat that requires action; even when people make a whole lot of noise about this, how much really changes? Don’t tell me that you’ve never rolled your eyes at someone you saw working themselves into a frenzy about online privacy.

But I can’t write 2,700 words about this and not at least make a suggestion, so here goes. If I were to design a solution, it probably would be through legislation, and this is what it would look like:

1. Require every website, digital advertising provider, and phone app to make an easily available, plain-English explanation of what data they’re gathering and what they’re doing with it. No legalese or bullshit. This would sound something like “We track your location before and after you go to a movie so that we can advertise restaurants to you.” Fine the absolute living shit out of companies that don’t comply (and, in a perfect world, give that money to the company’s customers).

2. Require all of the above to make all of your data available to you, if you ask. Europe’s GDPR already mandates something like this (the “right of access”).

3. Require that all of the above provide to you, if you ask, a dollar amount of how much your data has been worth to them.

Number 3 is the big one. That’s the only thing I can think of that has any chance of actually changing the average person’s mentality about this. Show us what “free” actually means, then we can finally make an informed decision about whether a service is worth it. I think a related, positive step would be to allow people to opt out of tracking and personalization experiences, possibly in exchange for paying a fee no greater than the average user’s value to the service.

Is any of this feasible? How would we get there? How would this change the internet?


Don’t ask me. I’m just a code monkey. I do have a vision of the future where robots take all the jobs and we end up with a universal basic income that’s paid to you by private companies in exchange for your data, which you then use to buy the products that they advertise to you…

…but enough about my screenplay. I’m gonna go watch the new season of Jessica Jones and try to forget about the fact that Netflix is sketchy too.