Mance, H. (2019, July 18). Is privacy dead? Financial Times. https://www.ft.com/content/c4288d72-a7d0-11e9-984c-fac8325aaa04
- The problem with “mutually assured surveillance” is that power is not equally shared. More and more, those with power bend the laws to their own whim, or simply ignore them as inconvenient gnats to be swatted aside. Surveillance may capture police misdeeds, but how many of those officers actually face the same repercussions that regular citizens would? And how many of them get some form of immunity, or deference to the assumption that they were “just trying to do their job”? Witness how many people in the US are currently being deported against court orders, or grabbed off the streets and out of courtrooms without any chance to invoke due process! Power is distributed asymmetrically, and without power, surveillance cannot be acted upon.
- Privacy is a hindsight problem only for those who have always had it. I grew up in a house where my parents decided to remove my bedroom door so that I would have no privacy in my own room, where showers could be interrupted by somebody flinging aside the curtain with no warning, where all access to outside media (newspapers, books, TV, movies, Internet, radio) was strictly monitored and controlled, where even schooling was delivered in the home and friends were generally only seen at church. Where even imagination was monitored and adjudicated to be permissible or evidence of demonic possession. Privacy is not a hindsight problem to me. It is a foresight problem. From my perspective, everybody else is suffering from a lack of foresight, imagination, experience, or all of the above.
- Google may say the right words about us being in control of our data, but notably, they reserve the right to use that data at their whim among their own properties. Don’t listen to their words; read their terms of service.
- “Informed consent” is often not informed at all. How many people just click “Accept” to move on without reading anything? That they chose to move forward didn’t make their consent “informed”! Companies don’t have to put a pile of legalese in front of us; it’s entirely possible to have a plain language terms of service saying what kinds of things you will and won’t do, and why. “We need the right to ‘republish’ the content you submit so that we can display your uploaded avatar and journal posts to others. We won’t do anything else with it without your express permission.” That would be simple and honest! Doesn’t take five pages of legal boilerplate.
- You know what could be done? Mandatory de-identification upon data export from any system, or any time the purpose changes. If you have home surveillance cameras, you should be able to view the feed yourself, within that system. Export the data to post something on YouTube, and all faces are automatically blurred, timestamps scrubbed, etc. Any time a company is bought out by a competitor, all customer data is automatically wiped (and customers can choose whether to re-enter their data in the new company system). If police need to see something from your home cameras, they can come over and videotape the screen.
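To make that idea a little more concrete, here is a rough sketch of what “de-identify on export” could look like, using Python and OpenCV to blur detected faces and rewrite the clip into a fresh file without its original metadata. The file names and the choice of face detector are placeholders for illustration, not a claim about how any real camera system or mandate would work.

```python
# Sketch of "de-identification on export": blur any detected faces and write
# the result to a new container, dropping the source file's metadata.
# Assumes the opencv-python package is installed; paths are hypothetical.
import cv2

def export_deidentified(src_path: str, dst_path: str) -> None:
    # Stock frontal-face Haar cascade bundled with OpenCV; a real system
    # would likely use a stronger detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # Writing into a brand-new file also discards the original's embedded
    # timestamps and camera details, covering the "timestamps scrubbed" part.
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            # Heavy Gaussian blur over each detected face region.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 0
            )
        out.write(frame)

    cap.release()
    out.release()

# Example (hypothetical file names):
# export_deidentified("driveway_cam.mp4", "for_youtube.mp4")
```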
You know, I administer systems absolutely stuffed with personal data. Not just in a business context! I also work with a non-profit serving a community with many people who would face incredibly adverse consequences if their membership in the community became known. That could be loss of housing, loss of employment, or worse. We need personal data in order to secure events and address problem individuals, but privacy is fiercely guarded. Our community entrusts us with their information, sometimes very reluctantly, and it’s a responsibility I have to take seriously.
I’ve occasionally considered my ethical obligations if I were directed to move that data to an open, insecure, unaudited platform, or if the organization decided to share that data for marketing purposes or something of that sort. Is my greater obligation to the organization that legally collected the data and on whose behalf I manage the system, or to the community who entrusted their personal information to us for a specific purpose in a specific context? Would I be prepared to unilaterally erase the database and face whatever consequences came from that action? Could I hide behind the cloak of “Well, I didn’t know for sure that this would happen…”, or would that just be a convenient lie to make myself feel better?
The answer will always depend on the exact circumstances, and so I can never know for sure in advance what my decision would be. But it’s something I regularly ask myself, even so.