Chernobyl and Privacy: Are We Talking About the Right Thing?

[Image: the abandoned ferris wheel at Chernobyl]

Last night, my wife and I finished watching HBO’s poignant mini-series “Chernobyl”. The series explores the Chernobyl nuclear accident and its causes, both human and scientific, and demonstrates how the dynamics of Soviet bureaucracy and its clandestine mentality exacerbated and compounded the effects of the accident and the response that followed.

Without spoiling too much in case you are thinking about watching “Chernobyl” (which you should!), both the accident itself and, more importantly, the error-filled effort to contain it were rooted in Soviet hierarchy. In short, people were told to do their jobs whether or not it was safe for the worker or the surrounding community. Pushing back against a superior was something that simply did not happen. Even worse, after the accident, as scientists and researchers tried to get to the bottom of what happened, both to stop the spread of radiation across Europe and to prevent future nuclear accidents, the KGB spied on them to ensure the government talking point that “Soviet nuclear reactors are safe” wasn’t betrayed by scientific proof. As the investigators’ phones were tapped and some investigators were jailed, the truth was stifled, preventing action aimed at fixing other, still-active nuclear reactors. There are plenty of movies that show KGB agents spying on their own citizens, but “Chernobyl” profoundly displays the real-world consequences of an oppressive monitoring regime on people trying to do the right thing.

As I sat thinking about the series, Maciej Cegłowski’s excellent article “The New Wilderness,” published on his blog Idle Words, came to mind. In this article, Cegłowski proposes that when it comes to online privacy we are perhaps not talking about the right thing at all, and that failing to focus on the core issue might lead us toward a society with consequences not too dissimilar to those depicted in “Chernobyl”. Sounds pretty dramatic, doesn’t it? Let me explain.

Cegłowski starts his piece with two wonderful anecdotes. First, in May 2019, Google CEO Sundar Pichai penned an op-ed in the New York Times stating that it is “vital for companies to give people clear, individual choices around how their data is used.” The page hosting that op-ed was embedded with multiple Google tracking scripts. Second, in March 2019, Facebook CEO Mark Zuckerberg wrote an op-ed in the Washington Post calling for Congress to pass privacy laws similar to Europe’s GDPR. That editorial carried a number of Facebook tracking pixels. Cegłowski posits:

No two companies have done more to drag private life into the algorithmic eye than Google and Facebook. Together, they operate the world’s most sophisticated dragnet surveillance operation, a duopoly that rakes in nearly two thirds of the money spent on online ads. You’ll find their tracking scripts on nearly every web page you visit. They can no more function without surveillance than Exxon Mobil could function without pumping oil from the ground.

So why is it that Google and Facebook, who benefit more than anyone from collecting and using data on individuals, are so interested in talking about privacy? Cegłowski argues that it is because we are talking about the wrong kind of privacy. When Google, Facebook, and Congress talk about privacy, they are talking about protecting our data from falling into the wrong hands, not about the act of collecting it in the first place.

In the eyes of regulators, privacy still means what it did in the eighteenth century—protecting specific categories of personal data, or communications between individuals, from unauthorized disclosure. Third parties that are given access to our personal data have a duty to protect it, and to the extent that they discharge this duty, they are respecting our privacy. The question we need to ask is not whether our data is safe, but why there is suddenly so much of it that needs protecting.

To answer that question, we instead need to talk about a different kind of privacy, one that Cegłowski defines as “ambient privacy.” The idea of ambient privacy is that everyday interactions with others, whether at home, church, work, school, or in our free time, should stay outside the reach of monitoring and “should pass by unremembered.” These interactions “do not belong in the permanent record,” nor should they need to be available in a deposition.

Yet we live more and more in a world where data collection offers no meaningful opt-in or opt-out: facial recognition at the airport, being tagged by others on social media, being in the presence of always-on Alexa microphones.

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.

To bring it back to “Chernobyl,” and to my dramatic question of whether the proliferation of data collection and the growing lack of privacy could lead to an eventual Soviet-style, oppressive spy regime, Cegłowski says:

My own suspicion is that ambient privacy plays an important role in civic life. When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.


(Post 280)