This ain’t your granddaddy’s privacy battle.
Times were simpler when postcards were the big privacy invasion scare.
Today, our personal privacy is under siege by veiled government surveillance programs and countless tech-company Trojan horses.
Merriam-Webster defines privacy as the quality or state of being apart from company or observation, or freedom from unauthorized intrusion.
Technical innovations in the past twenty years have blurred the lines of “apart from company” and “unauthorized intrusion,” and now our personal privacy is under attack from multiple fronts.
Our locations are constantly being tracked on our phones, which are borderline inseparable from our bodies. We are under constant surveillance.
Social media platforms know more about us than we should be comfortable with.
Our sensitive information is floating around and being exchanged for a myriad of unauthorized purposes.
Many personal privacy advocates have taken to blockchain and cryptocurrency entrepreneurship to build solutions that address the concerns of our dwindling right to privacy in the digital world.
Technological advancements like blockchain and zero-knowledge proofs have given the pro-privacy debate a new gust of wind. The beauty of these solutions is that they offer encryption, or at least partial obfuscation, at a massive scale.
Privacy coins such as Monero and Zcash give us the freedom to transact without being tracked, but this could come at the prohibitively high cost of empowering and enabling criminal activity.
Blockchain-based browsing and social media platforms like BAT, Steemit, and Sapien offer an escape from a manipulative data-mining browsing and social experience.
The following article explores the evolution of privacy in contemporary society, how the digital world has warped the reality of privacy and the rumbling dangers that come with it, and how blockchain and cryptocurrency projects offer a solution.
A Contemporary Legal History of Privacy
Privacy as we know it is a relatively recent development in human society. Our right to privacy isn’t explicitly stated in our Constitution and has been primarily defined by legal precedents, many of which haven’t accounted for the rapid societal change ushered in by the digital era.
The rise of a private tech oligarchy has created new paradigms in which a slow-moving government bureaucracy continually plays an iron-fisted game of catch-up.
The government is in a precarious position when it comes to dishing out judgments against tech companies. These cases require light but decisive footwork to avoid stepping over and stifling private enterprise, while simultaneously protecting civilians from a very real bogeyman in the dark.
The following are a handful of the legal precedents that have helped to dictate where the United States stands on personal privacy today:
In United States v. Jones (2012), for instance, the Supreme Court held that attaching a GPS tracker to a suspect's Jeep constituted a search. The fact that the technology was physically placed on the Jeep matters, but the line starts to blur. Our locations are constantly being tracked on our smartphones and wearables, and we don't really seem to mind. In fact, it's quite the value-add to navigate the world by opening an app, or to have your watch let you know how much you didn't exercise today.
Here’s where it gets real: a small loophole in the judgments of Smith v. Maryland and United States v. Jones exposes anyone and everyone to mass surveillance. Your autonomy, privacy, and security seem to hang by a thread if the government (or anyone) can gain access to your location history and current location at any moment.
If the companies to which you give your location, thumbprint, and other such information are considered “third parties,” then the government technically should be able to access that information if warranted.
This situation is relevant because it shows that while your data may be currently preserved by whichever third party you’ve entrusted it to, this protection is next on the government chopping block.
Situations such as the FBI versus Apple squabble help illustrate the contest between anonymity and safety. The privacy debate often ends in an unresolved quagmire: a stalemate that, thanks to rapid advancements in technology, drifts inevitably toward the extinction of privacy.
To avoid complicating the issue, let’s use Occam’s razor to split the issue of privacy into two simple camps: for (government) power and for (corporate) profit.
The government’s primary use for surveillance is control, whether that means protecting its citizens from harm or becoming the dystopian authority of Orwell’s 1984.
A corporation’s primary utility for surveillance is to harvest and commoditize the information, whether that be facilitating more profitable advertisements/sales or auctioning off consumer information.
The evolution of data and privacy protection within both groups is interesting, but the case for government power takes the ethical dilemma cake. The search for company profit pales in comparison to the government’s tug of war between its duty to protect its citizens and its duty to uphold their rights.
Uncle Sam likely doesn’t give a shit if you bought a slow cooker on Amazon, nor does he want to upsell you a cookbook based on your browsing behavior.
A government has a responsibility to keep its citizens safe, and surveillance and data monitoring have become a critical tool to keep the criminal underworld at bay.
The reality is that the world can be a nasty place, and not everyone wants to hold hands and sing Kumbaya. Human trafficking, child pornography, and terrorism are just a few of the unfortunate realities that governments around the world try to stop, with moderate success. Without some sort of public surveillance, the government’s ability to stop the bad guys is substantially undermined.
The guiding question presents itself: how do we keep power (money, resources) away from the bad guys, and simultaneously keep the good guys from infringing on our privacy?
According to a 2016 statement by the Assistant Secretary of the Treasury for Terrorist Financing, Daniel Glaser, ISIL (ISIS) raised a whopping $360 million in revenue per year from taxing, extorting, and other activities.
This money was being used to fund its day-to-day activities, as well as to support ISIS terrorist cells around the world. The majority of this money is likely fiat and can potentially be confiscated or throttled once tracked. The faster the money gets traced, the slower terrorism can spread, and lives are potentially saved.
However, what if ISIS were to make use of cryptocurrency, an often untraceable monetary asset that can be sent in massive sums from anywhere to anywhere at any time? The ability to send an untraceable amount of money nearly instantly anywhere in the world is an attractive feature of private cryptocurrency but could be catastrophic if utilized by criminals.
Privacy projects are decentralized and don’t have a central authority to shut down any illicit activity. As you can imagine, this poses an enormous issue for counter-terrorism units. Granting the government the ability to track our transactions in exchange for saving our lives seems like a more than fair deal, but it’s a poor hedge against an omnipotent totalitarian regime in the future.
One side of the financial tracking debate views privacy coins as dangerous enablers of chaos and disorder, and rightfully so.
The other side of the debate views privacy coins as what could potentially be our last beacon for future generations’ sovereignty, and rightfully so.
The ability to spend our hard-earned income as we please, within reason, is a critical component of our personal autonomy, and limiting it would throttle our existence.
The more popular examples hover around transactional privacy and include privacy coins such as Monero, Zcash, Dash, and PIVX. The nucleus of the privacy feature is the use of stealth addresses, encryption, or some other sort of identity masking feature to disguise the identity of the user(s).
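To make the identity-masking idea above concrete, here is a minimal Python sketch of the *concept* behind stealth addresses: every payment to the same recipient lands at a fresh, one-time address, so an outside observer cannot link transactions together. The function name and the hash-based derivation are illustrative only; real privacy coins like Monero derive one-time addresses with elliptic-curve math, not a bare hash.

```python
import hashlib
import secrets

def one_time_address(recipient_pub: bytes, tx_nonce: bytes) -> str:
    """Derive a one-time 'stealth' address for a single payment.

    Each payment mixes the recipient's public key with a fresh random
    nonce, so the on-chain address is different every time and cannot
    be linked back to the recipient by an observer. (Hashing stands in
    here for the elliptic-curve derivation real coins use.)
    """
    return hashlib.sha256(recipient_pub + tx_nonce).hexdigest()

recipient = b"recipient-public-key"
addr1 = one_time_address(recipient, secrets.token_bytes(16))
addr2 = one_time_address(recipient, secrets.token_bytes(16))

# Two payments to the same person produce two unlinkable addresses.
assert addr1 != addr2
```

The recipient, who knows the corresponding private key, can still scan the chain and recognize which one-time addresses belong to them; everyone else sees only unrelated strings.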
“Privacy may actually be an anomaly”
Today’s companies seem to know us better than we know ourselves; like a creepy neighbor that’s always trying to make enough small talk to sell you something.
There’s little we can do, or should do, to stop businesses attempting to make a profit, but the rapid advances in data collection and audience targeting could have scary unintended consequences.
Companies like Google or Facebook don’t technically sell your data, but they do make it available in ad networks to advertisers that use their ad-buying tools – and generate some meaty profit doing so.
The better data a company has, the more informed sales, marketing, and advertising decisions it can make. Instead of throwing ad spaghetti on a wall and hoping something sticks, advertisers can tailor messages to a specific targeted audience. Since these ads are more relevant to these audiences, they are more likely to purchase the good or service.
“Data is used to better serve more relevant ads. I just got an ad for dog toys, which is great because I spoil my dog. If there wasn’t any data to use, I could be getting something way less relevant like ads for discount oil changes from a repair shop across the country.”
While data will always play an essential role in the consumer economy, social media has increased the ability to collect data and raised the rate of collection to unprecedented levels. Since the transition happened in the wake of the enormous value-add of social media, the average person hasn’t really been bothered by how much of their data is constantly being collected.
“People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time.”
The danger of online companies luring you into new comfort zones and collecting your data is deeper than merely trying to sell you stuff. The peril lies when these large pools of data are mismanaged and fall into the hands of malicious third parties.
In May 2018, an Oregon couple was at home talking about hardwood floors. The husband received a phone call from one of his employees in Seattle, who said he had received an email containing the full conversation. The couple’s Amazon Echo, Amazon’s “smart speaker,” had recorded the conversation and sent it to him.
Amazon’s explanation of the situation was as follows:
“Echo woke up due to a word in background conversation sounding like ‘Alexa.’ Then, the subsequent conversation was heard as a ‘send message’ request. At which point, Alexa said out loud ‘To whom?’ At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, ‘[contact name], right?’ Alexa then interpreted background conversation as ‘right’. As unlikely as this string of events is, we are evaluating options to make this case even less likely.”
While this story alone should be unsettling for anyone with a smart device in their home, that’s just the tip of the iceberg.
All things considered, this could have gone much worse. Once it hears its wake word, Alexa, the Echo activates and starts sending a recording to Amazon’s computers. Woe to be named Alex or Alexa and have an Echo.
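The mechanics behind that false trigger are easy to sketch: the device only streams audio after it hears its wake word, but the match is fuzzy, so anything *similar enough* to “Alexa” opens the mic. The snippet below is a toy illustration of that fuzziness (the `wakes` function and its 0.75 threshold are invented for this example, not Amazon’s actual detection pipeline, which works on audio rather than text).

```python
import difflib

WAKE_WORD = "alexa"

def wakes(word: str, threshold: float = 0.75) -> bool:
    """Return True if `word` is similar enough to the wake word.

    Real smart speakers do acoustic matching on audio; comparing
    spelled-out words with difflib just illustrates why near-misses
    like a name in conversation can open the microphone.
    """
    ratio = difflib.SequenceMatcher(None, word.lower(), WAKE_WORD).ratio()
    return ratio >= threshold

print(wakes("alexa"))   # True  — the intended trigger
print(wakes("alex"))    # True  — a nearby name false-triggers
print(wakes("carpet"))  # False — unrelated words are ignored
```

A stricter threshold would cut false wakes but also miss genuine ones, which is exactly the trade-off Amazon’s statement alludes to.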
As was revealed in the Snowden leaks, the National Security Agency has been able to secretly hack into the main communication links between Google and Yahoo data centers and potentially collect the data from hundreds of millions of user accounts.
What if hackers managed to extract what could be millions of conversations from Amazon’s database?
If this type of coordinated Internet of Things hacking sounds a bit far-fetched, think again.
Lappeenranta is a city in eastern Finland and is home to around 60,000 people. In late October 2016, hackers launched a Distributed Denial of Service (DDoS) attack on the heating systems, leaving the residents of at least two housing blocks without heat in subzero weather.
Now imagine a hack at the scale of millions of IoT devices for intimate conversations/videos, or worse, forcing every smart speaker to play DJ Khaled at the same time.
Unless you were living under a rock in 2018 (you may have been better off!), you’ve probably heard of the Facebook-Cambridge Analytica data scandal.
The scandal revolved around the personally identifiable information of over 87 million Facebook users that was sold to politicians to potentially influence voters’ opinions.
The majority of the information was harvested through personality quizzes that required users to check a box granting the page or site access to everything from their profile information to that of their friends.
To users fueled by a frantic need or pure boredom, this was a bargain.
Lo and behold, millions of profiles ended up in the hands of Cambridge Analytica. The information likely contained the public profile, page likes, and birthdays of users, as well as access to users’ news feeds, timelines, and messages. Cambridge Analytica would then create psychographic profiles of the data subjects, which may have been used to create the most effective advertising that could influence a particular individual for a political event.
The politicians and campaigns who purchased the information were behind the 2015 and 2016 campaigns of Donald Trump and Ted Cruz, as well as the 2016 Brexit vote.
An important distinction many people blur is that the Facebook-Cambridge Analytica scandal wasn’t a hack. People voluntarily consented to give up their information for something as innocuous as a quiz. However, just a glimpse behind the scenes of the impacts and movements of the data economy is all it takes to unnerve a nation.
Even worse, the credit reporting agency Equifax was actually hacked for even more sensitive information (social security numbers, birth dates, addresses, etc.) of 143 million Americans in 2017.
So, not only do we not know who potentially has our information, but this information can directly be used to pry open our bank accounts, take out loans, and make purchases in our name.
In the boardrooms of any publicly traded company, like Facebook and Google, a major conflict of interest exists between maximizing shareholder value and safeguarding their users’ data.
With $39.94 billion and $95.38 billion in advertising revenue respectively in 2017 alone, it’s not hard to imagine scenarios where Facebook and Google may have tipped the scales towards profit.
Although the looming threat of advertisers cashing in on our privacy is concerning, the actual danger still lies in third parties that can and will use this information with bad intentions.
Up until now, anyone concerned with their personal privacy has faced a dauntingly uncomfortable decision: put up with it and live a normal life, or forego the luxuries afforded by the Internet and social media and go off the grid.
Anonymity- and data-privacy-focused blockchain projects aim to keep your online activity, account information, and browsing behavior from unknowingly falling into corporate coffers, personal-information data markets, or the hands of malicious third parties.
One such project, the Basic Attention Token (BAT), helps power and incentivize the use of its anonymity-focused browser. BAT’s Brave browser utilizes smart contracts to allow advertisers to send ads with locked payment tokens directly to users. Users can then use their earned BAT on several things like premium articles and products, donations to content creators, data services, or high-resolution pictures.
BAT, and many other projects with Facebook and Google in their scopes, have business models that revolve around replacing the third-party intermediary component of ad networks. As a result, platforms can offer a browsing or social experience without collecting or storing extensive personal data.