The Freedom Paradox

By Jennifer Highbridge

In May 2017, the Economist ran the following headline: “The world’s most valuable resource is no longer oil, but data.” ♫Oil, that is, black gold, Texas tea♫. The analogy between data and oil is surprisingly apt. In 2006, data science pioneer Clive Humby said, “Data is the new oil. It’s valuable, but if unrefined, it cannot really be used.” Raw data is not usable information; it is the bubbling crude, qualitative and quantitative facts and figures waiting to be distilled into precious information. Even the dominance of today’s tech giants over the data economy echoes the smothering grip Standard Oil held over its market in the early 20th century. Amazon, for example, nets half of all dollars spent online in America, and Google and Facebook accounted for almost all the revenue growth in American digital advertising in 2016.

In 2011, American households owned 1.17 cars per licensed driver. When we love our machines, we love ourselves, and America’s love affair with cars and oil has come at a cost: safety traded away, countless lives lost in foreign wars, and a compromised ecological environment, all in exchange for the ease and rapid transit of the personal automobile. Teslas don’t run on gas; they run on electricity and on vast lakes of data generated and collected by the machine itself. Freedom is never free, and the value of “free to use” services built on massive data collection has come at the price of an unprecedented breach of privacy. The “God’s eye view” of Alphabet, Facebook, Microsoft, Amazon, and Apple has deeply complicated what could simply be regulated in the past; ‘breaking up’ these companies is an outdated paradigm for managing the power they control. Virtually everything we do leaves a digital footprint that can and will be harvested by those made privy to it. The events that unfold on virtual platforms have become influentially indistinguishable from the events of the world outside them. Like a Tesla, the larger mechanism of the data economy feeds on, and then applies, the data it naturally produces. Political elections are lost and won on Facebook and Twitter, GoFundMe crowdfunding has become a major, if not the primary, source of healthcare relief in the United States, and careers are started and ended on video-sharing social networks like TikTok.

The modern reliance on algorithmic systems marks a vitally important change in the human experience. Extremely personalized virtual interoperability has fundamentally changed, and will continue to change, social norms, forms of governance, and the ethical landscape of how we interact with our environment. The price we pay for these ‘free,’ futuristic services is an obligatory sacrifice of our privacy; if you are not paying for the product, you are the product. The interesting thing is that most of us know this is happening and remain complicit, myself included. In this paper, I hope to explain how the proprietary internet began, offer a personal reflection on consuming algorithmic media, and finally discuss the ethical issues that arise as autonomy undergoes this societal rebranding.

A good way to explain how this began is to look at Netflix’s adjustment of its customer value proposition during the dot-com boom of the late 1990s. Amid the chaos of new internet companies, Netflix made a dangerous bet to amend the terms of its already working subscription service without extensive testing or customer analysis. Instead of allowing subscribers to keep and exchange up to four movies a month, subscribers would be allowed to keep three movies at a time and exchange them whenever they wanted. Unlimited is a great marketing tool, “all you can eat”… “smorgasbord”… Americans love buffets. Unlimited is also a hefty promise, and Netflix began to struggle with its asset utilization: half of the inventory actively being rented consisted of popular new movies that were already being heavily marketed by studios and theaters. These movies were expensive to acquire and keep in supply, crippling Netflix’s margin model of keeping the cost of serving each subscription below the revenue it generated. Netflix’s business formula was wholly reliant on the satisfaction of its customers, and the company understood that it could not simply revoke its promise of unlimited rentals. So Netflix’s adaptation was to balance customer demand. By developing a proprietary recommendation system, the company could internally steer its customers toward the older, lesser-known, and surplus movies in its catalog by collecting user data and personalizing each subscriber’s recommendations. Instead of using editorial blurbs to promote the same movies to every user, the recommendation software first gave a survey to every new Netflix account, asking the subscriber to rate a collection of films and genres. The key to the system’s success was the filter placed between the raw results of the recommendation engine and what was actually shown to customers. The recommendations shown to subscribers were accurate but curated to ensure proper inventory management and to mitigate customer frustration: unavailable movies were screened out, popular movies and new releases were obscured, and movies that could ship overnight were shown first. The shift to a personalized recommendation system was revolutionary, and by 2006, new releases made up only 30% of Netflix’s total rentals, 40 percentage points lower than at traditional video rental outlets. Now Netflix’s business relies primarily on its multi-billion-dollar recommendation system being as accurate as possible. The same goes for YouTube, Instagram, TikTok, and any other advertising-driven media service, especially when revenue is directly related to how long the service can keep you on the page. According to the Kaiser Family Foundation, screen usage rates among teenagers have doubled to a startling 7.5 hours a day. I am scared of what this addiction to technology means for my generation, and of how little I can do to change the automated systems that pimp our time.
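That filter layer is easier to picture in code. The sketch below is purely illustrative, not Netflix’s actual system; every field, weight, and function name is a hypothetical stand-in for the kinds of rules described above.

```python
# Illustrative sketch only -- not Netflix's actual system. It shows the idea of a
# filter layer sitting between raw recommendation scores and what the subscriber sees.

from dataclasses import dataclass

@dataclass
class Title:
    name: str
    score: float          # hypothetical relevance score from the recommender
    in_stock: bool        # can a copy actually ship right now?
    is_new_release: bool  # expensive, heavily marketed inventory
    ships_overnight: bool # sits in a nearby distribution center

def curate(recommendations: list[Title], limit: int = 20) -> list[Title]:
    """Apply business-rule filters on top of raw relevance scores."""
    shown = []
    for title in recommendations:
        if not title.in_stock:
            continue                      # screen out unavailable titles
        adjusted = title.score
        if title.is_new_release:
            adjusted *= 0.5               # obscure costly new releases
        if title.ships_overnight:
            adjusted *= 1.25              # promote titles that ship fast
        shown.append((adjusted, title))
    shown.sort(key=lambda pair: pair[0], reverse=True)
    return [title for _, title in shown[:limit]]
```

The ordering of concerns is the point: relevance is computed first, but availability and margin get the final say over what the subscriber actually sees.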

So I close my eyes, sigh, and close the Google Docs page against which I have been banging my head over a research paper. It is 12:43, and I am too tired to get out of bed. I lazily type the letter ‘N’ and click the curated recommendation of https://www.netflix.com/browse that is generated for me in less than a second. I really feel like a PETA activist wearing a mink right now. Before this, I was researching the privacy-harm formulas that Google uses to systematically keep people like me complicit in the search engine I am currently submitting to. For example, suppose I engage in a given activity A without being observed. Now suppose my data is to be harvested and I am to be observed. I then have a choice: continue engaging in A and accept the privacy compromise, c, or discontinue A and do B instead, an activity that will not give rise to the privacy compromise. The choice depends on whether U(A) − c is greater or less than U(B), where U is utility.
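To make the trade concrete, here is a toy version of that decision rule with invented numbers; nothing here reflects any real formula a company uses.

```python
# Toy illustration of the privacy trade-off described above.
# All utility values are invented; the point is the comparison, not the numbers.

def keep_doing_A(utility_A: float, privacy_cost: float, utility_B: float) -> bool:
    """Continue activity A under observation only if U(A) - c still beats U(B)."""
    return utility_A - privacy_cost > utility_B

# Streaming on a tracked platform (A) vs. reading a paper book (B).
print(keep_doing_A(utility_A=10.0, privacy_cost=3.0, utility_B=6.0))  # True: keep streaming
print(keep_doing_A(utility_A=10.0, privacy_cost=5.0, utility_B=6.0))  # False: the cost now outweighs it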

In this inequality, the Faustian exchange is laid bare in an algebraic lexicon. How much worldly knowledge and power, U, can the devil offer me for participating in A, in return for my mortal soul, c? In Goethe’s classic German telling, Faust is a bored and depressed scholar who, after attempting suicide, makes a pact with the devil’s representative, Mephistopheles, for unlimited power and knowledge. Interestingly, although Mephistopheles appears to Faust as a demon, the writing does not portray him as actively tempting or corrupting Faust. Mephistopheles is written as a demon who services, then collects, the souls of those who are already damned.

Who’s watching? Mephistopheles’s white text on the black background prompts me to choose between Ellen, Isaac, Bill, and Kids, or to create a new profile. It could almost be read as confrontational, like a flight attendant walking down the aisle saying, “Your trash? Your trash? You’re trash.”

So, who’s watching who, punk? I click on Isaac’s ninja-styled icon, and the screen flashes to the front page of Netflix, where brightly colored thumbnails lined up row on row begin to play in their windowed boxes if you hover too long. The faces of beautiful Hollywood stars are advertised like candy, a symptom of the parasocial relationships that American media companies cultivate and thrive on. There are so many options to choose from: Critically acclaimed films, New and popular releases, Trending now, TV comedies, Watch it again; the list is almost literally endless thanks to the change in strategy that streaming brought to the Netflix business model. Initially, streaming was an optional addition that came at minimal cost and allowed subscribers to “View Instantly.” As streaming became the priority in the late 2000s, content acquisition became as important as developing the recommendation services. In 2020, Netflix had 2.2 million minutes of content available for subscribers to consume. I have to admit, the devil’s deal is sweet sometimes, because I find a show called Old Enough!, a 30-year-old Japanese television series about toddlers who run errands alone for the first time. This is truly a fantastic gift that has been given to me; I love that I am able to watch anything and everything at any time, anywhere.

Something isn’t right. There is a strange presence in the murky depths of my brain when it gets late and I have my phone or laptop with me. Even though I am exhausted and lying in bed, I refuse to go to sleep until…until…until what, exactly? I have this strange feeling of needing to find something before I go to bed to make all of this mean something. Which movie or series would make me one with everything and allow me to say, “You’re the one I’ve waited for. I am satiated and complete”? What am I looking for? Is it the exhausted dopamine receptors in my brain craving the hit of short-term satisfaction?

I strongly suspect it might be, because before ten minutes have passed, I have taken out my phone and I am opening TikTok while Old Enough! plays in the background. TikTok’s algorithm and data collection make Netflix look like the Federal Trade Commission. On TikTok, consumption does not begin with searching for specific content; it begins with something simply called the “For You page,” FYP for short. Every engagement, and every non-action, you make while on TikTok is recorded, quantified, and used as data to refine this “For You page,” which is different for every user on the platform. The algorithm adjusts based on signals as minute as millisecond differences in how long you hesitate, linger, skip, or rewatch a video.
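ByteDance’s actual weighting of those signals is a closely guarded secret, but a speculative toy model of engagement-weighted topic scoring shows how quickly small signals can tilt a profile; every weight and field name below is invented for illustration.

```python
# Speculative toy model of engagement-weighted topic scoring.
# TikTok's real ranking system is proprietary; all weights here are invented.

from collections import defaultdict

SIGNAL_WEIGHTS = {
    "watch_fraction": 2.0,   # share of the video actually watched
    "rewatch": 3.0,          # looped or rewatched the video
    "like": 1.5,
    "share": 2.5,
    "skip_fast": -1.0,       # scrolled away almost immediately
}

def update_interest(scores: dict, video_topics: list[str], signals: dict) -> dict:
    """Nudge per-topic interest scores based on how the user engaged with one video."""
    engagement = sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())
    for topic in video_topics:
        scores[topic] += engagement
    return scores

scores = defaultdict(float)
# Lingering on and rewatching a video tagged #sad shifts the profile quickly.
update_interest(scores, ["sad", "breakup"], {"watch_fraction": 1.0, "rewatch": 1.0})
update_interest(scores, ["dance"], {"watch_fraction": 0.2, "skip_fast": 1.0})
print(dict(scores))  # {'sad': 5.0, 'breakup': 5.0, 'dance': -0.6}
```

Once a topic’s score pulls ahead, it dominates what gets sampled next, which is exactly the rabbit-hole mechanic described below.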

There is a phenomenon on TikTok of individuals saying that they are trapped on ‘X’-Tok because the algorithm has adjusted to show them ‘X’ content almost exclusively. Currently, I am being sent down the Criminal-Psychology-Tok rabbit hole, where I am constantly shown videos of psychologists analyzing police interrogation tapes. To some degree, this is exactly what I wanted, and if I stopped watching these videos, the algorithm would correct for it. This is not inherently dangerous, but it means that users place their faith in the algorithm, faith that it will show them where they really want to go. A significant problem occurs when the artificially intelligent “For You page” sends users down more dangerous rabbit holes.

In a study conducted by the Wall Street Journal, hundreds of automated accounts watched hundreds of thousands of videos on TikTok to better understand the highly guarded algorithm designed by the platform’s China-based parent company, ByteDance. Each bot was given a personality, manifested in the kinds of videos it would positively engage with. The TikTok experience begins the same way for everyone, these bots included. After some very brief questions about age and name, the user is presented with an endless stream of videos to scroll through vertically. Each video automatically plays as soon as the previous one is scrolled past, which creates a highly addictive and habitual pattern of consumption. TikTok has no need for traditional data collection, because the first videos the platform presents to users are a far more inconspicuous profiling mechanism. By showing users very popular videos that have been vetted by moderators, and watching how they respond, the algorithm begins to understand what kind of person the user is.

In the Wall Street Journal’s experiment, the bots were programmed to have interests such as dogs, forestry, astrology, and dance. Not all interests were so benign; one bot in particular, kentucky_96, was programmed to be interested in sadness and depression. On its 15th video, kentucky_96 stopped to rewatch a clip about how ‘everything happens for a reason,’ hashtagged by the posting account as #sad and #heartbreaking. I managed to find the referenced video and watch it myself, because I was skeptical that a sad video would turn up as only the 15th video shown to the sadness-prone bot. The video itself is not overtly sad or depressing, but the hashtags and related content shape how the TikTok algorithm views kentucky_96.

Twenty-three videos later, the equivalent of about four minutes, TikTok’s algorithm showed kentucky_96 a video about breakups, again hashtagged #sad, posted by the account @shareyoursadness. kentucky_96 lingered on this video and on others like it. After 80 videos, or about 15 minutes of total watch time, the preference for sadness and depression had landed kentucky_96 in two primary categories: breakup/relationship videos and mental health/depression videos. After 244 videos, or about 36 minutes of total watch time, the breakup/relationship videos had been phased out, and mental health/depression videos made up about 93% of what was shown to the bot. TikTok has claimed that the marginal videos that do not correlate to users’ preferences are there to help users discover different content, but the majority of the 7% of non-depression-related content shown to kentucky_96 was advertisements. TikTok has spoken out against the WSJ’s research, saying that real users have varied interests that keep them from being so intensely pipelined. The WSJ responded by citing several of its bots that had varied interests and were still pushed into unhealthy rabbit holes. Regardless, TikTok is having unprecedented success with user satisfaction and overtook Google as the most popular site in 2021. This is significant because the two sites embody vitally different processes for understanding the world. Google’s search results have been shown to be tailored to an individual’s propensity for belief, but TikTok openly and overtly restricts your freedom of choice in return for a curated and proprietary stream of information and media.

Fifteen years ago, the psychologist Barry Schwartz coined a term he called the “official dogma” of most, if not all, capitalistic Western societies. The dogma holds that if a society is interested in maximizing the welfare of its citizens, it should do so by maximizing individual freedom, and that the way to maximize freedom is to maximize choice. Schwartz claimed that the staggering amount of choice we have in modern society has brought complications to our ways of living that are disproportionate to the benefits we receive.

An overtly, purely automated media platform becoming more popular than the most powerful search engine could be described as the pendulum swinging back from the complications of choice that pervade our Western industrial societies. Choice is complicated and, even as it promises value, often makes people miserable. Why? Two examples are regret and anticipated regret, which come with the escalation of expectations inherently promised to the choice-maker. On Netflix, there is so much to watch that one could watch the wrong things for years and never know what one was missing. On TikTok, however, there is no worry that you are not seeing the content you should, because the platform has made the choice for you. This might seem like an exaggeration, but the reality is that personalized and curated information has many dangerous effects.

kentucky_96 was just a bot. The effects of data harvesting, pipelining, and believing that one is seeing unfiltered content were very real for the 50 million people affected by the Facebook data breach exploited by Cambridge Analytica. In March 2018, it was uncovered that countless Facebook users who were prone to believe certain narratives had been purposefully pipelined and then targeted by Cambridge Analytica from 2015 to 2016. Facebook had failed to notify any of them that their data had been compromised. Cambridge Analytica capitalized on the information it collected to create micro-targeted advertisements designed to sway Facebook users toward its preferred political partners, including during the 2016 US presidential election and the Brexit referendum of the same year. The advertisements included misinformation, fearmongering, and the exploitation of sexual scandals.

Drawing accurate conclusions from factual information requires making complicated choices. There are countless sources of news and information, each catering to a different demographic. The choices involved in deciding whether a source is reputable, and whether you yourself are being biased, are stressful for many. And although remaining skeptical about most of what you read is the best way to protect yourself against false narratives, it is so much easier to simply go to whatever source best fulfills the narrative you already hold. If we apply Barry Schwartz’s logic, the freedom the personalized internet has brought is a simplification of choice for the many who are willing to sacrifice objective truth for the self-assurance of leaning into their own narrative.

I would like to mention that algorithmic processes are simply mathematical procedures that carry no bias in and of themselves. The bias these algorithms contain comes from how the data is collected and from the presuppositions programmers make. The problem is that Netflix, TikTok, and Facebook are microcosms of how the world currently functions. Vast amounts of bad data, historical data filled with prejudice and the worst of human impulses, end up in the integral data sets used by the most influential systems in the world, and then those systems are automated. This recursive arrangement is responsible for far more dangerous manifestations of data-based algorithms, such as predictive policing and social credit scoring, and the implications of inaccurate data are far graver there because these algorithms are extensions of state control.
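The recursive character of these systems is easiest to see in a deliberately crude toy simulation. The sketch below, with entirely invented numbers, shows how a predictive model trained on its own outputs can entrench an initial bias in the records even when two districts behave identically; it is not a model of any real policing system.

```python
# Deliberately crude illustration of a data feedback loop; not any real system.
# Two districts have identical true incident rates, but district A starts with
# more recorded incidents because it was historically patrolled more heavily.

recorded = {"A": 120, "B": 80}   # biased historical records (true rates are equal)
TRUE_RATE = 0.05                 # same underlying rate in both districts
TOTAL_PATROLS = 100

print(f"start:  share of records in A = {recorded['A'] / sum(recorded.values()):.2f}")
for year in range(1, 6):
    # The "predictive" model sends most patrols wherever past records are highest...
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    patrols = {hot: 0.7 * TOTAL_PATROLS, cold: 0.3 * TOTAL_PATROLS}
    # ...and more patrols mean more of the (identical) true incidents get recorded.
    for district in recorded:
        recorded[district] += int(patrols[district] * TRUE_RATE * 10)
    share_A = recorded["A"] / sum(recorded.values())
    print(f"year {year}: share of records in A = {share_A:.2f}")
```

District A’s share of the records keeps climbing even though both districts behave identically, because the model’s outputs keep becoming its own future training data.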

Such algorithms use facial recognition infrastructure and other biometric-gathering software to identify a person and retrieve data about that person from social media and other digital profiles. They then determine whether the person should be approved or denied access to consumer products, social services, or fundamental rights and freedoms. In our increasingly networked environment, the combination of biometrics and social data can inform a state’s decision to restrict or alter an individual’s travel prospects, employment, access to finance, and ability to enter into contracts. This is not science fiction. Police organizations and law enforcement agencies collect biometric data from drivers’ licenses, ATMs, and security cameras to construct policing algorithms built on racially biased data that was often assembled with malicious intent.

A stark example of how bad data can affect these governance algorithms is the difference in the accuracy of facial recognition software across demographic groups. Lighter-skinned people are more likely to be identified accurately by these algorithms than people with darker skin, and women are less likely to be identified correctly than men. In 2019, analytics powerhouse SAS showcased the accuracy of its brilliant facial recognition algorithm by having a panel made up entirely of light-skinned individuals come up and test the software. This was critiqued by data-studies expert Jennifer Priestley, who noted that the software was over 95 percent accurate when identifying white men, and only a little over 50 percent accurate when identifying black women.
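Disparities like the one Priestley points to surface as soon as accuracy is broken out by demographic group rather than reported as a single aggregate number. The sketch below runs that arithmetic on a handful of fabricated records; the data and group labels are invented purely for illustration.

```python
# Minimal sketch of per-group accuracy auditing; the records are fabricated
# solely to show the arithmetic, not drawn from any real evaluation.

from collections import defaultdict

# Each record: (demographic group, was the match correct?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, was_correct in results:
    totals[group] += 1
    correct[group] += was_correct

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accurate")
# A single aggregate accuracy number hides exactly this kind of gap.
```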

Discriminatory policing is already prevalent in black neighborhoods. The residual effects of redlining and blockbusting during the Jim Crow era meant that many black residents who could not get government-backed mortgages had no choice but to accept predatory housing contracts, and families often resorted to illegal subletting and deferred repairs just to make the payments. This led to the overcrowding and decay of many black neighborhoods that now face predatory policing. Discriminatory policing coupled with racially biased facial recognition software is horrifyingly dangerous, and it is only one example of how poor data usage reaches its mathematical hands through the screen and into the world.

It is striking that many of the severest affronts to our privacy and autonomy stem from the straightforward task of monetizing attention, an endeavor that began with the simpler goal of balancing customer demand and maintaining proper asset utilization. Netflix’s manufactured interest in surplus movies over new releases stands as an important microcosm of a far more extensive overhaul of individual freedom, one adjusted to make room for the algorithms that direct what we watch, see, eat, and generally do. What is to be done when companies and governments inevitably harm groups of people with their powerful algorithms?

Well, morality and legality both rely on the idea that things should happen one way rather than another, and that idea gets complicated when dealing with the mechanistic determinism of a mathematical algorithm. The concept of punishment, legal or social, is an important factor in mitigating behavior that could be considered harmful. When companies like Alphabet, Facebook, Microsoft, Amazon, and Apple come under legal and social pressure for data breaches or algorithmic mistakes, their causal responsibility is distanced and placed elsewhere. Companies do this because, at their core, the determinism of their algorithms limits the apparent freedom of choices the company could have made; they point to causes that make the company’s actions seem like an inevitable outcome of determined circumstance. I am not so cynical as to doubt that companies often tell the truth. Technology and its weaknesses develop exponentially faster as time passes, and it is believable that not all breaches are understood, accounted for, or even seen. I believe that instead of focusing on treatments or punishments, prioritizing two critical values would mitigate problems and create a far more positive engagement with algorithms: transparency and consent.

Transparency about the mechanisms of algorithms is an essential value that has been withheld from the general public for far too long. Inequality of power will always lead to violence, and the same is true of inequality of understanding. The users of Facebook, YouTube, Instagram, TikTok, Google, and any other algorithmic platform should be able to understand the hidden layer of processes that is integral to the application. Primarily, this would require companies to understand their own algorithms more deeply before implementing them, and then to make those algorithms legible to the people who use the service.

The other value I believe is extremely important is consent. At its core, consent concerns the management of our own freedom, and the freedom of others, to take or not take certain actions. A serious lack of consent arises when industries do not make their algorithms legible. For example, in 2012, researchers at Facebook and Cornell University manipulated the news feeds of select Facebook users. Some users were shown more positive posts, while others were shown more negative or sad ones. Users who were shown more positive content were more likely to post positively themselves, while those shown negative content were more likely to post negatively. Although Facebook’s terms of service require that users relinquish the use of their data for “data analysis, testing, [and] research,” and the experiment lasted only a week, the lack of transparency and consent involved is very worrying when one considers the potential outcomes of automated systems that are purposefully obscure and manipulative.

I’m done, finished. I look over the essay and take a closer look at the page I am writing on, but it’s not really a page, is it? It’s a facsimile of paper that somehow makes images of letters and numbers appear on its surface when I press the corresponding buttons. A facsimile created by Google, owned by Google, for me to use for free. As long as this application is free to use, so am I. The ease and value of these applications are indeed a blessing to have access to, but at the same time the large companies behind them suppress and control so much of what people believe is possible. For the most part, people will believe what is shown to them, and much of what is being shown is that to succeed in our algorithmic world, one must perform like an algorithm.

There are real people trapped in all of these lines of code, people trapped catering to the algorithms that bring them money. Billions of people are generating content, legislation, and art that is not primarily for people, but rather for the algorithms that will make those products popular and functional. When people do not properly understand what is possible for themselves, they make choices that are not aligned with anything they really want, because they settle for what they are told is possible. It could be argued that algorithmic systems will allow humans to see more accurately what is possible for them and help direct them there, but that will require a serious reckoning with ethical data collection, the implementation of transparency, and consensual relationships between individuals and large systems. The best the individual can do is stay informed and speak loudly when their rights are violated. Live like a damn human. Write with a pencil, touch some grass, and make art. Creating human art for human beings is a personal revolution against the hypnotic effects of the beautiful algorithms surrounding us.

