4. The Red Herring of Data Collection

Technology and AI ethicists have it wrong. The danger of technology and AI is not the tired old tropes of surveillance, tracking, or Cambridge Analytica. My goal is to show you that these things aren't as scary as they are made out to be. The focus on surveillance and tracking is a red herring for what is actually going on behind the scenes. One major outcome of that hidden activity is the erosion of agency: our capacity to act and to know.

My credentials to speak on this topic? I hold a patent for one of these systems. These systems don't work as well as promised. In fact, they fail more often than they succeed, and the evidence against statistical models and their reliability mounts daily. I wrote about this technology's impending death in 2016, and the references and criticisms I made then are still valid today.

Rewind the clock to 2015, when "Big Data" was the buzzword of the day. The tech research outfit Gartner predicted that "by 2030, 90% of jobs as we know them today will be replaced by smart machines." IBM spent billions on its Watson platform, betting big on Big Data. Billions, if not trillions, of venture dollars went into Silicon Valley startups promising that Big Data would eventually rule the world. Yet today, it's no longer cool to say "Big Data." Why did it fall out of favor? Because despite all the hype and the billions spent on marketing, the technology simply does not work.

Let me give you a simplified example by asking a simple question:

Over the past five years, have you voluntarily (and non-accidentally) clicked on more advertisements, or fewer?

Google and Facebook are advertising engines. Their business model depends on tracking and watching your movements online, then serving you links to click and products to buy. If surveillance and tracking delivered on their promise, these technologies would have gotten better over the past five years, and you would be clicking more ads. It follows that if you are not clicking more ads, then the data collection is all for naught: it is simply not doing what it was promised to do, namely supporting the core business model of these companies. The widespread notion of what surveillance, tracking, and targeting can accomplish is undeservedly optimistic.

Even Aleksandr Kogan, the inventor of the algorithm behind Cambridge Analytica, is on record as saying, "that shit doesn't work." Human behavior is not so predictable. Nor is it so easily controllable.

Two earlier articles were about the erosion of agency via nudges and compliance. Read sequentially, they make apparent that the goal is to constantly apply pressure, nudging you towards some form of compliance. The real danger of our new hyper-connected reality is what UC Berkeley professor Stuart Russell calls enfeeblement: a loss of striving and understanding, an erosion of the foundations of civilization that leaves us as "passengers on a cruise ship run by machines, on a cruise that goes on forever."

But where is the cruise ship going? Truth be told, there is no single intended outcome, but many. There is, however, one overarching effective outcome: behavioral and knowledge compliance through large-scale behavioral experimentation. This is the real use of "Big Data": the constant barrage of information, emotions, and behavioral debris we encounter dozens or hundreds of times per day changes the way we think, feel, and act. The data is used to track and manipulate these changes at scale. This is our new reality.

Furthermore, it is not one single entity that is doing this, but multiple, each with its own desired outcomes and reasons. As such, we must take some time to parse what is going on. Simply put, we lack the capacity to manage the explosion of information, opinions, and nudges we come across on a daily basis, and this has a human cost that has yet to be determined. As we are exposed to ever more messages in the form of gentle and suggestive nudges, we don't know which messages and behavioral changes become ingrained in us and which do not.

Every single day, marketers and algorithms suggest things to you: what to watch (Netflix), what to buy (Amazon), what to read (Google), and even whom to date (Tinder). The ultimate effects of these nudges, alone or in aggregate, are largely unknown, but their effective goal is to become epistemic and behavioral agents on your behalf. That is, the internal goal of these companies is to become your single source of truth, a truth usually defined by business metrics. These systems are increasingly automated as they shape our perspective on the world via statistics and averages. They are also increasingly aggregated and centralized. As the total number of potential epistemic agents decreases, their reach increases; over time, fewer epistemic agents sell you a more concentrated message. And with that comes an enfeeblement in which we lose some capacity to know, choose, and act in a world where our choices are increasingly made for us.

These algorithms increasingly create a false choice architecture, nudging us daily, often to the point of learned helplessness, anxiety, and other emotional extremes. They punish us for unwanted actions and reward us for engaging, and so we become more dependent on them to feel good about ourselves, to raise our self-esteem, and to receive acknowledgement, affirmation, and attention from strangers who themselves found us via the same algorithms.

I do not believe that this enfeeblement will ever be total. Even with the constant encroachment of epistemic agents into our lives, we will not become zombies strapped to VR headsets, tap tap tapping away in our decrepit sleep pods. I am no alarmist, and I am not proposing that we shut it all down to save humanity. But I do believe there is a true and palpable erosion of agency underway, one that diminishes our capacity to choose and our power over what we feel and what we know.

The thing about learned helplessness, we are starting to discover, is that we don't actually learn that we are helpless; by default, we assume control is absent. What we can learn instead is that we do have the capacity to help ourselves; that agency is more within our reach than we have given ourselves credit for.

So the punchline: why am I writing Digital Agency? My thesis is that by understanding what is taken away in this erosion of agency, in truth and in action, we might learn how to preserve and fortify it. This links deeply into old questions surrounding belief systems, meaning, and truth at the intersection of technology, algorithms, humanity, and philosophy. And this is where it starts to get really interesting. [1]

Over the next several articles, I will explore several outcomes of living in our new reality. First, the mechanisms behind the scenes: why companies run these algorithms and how they use bad science to reach targeted business outcomes. Then I will talk about the Frame Problem and its relation to ourselves as epistemic agents. Then come frame pollution, Bayesian poisoning, and more on enfeeblement. And after that, trust and the release of epistemic agency.*


[1] For example, if we discover that behavioral experiments are eroding our sense of free will, then that implies there is free will to be lost. And if there is free will to be lost, then free will exists.

* No promises made on the ordering of those topics!