This personal data is then used to deliver individually customized experiences to park-goers, and as a by-product, Disney gets to do all sorts of analysis on the data to figure out how to squeeze you for all you’re worth.
My personal tale with the MagicBands is one of pirates. My kids rode Pirates of the Caribbean all day, so when they saw Mickey, he talked not about Buzz or about Peter Pan but about Jack Sparrow. Bam! Big data in action. Mickey knows.
This kind of tracking is unnerving for some. Indeed, one of my post’s readers called me an asshole for so flippantly discussing the topic.
Well, weeks after my trip to Disney, the rides, the churros, the vomiting, and the tears, I found myself still mulling over this data privacy trade-off. Why do we make this trade? What are our reasons? Am I a flippant asshole as the commenter so articulately pointed out?
What’s the Worst That Could Happen?
Ultimately, people are willing to trade their data with companies like Disney for a couple of reasons.
First, humans are bad at discerning the value of their data. Personal data just appears out of nowhere, exhaust out of life’s tailpipe, so why not trade it for something small? I’m personally willing to hand over my own GPS location full time just so I have access to Angry Birds and a smartphone flashlight app.
Our brains evolved to assess trade-offs best in the face of immediate, physical needs and threats. Should I run from that predator? Absolutely. Unfortunately, we still have these same brains. That’s why the camel crickets in my crawl space make me flip my shit, but giving my kids’ data to Disney World feels perfectly acceptable.
Second, most of us feel that giving our data over to a private corporation, like Disney or Facebook or Google, has limited scope. They can only touch us in certain places, e.g. their parks, their websites. And what’s the worst those parks and websites are going to do? Market crap to us.
Feels low risk. No big deal since the power lies with me, the purchaser, to act. Right?
The NSA Has a High View of Humanity
Contrast the data gathering of the Facebooks and the Googles and the Disneys of this world with the recent spying revelations concerning the NSA. Unlike private corporations, the U.S. government can use your data to detain, silence, and prosecute you. While a police state is a valid fear, no one worries about Disney creating a “police park.”
The personal data collection of the NSA undoubtedly poses a greater physical risk to humanity. It is unlikely that Netflix will assassinate me. Make me binge-watch House of Cards till I melt into my couch? Yes. Drones with bombs? Doubtful.
At the same time, the NSA’s data collection efforts at their core have what I’d call a “high view of humanity.” The NSA, in a sick way, respects you. They don’t respect your privacy, but they do respect you as a human.
It’s the same high view of humanity that a blackmailer might have. Or that the mafioso might have for you as he garrotes you in a car and sends you to “sleep with the fishes.”
Humans are dangerous. Why else would you seek to control them? That’s a high view of humanity.
What Google and Facebook are doing with our data, indeed what most private companies want to do with our data, while safer in the Big Brotherly sense, is nonetheless more fundamentally disconcerting. It’s frightening not as a threat to our current physical well-being. No, I believe that Google and Facebook may be able to use data to actually increase our happiness (did you watch your Facebook “look back” video?).
But while they increase our happiness, these companies may be doing nothing short of destroying humanity as we know it.
Now, that’s an outrageous claim. But this is an opinion piece, so as Robert Redford put it so elegantly in Sneakers, “It’s my dime. I’ll ask the questions.”
Why are so many private organizations jumping on the big data bandwagon? For most, they want your personal data so that they might better sell things to you. And in order to discuss this idea of personal data as fuel for advertising, we need to establish a few points about advertising’s intimate relationship with emotion.
Advertisements are arguments.
But why appeal to someone’s reason when you can bypass it?
Emotions often guide decision-making. While the neurological mechanisms that make this possible are still up for debate (see such topics as the Somatic Marker Hypothesis), researchers have shown that damage to the brain regions governing emotion can cripple decision-making. The human brain developed emotions as shortcuts to faster decisions. I can’t take all day deciding whether or not to eat something, but if fear washes over me every time I smell coconut because a decade ago I puked up half a bottle of coconut rum, that emotion saves my body time and keeps it safe. (Keep Malibu rum away from me, thanks.)
Let’s take Gillette’s “manscaping” campaign as an example. Gillette could argue on rational grounds: “a man should shave his legs because it’s marginally more aerodynamic (if you’re into that kind of thing) and bandages adhere better.” Or they could just have Kate Upton tell viewers she likes manscaping and be done with it.
While all advertisements are arguments, ads such as these are at best disingenuous. They are well-crafted logical fallacies. And to a large extent I’m thankful for these disingenuous ads, because at least they’re less boring than a list of valid points.
But such ads betray what companies implicitly know about humans – we’re weak. Our rational defenses can be flanked and overcome by tweaking our emotions.
And Here’s Where Your Data Comes In
While advertisers have for a long time been making emotional appeals for our dollars, now they can bring our personal data to bear on the problem.
We can think of advances in ad targeting as increases in image resolution.
In the beginning, advertisers had a single dry ad. They didn’t know or target you, the consumer, all that well. The picture they had of you as a consumer might as well have been a stick figure drawn in crayon.
Then came demographic targeting and focus grouping. All of a sudden, the stick figure got some detail. Maybe some hair got drawn on the stick figure, a briefcase, some nether-regions, a dog at the stick figure’s feet.
Then data aggregation and tracking came on the scene. The caricature started to gain actual real-life pixels and features. Shopping cart data, IP geolocation, MAC address tracking, parsing user agent strings, social data, etc.
So where does this increasingly realistic picture of the consumer go from here? This data inevitably has gaps. And while many of those gaps will be filled by better and more varied sensors (mobile data, connected automobiles, Jawbone, Nest, etc.), there’s another tool for filling them in: machine learning.
Data left online and in the real world form anchor points in the photo of you from which machine learning algorithms can project the rest of your image. And as machine learning models grow in accuracy and sophistication, particularly at companies with an incentive to ad target, so does the interpolated image of exactly who you are. Target’s prediction and subsequent targeting of pregnant customers is an excellent example of machine learning filling in the gaps in the grainy picture painted by your data.
Via machine learning, a person’s future actions can be predicted at the individual level with a high degree of confidence. No longer are you viewed as a member of a cohort. Now you are known individually by a computer so that you may be targeted surgically.
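To make the mechanics concrete, here’s a minimal sketch of individual-level propensity prediction. Everything in it is invented for illustration (the features, the fabricated training data, the choice of a plain logistic regression); real targeting models are vastly larger, but the shape is the same: known anchor points go in, and a probability for one specific person comes out.

    # A toy sketch of individual-level propensity prediction.
    # All feature names and data are fabricated for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Fabricated training data: rows are past customers, columns are the
    # behavioral breadcrumbs they left behind (visits, basket size, clicks),
    # already scaled. The labels record who ended up buying.
    X_train = rng.normal(size=(1000, 3))
    y_train = (X_train @ np.array([1.5, 0.8, 2.0]) + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X_train, y_train)

    # One individual's partial data, standardized the same way.
    you = np.array([[0.4, 1.2, 2.1]])
    print("P(you buy the thing) =", model.predict_proba(you)[0, 1])

The math here is decades old and almost boring; what’s new is the breadth and intimacy of the columns that companies can now feed into it.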
This is where Facebook and Google are investing huge amounts of dollars. Recruiting directly from the professor pool, these companies are grabbing up the top machine learning minds in the world, such as Facebook’s recent hire of Yann LeCun to lead a new AI lab.
So if the story of advertising in recent years has been one of disingenuous emotional appeals from the Dos Equis man, the story of the future of advertising will be one of laser-guided disingenuous arguments.
Your posts online betray your burgeoning interest in home brews, your medical issues, your fears, your fascinations, your willingness to spend, your crusade against gluten, your insecurities. And if you can dash a faint line between a question and the data breadcrumbs you scatter willy-nilly, you’d better believe a model can fill that line in with Sharpie.
If an AI model can determine your emotional makeup (Facebook’s posts on love certainly betray this intent), then a company can select from a pool of possible ad copy to appeal to whatever version of yourself they like. They can target your worst self, i.e. the one who’s addicted to in-app payments in Candy Crush Saga. Or they can appeal to your aspirational best self, selling you that CrossFit membership at just the right moment.
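For what it’s worth, the targeting step itself is almost embarrassingly simple once such a response model exists. The sketch below is hypothetical from top to bottom (the ad variants, the profile traits, the toy scoring function are all made up), but it shows the basic move: score every piece of ad copy against one person’s predicted makeup and serve whichever scores highest.

    # A hypothetical sketch of "pick the ad copy this particular person is
    # most likely to respond to." Variant names, traits, and weights are invented.
    from typing import Dict

    def score(user_profile: Dict[str, float], ad_variant: str) -> float:
        # Stand-in for a trained response model; here just a toy weighted sum
        # of made-up profile traits per ad variant.
        weights = {
            "worst_self_candy_crush": {"impulsivity": 0.9, "late_night_usage": 0.7},
            "aspirational_crossfit":  {"guilt_after_holidays": 0.8, "gym_searches": 0.6},
        }[ad_variant]
        return sum(user_profile.get(k, 0.0) * w for k, w in weights.items())

    user = {"impulsivity": 0.2, "late_night_usage": 0.1,
            "guilt_after_holidays": 0.9, "gym_searches": 0.8}

    best_ad = max(["worst_self_candy_crush", "aspirational_crossfit"],
                  key=lambda variant: score(user, variant))
    print(best_ad)  # -> "aspirational_crossfit"

Swap the toy weighted sum for a model trained on a few billion impressions and you have the laser-guided disingenuous argument described above.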
And this is where the low view of humanity comes in. Unlike the NSA’s tracking, personal data tracking for AI-driven individual targeting assumes we needn’t be controlled. We have no agency, and we’re certainly not dangerous.
Our past data betrays our future actions, and rather than put us in a police state, corporations have realized that if they say just the right thing, we’ll put the chains on ourselves.
And this should be more frightening than the NSA. It’s the fear that the enemy is not external to ourselves but rather is in residence in our own weak and predictable minds. That’s the same fear that makes zombie narratives so compelling.
Yet this loss of our internal selves to the control of another is the promise of AI in the hands of the private sector.
In the hands of machine learning models, we become nothing more than a ball of probabilistic mechanisms to be manipulated with carefully designed inputs that lead to anticipated outputs.
This is disrespect at its worst. It is an acknowledgement by these businesses that we are meat. We are sums of externalities. We are sad robots. This is the implication when Disney slaps a band on your fat wrist and tracks your purchase of candied pecans like a real-life rendition of their own film, WALL-E.
The whole movement is not dissimilar to phrenology and biological determinism, only instead of feeling your skull to predict who you are as a person, a company may now read your data. If you’ll permit me an oxymoron, it’s a kind of data-driven probabilistic determinism.
The promise of better machine learning is not to bring machines up to the level of humans but to bring humans down to the level of machines.
Going Along With It
How should we respond to this distillation of human motivation into predictable models where mystery is replaced with math?
Well, one response would be to go along with it. There is no doubt that these models can make us happier. They’ll be able to place in front of us products and services that purport to match our needs. Or as the AI in Minority Report puts it, “Welcome back to the Gap, Mr. Yakamoto! How did those assorted tank tops work out for you?”
But while happiness might increase, there can be no doubt that the meaning of our lives will decrease. As understanding of each person increases, as we all become predictable systems, our individual meaning and worth take a hit.
The famous neurologist Viktor Frankl once said, “Everything can be taken from a man but one thing: the last of the human freedoms - to choose one’s attitude in any given set of circumstances, to choose one’s own way.”
But in the face of sophisticated modeling and targeting, a question is raised: in the future, will we know our own minds well enough to choose our attitudes? Or will the disingenuous arguments directed at us be so powerful that knowing our own minds becomes impossible?
To Frankl, “A human being is a deciding being,” but if our decisions can be hacked by corporations then we have to admit that perhaps we cease to be human as we’ve known it. Instead of being unique or special, we all become predictable and expected, nothing but products of previous measured actions.
That’s a downer. But don’t worry. It only gets more depressing from here. Other than merely riding the machine learning wave, what else might we do?
Outbursts of Originality
In the face of this quantitative dissection of humanity (predated ideologically by ideas such as bricolage in postmodern theory), some will try to break out of the probabilistic boxes placed around them.
In order to break out, one must do something unpredictable. Something that is a jump away from one’s past self.
But what does originality look like in a world where data on billions of souls has been gathered, where for whatever original thought your targeted and conditioned brain can come up with, someone else’s data stream has likely already been there?
We’re left, sadly, with nothing but novelties. What do I mean? To illustrate, I’ll use perhaps the most unintentionally depressing film scene of the 21st century. The scene is from Garden State.
“You know what I do when I feel completely unoriginal?” Natalie Portman’s character asks.
Natalie Portman then proceeds to make a series of strange noises.
“I make a noise or I do something that no one has ever done before,” she continues.
This type of response, to act in a novel but ultimately disposable way, is really no response at all. It adds meaning to our lives in much the way eating a lollipop allows you to run an ultra-marathon. It’s at best a brief hit of energy.
Data Scientists and Their Monsters
Who’s to blame for this sad state of affairs?
Arguably, the not-so-humble data scientist. Data scientists are demi-Christs. They are half-human, themselves targets of their own and other organizations’ machine learning models. Their own meaning is eroded by their products. And yet they are also half-god, the creators of these faux-sentient models.
Data scientists, not unlike Dr. Frankenstein, create unholy life by surging electricity, in the form of computation, through the sloughed-off data skin of society. But similar to the monster in Shelley’s novel, there will always be unintended consequences.
Just as Frankenstein’s monster could not shake its criminal past, so these machine learning models for all their advances cannot shake the past data they are trained on.
Models learn a behavior, a tendency, a personality, a propensity from past data, and then they predict that thing they’ve learned with cold accuracy. But in bringing past personal data into present predictions, these models are like echo chambers, reinforcing past truths in the present. Whether it’s the “hot potato trading” exhibited by models in the 2010 Flash Crash or the price optimization death spiral that hotel chains can get into (and that’s often seen on Amazon too), examples of this echo chamber effect already pervade the big data landscape.
Even more frightening, these echo chambers can reinforce societal problems. This is a concern with Chicago’s crime hotspot targeting model. What happens when a model “shits where it eats?” Police focus in on a hot spot and generate more arrests there. Those hotspots become hotter. The neighborhood gets less desirable. Education and jobs suffer. Those hotspots become hotter. The model sends more police. And on and on the death spiral goes.
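You can watch that death spiral in a toy simulation. The numbers below are made up and the model is a caricature (patrols allocated in proportion to last year’s recorded arrests, with concentrated enforcement slowly eroding a neighborhood’s underlying conditions), but even this cartoon shows two identical neighborhoods diverging just because one got unlucky in the first year of data.

    # A toy simulation of the predictive-policing echo chamber.
    # All numbers are invented; only the runaway dynamic is the point.
    true_crime = [10.0, 10.0]   # two neighborhoods with identical underlying rates
    recorded = [12.0, 8.0]      # neighborhood 0 got unlucky in year one's data

    for year in range(1, 6):
        total = sum(recorded)
        patrol = [r / total for r in recorded]  # the model sends police where arrests were
        # More patrols mean more of the true crime gets recorded as arrests.
        recorded = [2.0 * true_crime[i] * patrol[i] for i in range(2)]
        # Arrests erode jobs and schools, nudging the *true* rate up where
        # enforcement concentrated -- the death-spiral step described above.
        true_crime = [true_crime[i] * (1.0 + 0.2 * (patrol[i] - 0.5)) for i in range(2)]
        print(f"year {year}: recorded arrests = {recorded[0]:.1f} vs {recorded[1]:.1f}")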
As machine learning on top of personal data is used to dissect us more, the odds of someone breaking out of marginalized society decrease.
Models say they’re a credit risk, so they’re fed terrible loans. Poverty is reinforced.
Data says they’ll like Skittles, so they’re advertised Skittles. Obesity is reinforced. And rainbows.
They’re predicted to like bottom-of-the-barrel culture, so they’re sold Ed Hardy. We all suffer.
These models become the mathematical equivalent of Javert in Les Miserables refusing to allow Jean Valjean’s redemption. They are data-laundered discrimination.
In What Do We Place Hope?
Can we place our hope in the fact that these models will never get so good as to know us intimately enough that they rob us of our humanity? Perhaps.
As a machine learning practitioner, I know that these models are only as good as their inputs. And for most businesses, data sets are crude, dirty, and incomplete.
But that’s changing.
Just this past year, Facebook (the biggest social data store) and Acxiom (arguably the biggest meat space data store) banded together to share their data.
Perhaps then we can put our faith in the government to slow this down? To place limits on what’s possible?
I doubt it. We’ve already touched on the NSA, which is a prime example of the fact that governments are motivated to collect and model private data as much as businesses are. Furthermore, politicians and governments purchase data from private companies like Acxiom to do their own targeting. The Obama reelection campaign was touted for its use of machine learning models to target voters better.
I just don’t see enough will in the government to slow this train down.
What about the courts?
When we look at startups like Juristat (a company that predicts things like your likelihood to win an appeal in a patent lawsuit), we see that, even in the judicial system, data modeling will be brought to bear to take essentially human endeavors (a jury trial) and boil them down to probabilities. Can we rely on such a vulnerable system to protect us against the models that can manipulate it?
Let Us Eat and Drink, for Tomorrow We’re Modeled
This past year Mark Zuckerberg attended the Neural Information Processing Systems (NIPS) conference, one of the big AI conferences. This is kind of like David Bowie stopping at your house to catch up on some Game of Thrones with you.
Why? To learn, to recruit, to cozy up to the machine learning community. Because Facebook is invested in dismantling its users piece by piece, using data and machine learning to process humans into a segmentation-ready data slurry that’s more palatable to its customers, the advertisers.
I attend a lot of conferences on these topics. There’s an excitement in the air. Machine learning and other analytics techniques have been reinvigorated by the business applications of combining AI with distributed computing and large datasets. I like to imagine that at these conferences I’m feeling a smidgen of what it was like to attend one of the earliest World’s Fairs.
But it’s an open question what these technologies will become. Are we birthing our own psychic destruction? Maybe.
Or maybe, like the characters in Disney’s WALL-E, we’ll all end up too fat to get our MagicBands off, surrounded by crap we don’t want but were too well targeted to pass up.