How Artificial Intelligence Can Poison or Power Your Brand

Apr 23, 2018

In 2011, The Onion aired a satirical news report called “CIA’s ‘Facebook’ Program Dramatically Cut Agency’s Cost.” Ahead of its time, The Onion captured the conundrum that marketers would face in the era of big data and artificial intelligence (AI).

In the video, a fake CIA director says, “After years of secretly monitoring the public, we were astounded so many people would willingly publicize where they live, their religious views and political views, an alphabetized list of all their friends’ personal email addresses, phone numbers, hundreds of photos of themselves, and even status updates about what they’re doing moment to moment. It’s truly a dream come true for the CIA.”

Facebook was a dream come true for marketers too. And, like The Onion’s CIA, marketers have tracked people and used their data without second thoughts. Why shouldn’t marketers feed that data into artificial intelligence algorithms? Shouldn’t customers want more personalized services, advertisements, and advice?

Today, marketers who make that assumption put their brands at serious risk. How we frame AI has the power to poison or supercharge a brand.

A Polarizing Dialogue

Until recently, most Facebook users had no idea how the network and its peers commodified their data. The Russian election hacking, alarming soundbites from former Facebook execs, and the Cambridge Analytica scandal, among other news items, made the public wary. #DeleteFacebook has swept Twitter, which is a bit ironic. Would you protest the sugar levels in Coca-Cola by drinking Pepsi?

Currently, the privacy issues overshadow AI, the technology that processes the raw ingredients of our online identities into irresistibly sweet content feeds and marketing experiences. In the public dialogue, AI is either benevolent and useful or creepy and abusive. It’s our friendly helper or Terminator overlord.

There’s little room for nuance in contemporary culture, and for marketers, that raises the stakes of how their brand is perceived. Marketers must address data privacy and the ways they use AI. But how? 

The Spectrum of Privacy

Privacy is a cultural construct shaped by everything from generational differences and historical events to novels, TV shows, and your cousin Mary, whose identity was stolen. It’s subjective.

I find it creepy when my phone tells me to snap a photo of the restaurant where my wife and I are eating. Teenagers who can’t remember life before smartphones might feel differently. Data collection is like a prism that segments our society into a spectrum of attitudes.

Millennials, on average, express less concern for data security and privacy than Gen X, Boomers, traditionalists, and Gen Z. However, age is one variable among a complex set of factors. We can control what we say, but we cannot rewrite the experiences that shape people’s perspectives on privacy and AI.

Framing AI for Your Audience

Brands must find a way to account for that spectrum of privacy. Just because people surrendered their data via a 20-page unreadable privacy agreement doesn’t mean they’re on board. Brands have options for framing AI transparently and responsibly, and here are a few places to start:

  1. Provide Context--Amazon was one of the first companies to make AI palatable to consumers. The website explicitly recommends goods based on what you’ve bought previously. Likewise, Netflix contextualizes its recommendations with the well-known phrase, “Because you watched ____.” When consumers understand how and why AI processes their information, they feel empowered. They can modify their privacy options or stop using the service. The opposite experience is to receive a personalized ad about something extremely sensitive – like a medical condition – and wonder how the advertisers knew. The winning formula for context is transparency: “We suggest X because of Y.”
  2. Earn Acceptance--Intentionally or not, Apple launches technologies that stretch our comfort zones. For example, when Apple Pay debuted, people weren’t ready to store credit cards on their phones. Apple had to train people to think of mobile wallets as normal and practical. New technologies, including AI services, must also weather an acceptance period. The marketer who introduces personalized ads then removes them after the first complaint validates the users’ criticisms. The marketer who is honest and willing to take the flak can earn trust gradually.
  3. Offer Utility--AI must provide some sense of value, otherwise customers will focus on the negatives. For example, I recently bought a pool filter for my jacuzzi. I replace it once every two years. Nevertheless, the company that sold me the filter then bombarded me with retargeting ads for the next six months—from small banners to entire site takeovers. In other words, an algorithm spent money trying to sell me a filter I just bought and wouldn’t need again for two years! They had the data right but inadvertently merchandised their inability to use it wisely. As a result, I won’t buy from that seller again. Had the AI waited two years to advertise replacement filters, I might have felt grateful rather than annoyed.

How marketers frame AI will determine whether the public celebrates or attacks their brands. Every company should ask itself the fundamental question: If our customers knew what we were doing, how would they feel about it?

If you can’t tell your customers the simple, unadulterated truth, you’re likely misusing their data. Privacy and security laws are written by people who cannot anticipate all the ways technology will use personal information. Things are simply changing too fast. Marketers can follow the law yet still breach the public’s trust for the Sisyphean cause of meeting our insatiable expectations.

Perhaps the real conundrum is that no experience is ever fast, convenient, personal, and on-demand enough for the mythical consumer who speaks through surveys and analyst reports. Do we feel more entitled to privacy? Or to the speed and convenience that AI can provide? Maybe data abuse and invasive AI are byproducts of our entitlement chasing its own tail.

The Onion’s stroke of genius was to recognize how closely a digital advertising platform resembles a CIA surveillance program. As long as The Onion can credibly compare tech brands to intelligence agencies, we marketers have work to do.
