At 5 p.m. on a weekday, the center of the Internet is blissful and bustling. The white buses that will shuttle employees back home are circling the giant drive. Oversize, colorful lawn chairs play host to a few meetings in the late sun.
I’ve come to Google’s Mountain View headquarters to meet one of the company’s Product Philosophers, Tristan Harris, who earned that title by selling his company to the behemoth at the tender age of 28. I would have figured that someone like him would be enjoying the sweet leisure of tech-world millennial sinecure. But throughout our conversation, Harris’ expression is one of perpetual concern. He is genuinely afraid — of Big Tech ruining our lives. “The Internet was supposed to be this exciting potential to do all this good,” he says mournfully, sitting on one of those comical lawn chairs, here in the heart of Internet optimism. Not only has the Web failed to realize its full potential but it, and the people building it, also might be doing us quite a lot of harm.
And it’s all because, Harris ventures, the masters of the Internet are using the wrong metrics. “Metrics” is a word more often associated with drab meetings than with an activist’s pitch, but Harris’ spiel is the latter. Companies on the ad-driven Web judge their success by how many users they have, and how long those users spend on the site. Instead, Harris wants us — consumers, startups, investors, designers and everyone else implicated in the soft social contract that is the Internet — to use a different standard: whether or not we spend our time well.
Harris, who is quick to remind me that he speaks for himself and not for Google, has a bunch of handy analogies. The news feed is like crappy food; we need an organic option. We need B Corps to counteract the megacorps. Average folks who blithely surf the Internet, Harris believes, are pretty much screwed from the outset. Just look at the sad numbers: Nearly a quarter of teenagers, according to a Pew Research Center study, self-report going online “almost constantly,” and a Department of Health and Human Services report from 2013 shows adolescents spend almost eight hours a day consuming media, from videos to picture-posting to emailing. And forget teens for a moment — what about the workplace? Gloria Mark, a professor of informatics at the University of California, Irvine, studied workers and found that they lost some 23 minutes to each interruption. Harris places our attention — and how we use it — in urgent terms: “If the average person surrenders to the world,” he says, “it’s not safe — you get plagued by credit card debt, bad food. The default world doesn’t represent your best interest.”
This rather intuitive pitch belies something more complex: a potentially rich philosophical conversation about ethics that’s brewing over our daily, seemingly mundane clicks and views. “All design is persuasive,” whether digital or physical, from the rectangular Web browsers we stare at all day to shopping malls and movie theaters and schools, says James Williams, a D.Phil. candidate at the Oxford Internet Institute who studies persuasive design ethics. Williams, like the digital civil liberties groups that have come to prominence on Edward Snowden’s coattails, believes “digital rights are human rights.”
With such high stakes, what is to be done? The Time Well Spent movement is itself small — many of the Silicon Valley–savvy folks I spoke with hadn’t yet heard of Harris’ specific thinking, though they did say they’d had plenty of philosophical discussions along the same lines with colleagues and bosses. The antidotes floated in such conversations? A few might sound familiar. Harris suggests “new attention contracts” between you and your devices: Workplace messaging systems, for example, might offer a “focused” status — and, more important, more-nuanced versions of it. Not just the Do Not Disturb mode on your phone, but, say, “I want to read for the next two hours,” or “I’m driving for 30 minutes,” while also making sure you don’t miss your grandma’s calls, if that’s what you desire.
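The “attention contract” Harris describes can be sketched in a few lines of code. Everything here — the class name, its fields, the grandma allowlist — is invented for illustration; it is a toy model of the idea, not any real product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class FocusStatus:
    """A hypothetical 'attention contract': silence interruptions
    for a stated activity and duration, with explicit exceptions."""
    activity: str                       # e.g. "reading", "driving"
    duration: timedelta
    started: datetime = field(default_factory=datetime.now)
    always_allow: set = field(default_factory=set)  # callers who always ring through

    def active(self, now=None):
        """Is the focus window still open?"""
        now = now or datetime.now()
        return now < self.started + self.duration

    def should_deliver(self, sender, now=None):
        # Deliver only if the window has lapsed, or the sender is exempt.
        return not self.active(now) or sender in self.always_allow

status = FocusStatus("reading", timedelta(hours=2), always_allow={"Grandma"})
print(status.should_deliver("group-chat"))  # False: held until the window closes
print(status.should_deliver("Grandma"))     # True: exceptions still get through
```

The point of the sketch is that the contract is explicit and user-authored — the duration, the activity, and the exceptions all come from the person, not from a default set by the app.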
Other solutions are a bit more radical. Harris explains that Big Tech itself might need to be #disrupted — third-party developers like your average hacker need a shot at designing with the little guy in mind. Or, still wilder … we might (gasp!) step away from screens altogether, suggests user experience designer Golden Krishna, author of The Best Interface Is No Interface, or switch up those “metrics.” Or, some suggest, designers might one day take a Hippocratic oath. All their work would be grounded in a first principle: Do no harm.
The big ideas that once lived largely in the Athenaeum, about free will and determinism, about what it means to be human, about persuasion and how to wield it, are now being worked out by … product managers. Given all that power, perhaps those designing our Internet experiences could do with some training in ethics, or a professional code. But, OK — the notion may not be terribly realistic, according to some professional philosophers. Shannon Vallor, an associate professor of philosophy at Santa Clara University and president of the Society for Philosophy and Technology, says she’s had many engaging conversations on topics like the ethics of Web design, or notions like Time Well Spent with the leaders of large tech companies. But in the end, she says, the moguls “very strongly resist the idea that those things should matter to them” and what they create. And she bristles a bit at my suggestion that technologists these days are applied philosophers; even if they spend long design meetings considering ethical decisions around a conference table, Vallor doubts quasi-philosophers’ credentials as professional ethicists. Codes of morality birthed in corporations tend toward “groupthink,” she says.
If a Hippocratic-type oath did go down, though, what might ethically bound designers start paying attention to? Whitney Hess, who spent a decade as a user experience designer, actually wrote such an oath a few years ago in a quick blog post; her sketch compares those who design websites with physicians, and those who use them with patients. “I will remember,” one of her tenets reads, “that I do not treat a Web form, a social networking site, but a vulnerable human being, whose one wrong click may affect the person’s friends and economic stability. My responsibility includes these related problems.”
I ask Hess, who is based in San Diego and working as an empathy coach, for an example of that one wrong click. She points me to a common experience: A new social networking site once led her to invite everyone she knew to sign up — ex-boyfriends, not-so-great friends — when she thought it was just searching her contacts for existing connections. “It’s mortifying,” she says; a social Pandora’s box opens. On the heavier end of things, there is no shortage of poorly designed financial websites that make it easy to lose track of your money, or hit buy when you meant to sell, Hess says.
Oath or no oath, the consensus seems to be that it’s all in those nasty little metrics, in companies’ refusal to “measure the hard stuff.” Neither Google nor Facebook responded to requests for comment, but Harris argues that Facebook might start quizzing users qualitatively, asking at the end of each week which news feed experiences you got something out of, and which you regretted; it might customize your feed to eliminate cat quizzes if you so prefer. Or social media sites might add a stamp to “shared” articles to reward people for passing on stories they’ve read in full, instead of on impulse. Krishna suggests news sites might stop prioritizing “most shared stories” on the side of the site, since that suggests sharing is the most valuable metric.
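The end-of-week quiz Harris imagines amounts to a simple change of metric: count regret, not clicks. A minimal sketch of that logic, with every name and threshold invented for illustration:

```python
from collections import Counter

def regretted_categories(weekly_answers, threshold=0.5):
    """weekly_answers: list of (category, regretted_bool) pairs from a
    hypothetical end-of-week quiz. Returns the categories the user
    regretted more often than not."""
    seen, regretted = Counter(), Counter()
    for category, regret in weekly_answers:
        seen[category] += 1
        if regret:
            regretted[category] += 1
    return {c for c in seen if regretted[c] / seen[c] > threshold}

def filtered_feed(feed, regretted):
    """Drop feed items whose category the user has flagged as regrettable."""
    return [item for item in feed if item["category"] not in regretted]

answers = [("cat quizzes", True), ("cat quizzes", True),
           ("longform news", False), ("cat quizzes", False)]
print(regretted_categories(answers))  # {'cat quizzes'}
```

Nothing here is hard to build; the article’s point is that the incentive, not the engineering, is what keeps such measures off the roadmap.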
An army of oath-sworn startuppers probably can’t flip the bird to convention, Harris says, especially because most oaths don’t have anything “contrarian” in them. Even the most visionary and ethically guided developers must operate within the occasionally oppressive constraints of pre-existing designs, browsers and operating systems, Harris points out. So hopes depend on investors and big companies hopping on board. Perhaps that isn’t quite as dreamy-eyed as it may sound: Engineers often have a taste for open-source software, or making tech that’s free and available to hackers to mess with and customize themselves; corporations that need to make money on the software don’t often love giving stuff away. But idealistic engineers have gotten their way more than once, working on projects under the auspices of a big tech company that then get released to the wider world.
Let’s say, though, that the whole Hippocratic oath thing doesn’t work out. What are we really looking at? A next-generation consumer advocacy battle, one in which a victory depends not on class action lawsuits or government oversight but on popular awareness and education. The ultimate goal would be getting consumers to “vote with their feet,” says Vivek Krishnamurthy, clinical instructor at Harvard Law School’s Cyberlaw Clinic. He cites the digital civil liberties groups’ attempts to create a “nutrition label” for privacy. Indeed, Harris believes in “public awareness first,” more than a possibly nebulous oath.
It’s not unprecedented: Something like that happened with the organic food movement. Fifteen years ago, an “organic” label connoted little more than pallid produce. But companies up and down the food chain took a nascent consumer demand for organics and leveraged it into thriving businesses, from upscale grocers to kid-friendly mac ’n’ cheese. (Yay, capitalism!) One lesson from the organics movement, though, is that these things take a long time — and need user buy-in.
One fix to all this might involve paying our way to a healthier Internet, say, buying an upgraded browser or operating system with buttons or flags that help you spend your time well, shopping at the Whole Foods of Web search, so to speak, with someone else grabbing a leaky milk carton from the corner bodega. The information superhighway is big enough that plenty of people could miss the exit to the mindful-tech promised land.
But there’s another way that other ethical thinkers could win out, Krishnamurthy ventures, by persuading us in a different direction … with technology: “If they came up with a killer app.”
This is the second installment in a five-part OZY series that explores the Big Ideas shaping our tech-driven future.
This article was originally posted on OZY.com on January 1, 2016.