Make Something People Need

(Drafted 2016, Published March 2017)

Jon Stewart: “What ...is the role for CNBC? … There were literally shows called Fast Money.”

Jim Cramer: "There's a market for it and you give it to them …"

Jon Stewart: "There's a market for cocaine and hookers. So what? What is the responsibility of the [journalists] who cover Wall Street?"

The Daily Show interview with Mad Money’s Jim Cramer, March 2009 (italics added)1



Startups have a mortality rate just south of a 17th-century dodo's. As a startup founder, you need simple but effective advice in the face of such odds. For my startup, as for so many other YCombinator companies, that advice was "Make Something People Want".

Coined by YC in the early days, “Make Something People Want” is powerful advice for the aspiring entrepreneur. But what happens when what our users want is widely different from what our users need?

We are surrounded by well-engineered, and increasingly addictive, technology that shapes what we do and how we think. Especially when you work at a large tech company, how do you weigh giving users what they want, and your own business's profitability, against the negative effects that follow?

Wants and Needs

Often, what we want is the same as what we need. But this isn’t always the case.

An extreme example is drugs for an addict. Even though it impinges on individual choice, society actively works to prevent the proliferation of addictive drugs. This comes at a cost to some (drug users, farmers, drug manufacturers, distributors, retail shops, healthcare providers), with a longer term benefit to others (to drug users, to society). Beyond illegal drugs, doctors reflect on what their patients need when prescribing medicine, not just what their patients want.

A less severe example is education. To get over the hump of how hard education can be, society has created compulsory schooling laws, provided subsidies, and encouraged parental pressure. This impinges on free will, but often provides benefits. In both cases, there are benefits to thinking about what we need over the long term, rather than what we want in the moment.

In technology, the focus on engagement to the exclusion of much else can similarly create a huge gap between wants and needs. Consider a notification-heavy mobile game that is profitable, but quickly turns into an addictive pursuit for its users.

This debate is difficult because it can encroach on each person's free will and feels paternalistic. Benedict Evans, a partner at a16z, espouses a common Silicon Valley view: for Facebook, anything that falls short of maximizing engagement will lead to the eventual destruction of its business. Others go further and argue that it's facile and inappropriate for any company to presume what its users need, and that doing so impinges on our free will.

While I’m a huge believer in personal responsibility from users, three things give me pause in technology:

  • Speed of change: The changes in technology have been especially rapid, without enough time for our society to develop the appropriate antibodies
  • Asymmetry of effort: Large tech companies have some of the smartest minds in the world single-mindedly focused on maximizing user engagement
  • Scale and founder values: Most large tech companies have worldwide monopolies in their core businesses, meaning that the values of their leaders become the norm for billions, with little to no societal input or adequate critical thinking

Beyond these, worldwide anger has increasingly made it financially rational for top tech leaders to address many of the issues that technology has unleashed.

This debate may seem to have political overtones, but it doesn't need to. Regardless of our political beliefs, we've created structures to temper our wants when they conflict with our longer-term needs in industry after industry. Everything from the Hippocratic Oath, to auto-enrollment in IRAs, to journalistic ethics in newsrooms is an example of parties reflecting on the impact of their work, even when user demand, engagement, and profit point in the other direction.2

In technology, Google’s “Don’t Be Evil” for years helped engineers think about the interests of their users and our society, even though it conflicted with maximizing profit and engagement.

Much of the debate of our times is over where the line is — and what role users, companies, employees, and our society should play.

Wants and Needs in Gaming

In 2010, social games like Zynga's Farmville were all the rage. Driven by VC expectations and the land grab of the early Facebook app days, Zynga focused each day on maximizing key metrics: daily active users, 7-day retention, acquisition cost, lifetime value. The priority was reach, user retention, and eventually revenue.3
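
For readers who never lived inside one of those dashboards, here is a minimal sketch of the arithmetic they reduce to. The formulas are deliberately simplified and the numbers hypothetical (this is not Zynga's actual model), but they show why the incentive was always to acquire, retain, and monetize:

```python
# Illustrative sketch of a social-game metrics dashboard (hypothetical numbers,
# deliberately simplified formulas -- not any company's actual model).

def seven_day_retention(day0_installs: int, active_on_day7: int) -> float:
    """Share of a signup cohort still active seven days later."""
    return active_on_day7 / day0_installs

def lifetime_value(arpdau: float, expected_lifetime_days: float) -> float:
    """Rough LTV: average revenue per daily active user times expected lifetime."""
    return arpdau * expected_lifetime_days

def keep_buying_installs(ltv: float, acquisition_cost: float) -> bool:
    """The land-grab test: keep spending as long as a user is worth more than they cost."""
    return ltv > acquisition_cost

# A hypothetical cohort: 10,000 installs, 2,400 still active a week later,
# $0.05 of revenue per daily active user, 90-day expected lifetime, $3 per install.
print(seven_day_retention(10_000, 2_400))   # 0.24
print(lifetime_value(0.05, 90))             # 4.5 (dollars)
print(keep_buying_installs(4.5, 3.0))       # True -> buy more users
```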

An aspiring product manager at the time, I forced myself to play Farmville so I could learn these triggers: the activity bar that showed how tantalizingly close I was to the next level, the notifications from my friends to re-engage, the forced delays before I could reach the next level, ensuring that I would come back each day. Soon I began to learn the tricks and apply them to my own social media products. But I rarely thought about the impact my product had on users.

Much later, I would interview a single "extreme" user of my product and find that she spent 7 hours a day using it, 5 days a week. I remember being horrified, but justifying it to myself as her making her own explicit choice.

In games and social apps, like in so many other products, just a few users (“whales”) fund the vast majority of profits — and the goal is to keep this revenue engine going. If psychologists looked at these whales, they might find a number of them suffered from symptoms not that different from compulsive gambling.

This is what we do in technology. We design experiences that people can’t stop using — letting us succeed in the marketplace, while providing lucrative jobs and returns for those who support our mission.

If my experience is a guide, we also rarely reflect on the negative impact of our products on our users, arguing that users have made a free choice to engage.4

Product designers like me cite Nir Eyal's Hooked: How to Build Habit-Forming Products as gospel, focusing on variable rewards and regular notification triggers. Eyal runs a Habit Summit each year that lets psychologists, marketers, and product managers share notes on building products that users "can't put down". Product owners use the term "sticky" to describe our objective, which is often a euphemism for addictive.

And even though my focus is gaming, many other technology sectors — swipe dating apps, messaging apps, social media, YouTube autoplays, even Uber — leverage these lessons from games. They use notifications, variable rewards, and a gusher of targeted content to get you to re-engage — without thinking much about the cost to their users in their quest to win.

Users themselves are left navigating a new world with many shiny choices and still limited training on how best to manage them.

Wants and Needs in the Facebook Newsfeed

A key incentive in media is making content that people want to engage with. At its best, the goal of news is to "provide ... the facts [people] need to make better decisions." The baser instinct is to create content that your audience wants to consume. This content is the most profitable, even if it gives consumers a mistaken view of the world.5

Rupert Murdoch is blunt, saying “Stop writing articles to win Pulitzer Prizes … Give people what they want to read and make it interesting.”6

The Facebook newsfeed is a perfect example of what we need being at odds with what we want. Facebook, with the help of the world's smartest data scientists, has designed an algorithmic editor that decides what we should be exposed to. It ensures that each of us sees the news we individually most want to consume, maximizing the amount of time we spend on Facebook. Unsurprisingly, this is often incompatible with what leads to the best decisions.
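
To make that mechanism concrete, here is a toy sketch of a purely engagement-driven ranker. The fields and weights are invented for illustration and are not Facebook's actual system; the point is that nothing in the objective rewards accuracy, viewpoint diversity, or long-term value to the reader, only the predicted chance of a click or share:

```python
# A toy engagement-first ranker (illustrative only -- not Facebook's actual algorithm).
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_click: float   # model's estimate that this user will click
    predicted_share: float   # model's estimate that this user will share

def engagement_score(story: Story) -> float:
    # Hypothetical weights; the only thing rewarded is engagement.
    return 0.7 * story.predicted_click + 0.3 * story.predicted_share

def rank_feed(candidates: list) -> list:
    # Show the stories this user is most likely to engage with; nothing else is considered.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Story("Outrage piece your friends are sharing", 0.90, 0.60),
    Story("Dry but important policy explainer", 0.20, 0.05),
])
print([s.title for s in feed])  # the outrage piece wins every time
```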

Today, users consume fake news. They willingly engage with it, and the parties that provide it (content creators and Facebook) financially benefit — despite the broader cost to society. Only with increasing external pressure has Facebook finally acknowledged that fake news is an issue. Still, Facebook’s actions pale in comparison to the way national networks treated their duty in the heyday of television.

But there is a much bigger and still unaddressed issue that confronts every feed designer: selective news, or filter bubbles. In maximizing engagement and profitability, the feed surrounds each reader with the content they most want to read, shared by their like-minded friends. Even when this content is 100% factual, it presents selective facts that tell only part of the story. This is likely one key reason for increasing polarization worldwide.

While Facebook's goal may be to bring the world closer together, it has in fact selectively connected the world, allowing like-minded people to congregate and build their own alternate reality.

Let’s take an extreme, but illustrative example. Imagine a prospective terrorist and Facebook user increasingly bitter at the US. This reader actively wants to see social media coverage that paints the US in a terrible light — and Facebook’s algorithms will do their best to help. A US military attack that kills civilians would be “relevant” for this reader, while a positive US action would not be. Over time, this gives them a highly distorted view of the world that can radicalize them. This same effect leads to anti-Muslim sentiment in America, as we don’t see how many Muslims die in terrorist attacks each year.

Can you imagine if college or secondary education were taught based on Facebook's engagement metrics? It would focus exclusively on what we want to learn about, not also on what we should learn. And yet news is education for both personal decisions and important civic responsibilities (everything from personal safety assessments to voting choices).

It’s too simple to say this is Facebook’s — or YouTube’s or Twitter’s — fault alone. The onward march of technology has suddenly allowed us to distribute and target large amounts of media.7 If Facebook hadn’t existed, we’d still be grappling with that.

Still, Facebook does have a role to play. To date, Mark Zuckerberg has largely denied this effect or argued that it simply reflects pre-existing cracks in society. But by algorithmically focusing on what people want, we are creating a new form of segregation. This self-segregation allows us to congregate in information bubbles that confirm our pre-existing views and paint outsiders in the worst possible light.

A common view in Silicon Valley (in 2016) is that moderating filter bubbles in any way is abhorrent, because it’s censorship.8 This is a facile response to a complex problem. First, chasing engagement to the exclusion of everything else is its own deep form of bias, where we favor content that is most profitable. Facebook is already functioning in the editorial role taken by media organizations — deciding which stories get the most distribution.

Second, all thoughtful media organizations curate.9 Today, they’ll be careful not to sensationalize suicide and refuse to publish many stories until they can be verified.10 Free speech isn't the same as free reach.

Modern journalistic ethics, in fact, were created when we realized, in the yellow journalism era, that giving people what they want could lead to war, even when the coverage was factual. In the decades since, media organizations and journalism schools created commonly accepted journalistic ethics that trained a generation of journalists, even though this came at a cost to newspaper profits and journalist salaries.

Questions

These examples — and my own place in them — have led me to ask a few simple questions.

First, what is the responsibility of the largest technology companies and their employees?

Large technology companies have a critical role to play, given the importance of technology in our society. We live in an era where companies continuously highlight the positive impact they make, driven by customer and employee expectations and by idealistic founders. To the degree they trumpet their positive impacts, they also need to assess and acknowledge the negative effects of their products without getting defensive.

After all, you can’t build technology that billions use, without affecting some positively and others negatively. If you broke it, you should help fix it.

I’m not suggesting that these companies should focus exclusively on what is good for their customers. Rather, they need to moderate some of the worst effects and reflect on their impact without bias, especially if they purport to make the world better. And while it's hard to discuss choices that impinge on a user's free will, there are thoughtful ways to both provide user choice and act in your users' interests.

If you ran a grocery store, you might give preferential placement to some healthy food — while still allowing full choice. This is directly analogous to how Facebook might reduce the effect of filter bubbles. Like it or not, they serve as editors for much of the news the world consumes each day and set the incentives for every journalist in the world.11
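
As a sketch of what that kind of preferential placement could look like in a feed, consider a small change to the toy ranker above: every story remains available, but items from sources the reader rarely sees receive a modest boost. The fields and boost value are again hypothetical; the aim is only to show that preserving user choice and acting in users' interests are not mutually exclusive:

```python
# A toy "healthy shelf placement" re-ranker (illustrative only): keep every story,
# but gently boost items from outside the reader's usual bubble.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement: float        # predicted engagement, as in the earlier sketch
    outside_bubble: bool     # from a source/viewpoint the reader rarely sees

def placement_score(story: Story, diversity_boost: float = 0.2) -> float:
    # Engagement still matters, but unfamiliar viewpoints get a small bonus.
    return story.engagement + (diversity_boost if story.outside_bubble else 0.0)

def rank_feed(candidates: list) -> list:
    return sorted(candidates, key=placement_score, reverse=True)

feed = rank_feed([
    Story("Piece that confirms what I already believe", engagement=0.75, outside_bubble=False),
    Story("Well-reported piece from the other side", engagement=0.60, outside_bubble=True),
])
print([s.title for s in feed])  # the outside-the-bubble piece now places first
```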

For thoughtful employees, it's important to think about what users need, not simply what they want. Too often, I built software without a second thought, working hard to ensure that our products were viral and "sticky" without considering the impact on my users. This is exacerbated by the fact that company management will highlight the good you do, but downplay the problems your work causes.

I’m heartened by the fact that most engineers I know want to do good, and they can be a powerful corrective. The "change the world" mentality in the Valley exists partly because people prize meaningful work that can make the world a better place.

To truly do so, though, we have to actively try to understand the impact we have. One of the creators of “Don’t be evil” at Google notes that its simplicity led to regular debates about the impact of proposed projects.12

Nearly every Silicon Valley company has a mission aligned with making the world "better", which makes it easier to argue against hypocritical actions. We need to actively figure out the effects of our work, rather than staying in our bubble. And we need to bring these effects up internally, even when it's uncomfortable.

This isn’t an argument to say that our users have little responsibility. But as product creators, we have some responsibility too, especially if we purport to do good.

Second, how do we influence organizations that benefit from the status quo? These groups sometimes make significant profits precisely because they negatively impact their users' lives. As such, there's little incentive to acknowledge or address this, except at the most forward-thinking organizations.

Beyond employees, there's an important role for researchers and hackers to investigate. Real data from objective researchers can serve as a counterpoint to a tech company's cherry-picked figures, making it harder to dismiss criticism.13 (This is one of the reasons I wrote a series on how selective media coverage and filter bubbles lead us to bad decisions, as Facebook's early research showed that filter bubbles have no effect.)

Most computer science programs also rarely touch on the ethical challenges of this modern era; that has to change as our field's impact grows so dramatically. In this era, computer engineers effectively serve in a host of other roles: editor, psychologist, urban planner, among many others. The modern computer science curriculum has to change if it is to prepare its students for the awesome power that they have.

Convincing prospective employees in certain specialties (like AI) or from certain universities to sign a code of conduct could have a huge impact in an industry where talent is fought over. Sam Altman, the head of YCombinator, proposes a tech union along these lines, one that advocates for values, not wages.

Users themselves could boycott technology products that cross the line, much as they do in other industries.

Third, and perhaps most uncontroversially, how do we turn needs into wants? Rather than demanding changes from groups that benefit from the status quo, the easiest, and hardest, thing would be to convince people that they should want what they need.

In the case of Facebook, it may take a generation, but students will be trained to recognize fake news, selective facts, and filter bubbles, blunting the economic benefits that currently accrue to Facebook. Computer science and statistics programs may delve deeply into the ethics of the work their graduates will do, such as designing news feed algorithms. And an extreme outcome (sadly, such as a war or another hacked election) may bring this issue to the forefront, where users collectively demand change.

To get there, we need more thoughtful tech workers to think about what our society needs, not simply what individuals seem to want.


Thanks to Professor Danah Boyd, Professor Bene Faedi, Professor James Hamilton, Burt Herman, Shay Maunz, Dan Woods, and Adina Rom for warm, spirited debates. Many others from the technology industry who provided feedback have not been named. All views are solely my own.






Footnotes
  1. The challenge for Cramer is trading off what his TV show audience wants (to be entertained) against what they need (to make good financial decisions). Though it isn't always a clear tradeoff, focusing on the former can make it easier to attract viewers than focusing on the latter. CNBC actively marketed itself as a way to get “the financial expertise you need” rather than as an entertainment show, which especially raised Stewart's hackles.

    And though it’s easy to dismiss Cramer’s perspective, he faces the challenge of standing out in a crowded, competitive marketplace where competing means participating in a race to the bottom. It’s likely that he rationalizes his own strategy as a way to positively influence viewers over time — and yet the danger is that to attract users, he perverts his content so substantially that it negatively impacts his viewers.

    Growing up, I literally knew day traders who treated stock-trading shows as a critical input into their stock decisions, not as quasi-entertainment news. Later, in business school, I would compare this with how thoughtful investors made decisions.

  2. This moderation can come from many sources:

    Companies/organizations: e.g., auto-enrolling employees in IRAs, combating fake news, Facebook content moderation, internal ethics codes, setting required classes and curricula in universities, Don't be Evil (Google), design ethics (Google)

    Professions: The Hippocratic Oath (medicine), journalistic objectivity, legal ethics, academia

    Religious organizations: e.g., treating others charitably

    Societal norms: e.g., appropriate clothing style, regulating cursing, appropriate drinking behavior

    Parents: e.g., education, screen time

    Political Representatives and/or government: e.g., mandating minimal education levels, creating laws to prevent discrimination, criminal laws, controlling gambling, pharmaceutical control laws

  3. The other danger at Zynga was that the weekly metrics people looked at could be gamed to achieve two objectives.

    Internally, the better a game seemed to be doing (based on forecasts submitted by each team), the more resources it would get. Of course, any product manager would quickly realize that more resources meant a better chance of success, incentivizing some to pump the numbers artificially. Externally, the better Zynga's metrics were as a whole, the easier it was to raise money at ever-higher valuations and to recruit talent, even if those metrics were artificially inflated.

  4. I liken this to Breaking Bad, where the key characters would rarely interact with the meth users themselves. In tech, we mostly get caught up in our own world, without consistently talking to those affected by our work. ↩︎
  5. If you’re interested in this topic, I highly recommend Stanford Professor James Hamilton’s All the News That’s Fit to Sell and my own series. ↩︎
  6. There are many more examples of this. Kevin Merida, a managing editor of The Washington Post, highlights the key incentive he faces when making story decisions: “[W]e’re in the business of people reading our work. If we were to ignore the information that people are talking about, we would be news snobs.” Sam Zell, the former CEO of Tribune, is more blunt, telling journalists that “You need to help me by being a journalist that focuses on what readers want and therefore generates more revenue.” ↩︎
  7. Over the last two decades, with the growth of 24-hour news channels, news sites, and social networks, user choice in media has increased exponentially. ↩︎
  8. There’s a big difference between curation and censorship as well. ↩︎
  9. Including Facebook. ↩︎
  10. Not revealing the Trump dossier for a long period is an example where even in this competitive day and age, many media sources followed basic journalistic norms. ↩︎
  11. Another option that would reduce the dissonance today is to stop recruiting employees and users with visions of making the world better. Large tech companies can simply focus on being profitable businesses and return to a previous era where a company’s goal was more clear. It may hurt recruiting in a generation where employees expect more, but would leave them less open to accusations of hypocrisy. ↩︎
  12. As recounted to me, by Paul Buchheit — a YCombinator partner and an early engineer at Google who helped brainstorm "Don't be Evil". ↩︎
  13. On the topic of filter bubbles, examples include UNC’s Zeynep Tufekci and Danah Boyd at Microsoft Research. ↩︎