Listening to podcasts is an excruciating process for me, because I listen to a single podcast (and at 2.7x speed no less!) and can’t listen to another one because I’ve had an idea. Then I have to write about it.
Hence this substack.
Anyway, the latest one was a Freakonomics episode on media negativity bias. I won’t recount the horrors to you except to say that it’s real—I don’t have time, and if you don’t believe me already I certainly won’t be able to prove it to you. But here are a few fascinating insights for the already-converted:
There is a website called “Have I shared fake news?” that tells you how many times you have shared disinformation. I’m proud to admit that I have a decent score, and most of the ones I am sharing are on purpose. (It’s a simple tool, btw, so it’s not exactly advanced analysis—but it’s a simple tool with some cool aspects. Check out the language analysis too, to see whether you are posting negativity or positivity).
Is the left worse with disinformation? The right? Women? Certain demographic subgroups? The only specific group that was singled out in the podcast for being worse at this was Americans.
When things are bad, they’re really bad: news coverage runs 6.5-to-1 or even 7-to-1 negative to positive. But when things are good, they’re still pretty bad: 5-to-1 negative to positive. This leads people to give up: there is no way to make a difference, so why try?
This is a problem we’ve wrestled with for a while, and we don’t have a good solution.
Except we totally do. Easy ones.
And my favorite version of this pipe dream is where social media is the fix.
I’ve said for a while that it would be relatively easy to fix social media too, and this is just an extension. No government regulation needed either.
First, though, how did we get here?
Here’s the simple version: the worst idea we ever had was to democratize, to give everyone a voice through social media.
Two questions to illustrate.
First, what role have publishers always had? They have been the gatekeepers: the elites who played the role of the algorithm before the algorithm was around. They accentuated some voices and hushed others.
Second, what is the primary competitor to traditional news sites, like CNN? It isn’t MSNBC or Fox. It’s social media.
I’m not willing to excuse what our modern media has become; it bears moral culpability for turning into the pit of clickbait and misinformation that it has. But I do want to point out that the competition is intense. Outlets have to get eyeballs before they can enlighten, and that means being catchier than the social media platforms. Hence, clickbait. Negativity. Outrage.
I’m a little contrarian, so I like to make things sound punchy sometimes. On this, I know it sounds pretty elitist, but I think I’m right. The triumph of expressive individualism is social media: be yourself. Express who you are. Let no one tell you what to do. Be authentic.
What a bunch of rot.
I digress. How do we fix it? Let the elites have a heavier voice. But not the elites in terms of wealth.
Imagine you pop onto Facebook. You get a survey. It’s only one question, but they won’t let you click away—there’s a notice that says that you have to fill one out every so often. Then it brings up a list of 10 of your friends that you interact with frequently. It asks you “who among your friends is most willing to challenge your thinking in kind ways?”
Another survey, another time, asks simply “which of these 10 friends is most likely to wind up in Facebook jail?” Another: “which of your friends do you trust to do their homework before they post?” “Which of your friends would you consider well-informed?” A fun one could be to ask people to identify the fake news: a simple quiz, generated by Facebook, built around some easily googled question but written in a compelling, outrage-bait kind of way.
You get the idea. Pretty simple stuff.
On the back end, Facebook (and this would be identical for Twitter and others) can generate a score of how moral, trustworthy, calm, patient, etc. a person is. Then, the algorithm can funnel just a touch more traffic to the best types.
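To make that concrete, here’s a toy sketch of how the back end might work. Everything in it is my invention, not anything Facebook actually runs: survey picks get tallied into a normalized trust score, and the score translates into a small feed-weight multiplier, a nudge rather than a megaphone.

```python
from collections import defaultdict

def trust_scores(survey_votes):
    """Aggregate peer-survey picks into a 0-1 trust score per user.

    survey_votes: list of (voter, pick) pairs, where each pick is the
    friend a voter selected in a survey question. Scores are vote counts
    normalized by the maximum, so the most-picked user scores 1.0.
    """
    counts = defaultdict(int)
    for _voter, pick in survey_votes:
        counts[pick] += 1
    top = max(counts.values(), default=1)
    return {user: n / top for user, n in counts.items()}

def feed_weight(score, boost=0.1):
    """Nudge, don't dominate: a top-scoring user gets at most a 10% boost."""
    return 1.0 + boost * score

votes = [("ann", "bob"), ("cara", "bob"), ("dev", "eve")]
scores = trust_scores(votes)
# bob is picked twice, eve once, so bob scores 1.0 and eve 0.5;
# their feed weights come out to 1.1 and 1.05
```

The `boost` parameter is the “just a touch more traffic” dial: small enough that nobody’s feed is taken over, large enough to shift incentives at the margin.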
There are problems of course, but they aren’t the ones you think. Bias? Yeah, not fixable, but it’s possible to mitigate. Find the people that are respected equally by various tribes. Fluffy-focused algorithms? If you don’t just want people who share adorable cat videos, ask harder-hitting questions about long-term outcomes: “who makes you think harder?” Best of all, you could run experiments on users’ mental health, and then A/B test over time. Reward those who produce good outcomes for others with increased attention. This would be relatively easy to pilot, and I’m a little shocked no one has. (I’m sure there will be problems once the pilot comes out, but I doubt it will be impossible to address.)
But what about ethically? Practically it might work, but would it really be a good idea? Again, there are real worries, but they’re mostly outweighed by benefits in my mind.

Should Facebook really have that level of data? A friend works for “the agency.” (He has not told me which.) When I ask him what he does, his answer is “computer stuff.” When I ask who for, he says “the government.” When I ask if I should be scared of what he could know about me, he answers “not nearly as scared as of what Facebook already knows about you.” They already have the data. They only use it, at present, to sell you stuff.

Do we really want to encourage people to judge their neighbors? Bluntly, yes. I’m not talking about passing moral judgments, I’m talking about getting better at discernment.
But that only filters social media—it does nothing for CNN. Remember that the key is to change the incentives of the industry—not to force them to change on their own, but to make it a good idea to change. In the words of Milton Friedman, “I do not believe that the solution to our problem is simply to elect the right people. The important thing is to establish a political climate of opinion which will make it politically profitable for the wrong people to do the right thing. Unless it is politically profitable for the wrong people to do the right thing, the right people will not do the right thing either, or if they try, they will shortly be out of office.”
That. But for CNN.
How?
Run a series of surveys like those, but for CNN. Ask things like “which of the following news sources do you trust?” “Which of the following gives you news that makes you smarter?”
Again, the problems are real, but surmountable. What do you do about those who aren’t good at trusting the right sources in the first place? Give them a little quiz, and weight their responses accordingly. Who gets to decide what a “good” news source is? Isn’t there going to be bias in the questions? Certainly. So ask users to say “what do you want out of news” and then feed them more of that. Most people don’t want to be outraged all the time, they want to be enlightened and informed. If they say “I want partisan claptrap that makes me mad” then you’ll have a problem, but most people won’t say that.
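That “weight their responses accordingly” step is easy to picture in code. Here’s a hypothetical sketch (outlet names and quiz scores are made up): each voter’s pick counts in proportion to how well they did on a fake-news-spotting quiz, so discerning readers move the tally more than credulous ones.

```python
def weighted_tally(responses):
    """Tally 'which source do you trust?' picks, weighting each voter
    by their score (0.0-1.0) on a short fake-news-spotting quiz.

    responses: list of (quiz_score, picked_source) pairs.
    """
    totals = {}
    for quiz_score, source in responses:
        totals[source] = totals.get(source, 0.0) + quiz_score
    return totals

responses = [(0.9, "Outlet A"), (0.2, "Outlet B"),
             (0.8, "Outlet A"), (0.3, "Outlet B")]
# Each outlet gets two votes, but Outlet A's voters scored well on the
# quiz, so it comes out well ahead in the weighted tally.
```

The same trick works on the friend surveys: a vote from someone who can spot fake news counts for more than a vote from someone who can’t.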
I don’t think Facebook will do it because it is too costly—reputationally. The idea that they are putting their thumb on the scales is too dangerous to them when they’re already not trusted. But refusing to put your thumb on the scale is still putting your thumb on the scale. I see little virtue in defending the status quo because hey, at least it’s not terribly biased.
I’m also an old-school teacher. The idea of a person in authority aiming to let children do as they please is just really weird. Facebook and Twitter and all the rest came on the scene assuming they could let people do as they wished. That was a terrible idea. Nothing will sour you on libertarianism faster than seeing how people act when left to themselves. And now, the teacher has had poorly-behaved children in the room for months and is trying to tighten up on behavior.
Always start stricter than you need to. Always. Then loosen up over time.
Maybe Facebook won’t, but Elon could. Twitter would change overnight.
And by the way, on the 1-in-a-trillion chance that Elon reads this: you could solve the bots question easily with this one weird trick: “Of these ten people, who have you met in person?” It wouldn’t take long before you knew which accounts were duplicitous, which were fake, and which were normal people just living their normal lives.
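The bot trick above is just counting, and a toy version fits in a few lines. Everything here is hypothetical (account names, the one-vouch threshold): tally how many people claim to have met each account in person, and flag the accounts nobody vouches for.

```python
from collections import Counter

def met_in_person_flags(attestations, all_accounts, min_vouches=1):
    """Flag accounts nobody claims to have met in person.

    attestations: list of (voter, account) pairs from the survey
    "Of these ten people, who have you met in person?" Accounts with
    fewer than min_vouches picks get flagged as possible bots.
    """
    vouches = Counter(account for _voter, account in attestations)
    return {acct: vouches[acct] < min_vouches for acct in all_accounts}

accounts = ["alice", "bot_4821", "carlos"]
picks = [("dana", "alice"), ("erik", "alice"), ("dana", "carlos")]
flags = met_in_person_flags(picks, accounts)
# alice and carlos each have at least one vouch; bot_4821 has none
# and gets flagged
```

A real system would obviously need to handle collusion (bots vouching for bots), but as a first-pass filter it’s exactly the kind of cheap signal the survey produces for free.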