Problem #1: Content networks’ ability to judge content quality is flawed.

We have social channels, search tools and marketing mechanisms that measure the quality of content based almost entirely upon peer review. What happens when your peer group is misguided or misinformed? Even more interesting, what happens when your peer group is intentionally malicious?

Consider a search tool like Google: there should be no argument as to Google’s importance in the world of information sharing. Google’s ability to measure content quality is based almost entirely upon the ability of its user base to measure content quality. This circularity essentially means that Google is utilizing the behaviour of its users to determine what content will be delivered to subsequent users.

The following excerpt is taken from the article “User behavior: a ranking factor to reckon with”:

In a Federal Trade Commission court case, Google’s former Search Quality chief, Udi Manber, testified the following:

“The ranking itself is affected by the click data. If we discover that, for a particular query, hypothetically, 80 percent of people click on Result No. 2 and only 10 percent click on Result No. 1, after a while we figure probably Result 2 is the one people want. So we’ll switch it.”

I’m not accusing Google of any wrongdoing. Utilizing peer review in order to determine content quality is the logical solution to the problem of content saturation. However, what we are finding is that, in more than just a few cases, the user base is wrong. What’s worse is that we don’t have an effective solution to this complex issue.
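The mechanism Manber describes is simple enough to sketch. What follows is a purely illustrative Python sketch of click-based re-ranking; the function name, the swap threshold, and the data are my own assumptions, not Google’s actual implementation:

```python
# Hypothetical sketch of click-driven re-ranking: if a lower result draws
# far more clicks than the result ranked above it, swap them. The 4x
# threshold is an illustrative assumption, not a known Google parameter.

def rerank_by_clicks(results, click_rates, swap_ratio=4.0):
    """Reorder results so any result whose click rate dwarfs the one
    ranked above it (by swap_ratio) moves up."""
    ranked = list(results)
    changed = True
    while changed:
        changed = False
        for i in range(len(ranked) - 1):
            above, below = ranked[i], ranked[i + 1]
            if click_rates.get(below, 0) >= swap_ratio * click_rates.get(above, 0.01):
                ranked[i], ranked[i + 1] = below, above
                changed = True
    return ranked

# Manber's hypothetical: 80% of people click Result 2, only 10% click Result 1.
print(rerank_by_clicks(["result_1", "result_2"],
                       {"result_1": 0.10, "result_2": 0.80}))
# → ['result_2', 'result_1']
```

Note what is missing from this picture: nothing in the loop asks whether the popular result is *accurate*, only whether people clicked it.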

Now, if this were the extent of the problem, it actually wouldn’t be quite as bad. However, the plot thickens when we begin to consider the manner in which content is customized for each individual user.

Problem #2: Content networks pander to the fickle whims of an even more fickle user base.

So problem #1 is that content networks gauge the quality of content by peer review and user engagement. Problem #2 is that they make every attempt to optimize a user’s experience around what that user is most likely to engage with, based upon previous behavior.

This is a dangerous mix. We have content conduits with a flawed ability to gauge content quality, and those same conduits are making every attempt to deliver exactly the content each user is most likely to consume. The entire delivery pipeline is optimized to ensure you receive the content you are most likely to engage with.

This is the perfect storm of misinformation. Take a tool that is incorrectly cataloging content and then allow a user’s behavioural prejudices to determine what already flawed content they are more likely to receive. This recipe for disaster essentially guarantees that, especially for topics with high emotional stakes, people are receiving the most polarizing forms of content possible. In addition, they’re receiving the polarizing content that they are most likely to engage with.
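The two problems above compound each other, and the compounding can be shown with a toy simulation. This is a deliberately crude Python sketch, assuming a made-up two-leaning content pool and a user who always clicks the top recommendation; none of the numbers or structures model any real network:

```python
import random

random.seed(0)  # deterministic toy data

def recommend(items, global_clicks, user_history, k=3):
    """Problem #1: rank by aggregate engagement.
    Problem #2: then filter for leanings this user already engaged with."""
    ranked = sorted(items, key=lambda it: global_clicks[it["id"]], reverse=True)
    if user_history:
        seen = {it["leaning"] for it in user_history}
        ranked = [it for it in ranked if it["leaning"] in seen] or ranked
    return ranked[:k]

# A made-up pool of 20 items, each with a random leaning and click rate.
items = [{"id": i, "leaning": random.choice(["pro", "anti"])} for i in range(20)]
global_clicks = {it["id"]: random.random() for it in items}

history = []
for step in range(5):
    feed = recommend(items, global_clicks, history)
    history.append(feed[0])  # the user clicks the top item every time

print(sorted({it["leaning"] for it in history}))  # only one leaning survives
```

After a single click, the filter locks onto that leaning and every subsequent feed reinforces it. The user never chose a side; the loop chose it for them.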

And, to make matters worse, these problems aren’t limited to Google. The same issues exist in every single major content network: Facebook, Twitter, LinkedIn, Mashable, Outbrain, and even the glue-sniffing disappointment of a failed comeback, Bing. All of these tools are built around a peer-reviewed content model. Everything that you and I are being told comes through two layers of lies: the lies our peers tell us and the lies we tell ourselves.

Personalizing the problem...

Let’s get personal for a minute. Compare the browsing experiences of a Republican and a Democrat when it comes to the topic of climate change. Think about the answers they’ll get to questions when they search, the information that is showing up in their news feed, the articles they’re seeing in their Google Now stream and the opinion pieces they’re being sent by their “friends” group.

Is it really any wonder that the United States is more politically polarized now than it has been in 150 years? We have created a self-reinforcing feedback loop through which we only see content that we are going to agree with, content that has already passed the sniff test of our equally prejudiced peer group.

We are essentially ranking content according to what has the highest appeal to the lowest common denominator. We are then delivering that content to people based upon their existing preferences and prejudices. It’s a breeding ground for ignorance-fueled mob rule in the realm of thought leadership. Scary.

Some have suggested that tools like Facebook and Google have some innate responsibility to police or verify content. Not only do I think this is absurd, I think it’s dangerous. What independent measure of truth could a content conduit utilize to determine what content is “legitimate”? This has 1984 written all over it.

So what’s the solution?

Honestly, I don’t know. I know that’s frustrating to hear, but it’s not as simple as a soundbite blog post saying, “Here are the three things we can do to fix our information problem.” I believe in the old adage that says, “50% of fixing a problem is recognizing there is one.” So, there you go; I got us halfway there!

I believe that the other half rests in the personal responsibility we all have to seek out alternative forms of information. Because it is so easy to be led, we need to learn to lead. I don’t mean lead others, I mean lead ourselves.
We should be challenging every piece of information we receive. Anything you deem actionable should first be investigated before you make any effort to act upon it. As we find ourselves forming opinions about important subjects and hot-button topics, we need to purposefully seek out alternative viewpoints.

An important note here resides in the concept of “the other point of view.” Because we have been so heavily polarized, we tend to think of every issue as a two-sided issue. Left and right. Red and blue. Black and white. Them and us. The topics we’re alluding to are way more robust than that. There are many, often hundreds, of views on some of these important topics and we need to be receptive to at least considering all of them.

I don’t know what the solution is; maybe there isn’t one big “answer.” In fact, demanding one is the thinking that got us into this mess in the first place. We seek out the immediate end result instead of putting in the time, effort and energy necessary to determine what is right for us. The real solution is understanding that there is no single, all-encompassing answer to every problem and issue we are presented with.
Yes, the information age is upon us. We have the sum total of the world’s knowledge at our fingertips. Let’s use it.