Facebook has stepped up its two-year-old anti-clickbait campaign, changing its news feed algorithm to weed out manipulative or deceptive headlines. The company cites popular demand, but the reality is probably more complicated. As a user with more than 90,000 subscribers and about 4,200 friends, I don't want Facebook to do this: It's another step toward shrinking the view of the world people receive through the social network.

Facebook researchers Alex Peysakhovich and Kristin Hendrix explained the algorithm change in a post last week. Rather than reducing the distribution of links from which users quickly bounce back to Facebook (a sign they were probably enticed to click and then disappointed), the company has tweaked the algorithm to look for phrases commonly used in clickbait headlines.
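Facebook hasn't published its phrase list or its scoring rules, so the phrases and the threshold below are invented; but as a rough illustration, this kind of phrase-based detection can be as simple as:

```python
# Illustrative sketch only: Facebook's actual phrase list and scoring
# method are not public, so everything here is a made-up stand-in.
CLICKBAIT_PHRASES = [
    "you'll never believe",
    "what happened next",
    "this one trick",
    "will blow your mind",
]

def clickbait_score(headline: str) -> int:
    """Count how many known clickbait phrases appear in the headline."""
    text = headline.lower()
    return sum(phrase in text for phrase in CLICKBAIT_PHRASES)

def is_clickbait(headline: str, threshold: int = 1) -> bool:
    """Flag a headline once it matches at least `threshold` phrases."""
    return clickbait_score(headline) >= threshold
```

The crudeness of the approach is part of the story: a fixed phrase list catches formulaic headlines but says nothing about whether the article itself delivers on them.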

The definition of clickbait is important here. Facebook's is currently narrow: A headline must withhold information necessary to understand what the story is about ("You'll Never Believe Who Tripped and Fell on the Red Carpet"), forcing users to click on the link to find out the answer. Linguists call such tricks "deixis" (the use of words that require a context to be understood) and "cataphora" (the use of an expression that refers forward to something that hasn't yet been mentioned). Facebook also has a problem with headlines that exaggerate the impact of the story.

This would not rule out most headlines from BuzzFeed, which, according to the social content consultancy NewsWhip, is the second-biggest magnet for Facebook comments, likes and shares among news sites.

BuzzFeed claims it "doesn't do clickbait" — but only by the same narrow definition Facebook uses. That's not how most people understand the term: To most of us, BuzzFeed's formulaic listicle headlines — a number plus a strong sales pitch for the subject, as in "23 Things That Will Make You Feel Like an Adult" (the content being a collection of "native ads" for everything from collapsible lunchboxes to wristwatches) — fall squarely into the clickbait category. Just like cataphoric headlines ("Why This Father Feeds His Son Freakish Fruit and Vegetables"), they produce what behavioral economist George Loewenstein described in 1994 as "a feeling of deprivation labeled curiosity."

This year, a team from Bauhaus University in Weimar, Germany, published a paper on clickbait detection. They built a machine-learning model to analyze a set of news links posted on Twitter (not Facebook), applying stricter criteria than Facebook says it's using (215 of them in all), and the results of their work are revealing:

"Business Insider sends 51% clickbait, followed by Huffington Post, the Independent, BuzzFeed and the Washington Post with more than 40% each. Most online-only news publishers (Business Insider, Huffington Post, BuzzFeed, Mashable) send at least 33% clickbait, Bleacher Report being the only exception with a little less than 10%. TV networks (CNN, NBC, ABC, Fox) are generally at the low end of the distribution."
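Models like the Bauhaus team's typically work from hand-crafted headline features rather than phrase lists. The features below are simplified stand-ins I've invented for illustration, not the team's actual 215 criteria:

```python
import re

def headline_features(headline: str) -> dict:
    """Extract a few toy signals a clickbait classifier might use.
    These three features are hypothetical examples, not the real ones."""
    words = headline.split()
    return {
        # Listicles ("23 Things That...") usually open with a digit.
        "starts_with_number": bool(re.match(r"\d", headline)),
        # Demonstratives ("This Father...") point at unnamed subjects.
        "has_demonstrative": any(
            w.lower() in {"this", "these", "that", "those"} for w in words
        ),
        "word_count": len(words),
    }
```

A classifier would be trained on vectors like these, labeled by human annotators — which is where the stricter, more numerous criteria make the difference.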

Facebook has long been moving away from being an impartial platform on which users can place any content. Since late June, it's been showing us more posts from friends and family and fewer from news organizations. Now there's the anti-clickbait change.

The goal is to increase users' engagement with every post and maximize the time spent on the social network: It's good for revenue. Yet news websites get more than 40 percent of their referral traffic from Facebook, so every such change limits the amount of information that reaches readers. And some changes, including the latest one, even create the potential for censorship and arbitrary selection.

Readers are generally not dumb. In June, when the U.K. voted to leave the European Union, the Financial Times — which, unlike BuzzFeed, really doesn't do clickbait — saw shares make up one of the highest proportions of its Facebook interactions, according to NewsWhip. The raciest tabloids with the flashiest headlines covered Brexit, too, but people wanted to share the Financial Times' sober journalism.

If somebody wants to block certain types of headlines — because they are manipulative, or for any other reason — they should have access to filters to personalize their news feed. Instead, Facebook presents users with a black box for fear of having its algorithm reverse-engineered.

There is one more reason Facebook's anti-clickbait rule is wrong. At this stage of its development, artificial intelligence is terrible at processing human language, and letting it police content is premature. Silicon Valley companies are overconfident in their technology. They have to admit humans are better at content, at least for now.

Leonid Bershidsky, a Bloomberg View contributor, is a Berlin-based writer.