Media entrepreneur Steven Brill thinks there’s something missing from all the efforts to separate fake news from the real kind: some smart and discerning humans.

Faced with the waves of mis- and disinformation lapping up on social media, Brill is proposing to apply some reader-beware labels to internet news sources. His idea: a series of ratings, as determined by teams of independent journalists, that would help readers understand where their news — or “news” — is coming from.

Brill, a veteran journalist and founder of American Lawyer, Court TV and the late Brill’s Content magazine, has turned the idea into a fledgling company. NewsGuard is backed by about $6 million in venture funds from the likes of Publicis Groupe, a multinational ad agency, and the Knight Foundation, which has launched many journalism initiatives.

As Brill and his business partner, former Wall Street Journal publisher and columnist Gordon Crovitz, describe it, the New York-based company aims to assign a “reliability” rating — green, yellow or red — to some 7,500 sources of online news, based on an assessment by its teams of journalists. The rating would cover each site’s overall track record as a news purveyor. It wouldn’t apply to any specific article or journalist.

The ratings (green for generally trustworthy sites, yellow for the consistently biased or inaccurate, and red for the deliberately deceptive) would be supplemented by what Brill and Crovitz call “nutrition labels” — a longer description of each site’s history, journalistic track record and ownership. The information would tell a reader instantly that, say, a popular news site is a Kremlin-funded adjunct of the Russian government.

If “platform” giants such as Facebook and Google play ball — and so far NewsGuard has no commitment that they will — these assessments would be incorporated in search results, on YouTube videos and on the Facebook or Twitter postings that share the articles. Alternatively, individual users may someday be able to add a plug-in that would display ratings for each news site they accessed. The Good-Housekeeping-type seals hold out the promise of appealing to marketers and ad agencies — hence, Publicis’ involvement — in that they could be used to form a “whitelist” of approved sites to keep advertisers from linking their brands to toxic content.

“Our goal isn’t necessarily to stop [fake news] but to arm people with some basic information when they’re about to read or share stuff,” Brill said. “We’re not trying to block anything.”

Ideally, he said, a user encountering a website in a Google search would quickly learn whether the site is funded by a vested interest such as the American Petroleum Institute. The system would also instantly flag as “fake news” a site such as the Denver Guardian, which posted a bogus story about Hillary Clinton that was viewed by about 1.6 million people during the late stages of the 2016 presidential campaign.

NewsGuard aims to roll out its system in time for the midterm elections later this year, but Brill and Crovitz acknowledge they have their work cut out for them. Thus far, the venture has assessed and rated only about 100 of the 7,500 sites it hopes to tackle.

The project also faces headwinds from the platforms that would figure to be its largest potential customers — most of which have undertaken their own media-rating initiatives amid the public and government outcry over fake news. Google, for example, adjusted its search algorithms last summer to push down “low-quality” content, such as Holocaust-denial pages.

Facebook, Google, Bing and Twitter have also partnered with a nonprofit venture called the Trust Project, which adds standardized disclosures from news publishers about their ethics and standards. And Facebook has an ongoing fact-checking project.

Still, Brill says technology can’t do what humans can — such as pointing out what interests are really behind a popular website. “Whatever algorithms Google has, it’s not working” to defeat the fake-news scourge, he said.