In 2014, China announced it would set up a system to judge its citizens on their trustworthiness, their public and social media behavior, the wisdom of their purchasing habits, how often they play video games, their interactions with law enforcement and government officials, whether they clean up after their dogs and more. Many pundits and academics saw this as an extraordinary step by an authoritarian state to enforce conformity on its 1.3 billion-plus residents.

China isn't doing this just to nudge people to be better citizens. Instead, it is punishing or rewarding them based on their "social credit score." High rankings ensure quicker processing of government documents, better access to goods and housing, and other sorts of preferential treatment. Low rankings reduce or eliminate access to many goods, public and private services and travel options.

Beijing actively shames individuals it finds wanting. A cellphone app produces a graphic showing those nearby who are socially "deficient" and why they are considered untrustworthy. As Newsweek reported in 2018, the biggest sin of all is being a government critic. It detailed how an investigative journalist who had gone after official corruption found his daughter denied access to good schools, among many other penalties. Shades of "1984."

Yet as many publications have pointed out, the United States has a de facto equivalent — except it is overseen by private companies, not the government. These firms use artificial intelligence-driven algorithms to vacuum up a stunning variety of data points collected automatically by cellphone apps and from across the internet. They use this data to make sweeping judgments about Americans — all without transparency about what goes into those judgments.

In one example, employers buy scores from HireVue and Cornerstone to judge the merits of job candidates based in part on what algorithms conclude about their facial scans and speaking patterns.

The Washington Post reported last year that "HireVue's 'AI-driven assessments' have become so pervasive in some industries, including hospitality and finance, that universities make special efforts to train students on how to look and speak for best results. More than 100 employers now use the system, including Hilton and Unilever, and more than a million job seekers have been analyzed."

In another revelation, landlords buy scores from CoreLogic and TransUnion to evaluate not just the finances of potential renters but their trustworthiness.

This should be extraordinarily controversial, not just the occasional topic of tech journalists. As California consumer advocates Harvey Rosenfield and Laura Antonini pointed out recently in the Post, these ratings are tantamount to a high-tech version of Jim Crow laws: "Surveillance scoring enables companies to cloak old-school discrimination in an aura of technological infallibility and wonder."

Using finances as a proxy for people's character or their worth as a potential employee is hugely problematic given that poverty is more prevalent in Black and Latino communities than white ones. And without thorough explanations on how algorithms judge individuals on what their faces look like and how they talk — among many possible examples — it's certainly conceivable they are using the same stereotypes that have created obstacles for communities of color for centuries.

What makes the growing use of these tools to judge individuals so hard to believe is that they run counter to some of the strongest cultural trends of the past decade. The Black Lives Matter movement focuses on racial inequality. The Occupy movement focused on income inequality.

But perhaps these movements should focus as much on the private sector as government. The idea that companies are ramping up use of algorithms that constantly make snap judgments on us all — judgments with potentially deep consequences for our lives — feels like the corporate version of the police tech in the movie "Minority Report."

Maybe this will finally turn the loose community of online privacy advocates into a movement of its own. Those who don't give a second thought to apps tracking us to figure out what products we want to buy may change their minds when they realize the 24/7/365 tech surveillance we've accepted — or acquiesced to — is being weaponized against many by our judgmental domestic Skynet.

I, for one, do not welcome our new AI overlords.

Chris Reed is the deputy editorial and opinion editor of the San Diego Union-Tribune. Twitter: @chrisreed99. E-mail: chris.reed@sduniontribune.com.