Microsoft President Brad Smith paints an Orwellian picture of the future in his latest call for government regulation of facial-recognition technology.

Smart camera systems could follow us anywhere, tracking our whereabouts and activities for companies and governments to scrutinize.

"It could follow anyone anywhere, or for that matter, everyone everywhere," Smith wrote in a blog post Thursday.

Smith also pointed out the benefits of facial-recognition technology, which has received praise for helping police find missing children and identify criminals.

But without regulation, he added, "this use of facial-recognition technology could unleash mass surveillance on an unprecedented scale."

It's not too late to put safeguards on the technology before that happens in the U.S., he argues. "We must ensure that the year 2024 doesn't look like a page from the novel '1984,'" he wrote, referring to George Orwell's dystopian classic.

Smith outlined the company's recommendations for government regulation and tech-company policies, which Microsoft has been developing since announcing this summer that it would support regulation of facial-recognition technology.

The proposals include a law requiring that consumers be informed when facial-recognition technology is in use in a public place. The technology, which uses cameras and machine-learning systems to analyze and identify faces, is becoming increasingly common in the country as it grows more accurate; it is already used as a security measure in schools and at retail stores to observe consumers' shopping patterns.

Microsoft also recommended laws requiring human review of results from artificial-intelligence systems before those results are used to make decisions about people, especially where the consequences could be legal or otherwise significant. Such review could help cut down on bias and discrimination, Smith wrote, an issue that developers of facial-recognition technology have struggled with and come under fire for, particularly when the technology is used by law enforcement.

Studies have found that several facial-recognition systems make more errors when identifying women and people of color than when identifying white men. Microsoft and others have vowed to work on the problem, and Microsoft notes that its own Face API system has become more accurate at identifying people.

Smith also recommended requiring tech companies to publish documents that clearly explain their technology's capabilities and limitations and to allow third-party groups to independently test their systems, and requiring governments to obtain court orders in many cases before persistently monitoring people with facial-recognition technology.

Two civil-liberties organizations acknowledged that Microsoft has done better than other companies in addressing the need for regulations, but said the recommendations did not go far enough.

"Microsoft gets some things right, but unfortunately the protections they're suggesting are not sufficient," said Shankar Narayan, who directs the technology and liberty project at the American Civil Liberties Union of Washington. "Their actions won't prevent '1984,' they will accelerate it," he said.

Narayan called for a moratorium on tech companies selling facial-recognition technology to government and law-enforcement agencies. Even when it operates perfectly, he said, the technology can be used to racially profile and discriminate against groups of people.

The Electronic Frontier Foundation echoed the call to prevent companies from selling the technology to law enforcement.