A massive government study in which more than 18 million images of more than 8 million people were run through almost 200 algorithms has confirmed what researchers have been warning for years: Facial-recognition systems misidentify people of color more often than white people, and women more often than men.

The federal National Institute of Standards and Technology has been combing through products from 99 mostly commercial developers in an attempt to ascertain accuracy and, now, performance across demographics. What researchers have discovered gives lawmakers good reason not to dawdle as they consider how to regulate an increasingly popular yet inherently invasive surveillance technology.

While the study released this week indicates a range in quality among algorithms, it reveals that in general the country's most vulnerable communities are also most susceptible, by a factor of as much as 100, to being falsely matched to someone else's face (or falsely not matched to their own). Silicon Valley and Seattle giants have protested in the past that existing tests revealing these discrepancies were conducted incorrectly, and that tests done right would have yielded rosier results. This report casts some doubt on that claim. If only those companies had consented to participate, we'd know for sure.

The possible harms vary by application; it's one thing if your phone won't unlock on the first attempt, and another entirely if you're mistakenly identified as a known terrorist at the airport. Some forms of facial-recognition technology could surely make both law enforcement and everyday life more efficient. But that efficiency is diminished even in the most innocuous of cases when an algorithm isn't accurate, and the trade-offs are downright dangerous in the contexts most likely to affect civil liberties, especially when inaccuracies increase among minorities.

There is a world in which careful consideration and legislation could lead to facial recognition being harnessed only for the best purposes, and forbidden for the worst. Those one-to-one scans comparing you to your passport at the airport might be opt-out; reasonable suspicion or a warrant might be required for searches against watch lists; and real-time surveillance might be prohibited except in public emergencies. Only algorithms with proven accuracy across demographics would be certified for government use, and only at high confidence thresholds would they generate results.

But this more thoughtful world isn't the one we live in. The world we live in includes a country, China, that is exploiting this powerful tool to turn the public sphere into an always-on dragnet. The regulatory response to nascent technologies here has often been to let them roll out now and worry about risks later. Congress has ample reason to reverse that order.

FROM AN EDITORIAL IN THE WASHINGTON POST