When Jessie Battaglia started looking for a new babysitter for her 1-year-old son, she wanted more information than she could get from a criminal-background check, parent comments and a face-to-face interview.

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of a 24-year-old candidate, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

The system didn’t explain why it had made that decision. But Battaglia, who had believed the sitter was trustworthy, suddenly felt pangs of doubt.

“Social media shows a person’s character,” said Battaglia, 29, who lives near Los Angeles. “So why did she come in at a 2 and not a 1?”

Predictim is offering parents the same playbook that dozens of other tech firms are selling to employers around the world: artificial-intelligence systems that analyze a person’s speech, facial expressions and online history with promises of revealing the hidden aspects of their private lives.

But critics say Predictim and similar systems present their own dangers by making automated and possibly life-altering decisions virtually unchecked.

The systems depend on black-box algorithms that give little detail about how they reduced the complexities of a person’s inner life into a calculation of virtue or harm. And even as Predictim’s technology influences parents’ thinking, it remains entirely unproven, largely unexplained and vulnerable to quiet biases over how an appropriate babysitter should share, look and speak.

There’s this “mad rush to seize the power of AI to make all kinds of decisions without ensuring it’s accountable to human beings,” said Jeff Chester, the executive director of the Center for Digital Democracy, a tech advocacy group. “It’s like people have drunk the digital Kool-Aid and think this is an appropriate way to govern our lives.”

Sal Parsa, Predictim’s chief and co-founder, said the company, which launched last month, takes ethical questions about its use of the technology seriously. Parents, he said, should see the ratings as a companion that “may or may not reflect the sitter’s actual attributes.”

But the danger of hiring a problematic or violent babysitter, he added, makes the AI a necessary tool for any parent hoping to keep his or her child safe.

A Predictim scan starts at $24.99 and requires a babysitter’s name and email address, along with her consent to grant broad access to her social media accounts. The scans analyze the entire history of a babysitter’s social media activity.

Parents could, presumably, look at their sitters’ public social media accounts themselves. But the computer-generated reports promise an in-depth inspection of years of online activity, boiled down to a single digit: an intoxicatingly simple solution to an impractical task.

Malissa Nielsen, Battaglia’s 24-year-old babysitter, recently agreed when two separate families asked her to give Predictim access to her social media accounts. She said she has always been careful on social media and figured sharing more about herself couldn’t hurt: She goes to church once a week, doesn’t curse and is finishing a degree in early childhood education, which she hopes to use to open a preschool.

But after she learned that the system had given her imperfect grades for bullying and disrespect, she was stunned.

“Why would it think that about me?” Nielsen said. “A computer doesn’t have feelings. It can’t determine all that stuff.”