
Hach & Rose Omegle Lawsuit Featured in USA Today

Posted on Monday, November 15th, 2021 at 7:30 pm    

Online child sex abuse on sites like Facebook, not just the ‘dark web’: Can they stop it?

This article originally appeared on northjersey.com and was written by Dustin Racioppi of the Trenton Bureau.

Soon after schools in New Jersey switched to remote instruction during the COVID-19 pandemic last March, an 11-year-old girl went home with her school-issued Chromebook and logged into the popular chat website Omegle for the first time.

The platform, whose tagline is “talk to strangers,” paired the girl, identified as C.H., with a group of fellow minors. They appeared to be older than her, so C.H. left.

Then, in the next chatroom, a man threatened her into stripping naked and masturbating for him, according to court papers. C.H. had become one of the pandemic’s earliest victims in an unprecedented wave of child sexual abuse and exploitation online.

Home isolation and remote instruction in the pandemic, paired with little government oversight, underfunded law enforcement and poor self-regulation by tech platforms, have created a fertile breeding ground online for pedophiles, law enforcement officials and experts said.

The result has been a staggering 65.4 million images of suspected child sexual abuse material reported to the National Center for Missing and Exploited Children last year. But 95% of the reports from internet companies came from Facebook alone, raising questions about the efforts platforms put into self-policing for child abuse.

Comparable tech giants reported far smaller figures to the national center: 65,062 by Twitter and 265 by Apple, for example.

“The fact is these companies are not investing significant resources into developing technology to stop illegal activity and child predators,” said Hany Farid, a professor at the University of California, Berkeley.

“They can talk all they want about how important this material is and how there’s zero tolerance, but I can tell you, having worked with these companies for over a decade now, their heart is just not in it,” he added. “It’s not good for business.”

As a result, Farid said, abusers “have a sense of immunity.”

That seemed to be the case when a man identified in a lawsuit as John started talking to C.H. in the second chat room she entered on Omegle last year.

John told C.H. he knew where she lived and provided her the geolocation of her home. Then he threatened to hack the cell phones and computers in her house before demanding she strip and masturbate in front of the computer’s camera for him, according to court papers.

Parents often think online sexual abuse can’t happen to their child, or that it’s a problem in other places, in the dark corners of the internet, experts say.

That was the mindset of C.H.’s mother, M.H., she said in an interview. Her focus at home was getting through the upheaval brought on by the pandemic, not the threat of her children being exploited on the internet, she said.

“I didn’t even think that, ‘Oh my god, now we’re going to go online and potentially have a random website that targets kids to profit off of the pandemic,’” said M.H., whose full identity, and that of her daughter, is being withheld out of privacy concerns.

“I never, ever saw this coming,” she said.

Stacia Lay, an attorney for Omegle, said she could not comment on the family’s case against the company but said “the vast majority” of interactions on the site help people meet and share perspectives from around the world.

“Any inappropriate behavior that has occurred, while a very small percentage of the millions of daily interactions, is deeply disturbing and unacceptable,” Lay said.

“We have enhanced and strengthened Omegle’s moderation practices to help prevent inappropriate use of our technology so that a small minority of bad actors don’t ruin the positive interactions experienced by millions of users.”

Child sexual abuse material, or CSAM, is a term preferred by law enforcement that covers a range of activity.

It could mean a video, photo or livestream of an adult sexually abusing a child. It could mean a child who has been “sextorted” into sharing images of themselves naked or touching themselves. It could mean teenagers trading — or selling — nude selfies on social media platforms.

“Our investigators see a lot of videos where kids are performing sex acts for (the) camera, in their bedrooms and their bathrooms, and you can hear the parents’ voices in the next room,” said Julie Inman Grant, Australia’s e-Safety Commissioner and former employee of Adobe, Microsoft and Twitter. “It’s almost all remote coercion.”

Under federal law, online companies face criminal liability if they know about child sexual abuse material on their platforms and don’t take it down “as soon as reasonably possible.”

But, “the thing is, you don’t have to look,” said Brian Levine, director of the Cybersecurity Institute at the University of Massachusetts Amherst. The government should step in with more regulation, Levine said, because there are “almost no laws in place for tech companies” to protect children from abuse online.

In a sweeping report on the prevalence of abuse material, the Canadian Centre for Child Protection said that in the absence of regulatory requirements companies “have no commercial or legal interest in investing in measures to prevent the images from surfacing or re-surfacing in the first place.”

“There are no consequences for inaction on the prevention side,” the center said.

Most tech companies use PhotoDNA, a technology that creates a unique digital signature, or “hash,” of an image so it can be matched against other photos to find known abuse material. But it is also decade-old technology, and few companies have invested in advancements to identify CSAM, experts said.
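In broad strokes, hash matching works like fingerprint comparison: each uploaded image is reduced to a short signature and checked against a list of signatures for previously identified material. The Python sketch below is only a simplified illustration of that matching step, not PhotoDNA itself; the hash list and function names are hypothetical, and PhotoDNA uses a perceptual hash that still matches after an image is resized or re-encoded, rather than the exact SHA-256 digest used here.

```python
import hashlib
from pathlib import Path

# Hypothetical set of signatures for previously identified material.
# A real system would load vetted hash lists from a clearinghouse;
# the all-zero entry below is just a placeholder.
KNOWN_HASHES = {"0" * 64}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_material(path: Path) -> bool:
    """True if the file's digest appears in the known-signature set."""
    return file_digest(path) in KNOWN_HASHES
```

In practice, a platform would run a check like this at upload time and route any match to human review and the legally required reporting.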

But even if all online companies stepped up their efforts and reported more abuse to the national center, already swamped law enforcement agencies would fall even farther behind, experts said.

“What do you do with 60 million reports a month?” said Matthew Green, a cryptographer and professor at Johns Hopkins University. “We don’t have the police to deal with 1.6 million or 160,000 reports a month.”

Apple, the world’s largest company, announced earlier this year it would scan devices for images of child sexual abuse but it delayed those plans after facing backlash over privacy concerns.

Jennifer Granick, surveillance and cybersecurity counsel for the ACLU’s Speech, Privacy, and Technology Project, said no company wants abuse material on its platform, but companies try to balance child safety with personal privacy.

Apple’s announcement raised alarms among privacy advocates like the ACLU because “it’s an expansion of the territory of surveillance,” Granick said, and “it is ripe for abuse and to be extended to things that are not illegal.”

Apple did not respond to messages seeking comment.

Some popular social media platforms have become breeding grounds for abusers to groom victims and openly seek abusive images.

Ninety-seven percent of the CSAM found by the Canadian center in a global analysis was on the “clear web,” it said, meaning publicly accessible websites such as Facebook, Google, Reddit and Twitter.

In a separate analysis, the Canadian center said 78% of images and videos depicted children under 12 years old.

On Twitter, the USA TODAY Network conducted a hashtag search one day in July and found users looking “for 12-17 girls” and “Teen only or r8pe videos.”

Many included links that purportedly led to files of such material, but the Network did not click them to verify. Simply typing in known terms automatically populated the search box with the hashtags to find users discussing or trading abuse material.

But according to a 2019 study by the Canadian center, Twitter makes it “extremely difficult” to report abuse material and it received the organization’s lowest rating compared to platforms such as Facebook, Bing and Pornhub, the popular pornographic website.

And although Twitter is one of the largest social media platforms, its 65,000 reports of abuse imagery last year “are extremely low for the size of Twitter’s platform,” according to the National Center on Sexual Exploitation.

The national center said in a lawsuit against Twitter that it has “enabled and profited” from abuse material on its platform and described it as “one of the most prolific distributors of material depicting the sexual abuse and exploitation of children.”

The center said when Twitter was first alerted in 2018 that abuse imagery featuring the center’s two anonymous clients was on the website, the company “refused” to remove it. Twitter can, and has, blocked certain hashtags, but the ones included in that lawsuit had not been blocked as of July, when the Network used them for searches.

“If you look at what’s going on right now with big tech, they’re selling this narrative: ‘Oh, we’ve got this filter, oh, we’ve got this to keep kids safe,’” said Lianna McDonald, executive director of the Canadian center. “It’s a bunch of baloney.”

For victims whose images spread online, the fallout can be intense and lifelong. They often struggle with interpersonal relationships, holding down a job and substance abuse.

Knowing that those images are circulating makes it difficult to move past the trauma of abuse, said Cassie, one such victim. A large part of that is because federal law enforcement notifies victims each time their image is known to have appeared online.

She holds tech companies largely responsible for the re-traumatization that happens to victims like her each time they learn their images were found online.

“All of these platforms are making money off these pedophiles sharing these images,” Cassie said.

Facebook said it uses PhotoDNA, VideoDNA and other methods across all its apps to detect and remove abusive images from being shared. It also has 40,000 people working on safety and security and has invested $13 billion in “teams and technology” since 2016, the company said.

Since 2019, Facebook said it has made its technologies open source, allowing other developers and platforms to more easily identify abusive content.

“We have no tolerance for this abhorrent abuse of children and use sophisticated technologies to combat it. We’ve funded and helped build the tools used to investigate this terrible crime, rescue children and bring justice to victims. We’ve shared our anti-abuse technologies with other companies and worked with experts to prevent and tackle this abuse,” Antigone Davis, global head of safety for Facebook’s parent company, Meta, said in a statement.

Farid, the Berkeley professor, said social media companies in general “are failing at this job” of protecting children on their platforms. He compared them to the airline industry.

“Imagine when the Boeing 737 Maxes fell out of the sky and the CEO of Boeing came up and said, ‘Hey look, here’s all these planes that we landed safely,’” Farid said.

“Does anybody think that that would be a reasonable response to 200 some-odd people that died when that plane crashed? You don’t point over here and say ‘I do all these things well’ when there’s horrific crimes happening on your platforms, but that’s exactly what the industry does.”

If Facebook encrypted all its platforms without ensuring it would not lead to further exploitation, the move could effectively “make invisible” 12 million, or 70%, of abuse material cases, according to a Securities and Exchange Commission document.

“Why can’t we all agree that encryption is not helping the privacy of these children?” Levine said. “It’s putting them in a situation that puts them in danger.”

The COVID-19 pandemic undoubtedly factored into the increase of online abuse because “minors have been at home more than ever” and “offenders are home more than ever right alongside of them,” said Steve Grocki, chief of the child exploitation and obscenity section at the U.S. Department of Justice.

A 28% increase over 2019 in tips of suspected abuse sent to the national center seems to bolster Grocki’s point.

But statistics cannot convey the full extent of what’s happening online, a lesson Mrs. Williams, whose full name has been withheld, recently learned firsthand.

A foster parent of several children, Williams said she was “shocked” even after being warned by Homeland Security that two of her adopted daughters had been abused by their father. The agent had shown her filing cabinets full of abuse cases and told her the agency had storage units with more, but Williams didn’t understand until she started receiving notices in the mail that are required to be sent each time a victim’s image is discovered online.

The notices flooded her post office box. After going away for a week, she said she returned to find her mailbox full.

It was full again the next day, and included a notification saying as much. Mrs. Williams went to the post office counter to learn more. The staff returned carrying “two huge crates” full of notifications that sexual abuse images of her new daughters had been found online.

“It was heartbreaking to see,” she said. “I sat on the floor in my bedroom and I went through the mail, or I started to, and it was just overwhelming because I didn’t realize how severe of a problem we had.”

Contact Hach Rose Schirripa & Cheverie LLP right now at (212) 779-0057 for a FREE, discreet consultation