Is that Facebook account real? Meta reports "rapid rise" in fake AI-generated profile photos


Facebook’s parent Meta is seeing a “rapid rise” in fake AI-generated profile photos.

Publicly available technology such as generative adversarial networks (GANs) allows anyone, including threat actors, to create convincing deepfakes, producing dozens of synthetic faces in seconds.
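
To illustrate why generation is so fast: a GAN's generator is just a neural network that maps a random noise vector to an image, so producing a new face is a single forward pass, and a batch of faces is one batched pass. Below is a minimal sketch in PyTorch; the tiny generator and its random weights are placeholders for illustration only, not Meta's tooling or any real face model, which would be far larger and load trained weights.

```python
# Minimal sketch of GAN image sampling, assuming PyTorch is installed.
# The tiny DCGAN-style generator below is a placeholder: a real face
# generator (e.g., a StyleGAN variant) would load trained weights.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector up to an 8x8 feature map...
            nn.ConvTranspose2d(latent_dim, 128, 8, 1, 0), nn.ReLU(),
            # ...then upsample to 16x16, 32x32, and finally 64x64 RGB.
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

generator = TinyGenerator()
z = torch.randn(24, 100, 1, 1)   # 24 random latent vectors
with torch.no_grad():
    fakes = generator(z)         # 24 synthetic 64x64 images in one pass
print(fakes.shape)               # torch.Size([24, 3, 64, 64])
```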

These are “basically photos of people who don’t exist,” said Ben Nimmo, leader of Global Threat Intelligence at Meta. “It’s not actually a person in the image. It’s an image created by a computer.”

“More than two-thirds of all [coordinated inauthentic behavior] networks we disrupted this year featured accounts that likely had GAN-generated profile pictures, suggesting that threat actors may see it as a way to make their fake accounts look more authentic and original,” Meta revealed in a report made public on Thursday.

Researchers at the social media giant “look at a combination of behavioral cues” to identify GAN-generated profile photos, a step beyond the reverse image searches used to catch profile photos lifted from stock photography.

Meta included some of the fakes in its recent report. The two images below are among several that are fake; when they are superimposed, as shown in the third image, all the eyes line up exactly, revealing their artificiality.

Fake AI-generated Facebook profile for “Ali Ahmed Ghanem.” (Meta)


Fake AI-generated “Alice Schultz” Facebook profile photo. (Meta)


Six AI-generated photos of supposedly different individuals which, when superimposed at right, show the eyes of all of them lining up perfectly, revealing them as fakes. (Meta/Graphika)
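
The superimposition check described above works because common face generators place the eyes at nearly fixed pixel coordinates, so averaging a set of suspected fakes yields a composite whose eye region stays crisp while everything else blurs. Here is a minimal sketch of that check, assuming Pillow and NumPy and a hypothetical folder of same-size suspect photos; it is an illustration of the idea, not Meta's or Graphika's actual pipeline.

```python
# Sketch of the "superimpose the faces" check: average suspected fakes
# and inspect whether the eye region stays sharp in the composite.
from pathlib import Path

import numpy as np
from PIL import Image

def average_faces(image_paths, size=(256, 256)):
    """Resize each photo to a common size and average the pixels."""
    stack = []
    for path in image_paths:
        img = Image.open(path).convert("RGB").resize(size)
        stack.append(np.asarray(img, dtype=np.float64))
    composite = np.mean(stack, axis=0)  # per-pixel mean across photos
    return Image.fromarray(composite.astype(np.uint8))

if __name__ == "__main__":
    paths = sorted(Path("suspected_fakes").glob("*.png"))  # hypothetical folder
    average_faces(paths).save("composite.png")
    # GAN faces: eyes stay crisp, backgrounds wash out.
    # Real photos: the whole composite blurs, eyes included.
```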


Those trained to spot errors in AI-generated images are quick to note that not all of them look perfect: some have telltale melted backgrounds or mismatched earrings.

AI-generated image showing “melting” on the top of a baseball cap. (Meta)


“There’s a whole community of open-source researchers who love nothing more than digging up those [imperfections],” Nimmo said. “So what threat actors may think is a good way to hide is actually a good way to be detected by the open-source community.”

But the increasing sophistication of generative adversarial networks, whose algorithms may soon produce content indistinguishable from what humans create, has set up a tricky game of whack-a-mole for the social media giant’s global threat intelligence team.

Since public reporting began in 2017, more than 100 countries have been targeted by what Meta calls “coordinated inauthentic behavior” (CIB): manipulation campaigns in which fake accounts are central to the operation.

Since Meta began publishing threat reports just five years ago, the tech company has disrupted more than 200 global networks, spanning 68 countries and 42 languages, that it says violated its policies. According to Thursday’s report, “The United States was the country most targeted by [coordinated inauthentic behavior] operations that we have disrupted over the years, followed by Ukraine and the UK.”

Russia led the charge as the most “prolific” source of coordinated inauthentic behavior, with 34 networks originating from the country, according to Thursday’s report. Iran (29 networks) and Mexico (13 networks) also ranked high among geographic sources.

“Since 2017, we have disrupted networks run by individuals linked to the Russian military and military intelligence, marketing firms, and entities associated with a sanctioned Russian financier,” the report states. “While most public reporting has focused on various Russian operations targeting the United States, our investigations found that more Russian operations were targeting Ukraine and Africa.”

“If you look at the scope of Russian operations, Ukraine has consistently been the single biggest target that they have chosen,” Nimmo said, even before the Kremlin’s invasion. But the United States is also among those found to have violated Meta’s policies governing coordinated online influence operations.

Last month, in a rare attribution, Meta reported that individuals “associated with the US military” promoted a network of approximately three dozen Facebook accounts and two dozen Instagram accounts focused on US interests abroad, concentrating on audiences in Afghanistan and Central Asia.

Nimmo said last month’s takedown marks the first associated with the US military, and that the attribution was based on a “range of technical indicators.”

“This particular network was operating across a number of platforms and posting about general events in the regions it was talking about,” Nimmo continued. “For example, portraying Russia or China in those regions.” Nimmo added that Meta went “as far as we can go” in pinpointing the operation’s connection to the US military, stopping short of citing a particular branch of service or military command.

The report revealed that two-thirds of the coordinated inauthentic behavior Meta removed “most often targeted people in their own country.” Topping that group were government agencies in Malaysia, Nicaragua, Thailand and Uganda, which were found to have targeted their own populations online.

The tech giant said it is working with other social media companies to expose information warfare that crosses platforms.

“We continue to expose operations running on many different Internet services at once, with even the smallest networks following the same diverse approach,” Thursday’s report noted. “We have seen that these networks operate on Twitter, Telegram, TikTok, Blogspot, YouTube, Odnoklassniki, VKontakte, Change[.]org, Avaaz, other petition sites, and even LiveJournal.”

But critics say these kinds of collaborative takedowns are too few and come too late. In a scathing rebuke, Sacha Haworth, executive director of the Tech Oversight Project, called the report “[not] worth the paper they’re printed on.”

“By the time deepfakes or propaganda from malevolent foreign state actors reach unsuspecting people, it’s already too late,” Haworth told CBS News. “Meta has shown that they’re not interested in tampering with their algorithms that amplify this dangerous content in the first place, and that’s why we need lawmakers to step up and pass laws giving them oversight over these platforms.”

Last month, a 128-page investigation by the Senate Homeland Security Committee, obtained by CBS News, alleged that social media companies, including Meta, are prioritizing user engagement, growth and profit over content moderation.

Meta informed congressional investigators that it “remove[s] millions of offending posts and accounts every day,” and that its artificial intelligence content moderation blocked 3 billion fake accounts in the first half of 2021 alone.

The company added that it invested more than $13 billion in safety and security teams between 2016 and October 2021, with more than 40,000 people dedicated to moderation, or “more than the size of the FBI.” But as the committee noted, “that investment represented approximately 1 percent of the company’s market value at the time.”

Nimmo, who was himself a direct target of misinformation when he was pronounced dead by 13,000 Russian bots in a 2017 hoax, says the online defender community has come a long way, adding that he no longer feels like he’s “screaming into the wilderness.”

“These networks are getting caught earlier and earlier. And that’s because we’re getting more and more eyes in more and more places. If you look back at 2016, there really wasn’t a community of defenders. The guys who were playing offense were the only ones on the field. That’s not the case anymore.”


