As someone who has been studying social media countermeasures, and the ways cybercriminals evade them, for several years now, I always find it fascinating when these companies openly discuss their strategies. Of course, the technical details of these countermeasures remain closely guarded secrets ("it's an adversarial space," as Zuckerberg aptly put it), but it's good to hear confirmation of the overarching principles behind detecting and addressing inauthentic content.
Here's an excerpt from the transcript of Mark Zuckerberg's latest appearance on the Joe Rogan Experience podcast, episode #2255, January 10, 2025:
Zuckerberg: …the way that you identify that is you build AI systems that can basically detect that those accounts are not behaving the way that a human would and when we find that there’s like some bot that’s operating an account…
Rogan: How do you differentiate, how do you figure that out?
Zuckerberg: I mean there are some things that a person just would never do.
Rogan: So, um, you have met Lex Fridman, right?
Zuckerberg: Yeah, is he going to make a million actions in a minute? Probably not. I mean, it's more subtle than that. I think these guys are pretty sophisticated, and it's an adversarial space. So we find some technique, and then they basically update their techniques. But we have a team of, effectively, intelligence and counter-intelligence folks, counterterrorism folks, AI folks, who are building systems to identify which accounts are just not behaving the way that people would, and how they are interacting. Then sometimes you trace it down, and sometimes you get tips from different intelligence agencies, and you can piece it together over time. It's like, oh, this network of people is actually some kind of fake cluster of accounts, and that's against our policies, so we just take them all off.
Rogan: How are you sure? Like, is there 100% certainty that you are definitely getting a group of people that are bad actors, or is it just people that have unpopular opinions?
Zuckerberg: No I don’t think it’s that for this.
Rogan: What I'm saying is, at what percentage of accuracy are you determining this? Do you ever accidentally flag accounts for moderation that are actually just real people?
Zuckerberg: Yes. I think for the specific problem around these large coordinated groups doing election interference or something, it's a large enough group that we have a bunch of people analyzing it. They study it for a while. I think we're probably pretty accurate on that.
Zuckerberg’s remarks primarily focus on “Coordinated Inauthentic Behavior,” but the same foundational principles are applied to combat other forms of criminal activity, albeit at different scales and levels of complexity. The podcast was filled with interesting nuggets of information, and I plan to cover at least the announcement about Community Notes in F-Secure’s next F-Alert threat bulletin.
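To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of behavioral signals Zuckerberg alludes to: an account acting faster than any human plausibly could, or a group of accounts repeatedly pushing identical content. Everything here is hypothetical: the thresholds, the `Account` fields, and the function names are my own inventions for illustration, not anything Meta has disclosed.

```python
# Hypothetical sketch of "not behaving like a human" heuristics, inspired by
# the "million actions in a minute" example from the transcript. Thresholds
# and data fields are illustrative assumptions, not a real detection system.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    actions_last_minute: int      # e.g. posts, likes, follows
    distinct_content_hashes: int  # how varied the account's posted content is

HUMAN_ACTION_RATE_LIMIT = 60      # assumed: sustained >1 action/second is suspicious
MIN_CONTENT_VARIETY = 3           # assumed: bots often repost near-identical content

def looks_automated(acct: Account) -> bool:
    """Flag accounts whose raw action rate exceeds plausible human behavior."""
    if acct.actions_last_minute > HUMAN_ACTION_RATE_LIMIT:
        return True
    # High volume of near-identical content is another simple heuristic.
    if acct.actions_last_minute > 20 and acct.distinct_content_hashes < MIN_CONTENT_VARIETY:
        return True
    return False

def find_coordinated_cluster(accounts, shared_hash_counts: Counter, min_cluster: int = 5):
    """Rough stand-in for 'fake cluster' detection: flag individual automated
    accounts, and treat content hashes shared by many accounts as coordination."""
    flagged = [a.account_id for a in accounts if looks_automated(a)]
    coordinated = {h for h, n in shared_hash_counts.items() if n >= min_cluster}
    return flagged, coordinated
```

Real systems are, as the interview stresses, far more subtle: adversaries adapt, so static thresholds like these would be trivially evaded, and production detection combines many weak signals with human analyst review rather than any single rule.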
EDIT: The threat report is out now. No paywall!
Could this signify the dawn of a new era for free speech and open expression on social media?