Uncovering a long-lasting porn spam campaign on YouTube (NSFW, maybe)

In December 2022 I stumbled upon an interesting spam campaign in YouTube comments, one that promoted a shady camgirl / porn website through clever use of YouTube features. I screengrabbed some video evidence and took a quick look at the campaign, but didn’t have time to dig any deeper.

I had forgotten about the whole thing until late April 2023, when I saw the same campaign still going strong, still using exactly the same vectors on YouTube, still promoting the same site.

And this time I took a closer look, going down the rabbit hole of sus af adult website promotion. For science!

Continue reading “Uncovering a long-lasting porn spam campaign on YouTube (NSFW, maybe)”

What are social media countermeasures?

As the guy who pretty much owns the #socialmediacountermeasures hashtag on Twitter, I figured it makes sense to give the term a proper definition beyond just 280 characters.

In short, social media countermeasures are the techniques – both automated and manual – that social media services use to detect, flag, and remove malicious content. And by malicious, I mean the actually harmful content created by scammers and other cyber criminals. These countermeasures therefore do not involve enforcing narratives, shadowbanning, or other forms of suppressing freedom of speech in the name of “fighting disinformation” (1, 2).

The countermeasures these platforms use are, of course, a trade secret, and very little information about them is publicly available. Keeping them secret is a competitive advantage and makes criminals’ lives harder. We can, however, deduce that all major platforms have long since evolved beyond simple blacklists of words or URLs as a means of detecting malicious content. Behavior analysis seems to be the area of focus these days, as social media companies can hoover up massive amounts of usage data from real users and build a model around it. A behavior model alone isn’t enough though: it only gives some sort of average, or an acceptable variance, of typical behavior, but it lacks context. Without context, a model like that can still easily detect, for example, bot-driven copypaste spam campaigns, but when a person writes messages manually (or at least seemingly so) to scam or phish a specific individual, detection becomes a lot harder.
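To make that distinction concrete, here is a purely hypothetical toy sketch in Python – the platforms’ real systems are trade secrets, so every name and threshold below is made up – contrasting a naive blacklist filter with a crude rate-based behavioral check:

```python
import time
from collections import defaultdict

# Toy illustration only -- real platform countermeasures are not public.
BLACKLISTED_TERMS = {"free-crypto.example", "hot-cams.example"}  # made-up URLs

def blacklist_filter(comment: str) -> bool:
    """Flag a comment containing a known-bad term. Trivial to evade with
    misspellings, URL shorteners, or homoglyphs."""
    return any(term in comment.lower() for term in BLACKLISTED_TERMS)

class BehaviorFilter:
    """Flag accounts whose posting rate deviates from typical behavior.
    This toy version looks only at comment frequency; a real model would
    use far more signals, but would still lack per-message context."""

    def __init__(self, max_comments_per_minute: int = 5):
        self.max_rate = max_comments_per_minute
        self.history = defaultdict(list)  # account -> recent post timestamps

    def is_suspicious(self, account: str) -> bool:
        now = time.time()
        # Keep only posts from the last 60 seconds, then add this one.
        recent = [t for t in self.history[account] if now - t < 60]
        recent.append(now)
        self.history[account] = recent
        return len(recent) > self.max_rate
```

A copypaste bot blasting the same link dozens of times a minute trips the rate check immediately, while a scammer manually typing one tailored message to one victim sails under both filters – which is exactly the context gap described above.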

That’s why I’ve seen criminals deploy automated tactics that simulate normal behavior, such as introducing an artificial delay before auto-answering a message or a tweet, or sometimes even staging fake conversations between bots, in which they happen to promote a scam service and so forth.
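As a hypothetical illustration of that delay trick – a toy sketch, not code from any real campaign – a bot could randomize its response timing like so:

```python
import random
import time

def humanlike_reply(send, message: str) -> None:
    """Post an automated reply only after a randomized 'reading and typing'
    delay, so the response timing resembles a human rather than an instant
    bot answer. `send` is a placeholder for whatever posting call is used."""
    reading_delay = random.uniform(5, 30)               # seconds spent "reading"
    typing_delay = len(message) / random.uniform(3, 7)  # characters per second
    time.sleep(reading_delay + typing_delay)
    send(message)
```

The point of the jitter is to blur the machine-like timing signature that a behavior model would otherwise flag.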

These could be called counter-countermeasures. It’s a never-ending cat-and-mouse game between defenders’ tools and attackers’ criminal cunning. This is why most spam messages – YouTube comments, for example – automatically end up in the “Held for review” folder (meaning the countermeasures caught them), while a few evade detection and land among the legitimate comments.

Recently I saw a very interesting malicious campaign in YouTube comments, utilizing stolen accounts and impressively contextual, real-looking comments. I immediately recognized it for what it was, though, which once again raises the question: how on earth did it evade YouTube’s countermeasures when it was so blatantly obvious to me? Unless you get a job in YouTube’s countermeasures unit, you’ll never know.

I will make another blog post about that campaign, though. It’s a very interesting example of using multiple layers of the site’s features to lure victims to a specific website. It’s a bit NSFW, so I first need to figure out whether I need to sanitize my screengrabs.

Finally, I’d like to remind everyone to report all scam messages. Reports do improve the detection rate in the future! I also shared this tip in the November 2022 issue of F-Alert, the monthly threat report by F-Secure. Feel free to download the report and read my article about a curious Facebook scam targeting Page Admins.

Social Media Countermeasures – Battling Long-Running Scams on YouTube, Facebook, Twitter and Instagram

For the past few years, I’ve been documenting, screenshotting, and sharing examples of criminal campaigns on the three big social media platforms: Facebook, YouTube and Twitter. I’m not that interested in speculating whether something is fake content falsely amplified by nation-state-sponsored threat actors (i.e. coordinated inauthentic behavior); instead I’ve been focusing on two (a lot less media-sexy) themes:

  1. low-tier criminals using these platforms to promote their services
  2. so-called “support scams” targeting mainly Facebook page owners

What these two have in common is that they keep getting through the social media platforms’ automatic filtering. I call this filtering – the good-willed type, not the censorship type – social media countermeasures. It’s a term I think I picked up from Destin, who runs the Smarter Every Day YouTube channel, but I haven’t really seen it used elsewhere. In a nutshell, social media platforms are trying to create countermeasures to prevent malicious behavior on their platforms, while at the same time cyber criminals are developing counter-countermeasures to bob and weave their way around detection and filtering. Sometimes these criminals simply operate in a grey area not explicitly covered by a platform’s Terms of Service, which makes developing effective countermeasures even harder. Let’s take a look at a few examples.

Continue reading “Social Media Countermeasures – Battling Long-Running Scams on YouTube, Facebook, Twitter and Instagram”