What are social media countermeasures?

As the guy who pretty much owns the #socialmediacountermeasures hashtag on Twitter, I figured it makes sense to give the term a proper definition beyond just 280 characters.

In short, social media countermeasures are the techniques – both automated and manual – that social media services use to detect, flag, and remove malicious content. And by malicious, I mean actually harmful content created by scammers and other cyber criminals. These countermeasures therefore do not include enforcing narratives, shadowbanning, or other forms of suppressing freedom of speech in the name of “fighting disinformation” (1, 2).

The countermeasures these social media platforms use are, of course, a trade secret, and very little information about them is publicly available. Keeping it that way is a competitive advantage and makes criminals’ lives harder. We can, however, deduce that all major platforms have long since evolved beyond simple blacklists of words or URLs as a means of detecting malicious content. Behavior analysis seems to be the area of focus these days, as social media companies can hoover up massive amounts of usage data from real users and then build a model around it. A behavior model alone isn’t enough, though: it only gives us some sort of average, or an acceptable variance, of typical behavior, but it lacks context. Even without context, a model like that can easily detect, for example, bot-driven copy-paste spamming campaigns, but when a person writes messages manually (or at least seemingly manually) to scam or phish a specific individual, detection becomes a lot harder.
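To make the contrast concrete, here’s a toy sketch of the two approaches – a static blacklist versus a crude behavioral signal. This is purely my own illustration with made-up domains and messages, not any platform’s actual countermeasure logic:

```python
# Toy sketch: static blacklist matching vs. a crude behavior signal.
# Hypothetical domains and messages; NOT any platform's real logic.
from collections import Counter

BLACKLIST = {"free-crypto.example", "win-prize.example"}  # static blocklist

def blacklist_hit(message: str) -> bool:
    """Old-school detection: flag a message containing a known-bad URL."""
    return any(bad in message for bad in BLACKLIST)

def duplicate_ratio(messages: list[str]) -> float:
    """Crude behavior signal: fraction of messages that are verbatim repeats.
    Bot-driven copy-paste campaigns score high; real users rarely repeat."""
    if not messages:
        return 0.0
    counts = Counter(messages)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(messages)

bot_feed = ["Check out free-crypto.example now!!!"] * 8 + ["hello"]
human_feed = ["nice video", "thanks for this", "what camera do you use?"]

print(blacklist_hit(bot_feed[0]))           # True
print(round(duplicate_ratio(bot_feed), 2))  # 0.89
print(duplicate_ratio(human_feed))          # 0.0
```

Note how the duplicate-ratio signal catches the copy-paste campaign even without any blacklist – but a one-off, hand-written scam message would sail past both checks, which is exactly the context problem described above.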

That’s why I’ve seen criminals deploy automated tactics that simulate normal behavior, such as introducing a fake delay before auto-answering a message or a tweet, or sometimes even creating fake conversations between bots, in which the bots happen to promote a scam service and so forth.

These could be called counter-countermeasures. It’s a never-ending cat-and-mouse game between defenders’ tools and attackers’ cunning. This is why, while most spam messages – in YouTube comments, for example – end up automatically in the “Held for review” folder (meaning the countermeasures caught them), a few will evade detection and land among the legitimate comments.
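As a toy illustration of why the fake-delay tactic exists at all: overly regular reply timing is one trivially measurable behavioral tell. The sketch below is my own invention with made-up numbers, not a real detection model:

```python
# Toy illustration: near-constant reply delays give a naive auto-responder
# away, which is why attackers randomize them. Numbers are invented.
import statistics

def delay_variability(delays_seconds: list[float]) -> float:
    """Coefficient of variation of reply delays.
    Humans are noisy (high value); naive bots are not (low value)."""
    mean = statistics.mean(delays_seconds)
    return statistics.pstdev(delays_seconds) / mean

naive_bot = [2.0, 2.1, 2.0, 1.9, 2.0]        # replies in ~2 s, every time
human = [30.0, 400.0, 5.0, 1200.0, 90.0]     # all over the place

print(delay_variability(naive_bot) < 0.1)    # True
print(delay_variability(human) > 0.5)        # True
```

Of course, once attackers draw their fake delays from a human-like distribution, this particular signal stops working – which is the cat-and-mouse game in miniature.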

Recently I saw a very interesting malicious campaign in YouTube comments, utilizing stolen accounts and impressively contextual, real-looking comments. I did, however, immediately recognize it for what it was, which once again begs the question: how on earth did it go undetected by YouTube’s countermeasures when it was so blatantly obvious to me? Unless you get a job in YouTube’s countermeasures unit, you’ll never know.

I will make another blog post about that campaign, though. It’s a very interesting example of using multiple layers of the site’s features to lure victims to a specific website. It’s a bit NSFW, so I first need to figure out whether I need to sanitize my screengrabs.

Finally, I’d like to remind everyone to report all scam messages. Reports do improve the detection rate in the future! I also shared this tip in the November 2022 issue of F-Alert, the monthly threat report by F-Secure. Feel free to download the report and read my article about a curious Facebook scam targeting Page Admins.

Everyman’s Cyber Defence

The following is my translation of “Jokamiehen kyberpuolustus” (Everyman’s Cyber Defence), a short snippet from the publicly available document #kyberpuolustus : kyberkäsikirja Puolustusvoimien henkilöstölle (2019) by Laari, Flyktman, Härmä, Timonen and Tuovinen. The source material is written in Finnish and free to download from the National Defence University of Finland’s website. I intend no copyright infringement and share this as cyber security awareness material in the public interest.

Continue reading “Everyman’s Cyber Defence”

Social Media Countermeasures – Battling Long-Running Scams on YouTube, Facebook, Twitter and Instagram

For the past few years, I’ve been documenting, screenshotting, and sharing examples of criminal campaigns on the three big social media platforms: Facebook, YouTube and Twitter. I’m not that interested in speculating whether or not something is fake content, falsely amplified by nation-state-sponsored threat actors (i.e. coordinated inauthentic behavior). Instead, I’ve been focusing on two (a lot less media-sexy) themes:

  1. low-tier criminals using these platforms to promote their services
  2. so-called “support scams” targeting mainly Facebook Page owners

What these two have in common is that they keep getting through the social media platforms’ automatic filtering. I call this filtering – the good-willed type, not the censorship type – social media countermeasures. It’s a term I think I picked up from Destin, who runs the Smarter Every Day YouTube channel, but I haven’t really seen it used elsewhere. In a nutshell, social media platforms are trying to create countermeasures to prevent malicious behavior on their platforms, and at the same time cyber criminals are developing counter-countermeasures to bob and weave their way around detection and filtering. Sometimes these criminals simply operate in a grey area not explicitly covered by a platform’s Terms of Service, which makes developing effective countermeasures even harder. Let’s take a look at a few examples.

Continue reading “Social Media Countermeasures – Battling Long-Running Scams on YouTube, Facebook, Twitter and Instagram”

What is Ransomware 3.0?

I believe there’s a pretty clear consensus within the industry that ransomware should no longer be assumed to limit itself to encrypting files and demanding payment for a decryption key. Dubbed “Ransomware 2.0” by F-Secure, the standard practice for ransomware groups now also includes stealing files from the target company to increase leverage for the ransom. Proper backups are an antidote to encrypted files but won’t help against the threat of stolen data being leaked.

Although this double extortion scheme has been the new modus operandi only since late 2019, cyber criminals are already looking for additional ways to apply pressure on their victims. This is where Ransomware 3.0 comes in.

Continue reading “What is Ransomware 3.0?”

Cyber Security in Gaming – Extensive Show Notes for KOVA Podcast X F-Secure

Recently I was invited to the KOVA Esports podcast to talk about cyber security, online privacy and identity management from the perspective of gamers and the gaming industry in general. Hosted by KOVA’s General Manager Timo Tarvainen and joined by their streamer Teemu “Spamned” Rissanen, we had a great hour-long discussion. This post covers my own notes on the things we mentioned, source links included, and expands further on some of the topics. Links to the podcast episode can be found at the bottom of the page. Enjoy!

Continue reading “Cyber Security in Gaming – Extensive Show Notes for KOVA Podcast X F-Secure”

YouTube Channel Phishing, Part 2: The Enemy Evolves

Last year I took a first look at a phishing campaign that was, interestingly, targeting YouTube channel owners’ email addresses. The aim of the campaign was to guide people to a fake YouTube sign-in page and phish their login credentials. Note that this did not target YouTube accounts in general, but actual channels. These were my main findings:

  • Despite being hilariously obvious, the first four of these were not caught by ProtonMail’s spam filter
  • Out of the several YouTube channels I manage, only one was targeted
  • The same email was CC’d to other recipients
  • It’s unclear where they found my email address
  • The senders’ email service providers were initially Russian. Little to no typosquatting was involved.
  • After a few iterations, the phishing content seems to have reached its final form (for now)
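Since one of the findings above concerns typosquatting, here’s a minimal sketch of how a filter might score a sender’s domain against a trusted one using edit distance. This is my own toy illustration with hypothetical domain names – not how ProtonMail or any real spam filter actually works:

```python
# Toy typosquatting check: flag sender domains that are suspiciously
# close to, but not equal to, a trusted domain. Hypothetical examples;
# real mail filters are far more sophisticated.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_typosquatted(sender_domain: str, trusted: str, max_dist: int = 2) -> bool:
    """A near-miss of a trusted domain is more suspicious than a total stranger."""
    d = edit_distance(sender_domain, trusted)
    return 0 < d <= max_dist

print(looks_typosquatted("youtubc.com", "youtube.com"))   # True
print(looks_typosquatted("youtube.com", "youtube.com"))   # False (exact match)
print(looks_typosquatted("mail.ru", "youtube.com"))       # False (unrelated)
```

The finding that this campaign used little to no typosquatting is notable precisely because it sidesteps checks like this one: an unrelated-looking sender domain gives a distance-based heuristic nothing to latch onto.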

The campaign came in a burst, stopping as suddenly as it had started. Now, after a couple of months, it has started again, and it’s time to re-examine what has changed.

Continue reading “YouTube Channel Phishing, Part 2: The Enemy Evolves”

Wearables & Privacy – What You Need To Know

Continuing my seemingly never-ending quest of digging through privacy policies, this time I analyzed how the most popular wearables companies handle their customers’ data. Fitbit, Biostrap, Motiv, Oura and Whoop are all on the cutting edge of health technology, but are their privacy practices on par?

Fellow biohacker Alex Fergus provided me with the opportunity to publish my little research article on his website. Over the years he has published tons of information on fitness, sleep and – of course – health gadgets. A few days ago he published the most comprehensive red light panel comparison I’ve ever seen, analyzing everything from EMF levels to irradiance and LED flicker. Let’s just say he knows his stuff, so I’m excited to try to match his professionalism in that space with my own on privacy.

I believe it’s time for the biohacker community to start valuing their data more. In my guest blog post you’ll learn:

  • What data do these wearables collect?
  • Are they selling or exchanging data with third parties?
  • Data retention – how long are they storing your data?
  • What can you do?
  • And more…

So head over to alexfergus.com and learn everything you need to know about wearables and privacy!

“YouTube channel will be disabled within 24 hours!” Phishing Campaign First Look

During the past few months I’ve witnessed – and been targeted by – a rather simple but still interesting phishing campaign. Well, not me personally, but a YouTube channel that I run. The campaign has noticeably sped up in November, so I decided to take a closer look at these phishing emails and share my findings with you.

Continue reading ““YouTube channel will be disabled within 24 hours!” Phishing Campaign First Look”

Freedom of Speech in the Age of Privacy Policies

(I got access to the thinkspot beta and this was my first post on that platform. I decided to crosspost it here to increase awareness of thinkspot, and also because the issues I raise here are relevant on other social media platforms as well.)

Hi, I’m Joel, and I eat Privacy Policies for breakfast.

I’m thrilled to be among the first users of a social platform that encourages free speech and the exchange of ideas, driven by the idea of diversity of minds – true diversity – not the superficial diversity of how we look or where we come from. However, there can be no free speech without privacy. In a similar vein, Snowden famously wrote a few years ago that “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” Well, I care about both. It makes a lot of sense, then, for my first contribution on this platform to be an analysis of thinkspot’s Privacy Policy.

All comments concern the Privacy Policy dated to be effective starting August 8, 2019. It seems that they don’t keep an archive of old policies, so I took the liberty of archiving this one myself. They do, however, notify users “in advance of any material updates to this Privacy Policy by providing a notice on the Website or via email”, which is a good thing. Here are some of the most notable parts of the policy.

Continue reading “Freedom of Speech in the Age of Privacy Policies”