A large-scale malware operation has been uncovered on YouTube, where thousands of videos disguised as game hacks and cracked software tutorials quietly infected viewers’ computers. Researchers at Check Point found that a network of fake and hijacked channels had been pushing data-stealing programs for several years, turning a popular video platform into a delivery system for digital theft.

The campaign, known internally as the YouTube Ghost Network, used hundreds of interconnected accounts to upload, comment, and promote content that appeared legitimate. What seemed like an ordinary search for a “free Photoshop crack” or “Roblox hack” often led users to download malicious files posing as helpful tools.

How the Operation Spread

Check Point’s team traced[1] the earliest activity back to 2021, though most of the growth happened in 2024 and early 2025. More than three thousand malicious videos were uploaded across different channels, many of which used stolen thumbnails, voiceovers, and AI-generated narrations to imitate authentic creators.

Once a viewer followed the instructions and clicked the download link, the file usually came from public sharing sites such as Google Drive, Dropbox, or MediaFire. Each archive contained an executable claiming to unlock premium features or bypass software checks. Victims were told to disable Windows Defender before installing, an instruction disguised as a harmless setup tip.

That action allowed infostealers like Lumma, RedLine, and Rhadamanthys to run freely, collecting passwords, browser cookies, and cryptocurrency wallet data. In some cases, the files redirected to phishing pages that mimicked login portals from well-known platforms.

Engineered to Look Real

The network relied on layers of false engagement. One set of accounts uploaded the content, while another wave flooded the comments with praise and emojis, making the videos seem trustworthy. A few of these accounts had once belonged to legitimate users who later had their channels hijacked. One stolen account carried more than 129,000 subscribers, helping the attackers gain nearly 300,000 views before takedown.

The structure resembled a small marketing ecosystem rather than a lone hacker. The operators switched links frequently, updated file names, and re-uploaded removed content under fresh profiles. That persistence made it harder for YouTube’s automated systems to flag them.

YouTube’s Cleanup and Remaining Risks

After being alerted by Check Point, Google removed the bulk of the malicious videos in September 2025. Yet traces of the campaign remain online, hidden in reposted or mirrored content. The researchers say that even a short-lived upload can generate thousands of downloads within hours, especially when tags and titles target trending searches.

Cybersecurity analysts see this as part of a wider trend where criminals piggyback on trusted social platforms rather than shady websites. By blending malware inside familiar spaces, attackers exploit users’ comfort and the assumption that large platforms filter dangerous content.

Lessons from the Ghost Network

The YouTube Ghost Network shows how modern cybercrime depends less on technical genius and more on social manipulation. A convincing video, a realistic comment thread, and a short instruction list can bypass years of public awareness campaigns.

Researchers recommend that users avoid downloading any cracked software or tools offered through video links and verify content creators before following their advice. Even a small lapse in caution, they warn, can turn a simple tutorial into a costly breach.
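For readers who still need to run a file obtained online, one basic precaution, not specific to Check Point’s advice but common practice, is to compare the download’s SHA-256 checksum against the value the legitimate publisher lists on its official site before opening it. The short Python sketch below illustrates the idea; the file name and expected hash are placeholders to be replaced with real values.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values: substitute the downloaded file and the checksum
    # published on the software vendor's official website.
    downloaded_file = "installer.exe"
    expected_hash = "<checksum published by the vendor>"

    actual = sha256_of(downloaded_file)
    if actual.lower() == expected_hash.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch: do not run this file.")
        sys.exit(1)
```

A matching checksum only confirms the file is the one the publisher intended to distribute; it offers no protection if the file itself came from an untrustworthy source, which is why the researchers’ first recommendation remains to avoid such downloads altogether.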

The cleanup continues, but the investigation suggests that other clusters could still be operating under new names. For a platform built on trust and visibility, the hidden ghosts of YouTube may linger longer than expected.

Notes: This post was edited/created using GenAI tools.

Read next:

• Google Earth AI Taps Gemini to Predict Disasters Before They Unfold[2]

• Why AI Chatbots Aren’t Bullying Kids, But Still Pose Serious Risks[3]

References

  1. traced (research.checkpoint.com)
  2. Google Earth AI Taps Gemini to Predict Disasters Before They Unfold (www.digitalinformationworld.com)
  3. Why AI Chatbots Aren’t Bullying Kids, But Still Pose Serious Risks (www.digitalinformationworld.com)

By admin