
- Deepfake injection attacks bypass cameras and deceive video verification software directly
- Face swaps and motion re-enactments transform stolen images into convincing deepfakes
- Managed detection services can identify suspicious patterns before attacks succeed
Digital communication platforms are increasingly vulnerable to sophisticated attacks that exploit advanced artificial intelligence.
A report from iProov[1] reveals a specialized tool capable of injecting AI-generated deepfakes directly into iOS video calls, raising concerns about the reliability of existing security measures.
The discovery reveals how quickly AI tools[2] are being adapted for fraud and identity theft[3], while exposing gaps in current verification systems.
A sophisticated method for bypassing verification
The iOS video injection tool, suspected to have Chinese origins, targets jailbroken iOS 15 and newer devices.
Attackers connect a compromised iPhone to a remote server, bypass its physical camera, and inject synthetic video streams into active calls.
This approach enables fraudsters to impersonate legitimate users or construct entirely fabricated identities that can pass weak security checks.
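Because the tool relies on a jailbroken handset, one modest client-side signal is a jailbreak heuristic. The sketch below is illustrative only, using a hypothetical DeviceIntegrity helper rather than any vendor's API; it looks for a few well-known jailbreak artifacts, and since such checks are easily defeated on an attacker-controlled device, they serve as a supporting signal rather than a defense on their own.

```swift
import Foundation

/// Illustrative jailbreak heuristic (DeviceIntegrity is a hypothetical helper, not a real API).
/// These checks are easily bypassed on an attacker-controlled device and are only one weak
/// signal; they do not replace server-side liveness detection.
enum DeviceIntegrity {
    static func looksJailbroken() -> Bool {
        // 1. Files commonly installed by jailbreak tooling.
        let suspiciousPaths = [
            "/Applications/Cydia.app",
            "/Library/MobileSubstrate/MobileSubstrate.dylib",
            "/bin/bash",
            "/usr/sbin/sshd",
            "/private/var/lib/apt"
        ]
        if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
            return true
        }

        // 2. A sandboxed app should not be able to write outside its container.
        let probePath = "/private/jb_probe_\(UUID().uuidString).txt"
        do {
            try "probe".write(toFile: probePath, atomically: true, encoding: .utf8)
            try? FileManager.default.removeItem(atPath: probePath)
            return true    // the write succeeded, so the sandbox is broken
        } catch {
            return false   // expected result on a stock device
        }
    }
}
```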
Using techniques such as face swaps and motion re-enactments, the method transforms stolen images or static photos into lifelike video.
This shifts identity fraud from isolated incidents to industrial-scale operations.
The attack also undermines verification processes by exploiting operating system[4]-level vulnerabilities rather than camera-based checks.
Fraudsters no longer need to fool the lens; they can deceive the software directly.
This makes traditional anti-spoofing systems, especially those lacking biometric safeguards, less effective.
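One way to raise the bar against OS-level tampering is hardware-backed attestation. The sketch below is a minimal client-side example using Apple's App Attest (DCAppAttestService), assuming the verification backend issues a one-time challenge and separately validates the returned attestation object against Apple's App Attest root certificate; that server-side step is not shown.

```swift
import DeviceCheck
import CryptoKit
import Foundation

/// Sketch: request a hardware-backed attestation bound to a server-issued challenge.
/// The server-side step (validating the attestation object against Apple's App Attest
/// root certificate and recording the key ID) is assumed and not shown here.
@available(iOS 14.0, *)
func attestDevice(serverChallenge: Data,
                  completion: @escaping (Result<(keyID: String, attestation: Data), Error>) -> Void) {
    let service = DCAppAttestService.shared
    guard service.isSupported else {
        completion(.failure(NSError(domain: "AppAttest", code: -1,
                                    userInfo: [NSLocalizedDescriptionKey: "App Attest unavailable"])))
        return
    }

    // 1. Generate a key pair in the Secure Enclave, identified by an opaque key ID.
    service.generateKey { keyID, error in
        guard let keyID = keyID else {
            completion(.failure(error ?? NSError(domain: "AppAttest", code: -2)))
            return
        }

        // 2. Attest the key against a hash of the server's one-time challenge.
        let clientDataHash = Data(SHA256.hash(data: serverChallenge))
        service.attestKey(keyID, clientDataHash: clientDataHash) { attestation, error in
            if let attestation = attestation {
                completion(.success((keyID: keyID, attestation: attestation)))
            } else {
                completion(.failure(error ?? NSError(domain: "AppAttest", code: -3)))
            }
        }
    }
}
```

Attestation speaks to app and device integrity at a point in time; it complements, rather than replaces, the liveness and biometric checks discussed below.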
“The discovery of this iOS tool marks a breakthrough in identity fraud and confirms the trend of industrialized attacks,” said Andrew Newell, Chief Scientific Officer at iProov.
“The tool’s suspected origin is especially concerning and proves that it is essential to use a liveness detection capability that can rapidly adapt.”
“To combat these advanced threats, organizations need multilayered cybersecurity controls informed by real-world threat intelligence, combined with science-based biometrics and a liveness detection capability that can rapidly adapt to ensure a user is the right person, a real person, authenticating in real time.”
How to stay safe
- Confirm the right person by matching the presented identity to trusted official records or databases.
- Verify a real person by using embedded imagery and metadata to detect malicious or synthetic media.
- Ensure verification happens in real time with passive challenge-response methods to prevent replay or delayed attacks (see the sketch after this list).
- Deploy managed detection services that combine advanced technologies with human expertise for active monitoring.
- Respond swiftly to incidents using specialized skills to reverse-engineer attacks and strengthen future defenses.
- Incorporate advanced biometric checks informed by active threat intelligence to improve fraud detection and prevention.
- Install the best antivirus software[5] to block malware that could enable device compromise or exploitation.
- Maintain strong ransomware protection to safeguard sensitive data from secondary or supporting cyberattacks.
- Stay informed on evolving AI tools to anticipate and adapt to emerging deepfake injection methods.
- Prepare for scenarios where video verification alone cannot guarantee security against sophisticated identity fraud.
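To illustrate the real-time check mentioned above, here is a minimal sketch of a server-side, time-bound challenge: the server issues a single-use random nonce, the capture flow must echo it back (for example via an on-screen pattern recovered from the video), and anything stale or reused is rejected, which pre-rendered deepfake footage cannot satisfy. The LivenessChallengeStore and CaptureResponse types are hypothetical and not part of any vendor's API.

```swift
import Foundation

/// Minimal sketch of a server-side, time-bound liveness challenge.
/// LivenessChallengeStore and CaptureResponse are illustrative names, not a real API.
struct CaptureResponse {
    let challengeID: UUID
    let echoedNonce: Data   // nonce recovered from the capture (e.g. an on-screen pattern)
    let receivedAt: Date
}

final class LivenessChallengeStore {
    private struct Challenge { let nonce: Data; let issuedAt: Date }

    private var pending: [UUID: Challenge] = [:]
    private let queue = DispatchQueue(label: "liveness.challenges")
    private let maxAge: TimeInterval = 3.0   // too short to render and inject a matching deepfake

    /// Issue a fresh, single-use challenge for one verification session.
    func issueChallenge() -> (id: UUID, nonce: Data) {
        let id = UUID()
        let nonce = Data((0..<16).map { _ in UInt8.random(in: 0...255) })
        queue.sync { pending[id] = Challenge(nonce: nonce, issuedAt: Date()) }
        return (id, nonce)
    }

    /// Accept only if the nonce matches, has not been used before, and comes back quickly.
    func verify(_ response: CaptureResponse) -> Bool {
        return queue.sync { () -> Bool in
            guard let challenge = pending.removeValue(forKey: response.challengeID) else {
                return false   // unknown or already-used challenge
            }
            let fresh = response.receivedAt.timeIntervalSince(challenge.issuedAt) <= maxAge
            return fresh && challenge.nonce == response.echoedNonce
        }
    }
}
```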
References
1. iProov (www.iproov.com)
2. AI tools (www.techradar.com)
3. identity theft (www.techradar.com)
4. operating system (www.techradar.com)
5. best antivirus software (www.techradar.com)