Sensity AI, a startup focused on tackling identity fraud, carried out a series of simulated attacks. Engineers scanned the photo of a person on an ID card and mapped that likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the impersonator was the real user.
In its report, Sensity noted that the attacks required a specialised phone capable of hijacking the mobile camera feed and injecting pre-made deepfake models.
Security is always a moving target...
See "Deepfake attacks can easily trick facial recognition"
Plus: Next PyTorch release will support Apple GPUs so devs can train neural networks on their own laptops