Netarx protects organizations against AI-generated fraud, including deepfakes, voice cloning, and cross-channel social engineering attacks.
Netarx runs multiple AI models to detect AI-generated deepfakes across voice, video, text, and images, alerting users when they receive one.
Netarx employs blockchain technology to manage a decentralized identification system that can incorporate up to 70 authentication factors for increased security.
When a protected device receives an incoming item, Netarx analyzes all metadata associated with it, including multiple authentication factors. The AI scan produces a red, yellow, or green score for the device user, who can then decide whether to block the item.
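To make the traffic-light idea concrete, here is a minimal sketch of how weighted authentication-factor checks might be aggregated into a red, yellow, or green verdict. The factor names, weights, and thresholds are invented for illustration and do not reflect Netarx's actual implementation.

```python
# Hypothetical illustration of a traffic-light trust score built from
# weighted authentication-factor checks. All names and thresholds here
# are assumptions for illustration, not Netarx's actual logic.

from dataclasses import dataclass


@dataclass
class FactorResult:
    name: str      # e.g. "sender_domain_match" (invented example)
    passed: bool   # did this authentication factor verify?
    weight: float  # relative importance of this factor


def traffic_light_score(factors: list[FactorResult]) -> str:
    """Return 'green', 'yellow', or 'red' from weighted factor checks."""
    total = sum(f.weight for f in factors)
    if total == 0:
        return "red"  # no evidence at all: treat as untrusted
    passed = sum(f.weight for f in factors if f.passed)
    ratio = passed / total
    if ratio >= 0.9:
        return "green"   # nearly all factors verified
    if ratio >= 0.5:
        return "yellow"  # mixed evidence: warn the user
    return "red"         # most factors failed: likely spoofed


factors = [
    FactorResult("sender_domain_match", True, 3.0),
    FactorResult("device_registered", True, 2.0),
    FactorResult("voice_model_clean", False, 4.0),
]
print(traffic_light_score(factors))  # mixed evidence -> yellow
```

The point of the sketch is the user experience it enables: rather than a binary block/allow decision, a graded score lets the recipient weigh ambiguous evidence before acting.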
This metadata-driven approach to AI model analysis allows Netarx to scan any type of media.
Customers register their devices with Netarx so that it can authenticate and scan incoming messages. Netarx deploys agents to protect mobile devices and web browser plugins to protect web apps.
Netarx coordinates these AI models to detect attacks in real time across email, SMS, voice, and video channels.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the vendors mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at pr@intellyx.com.


