GoodLabs Haystack Vishing A.I. Penetration Test
Your Voice is NOT Your Password
Recent advancements in voice cloning AI have shown both promise and peril. Fraudsters cloned a director's voice to steal $35 million and tricked a CEO out of $243,000 using a voice deepfake. Increasingly, people are being scammed by AI that imitates the voices of their loved ones, pressuring them for money or data. These incidents emphasize the need for greater vigilance and improved defence against these emerging AI threats.
Private Banking Vishing
Private banking's personalized service and significant financial transactions make it a prime target for vishing attacks. The sector's reliance on voice conversations for identity verification and the deep trust between clients and bankers provide a ripe opportunity for fraudsters to impersonate clients and access large sums. The high stakes, combined with clients' tendency to follow instructions from seemingly trusted sources, make fraudulent requests hard to distinguish from genuine ones. Fraudsters' nuanced understanding of private banking interactions helps them craft persuasive scams, so banks and clients alike need heightened vigilance and advanced security measures.
Hussain Jaber, our Director of Financial Crime, acting as a high-net-worth customer, demonstrated how a private banker confirms a client's money transfer request.
What if we told you that Hussain's voice was AI-generated, and that the entire dialogue with the private banker was conducted by a fully autonomous, interactive A.I. pipeline?
Private banking vishing unfolds in three steps:
Acquiring stolen banking info from dark web marketplaces
Cloning the victim's voice from social media videos
Executing a vishing attack on the private banker
The autonomous vishing pipeline operates by:
Answering the private banker's call
Converting speech to text
Crafting responses using a Large Language Model informed by the victim's social media activities
Employing a voice clone AI to deliver these responses back to the banker
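The conversational loop described above can be sketched in a few lines. This is a minimal illustration, not the GoodLabs pipeline itself: `transcribe`, `generate_reply`, and `synthesize` are hypothetical stubs standing in for real speech-to-text, LLM, and voice-cloning services.

```python
# Hypothetical sketch of the autonomous per-turn loop: answer audio comes in,
# is transcribed, an LLM crafts an in-persona reply, and a cloned voice
# speaks it back. All three services are stubbed for illustration.

def transcribe(audio_chunk: bytes) -> str:
    """Stub STT: a real pipeline would call a speech-to-text service."""
    return audio_chunk.decode("utf-8")  # pretend the audio is already text

def generate_reply(transcript: str, persona: dict) -> str:
    """Stub LLM: a real pipeline would prompt an LLM with a persona built
    from the victim's social media activity to craft an in-character reply."""
    return f"{persona['name']}: acknowledging '{transcript}'"

def synthesize(text: str) -> bytes:
    """Stub TTS: a real pipeline would render the text in the cloned voice."""
    return text.encode("utf-8")

def handle_turn(incoming_audio: bytes, persona: dict) -> bytes:
    transcript = transcribe(incoming_audio)      # 1. speech -> text
    reply = generate_reply(transcript, persona)  # 2. LLM crafts the response
    return synthesize(reply)                     # 3. cloned voice -> audio

persona = {"name": "Hussain"}
out = handle_turn(b"Can you confirm the transfer amount?", persona)
print(out.decode("utf-8"))
```

In a production pipeline each stage streams partial results to the next so the reply begins before the caller finishes speaking; the stubs here run turn-by-turn for clarity.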
Many financial institutions like yours have implemented voice biometrics systems as a security measure to authenticate and verify the identities of their customers. However, with the rapid advancement of AI, particularly in voice cloning technologies, a pressing question arises: how effective are these systems against AI-generated cloned voices? As voice cloning AI continues to evolve, it becomes crucial for your organization to rigorously test and evaluate the resilience of your voice biometrics systems.
GoodLabs Vishing A.I. Penetration Test Can Help
Our vishing penetration test strategy is designed to help financial institutions periodically assess and enhance their voice biometrics systems and evaluate potential vendors.
By using a diverse group of actors to mimic various voice characteristics, our approach ensures consistent and comprehensive testing across different systems. We conduct a predefined number of tests per voice to identify vulnerabilities in no-match/verification failure scenarios and maintain uniformity in testing across all vendors.
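To illustrate the per-voice test protocol, the sketch below tallies verification outcomes into false-accept and false-reject rates per actor. The trial records are invented for the example; they do not represent real test results or any vendor's scoring interface.

```python
# Toy tally of verification outcomes across voice actors, illustrating the
# fixed-trials-per-voice protocol. Each trial records (actor, whether the
# caller was the genuine enrolled speaker, whether the system accepted).
from collections import defaultdict

trials = [
    ("actor_01", True,  True),   # bonafide voice accepted (correct)
    ("actor_01", True,  False),  # bonafide voice rejected (false reject)
    ("actor_01", False, True),   # cloned voice accepted (false accept!)
    ("actor_02", True,  True),   # bonafide voice accepted (correct)
    ("actor_02", False, False),  # cloned voice rejected (correct)
]

def rates(trials):
    counts = defaultdict(lambda: {"fa": 0, "fr": 0, "genuine": 0, "spoof": 0})
    for actor, genuine, accepted in trials:
        c = counts[actor]
        if genuine:
            c["genuine"] += 1
            c["fr"] += (not accepted)   # genuine speaker wrongly rejected
        else:
            c["spoof"] += 1
            c["fa"] += accepted         # cloned voice wrongly accepted
    return {
        actor: {
            "false_reject_rate": c["fr"] / c["genuine"] if c["genuine"] else 0.0,
            "false_accept_rate": c["fa"] / c["spoof"] if c["spoof"] else 0.0,
        }
        for actor, c in counts.items()
    }

print(rates(trials))
```

Running the same fixed number of trials per voice against every vendor keeps these rates directly comparable across systems, which is the point of the uniform protocol.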
GoodLabs Vishing A.I. Penetration Test System
We built a fully autonomous call center simulation platform, seamlessly integrated with popular systems like Genesys and Cisco, to run the vishing penetration test against the target voice biometrics systems. We engage 50 carefully selected voice actors whose diversity closely matches the customer base of financial institutions, and we train state-of-the-art voice cloning AI to reproduce these voices with high precision for blind testing scenarios. This method allows us to evaluate the efficacy of the underlying voice biometrics systems thoroughly. Our flexible setup lets tests run entirely outside the financial institution for rapid deployment, or calls can be routed from the institution's call centers to our penetration testing system based on targeted phone numbers, ensuring a comprehensive and efficient assessment.
Streaming A.I. Dialogue Engine Built with Confluent & Flink
GoodLabs Studio is proud to be a part of the Build with Confluent initiative. By verifying our streaming-based use cases with Confluent, you can have confidence that our Confluent-based service offering is not only built on the leading data streaming platform but also verified by the experts at Confluent.
How it works - Step 1 Voice Print Training
In this video, we showcase our Vishing Penetration Testing System, built using Confluent and Flink integrated with Genesys, an LLM, and ElevenLabs. The system autonomously converses with the voice actor, capturing their voice to train the target voice biometrics system, effectively preparing it for a thorough evaluation against potential vishing threats.
In our demo, we use our reference implementation of a voice biometric AI. For your organization’s testing, we will use your preferred voice biometric system/s.
How it works - Step 2 Bonafide Voice Calling Test
In this video, we showcase our Vishing Penetration Testing System, built using Confluent and Flink integrated with Genesys, an LLM, and ElevenLabs. The system autonomously converses with the voice actor, capturing their voice for the voice biometrics system to verify.
How it works - Step 3 AI Cloned Voice Calling Test
In this video, we showcase our Vishing Penetration Testing System, built using Confluent and Flink integrated with Genesys, an LLM, and ElevenLabs. The system autonomously converses with the AI-cloned voice, capturing the cloned audio for the voice biometrics system to verify.
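The three steps above can be illustrated with a toy embedding-similarity check: enroll a voiceprint, then verify a bonafide sample and a cloned sample against it. The `embed` function, the sample vectors, and the 0.8 threshold are invented placeholders, not the behaviour of any real voice biometrics product.

```python
# Toy illustration of the three-step flow: (1) train a voiceprint,
# (2) verify a bonafide call, (3) verify an AI-cloned call. Real systems
# derive speaker embeddings from audio; here we use made-up vectors.
import math

def embed(sample: list) -> list:
    """Stub: normalize a vector, standing in for a speaker embedding."""
    norm = math.sqrt(sum(x * x for x in sample)) or 1.0
    return [x / norm for x in sample]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def verify(enrolled, sample, threshold=0.8):
    """Accept the caller if their embedding is close to the voiceprint."""
    return cosine(enrolled, embed(sample)) >= threshold

# Step 1: train the voiceprint from the actor's enrollment audio
voiceprint = embed([0.9, 0.4, 0.1])

# Step 2: bonafide call, close to the enrolled voice -> should be accepted
bonafide_ok = verify(voiceprint, [0.88, 0.42, 0.12])

# Step 3: cloned call; a strong clone may also land within the threshold,
# which is exactly the vulnerability the penetration test measures
clone_ok = verify(voiceprint, [0.85, 0.45, 0.15])

print(bonafide_ok, clone_ok)
```

A resilient biometrics system would reject the cloned sample in step 3; in this deliberately weak toy both calls pass, showing the kind of false accept the test is designed to surface.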
Try out our live demo
You'll have the opportunity to experience our Vishing A.I. Penetration Test firsthand by calling GoodLabs' Haystack Call Center. Register now to be among the first to try it out!