Adversarial testing, also referred to as “red teaming”, refers to risk identification exercises through adversarial attacks on AI models, infrastructure, development and deployment environments, and deployed AI products/systems. It is a mode of evaluation targeted at finding vulnerabilities in AI systems that can be exploited to get an AI system to output harmful content.
Schedule a call with one of our experts