Artificial intelligence is driving a new era of enterprise transformation, accelerating automation, decision-making, and efficiency. However, as AI capabilities grow, so do the risks associated with their misuse. Last week, DeepSeek hit the market by storm and within days it was being adopted by major enterprises. Our latest lightweight audit, which uses Holistic AI Governance Platform – the same product our customers use – highlights a stark reality: not all AI models are created equal when it comes to safety and robustness.
Our evaluation exposed significant vulnerabilities in the R1 model, particularly in its susceptibility to jailbreak attempts. Unlike the OpenAI o1 model, which successfully resisted all adversarial exploits, R1 faltered in 68% of tested scenarios, revealing a dangerous gap in its security framework. More alarmingly, once jailbroken, R1 was no longer constrained by ethical safeguards, freely answering subsequent harmful queries without restriction.
This raises a broader debate that spans multiple schools of thought. One perspective argues that AI should be fully open and adaptable, believing that constraints on AI development stifle innovation. Proponents of this view advocate for transparency, reasoning that enterprises should take security into their own hands rather than rely on restrictive, centralized guardrails.
Conversely, a second school of thought emphasizes strict regulation and control, asserting that AI must be locked down to prevent even the slightest possibility of misuse. Advocates here push for government and industry-wide safety mandates, arguing that security vulnerabilities like R1’s are not just technical failures but ethical lapses that could have dire consequences if left unchecked.
A third perspective takes a balanced, pragmatic approach, suggesting that security and adaptability must coexist. AI models should be robust enough to withstand attacks but flexible enough to serve legitimate enterprise needs. This middle-ground approach is where Holistic AI positions itself—we champion rigorous security testing while ensuring AI remains an asset for innovation rather than a risk to be feared. Most importantly, however, is that enterprises ensure they have full visibility into the strengths and vulnerabilities of the models they are using at all times, and that they are able to implement guardrails that protect their company and stakeholders.
At Holistic AI, we don’t just talk about AI security—we actively test and improve it. Our quick audit was conducted using the Holistic AI Governance Platform, the same solution available to our customers. This platform enables enterprises to assess AI models comprehensively, identifying vulnerabilities, evaluating risk exposure, and ensuring compliance with evolving regulations.
Any enterprise with access to our platform when R1 was introduced was able to perform their own assessment immediately, gaining critical insights into its weaknesses and mitigating risks before deployment. This level of speed and agility in AI security is not just a luxury—it’s a necessity and a competitive advantage.
While our audit provides important insights, we recognize that no evaluation is exhaustive. The adversarial prompts used in this assessment represent only a subset of possible exploits, and real-world threats continue to evolve. Additionally, AI behavior can vary depending on deployment context, model updates, and external integrations, which means that new vulnerabilities may emerge over time.
Furthermore, our dual-layered evaluation—leveraging both automated and human review—aims to reduce bias and error, but no system is infallible. As AI security research progresses, more sophisticated attack strategies may necessitate ongoing reassessment of model resilience. Transparency about these limitations is essential to fostering industry-wide collaboration in strengthening AI defenses. Best practices in AI governance dictate that it is not a “one and done” activity, but is ongoing through automated monitoring, alerts, and reporting via Holistic AI.
The implications of these findings extend beyond academic concerns—they are real and pressing challenges for businesses integrating AI into critical workflows. Without rigorous security, AI models can become liabilities rather than assets.
At Holistic AI, we advocate for a proactive approach, where AI security is not an afterthought but a foundational pillar. Enterprises must prioritize:
Building AI for a Secure Future Our findings reinforce a simple but urgent truth: enterprise AI must be secure, adaptable, and built with resilience in mind. Companies that take AI security seriously will not only mitigate risks but will gain a competitive advantage by ensuring reliability, compliance, and trust. At Holistic AI, we are committed to driving the next wave of AI innovation and business success with security at its core. By continuously refining our models, strengthening safeguards, and working collaboratively with industry leaders, we are paving the way for a future where AI empowers businesses without compromising safety.
The future of AI is bright—but only if we build it securely.
Adriano Koshiyama
Co-CEO, Holistic AI
DISCLAIMER: This news article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts