{"id":32657,"date":"2025-10-31T11:54:00","date_gmt":"2025-10-31T14:54:00","guid":{"rendered":"https:\/\/nocodestartup.io\/?p=32657"},"modified":"2025-11-02T11:55:24","modified_gmt":"2025-11-02T14:55:24","slug":"anthropics-ia-petri-framework","status":"publish","type":"post","link":"https:\/\/nocodestartup.io\/en\/anthropics-ia-petri-framework\/","title":{"rendered":"IA Petri: How the Anthropic Framework is Revolutionizing Security Auditing for LLMs"},"content":{"rendered":"<p>The rapid rise and increasing autonomy of <strong>Large Language Models (LLMs)<\/strong> They radically transformed the technological landscape.<br><br>In the No-Code\/Low-Code ecosystem, where speed of implementation is a crucial competitive differentiator, the security and predictability of these models have become a central concern.<br><br>Enter the framework. <strong>IA Petri<\/strong> Anthropic is an open-source system designed to solve the biggest challenge in modern AI security: scale.<br><br>O <a href=\"https:\/\/github.com\/safety-research\/petri\" rel=\"nofollow noopener\" target=\"_blank\">IA Petri<\/a> It&#039;s not just another testing tool; it&#039;s a paradigm shift that replaces inefficient static benchmarks with a model of... 
<strong>automated AI auditing<\/strong> based on intelligent agents, offering the <strong>agency assurance<\/strong> that is essential for any startup that wants to scale its solutions with confidence.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado-1024x683.png\" alt=\"Conceptual diagram of the architecture of the Anthropic Petri AI framework, showing the interaction between the Audit Agent and the Target Model in a controlled environment.\" class=\"wp-image-32665\" srcset=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado-1024x683.png 1024w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado-768x512.png 768w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado-18x12.png 18w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado-150x100.png 150w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Diagrama-conceitual-da-arquitetura-do-framework-IA-Petri-da-Anthropic-mostrando-a-interacao-entre-o-Agente-Auditor-e-o-Modelo-Alvo-em-um-Ambiente-controlado.png 1536w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" 
\/><figcaption class=\"wp-element-caption\">Conceptual diagram of the architecture of the Anthropic Petri AI framework, showing the interaction between the Audit Agent and the Target Model in a controlled environment.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Problem of Scale in AI Security: Why Static Benchmarks Have Failed<\/strong><\/h2>\n\n\n\n<p>As <a href=\"https:\/\/nocodestartup.io\/en\/llm-what-and-how-ai-models-transform-the-market\/\">LLMs<\/a> advance in capacity and become increasingly autonomous \u2013 able to plan, interact with tools, and execute complex actions \u2013 the risk surface expands exponentially.<br><br>This growth places unsustainable pressure on traditional safety assessment methods.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Inadequacy of Manual Red Teaming in the Era of Complex LLMs<\/strong><\/h3>\n\n\n\n<p>Historically, the assessment of <a href=\"https:\/\/go.crowdstrike.com\/2025-state-of-ai-cybersecurity-survey-ebook-pt-br.html?utm_campaign=ptfm&amp;utm_content=crwd-pltfm-amer-bra-pt-psp-x-x-x-tct-x_x_x_ai-x&amp;utm_medium=sem&amp;utm_source=goog&amp;utm_term=intelig%C3%AAncia%20artificial%20em%20seguran%C3%A7a&amp;utm_languagept-br&amp;cq_cmp=22757294139&amp;cq_plac=%7Bplacement]&amp;gad_source=1&amp;gad_campaignid=22757294139&amp;gbraid=0AAAAAC-K3YSqUKn1kPAm0ekNt2NRx8Db4&amp;gclid=Cj0KCQjwsPzHBhDCARIsALlWNG3pKnf7FDZ4XwFE1Yafcf2Fzc35kGwj7rhEgaM16fT7uuIhxtZepm4aAg-HEALw_wcB\" rel=\"nofollow noopener\" target=\"_blank\">LLM security<\/a> depended mainly on manual <em>red teaming<\/em>: teams of experts who actively try to &quot;break&quot; or exploit the model.<br><br>While this approach is invaluable for in-depth investigations, it is by nature slow, labor-intensive, and, most importantly, not scalable.<br><br>The sheer volume of possible behaviors and combinations of interaction scenarios far exceeds what any human team can systematically 
test.<\/p>\n\n\n\n<p>The limitation lies in repeatability and scope. Manual tests are often specific to one scenario and difficult to replicate on new models or versions.<br><br>In a Low-Code development cycle, where iterations are rapid and frequent, relying solely on one-off, time-consuming audits creates a security gap that can be exploited.<br><br><strong>Automated AI auditing<\/strong> therefore presents itself not as an option, but as a technical necessity to keep pace with the speed of innovation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Emergent Behaviors and the Exponential Attack Surface<\/strong><\/h3>\n\n\n\n<p>AI models, especially the most advanced ones, exhibit <strong>emergent AI behaviors<\/strong>.<br><br>This means that the interaction of their complex neural networks can result in capabilities or vulnerabilities that were never explicitly trained or predicted.<br><br>It is this unpredictable nature that renders<a href=\"https:\/\/quiker.com.br\/ferramentas-de-benchmark\/\" rel=\"nofollow noopener\" target=\"_blank\"> static benchmarks<\/a> \u2013 predefined tests with a fixed set of questions and answers \u2013 obsolete.<br><br>They only test what we already know, leaving aside the vast space of the &quot;unknown unknowns&quot;.<\/p>\n\n\n\n<p>The attack surface for misalignment \u2013 where the model acts in harmful or unintended ways \u2013 grows in direct proportion to its capacity and autonomy.<br><br><strong>Petri<\/strong> was designed precisely to address this dynamic nature, using AI agents themselves to interrogate the Target Model creatively and systematically, simulating the complex interactions of the real world.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Petri&#039;s Agentic Architecture: Components and Audit Dynamics<\/strong><\/h2>\n\n\n\n<p><strong>Petri<\/strong> functions as an evaluation ecosystem in which the model to be audited is placed in a controlled environment 
and challenged by an adversarial agent.<br><br>The sophistication of this framework lies in the separation of responsibilities into modular, interconnected components, which makes it a highly structured <strong>agentic security framework<\/strong>, an approach detailed in the research paper (<a href=\"https:\/\/go.sardine.ai\/hubfs\/Whitepapers\/The%20Agentic%20Oversight%20Framework%20-%20Procedures%2C%20Accountability%2C%20and%20Best%20Practices%20for%20Agentic%20AI%20Use%20in%20Regulated%20Financial%20Services.pdf\" rel=\"nofollow noopener\" target=\"_blank\">The Agentic Oversight Framework<\/a>).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Target Model and the Need for Continuous Evaluation<\/strong><\/h3>\n\n\n\n<p>The Target Model is the LLM being tested. It can be any model, from Anthropic&#039;s own Claude to an open-source model integrated into a Low-Code workflow.<br><br>The beauty of <strong>Petri<\/strong> is its ability to perform <strong>dynamic evaluation of LLMs<\/strong>. Instead of a <em>post-mortem<\/em> test, it allows continuous, real-time auditing, which is crucial for teams that are constantly deploying and adjusting their applications.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Auditor Agent and the Scenarios Engine: The Heart of Dynamic Testing<\/strong><\/h3>\n\n\n\n<p>Herein lies the power of <strong>Petri<\/strong>. 
The Auditor Agent is a simpler, dedicated LLM specialized in testing the limits of the Target Model.<br><br>It is not merely a passive tester; it acts as an autonomous adversarial <em>red teamer<\/em>, generating sequences of malicious or strategically misaligned interactions.<\/p>\n\n\n\n<p>The Scenarios Engine is responsible for structuring the tests, ensuring that the Auditor Agent explores a wide range of attack vectors, from prompt injection to attempts to generate prohibited information.<br><br>This dynamic allows for much deeper and more replicable exploration than any manual test, as detailed in the tool&#039;s official release (<a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/comments\/1o1haaj\/anthropic_ai_releases_petri_an_opensource\/?tl=pt-br\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic AI Releases Petri: An Open-Source Framework<\/a>).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Controlled Environment: Ensuring Test Reproducibility<\/strong><\/h3>\n\n\n\n<p>The environment is the simulated context where the interaction takes place. 
It is fundamental to the science of AI evaluation, as it allows the same tests to be run accurately on different models or on different iterations of the same model.<br><br>This <strong>reproducibility<\/strong> is a milestone for the security of AI models, allowing Low-Code development teams to incorporate audit results directly into their CI\/CD (Continuous Integration\/Continuous Delivery) pipelines.<br><br>To better understand how to structure the technological foundation for these systems, you can delve deeper into<a href=\"https:\/\/nocodestartup.io\/en\/what-is-ai-infrastructure\/\"> What is AI infrastructure and why is it essential?<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1536\" height=\"1024\" src=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Ilustracao-da-arquitetura-de-agentes-de-IA-onde-um-agente-atua-como-auditor-e-outro-como-modelo-alvo-dentro-de-um-ambiente-isolado.png\" alt=\"Illustration of AI agent architecture, where one agent acts as an &quot;auditor&quot; and another as a &quot;target model,&quot; within an isolated environment.\" class=\"wp-image-32666\" srcset=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Ilustracao-da-arquitetura-de-agentes-de-IA-onde-um-agente-atua-como-auditor-e-outro-como-modelo-alvo-dentro-de-um-ambiente-isolado.png 1536w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Ilustracao-da-arquitetura-de-agentes-de-IA-onde-um-agente-atua-como-auditor-e-outro-como-modelo-alvo-dentro-de-um-ambiente-isolado-1024x683.png 1024w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Ilustracao-da-arquitetura-de-agentes-de-IA-onde-um-agente-atua-como-auditor-e-outro-como-modelo-alvo-dentro-de-um-ambiente-isolado-768x512.png 768w, 
https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Ilustracao-da-arquitetura-de-agentes-de-IA-onde-um-agente-atua-como-auditor-e-outro-como-modelo-alvo-dentro-de-um-ambiente-isolado-18x12.png 18w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/><figcaption class=\"wp-element-caption\">Illustration of AI agent architecture, where one agent acts as an &quot;auditor&quot; and another as a &quot;target model,&quot; within an isolated environment.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Automated Red Teaming and the Concept of Agency Assurance with Petri<\/strong><\/h2>\n\n\n\n<p><strong>Petri<\/strong> elevates the concept of <em>red teaming<\/em> by automating it with AI agents.<br><br>The ultimate goal is <strong>agency assurance<\/strong>: confidence that a model will maintain its <strong>language model alignment<\/strong> and safety, even under stress, without the need for constant human intervention.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Petri vs. Common Evaluation Tools (DeepEval, Garak): A Technical Comparison<\/strong><\/h3>\n\n\n\n<p>There are excellent open-source tools in the LLM evaluation space. Tools such as<a href=\"https:\/\/www.helpnetsecurity.com\/2025\/09\/10\/garak-open-source-llm-vulnerability-scanner\/\" rel=\"nofollow noopener\" target=\"_blank\"> Garak<\/a> and<a href=\"https:\/\/deepeval.com\/docs\/getting-started\" rel=\"nofollow noopener\" target=\"_blank\"> DeepEval<\/a> offer robust capabilities for scanning for vulnerabilities, performing fuzzing, or evaluating the quality of model output.<br><br>The academic <em>paper<\/em> describing<a href=\"https:\/\/arxiv.org\/html\/2406.11036v1\" rel=\"nofollow noopener\" target=\"_blank\"> Garak<\/a>, for example, focuses on probing the security of LLMs. 
Other tools, such as those listed among the<a href=\"https:\/\/www.promptfoo.dev\/blog\/top-5-open-source-ai-red-teaming-tools-2025\/\" rel=\"nofollow noopener\" target=\"_blank\"> Top 5 Open-Source AI Red-Teaming Tools<\/a>, complement the ecosystem.<br><br><a href=\"https:\/\/github.com\/confident-ai\/deepeval\" rel=\"nofollow noopener\" target=\"_blank\"> DeepEval&#039;s GitHub repository<\/a> likewise demonstrates a focus on evaluation metrics.<\/p>\n\n\n\n<p>While DeepEval focuses on evaluation metrics and Garak on discovering known vulnerabilities, <strong>Petri<\/strong> uses an adversary&#039;s own intelligence to actively <em>generate<\/em> new attack vectors and exploit vulnerabilities that are not on any pre-existing checklist.<br><br>It effectively simulates malicious intent, elevating the <strong>Red Teaming of LLMs<\/strong> to a new level of sophistication.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Generating Complex Scenarios: Testing the Alignment and Security of Language Models<\/strong><\/h3>\n\n\n\n<p>The framework&#039;s main feature is its ability to automatically generate test scenarios that cover a wide range of AI security risks, including:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Dangerous Content Generation:<\/strong> Attempts to make the model produce instructions for illegal or harmful activities.<br><\/li>\n\n\n\n<li><strong>Data Leakage:<\/strong> Exploiting vulnerabilities to extract sensitive information from the model.<br><\/li>\n\n\n\n<li><strong>Instructional Misalignment:<\/strong> Ensuring that the model does not pursue unintended or dangerous objectives, even when instructed to do so by a user \u2013 a central point discussed in the article underpinning the<a href=\"https:\/\/arxiv.org\/abs\/2406.11036\" rel=\"nofollow noopener\" target=\"_blank\"> Agency Guarantee framework<\/a>.<\/li>\n<\/ol>\n\n\n\n<p>The Auditor Agent adapts and learns from the Target Model&#039;s responses, 
making the audit an iterative and continuous &quot;hunt&quot; process.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Types of Vulnerabilities Discovered and the Importance of Open Source<\/strong><\/h3>\n\n\n\n<p>Since its launch, <strong>Petri<\/strong> has demonstrated the ability to uncover subtle flaws that would go unnoticed by traditional methods, reinforcing the urgency of a dynamic approach.<br><br>The fact that it is an <em>open-source<\/em> project (as announced in the launch of<a href=\"https:\/\/www.anthropic.com\/research\/petri-open-source-auditing\" rel=\"nofollow noopener\" target=\"_blank\"> Petri by Anthropic<\/a>) allows the global AI security community to collaborate in defining and executing scenarios, accelerating the mitigation of vulnerabilities across all models.<br><br>This transparency is vital for trust in the AI ecosystem.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais-1024x683.png\" alt=\"Data visualization showing the vulnerability detection rate in LLMs through automated Red Teaming vs. 
manual testing.\" class=\"wp-image-32668\" srcset=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais-1024x683.png 1024w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais-768x512.png 768w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais-18x12.png 18w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais-150x100.png 150w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Visualizacao-de-dados-mostrando-a-taxa-de-deteccao-de-vulnerabilidades-em-LLMs-atraves-de-Red-Teaming-automatizado-vs.-testes-manuais.png 1536w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Data visualization showing the vulnerability detection rate in LLMs through automated Red Teaming vs. 
manual testing.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Practical Application for No-Code\/Low-Code Developers: Integrating Dynamic Security<\/strong><\/h2>\n\n\n\n<p>For the Low-Code developer or the startup leader at No Code Start Up, the question is not merely theoretical: it is about how to translate this advanced technology into more reliable products.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Mitigating Risks in Autonomous Applications and AI Agents<\/strong><\/h3>\n\n\n\n<p>The greatest relevance of <strong>Petri<\/strong> lies in the construction of <strong>AI Agents<\/strong> and autonomous applications.<br><br>When an agent is given the ability to interact with the real world (such as sending emails, processing payments, or managing tasks), misalignment turns from a textual error into a high-risk operational failure.<\/p>\n\n\n\n<p>By incorporating <strong>automated AI auditing<\/strong> principles like those of <strong>Petri<\/strong>, Low-Code developers can stress-test their agents before deployment, ensuring that the automation follows predefined business rules and security boundaries.<br><br>If your startup is exploring the creation of sophisticated workflows or<a href=\"https:\/\/nocodestartup.io\/en\/man-2\/\"> AI and Automation Agents: No-Code Solution for Businesses<\/a>, dynamic auditing is indispensable.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Secure Development Strategies and the Culture of Continuous Testing in Practice<\/strong><\/h3>\n\n\n\n<p>Integrating LLM security is not a one-time step; it is a culture. Adopting frameworks like 
<strong>Petri<\/strong> requires Low-Code teams to think about security from the very beginning of the project, not just at the end.<br><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Prompt and Output Validation:<\/strong> Use <strong>Petri<\/strong> to test the robustness of your prompts and the security of the outputs across different model versions.<br><\/li>\n\n\n\n<li><strong>Regression Testing:<\/strong> After each <em>fine-tuning<\/em> pass or model update, the framework can be rerun to ensure that security fixes do not introduce new problems (security regressions).<\/li>\n<\/ul>\n\n\n\n<p>For those seeking to master the creation of robust and secure AI solutions, the foundation lies in<a href=\"https:\/\/nocodestartup.io\/en\/ai-coding-training\/?utm_source=site&amp;utm_medium=blog-site&amp;utm_campaign=ppt-ai-coding&amp;utm_content=ia-petri-framework-da-anthropic&amp;conversion=ppt-ai-coding\"> AI Coding Training: Create Apps with AI and Low-Code<\/a>, which emphasizes the integration of secure development practices.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Role of AI Infrastructure in the Adoption of Frameworks like Petri<\/strong><\/h3>\n\n\n\n<p>The efficient execution of complex, large-scale tests, such as those performed by <strong>Petri<\/strong>, requires a robust and scalable AI infrastructure.<br><br>Startups require systems that can manage multiple models, orchestrate auditing agents, and process large volumes of test data cost-effectively.<br><br>Investing in adequate infrastructure is not just about speed; it is what enables the adoption of these cutting-edge tools to raise the standard of security in Low-Code development.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1536\" height=\"1024\" 
src=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri.png\" alt=\"A low-code dashboard displaying AI security metrics and automated audit reports from the AI Petri framework.\" class=\"wp-image-32669\" srcset=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri.png 1536w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri-1024x683.png 1024w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri-768x512.png 768w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri-18x12.png 18w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Um-painel-de-controle-Low-Code-mostrando-metricas-de-seguranca-de-IA-e-relatorios-de-auditoria-automatizada-do-framework-IA-Petri-150x100.png 150w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/><figcaption class=\"wp-element-caption\">A low-code dashboard displaying AI security metrics and automated audit reports from the AI Petri framework.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Evolution of Model Security: The Future of AI, Petri, and the Open-Source Movement<\/strong><\/h2>\n\n\n\n<p>The launch of <strong>IA Petri<\/strong> Anthropic&#039;s adoption is not an end point, but a catalyst for the next phase of AI security.<br><br>Its impact extends beyond fault detection, shaping the very philosophy of how... 
<strong>language model alignment<\/strong> must be achieved and maintained.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Community Collaboration and Shaping the Global Alignment Standard<\/strong><\/h3>\n\n\n\n<p>As an open-source project, <strong>Petri<\/strong> benefits from collective wisdom. Researchers, security companies, and even Low-Code\/No-Code enthusiasts can contribute new <strong>test scenarios<\/strong> (Petri Scenarios), identifying and formalizing unique attack vectors.<br><br>This collaboration ensures that the framework stays ahead of new <strong>emergent AI behaviors<\/strong> and becomes the industry standard for model evaluation. The strength of the community is the only way to combat the increasing complexity of <a href=\"https:\/\/www.promptfoo.dev\/docs\/red-team\/\" rel=\"nofollow noopener\" target=\"_blank\">Red Teaming of LLMs<\/a>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Preparing for AI Governance: The AI Act and Preventive Auditing<\/strong><\/h3>\n\n\n\n<p>As <a href=\"https:\/\/nocodestartup.io\/en\/governance-and-ethics-in-ai-agents\/\">AI Governance<\/a> becomes a global reality \u2013 with regulations such as the <a href=\"https:\/\/artificialintelligenceact.eu\/\" rel=\"nofollow noopener\" target=\"_blank\"><em>EU AI Act<\/em><\/a> requiring increasing levels of transparency and security \u2013 the ability to demonstrate the robustness of a model will be a legal and market requirement.<br><br><strong>Petri<\/strong> provides organizations, including No-Code startups, with a defensible mechanism to conduct preventive audits, generate comprehensive test documentation, and demonstrate that their systems have been rigorously evaluated against risks of misalignment and misuse (see the <a href=\"https:\/\/govtech-responsibleai.github.io\/agentic-risk-capability-framework\/\" rel=\"nofollow noopener\" target=\"_blank\">Agentic Assurance Framework<\/a>).<\/p>\n\n\n\n<p>The use of an <strong>agentic 
security framework<\/strong> is not just good technical practice; it is an investment in future compliance.<br><br>By mastering tools such as <strong>Petri<\/strong>, Low-Code developers position themselves as leaders in building responsible and secure AI solutions.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1536\" height=\"1024\" src=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code.png\" alt=\"Visual representation of AI security as a pillar of trust in building Low-Code applications and software.\" class=\"wp-image-32670\" srcset=\"https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code.png 1536w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code-1024x683.png 1024w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code-768x512.png 768w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code-18x12.png 18w, https:\/\/nocodestartup.io\/wp-content\/uploads\/2025\/11\/Representacao-visual-da-seguranca-de-IA-como-um-pilar-de-confianca-na-construcao-de-aplicacoes-e-softwares-Low-Code-150x100.png 150w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/><figcaption class=\"wp-element-caption\">Visual representation of AI security as a pillar of trust in building Low-Code applications and software.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQ: Frequently Asked 
Questions about LLM Audits<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Q1: What is the main objective of the Petri framework?<\/strong><\/h3>\n\n\n\n<p>The main objective of <strong>Petri<\/strong> is to automate the security audit process for Large Language Models (LLMs).<br><br>It uses AI agents (the Auditor Agent) to dynamically interact with the Target Model, generating complex, large-scale test scenarios to discover and mitigate <strong>emergent AI behaviors<\/strong> and misalignment risks that would be missed by manual testing or static benchmarks.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Q2: How does Petri differ from human Red Teaming?<\/strong><\/h3>\n\n\n\n<p>Human <em>red teaming<\/em> is qualitative, in-depth, and focused on a limited set of attack vectors.<br><br><strong>Petri<\/strong> is <strong>quantitative, scalable, and continuous<\/strong>. It automates and scales the process, allowing millions of interactions to be tested quickly and repeatedly, overcoming the scaling problem inherent in the manual evaluation of complex LLMs.<br><br>It does not replace humans, but it dramatically expands their capabilities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Q3: Can Petri be used with any Large Language Model?<\/strong><\/h3>\n\n\n\n<p>Yes, <strong>Petri<\/strong> was designed to be modular and model-agnostic. 
It treats the LLM under audit (the Target Model) as a black or white box, interacting with it through prompts and observing its behavior in the controlled environment.<br><br>This makes it applicable to any <strong>Large Language Model<\/strong> that can be orchestrated within a test environment, whether a proprietary model or an open-source one.<\/p>\n\n\n\n<p>For the No Code Start Up community, this means the chance to build autonomous systems with a level of trust never before achieved.<\/p>\n\n\n\n<p>The guarantee that your product behaves predictably and consistently is no longer an ideal, but an auditable reality.<\/p>\n\n\n\n<p>The future of building robust, AI-powered software lies in the ability to integrate <strong>automated AI auditing<\/strong> natively.<\/p>\n\n\n\n<p><strong>Petri<\/strong> is the map; now it is up to you to take the next step and master this new frontier of security and innovation.<\/p>\n\n\n\n<p><strong>If you are looking not only to create, but also to ensure the robustness and alignment of your own AI agents, explore<\/strong><a href=\"https:\/\/nocodestartup.io\/en\/ai-coding-training\/?utm_source=site&amp;utm_medium=blog-site&amp;utm_campaign=ppt-ai-coding&amp;utm_content=ia-petri-framework-da-anthropic&amp;conversion=ppt-ai-coding\"> AI Coding Training: Create Apps with AI and Low-Code<\/a> and raise the security standard of your solutions.<\/p>","protected":false},"excerpt":{"rendered":"<p>The rapid rise and increasing autonomy of Large Language Models (LLMs) have radically transformed the technological landscape.<\/p>\n<p>In the No-Code\/Low-Code ecosystem, where speed of implementation is a crucial competitive differentiator, the security and predictability of these models have become a central 
concern.<\/p>","protected":false},"author":4,"featured_media":32661,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[23],"tags":[],"post_folder":[],"class_list":["post-32657","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-inteligencia-artificial"],"acf":[],"_links":{"self":[{"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/posts\/32657","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/comments?post=32657"}],"version-history":[{"count":0,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/posts\/32657\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/media\/32661"}],"wp:attachment":[{"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/media?parent=32657"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/categories?post=32657"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/tags?post=32657"},{"taxonomy":"post_folder","embeddable":true,"href":"https:\/\/nocodestartup.io\/en\/wp-json\/wp\/v2\/post_folder?post=32657"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}