NVIDIA Bolsters AI Application Security with New Tools
Generado por agente de IANathaniel Stone
jueves, 16 de enero de 2025, 12:49 pm ET1 min de lectura
DOX--
NVIDIA has announced a suite of new tools designed to enhance the security and safety of AI applications, particularly focusing on agentic AI systems. The new tools, part of the NVIDIA NIM (NVIDIA Inference Microservices) collection, aim to address critical concerns like trust, safety, security, and compliance, which are essential for enterprises to adopt AI agents.

The new NIM microservices for AI guardrails, part of the NeMo Guardrails platform, are portable, optimized inference microservices that help companies improve the safety, precision, and scalability of their generative AI applications. Central to the orchestration of these microservices is NeMo Guardrails, which helps developers integrate and manage AI guardrails in large language model (LLM) applications.
Industry leaders such as Amdocs, Cerence AI, and Lowe's are already using NeMo Guardrails to safeguard their AI applications. Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate, and contextually appropriate responses. Cerence AI, specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models.
NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior:
1. Content Safety NIM: Trained on NVIDIA's Aegis Content Safety Dataset, this microservice safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.
2. Topic Control NIM: This microservice keeps conversations focused on approved topics, avoiding digression or inappropriate content, and helps maintain AI integrity in adversarial scenarios.
3. Jailbreak Detection NIM: By adding protection against jailbreak attempts, this microservice helps maintain AI integrity in adversarial scenarios, preventing security bypasses through clever hacks.
Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive, and manufacturing, in locations like hospitals or warehouses.
NVIDIA NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies, called rails, to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.
As the use of agentic AI continues to grow, so too does the need for safety and security. NVIDIA's new tools for AI application security address these concerns, making it easier for enterprises to deploy and manage AI agents while maintaining trust, safety, and compliance.
NVDA--
NVIDIA has announced a suite of new tools designed to enhance the security and safety of AI applications, particularly focusing on agentic AI systems. The new tools, part of the NVIDIA NIM (NVIDIA Inference Microservices) collection, aim to address critical concerns like trust, safety, security, and compliance, which are essential for enterprises to adopt AI agents.

The new NIM microservices for AI guardrails, part of the NeMo Guardrails platform, are portable, optimized inference microservices that help companies improve the safety, precision, and scalability of their generative AI applications. Central to the orchestration of these microservices is NeMo Guardrails, which helps developers integrate and manage AI guardrails in large language model (LLM) applications.
Industry leaders such as Amdocs, Cerence AI, and Lowe's are already using NeMo Guardrails to safeguard their AI applications. Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate, and contextually appropriate responses. Cerence AI, specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models.
NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior:
1. Content Safety NIM: Trained on NVIDIA's Aegis Content Safety Dataset, this microservice safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.
2. Topic Control NIM: This microservice keeps conversations focused on approved topics, avoiding digression or inappropriate content, and helps maintain AI integrity in adversarial scenarios.
3. Jailbreak Detection NIM: By adding protection against jailbreak attempts, this microservice helps maintain AI integrity in adversarial scenarios, preventing security bypasses through clever hacks.
Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive, and manufacturing, in locations like hospitals or warehouses.
NVIDIA NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies, called rails, to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.
As the use of agentic AI continues to grow, so too does the need for safety and security. NVIDIA's new tools for AI application security address these concerns, making it easier for enterprises to deploy and manage AI agents while maintaining trust, safety, and compliance.
Divulgación editorial y transparencia de la IA: Ainvest News utiliza tecnología avanzada de Modelos de Lenguaje Largo (LLM) para sintetizar y analizar datos de mercado en tiempo real. Para garantizar los más altos estándares de integridad, cada artículo se somete a un riguroso proceso de verificación con participación humana.
Mientras la IA asiste en el procesamiento de datos y la redacción inicial, un miembro editorial profesional de Ainvest revisa, verifica y aprueba de forma independiente todo el contenido para garantizar su precisión y cumplimiento con los estándares editoriales de Ainvest Fintech Inc. Esta supervisión humana está diseñada para mitigar las alucinaciones de la IA y garantizar el contexto financiero.
Advertencia sobre inversiones: Este contenido se proporciona únicamente con fines informativos y no constituye asesoramiento profesional de inversión, legal o financiero. Los mercados conllevan riesgos inherentes. Se recomienda a los usuarios que realicen una investigación independiente o consulten a un asesor financiero certificado antes de tomar cualquier decisión. Ainvest Fintech Inc. se exime de toda responsabilidad por las acciones tomadas con base en esta información. ¿Encontró un error? Reportar un problema

Comentarios
Aún no hay comentarios