NVIDIA Bolsters AI Application Security with New Tools

Nathaniel StoneThursday, Jan 16, 2025 12:49 pm ET
2min read


NVIDIA has announced a suite of new tools designed to enhance the security and safety of AI applications, particularly focusing on agentic AI systems. The new tools, part of the NVIDIA NIM (NVIDIA Inference Microservices) collection, aim to address critical concerns like trust, safety, security, and compliance, which are essential for enterprises to adopt AI agents.



The new NIM microservices for AI guardrails, part of the NeMo Guardrails platform, are portable, optimized inference microservices that help companies improve the safety, precision, and scalability of their generative AI applications. Central to the orchestration of these microservices is NeMo Guardrails, which helps developers integrate and manage AI guardrails in large language model (LLM) applications.

Industry leaders such as Amdocs, Cerence AI, and Lowe's are already using NeMo Guardrails to safeguard their AI applications. Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate, and contextually appropriate responses. Cerence AI, specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models.

NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior:

1. Content Safety NIM: Trained on NVIDIA's Aegis Content Safety Dataset, this microservice safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.
2. Topic Control NIM: This microservice keeps conversations focused on approved topics, avoiding digression or inappropriate content, and helps maintain AI integrity in adversarial scenarios.
3. Jailbreak Detection NIM: By adding protection against jailbreak attempts, this microservice helps maintain AI integrity in adversarial scenarios, preventing security bypasses through clever hacks.

Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive, and manufacturing, in locations like hospitals or warehouses.

NVIDIA NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies, called rails, to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.

As the use of agentic AI continues to grow, so too does the need for safety and security. NVIDIA's new tools for AI application security address these concerns, making it easier for enterprises to deploy and manage AI agents while maintaining trust, safety, and compliance.

Comments



Add a public comment...
No comments

No comments yet

Disclaimer: The news articles available on this platform are generated in whole or in part by artificial intelligence and may not have been reviewed or fact checked by human editors. While we make reasonable efforts to ensure the quality and accuracy of the content, we make no representations or warranties, express or implied, as to the truthfulness, reliability, completeness, or timeliness of any information provided. It is your sole responsibility to independently verify any facts, statements, or claims prior to acting upon them. Ainvest Fintech Inc expressly disclaims all liability for any loss, damage, or harm arising from the use of or reliance on AI-generated content, including but not limited to direct, indirect, incidental, or consequential damages.