At ServiceNow’s flagship Knowledge 2025 event in Las Vegas, ServiceNow and NVIDIA announced a major leap for enterprise artificial intelligence: the debut of the Apriel Nemotron 15B model. This new large language model (LLM), developed in close partnership between the two tech leaders, is designed to accelerate the adoption of intelligent, real-time AI agents across IT, HR, customer service, and beyond.
Compact Power: Real-Time Reasoning at Scale
Apriel Nemotron 15B stands apart for its focused, efficient architecture. With 15 billion parameters, it is significantly smaller than many general-purpose LLMs that can exceed a trillion parameters, yet it delivers advanced reasoning capabilities essential for enterprise use. The model is engineered to draw inferences, weigh goals, and navigate complex rules in real time, enabling AI agents to handle intricate workflows and decision-making tasks with speed and precision.
Trained using NVIDIA NeMo, the NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow’s domain-specific data on NVIDIA DGX Cloud infrastructure running on AWS, the model achieves lower latency, reduced inference costs, and the ability to scale across thousands of concurrent workflows. This makes it not only cost-effective but also enterprise-ready for demanding business environments.
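For teams thinking about how an agent would consume such a model, the sketch below is purely illustrative: it assumes an OpenAI-compatible chat endpoint of the kind NVIDIA NIM-style serving commonly exposes, and the base URL, API key, and model identifier are placeholders rather than official values from the announcement.

```python
# Minimal sketch: querying a compact reasoning model through an
# OpenAI-compatible inference endpoint. The endpoint URL, API key,
# and model name below are placeholders, not official values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-inference-endpoint.example.com/v1",  # placeholder
    api_key="YOUR_API_KEY",                                      # placeholder
)

response = client.chat.completions.create(
    model="apriel-nemotron-15b",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are an IT service-desk agent. "
                                      "Reason step by step before acting."},
        {"role": "user", "content": "A user reports their VPN drops every "
                                    "30 minutes. Suggest a resolution plan."},
    ],
    temperature=0.2,  # low temperature for consistent, rule-following output
    max_tokens=512,
)

print(response.choices[0].message.content)
```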
Continuous Learning with Data Flywheel Architecture
A key innovation accompanying the model’s release is the integration of ServiceNow’s Workflow Data Fabric with NVIDIA NeMo microservices, including NeMo Customizer and Evaluator. This creates a “data flywheel” architecture: a closed-loop system in which real-time workflow data continually refines and personalizes AI agent responses, improving accuracy and adaptability over time. Built-in guardrails ensure customer data is handled securely and compliantly.
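The announcement does not detail the flywheel’s internals, but the loop can be pictured roughly as follows. This sketch is illustrative only: the function names (collect_workflow_feedback, evaluate_agent, fine_tune, deploy) are hypothetical stand-ins for the roles played by Workflow Data Fabric and the NeMo Customizer and Evaluator microservices, not real APIs.

```python
# Illustrative closed-loop "data flywheel": workflow data feeds evaluation,
# evaluation gates fine-tuning, and an improved model is redeployed.
# All functions are hypothetical placeholders, not ServiceNow or NVIDIA APIs.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    agent_response: str
    user_rating: float  # e.g. thumbs up/down mapped to 0.0-1.0

def collect_workflow_feedback() -> list[Interaction]:
    """Stand-in for pulling recent agent interactions from workflow data."""
    return [Interaction("Reset my VPN", "Opened incident ticket ...", 1.0)]

def evaluate_agent(model_id: str, data: list[Interaction]) -> float:
    """Stand-in for an evaluation pass (the role NeMo Evaluator plays)."""
    return sum(i.user_rating for i in data) / max(len(data), 1)

def fine_tune(model_id: str, data: list[Interaction]) -> str:
    """Stand-in for a customization job (the role NeMo Customizer plays)."""
    return model_id + "-tuned"

def deploy(model_id: str) -> None:
    print(f"Deploying {model_id}")

# One turn of the flywheel: gather data, refine a candidate, and only
# deploy it if evaluation shows it matches or beats the current model.
model = "apriel-nemotron-15b"  # hypothetical identifier
feedback = collect_workflow_feedback()
candidate = fine_tune(model, feedback)
if evaluate_agent(candidate, feedback) >= evaluate_agent(model, feedback):
    deploy(candidate)
```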
Real-World Impact: From Complexity to Clarity
In practical deployments, Apriel Nemotron 15B is already showing its value. For example, at AstraZeneca, AI agents powered by the model are helping employees resolve issues and make decisions faster, reportedly saving staff up to 90,000 hours.
“The Apriel Nemotron 15B model — developed by two of the most advanced enterprise AI companies — features purpose-built reasoning to power the next generation of intelligent AI agents. This achieves what generic models can’t, combining real-time enterprise data, workflow context and advanced reasoning to help AI agents drive real productivity,” said Jon Sigler, executive vice president of Platform and AI at ServiceNow.
“Together with ServiceNow, we’ve built an efficient, enterprise-ready model to fuel a new class of intelligent AI agents that can reason to boost team productivity,” said Kari Briski, vice president of generative AI software at NVIDIA. “By using the NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, Apriel Nemotron 15B delivers advanced reasoning capabilities in a smaller size, making it faster, more accurate and cost-effective to run.”
Setting the Standard for Agentic AI
The launch of Apriel Nemotron 15B signals a broader shift in enterprise AI strategy: from static, generic models to dynamic, agentic systems that evolve and learn. By offering a compact, high-performance model, ServiceNow and NVIDIA are enabling businesses to deploy AI agents that are faster, more accurate, and more cost-efficient than ever before. This partnership also advances the integration of NVIDIA’s Llama Nemotron models and AI agent evaluation tools into the ServiceNow platform, further accelerating the development of robust agentic AI.
Availability and Looking Ahead
Apriel Nemotron 15B is expected to be available to customers in the second quarter of 2025, powering ServiceNow’s Now LLM services and forming the backbone of its next-generation AI agent offerings. As enterprises increasingly look to AI for operational efficiency and smarter workflows, this new model and the ongoing collaboration between ServiceNow and NVIDIA set the stage for a new era of intelligent, adaptable, and secure enterprise AI.