Highlights:
- A core component of the new release is an updated set of supported versions of popular open-source tools, such as PyTorch and TensorFlow.
- A key goal of Nvidia’s TAO toolkit, which is part of the Nvidia AI Enterprise 2.1 update, is to make building models for computer vision and speech recognition use cases easier.
Nvidia has released version 2.1 of its AI Enterprise software suite, which includes new commercially supported tools to run artificial intelligence (AI) and machine learning (ML) workloads for enterprise use cases.
Nvidia AI Enterprise was first made generally available in August 2021 as a collection of supported AI and ML tools that work well on Nvidia hardware. A key component of the new release is an updated set of supported versions of popular open-source tools, such as PyTorch and TensorFlow. The 22.04 update for Nvidia’s Rapids open-source libraries for running data science pipelines on GPUs is also included, as is the new Nvidia TAO 22.05 low-code and no-code toolkit for computer vision and speech applications.
Equipping open-source AI with enterprise support
With open-source software, it is common for the leading edge of development to happen in an open “upstream” community. Nvidia, for example, can and does contribute code upstream before providing commercially supported “downstream” offerings like Nvidia AI Enterprise.
Justin Boitano, VP of enterprise and edge computing at Nvidia, said, “When we talk about popular AI projects like TensorFlow, our goal is absolutely to commit as much as possible back into the upstream.”
The open-source components of Nvidia AI Enterprise also benefit from integration testing across different frameworks and on various hardware configurations, ensuring the software works as expected.
“It’s very similar to the early Linux days, where there are those companies that are happy running with the open-source frameworks, and then there’s another part of the community that really feels more comfortable having that direct engagement,” Boitano said.
Enterprise support and cloud-native deployment options for AI
Another important aspect of enterprise support is making it easier to deploy various AI tools in the cloud. Installing and configuring AI tools can be a difficult task for the uninitiated.
Using containers and Kubernetes in a cloud-native model is one of the most popular approaches to cloud deployment today. Nvidia AI Enterprise, according to Boitano, is available as a collection of containers. To help automate the installation and configuration of the AI tools in the cloud, there is also a Helm chart, an application manifest for Kubernetes deployment.
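A Helm-based install of this kind typically looks something like the sketch below. The repository URL and chart name here are placeholders for illustration, not the actual entries from Nvidia’s NGC catalog:

```shell
# Register the vendor's chart repository (URL shown is illustrative).
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the chart into its own namespace; the chart's manifest tells
# Kubernetes which containers to pull and how to configure them.
helm install nvidia-ai-enterprise nvidia/<chart-name> \
  --namespace nvidia-ai --create-namespace
```

The value of the Helm chart is that the manifest, rather than the operator, encodes the installation and configuration steps, which is what makes the cloud-native deployment largely automatic.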
Nvidia LaunchPad labs, a hosted service on Nvidia infrastructure, offers an even more straightforward way to test the tools and frameworks supported by the AI Enterprise software suite.
The Nvidia TAO toolkit
A key goal of Nvidia’s TAO toolkit, which is part of the Nvidia AI Enterprise 2.1 update, is to make building models for computer vision and speech recognition use cases easier.
TAO, according to Boitano, offers a low-code model that lets organizations take an existing pre-trained model and tune it to a user’s specific environment and data. As one example, TAO can assist with computer vision applications in factories.
Lighting conditions vary between factories, and glare on cameras can impair recognition. Re-labeling data captured in a specific environment, where the light may differ from the conditions the pre-trained model was built under, and then tuning the model on that data can improve accuracy.
“TAO provides a lightweight way to retrain models for new deployments,” Boitano said.
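The retraining idea Boitano describes is essentially transfer learning: keep a pre-trained feature extractor frozen and fit only a small task-specific head on data from the new deployment. The toy sketch below illustrates that concept in plain Python; it is not the TAO API, and all names in it are made up for illustration:

```python
def pretrained_backbone(x):
    # Stand-in for a frozen pre-trained feature extractor: it maps raw
    # input to fixed features and is never updated during fine-tuning.
    return [x, 1.0]

def train_head(samples, lr=0.1, epochs=200):
    # Fit only the small task-specific "head" weights with plain SGD;
    # the backbone's parameters are left untouched.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            feats = pretrained_backbone(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# Labeled data from the "new deployment": labels follow y = 2*x + 1,
# so the head weights should converge toward [2.0, 1.0].
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w = train_head(data)
```

Because only the head is trained, very little labeled data from the new environment is needed, which is what makes this kind of retraining lightweight.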
Boitano stated that the plan for future Nvidia AI Enterprise releases is to continue making it easier for organizations to use different toolkits for deploying AI and ML workflows in production.
Experts’ Take
Justin Boitano, VP of enterprise and edge computing at Nvidia, said, “Over the last couple of years, what we’ve seen is the growth of AI being used to solve a bunch of problems, and it is really driving automation to improve operational efficiency. Ultimately, as more organizations get AI into a production state, many companies will need commercial support on the software stack that has traditionally just been open source.”