Highlights:
- With the Private AI Foundation solution, enterprises can run AI services close to their data.
- As the foundational “operating system” of Nvidia’s AI platform, NeMo provides a framework for model customization, guardrail tooling, and data curation.
VMware Inc. and Nvidia Corp. have unveiled an expanded collaboration, introducing a solution that lets enterprises build, train, and deploy generative artificial intelligence models tailored to their applications while prioritizing data security and privacy.
VMware Private AI Foundation with Nvidia is a full-stack platform that gives enterprise customers everything they need to develop, train, customize, and deploy an AI model out of the box. It is built on VMware Cloud Foundation, optimized for artificial intelligence, and combines generative AI software with Nvidia accelerated computing.
VMware’s CEO, Raghu Raghuram, stated, “Generative AI and multicloud are the perfect match. Customer data is everywhere — in their data centers, at the edge, and in their clouds.”
A new challenge has arisen as businesses grow more interested in using generative AI large language models in their applications: enterprises now face risks, including possible exposure, when these models are used with proprietary business data. With the Private AI Foundation solution, enterprises can run AI services close to their data, preserving data privacy and reinforcing security while using VMware services.
Enterprises have access to a wide variety of models, and because those models handle data in different ways, choosing the right one can make or break an application. With the Private AI Foundation, customers will be able to train and deploy any model, including Falcon LLM, Meta Platforms Inc.’s LLaMA 2, and MosaicML Inc.’s MPT. Through Hugging Face, users can work with their own models or open-source models from the community.
Under the hood, customers get access to deep learning virtual machines, vector databases, scalable graphics processing unit resources, and preconfigured images tailored to common use cases across industries. Developers will also have immediate access to Nvidia AI Workbench, a dedicated workspace for creating, testing, and customizing pretrained generative AI models.
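The vector databases mentioned above typically back retrieval workflows: documents are stored as embedding vectors, and the vectors most similar to a query embedding are fetched to ground an LLM's answer in enterprise data. The following is a minimal, self-contained sketch of that core idea in plain Python; the class name, the toy three-dimensional embeddings, and the document IDs are all illustrative assumptions, not part of the platform described in the article.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class InMemoryVectorStore:
    """Toy stand-in for a vector database: stores (id, embedding)
    pairs and returns the ids most similar to a query embedding."""

    def __init__(self):
        self.items = []  # list of (doc_id, embedding)

    def add(self, doc_id, embedding):
        self.items.append((doc_id, embedding))

    def query(self, embedding, k=1):
        ranked = sorted(
            self.items,
            key=lambda item: cosine_similarity(item[1], embedding),
            reverse=True,
        )
        return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical documents with toy 3-d embeddings.
store = InMemoryVectorStore()
store.add("policy-doc", [0.9, 0.1, 0.0])
store.add("earnings-report", [0.0, 0.2, 0.9])

print(store.query([0.8, 0.2, 0.1], k=1))  # → ['policy-doc']
```

A production deployment would use a real vector database with approximate nearest-neighbor indexing and high-dimensional embeddings from a model, but the store-and-rank pattern is the same.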
The platform also includes NeMo, Nvidia’s cloud-native end-to-end framework that lets enterprises build, customize, and deploy their own generative AI models across diverse environments. As the foundational “operating system” of Nvidia’s AI platform, NeMo provides a framework for model customization, guardrail tooling, and data curation, empowering customers to develop their own AI solutions.
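The guardrail tooling mentioned above screens prompts and model responses against policy rules before they reach users. A minimal sketch of that idea, in plain Python, is shown below; the blocklist contents and the function name are hypothetical, and nothing here uses NeMo's actual API, which is far richer (programmable rails, topic steering, fact-checking, and so on).

```python
# Illustrative policy list; a real deployment would define rails for
# topics, tone, and data exposure rather than simple substrings.
BLOCKED_TOPICS = {"salary data", "customer ssn"}

def apply_guardrail(text: str) -> str:
    """Return the text unchanged if it passes policy,
    otherwise return a safe refusal message."""
    lowered = text.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return "I can't share that information."
    return text

print(apply_guardrail("Here is the Q3 roadmap."))
print(apply_guardrail("The customer SSN is 123-45-6789."))
```

The point of placing such a filter at the platform layer, rather than inside each application, is that every model a customer deploys inherits the same policy enforcement.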
Nvidia’s founder and CEO, Jensen Huang, stated, “Enterprises everywhere are racing to integrate generative AI into their businesses. Our expanded collaboration with VMware will offer hundreds of thousands of customers — across financial services, healthcare, manufacturing, and more — the full-stack software and computing they need to unlock the potential of generative AI using custom applications built with their own data.”
Both companies have announced that support for the Private AI Foundation platform will come from industry giants Dell Technologies Inc., Hewlett Packard Enterprise Co., and Lenovo Ltd., which will be first to introduce systems capable of running LLM customization and inference workloads. These systems will use Nvidia L40S graphics processing units, Nvidia BlueField-3 data processing units, and Nvidia ConnectX-7 SmartNICs.
The anticipated release of the VMware Private AI Foundation in collaboration with Nvidia is set for early 2024.