Highlights:
- Dell Technologies and Hugging Face have developed a portal that lets Dell customers choose from an assortment of free and open-source AI models hand-picked for functionality, accuracy, and use cases.
- Tailoring AI models for peak performance relies on a labor-intensive technique called retrieval-augmented generation, or RAG.
Dell Technologies Inc. recently announced a partnership with Hugging Face Inc. to help businesses deploy generative AI models on-premises and streamline the move from proof-of-concept to production.
Big cloud providers such as Microsoft Corp. and Google Cloud have dominated the generative AI market because they offer convenient access to the robust computing resources needed to run and refine large language models. However, businesses that want to customize LLMs for their operations are leery of cloud-related expenses, and many are also concerned about data security, leading them to view a hybrid approach to generative AI as far more practical.
Matt Baker, senior vice president of AI strategy at Dell, said customers have been frustrated by the challenges of building AI applications. “If you recall, with the era of big data, there was a real challenge progressing from proof-of-concept to production,” he added.
The same is true of generative AI, where businesses need considerable assistance. Through Dell's relationship with Hugging Face, which runs a hosting platform for open-source artificial intelligence projects, more AI models can be deployed where the critical data they require is stored, which mostly means on-premises systems rather than cloud-based ones. To facilitate this, Dell has consistently developed validated blueprints for on-premises servers, accelerators, and storage solutions that can handle AI workloads.
The two companies have worked together to develop a portal that lets Dell customers choose from an assortment of free and open-source AI models hand-picked for functionality, accuracy, and use cases. Customers will be able to decide which model best fits their use cases, choose the optimized infrastructure to run it on-premises, and receive assistance in optimizing those models.
The Hugging Face Dell site will include specially designed containers and scripts to facilitate their deployment on Dell servers and data storage systems.
Jeff Boudreau, chief AI officer at Dell, said the company is partnering with Hugging Face to give customers the freedom to use open-source generative AI models without hassle, thanks to the security and reliability of on-premises systems. “This collaboration translates into enterprises being able to modernize faster by more simply deploying customized GenAI models powered by trusted Dell infrastructure,” he added.
The portal will also provide access to various libraries, datasets, and tutorials for training generative AI models. There will be models and templates built to accomplish very particular goals. Within a software container, enterprises can combine them with their own unique data and customize them to suit their requirements.
Tailoring AI models for peak performance relies on a labor-intensive technique called retrieval-augmented generation, or RAG, which supplements generative AI models with information drawn from external knowledge sources.
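The core RAG flow can be sketched in a few lines. This is a minimal illustration, not Dell's or Hugging Face's actual implementation: it uses a toy keyword-overlap retriever, and a real deployment would substitute a vector database and an LLM where the prompt is consumed.

```python
# Minimal RAG sketch: retrieve relevant documents, then prepend them to the
# prompt so the model can ground its answer in external knowledge.
# The keyword-overlap retriever below is purely illustrative.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents, k=2):
    """Prepend retrieved context so the model grounds its answer in it."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Dell PowerEdge servers are designed for AI workloads.",
    "RAG supplements a model with external knowledge.",
    "The cafeteria opens at nine.",
]
prompt = build_prompt("What are PowerEdge servers designed for?", docs)
```

In practice the retrieved context comes from an enterprise's own data, which is why running RAG where that data lives (on-premises) is attractive.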
This technique allows users to construct step-by-step instructions for various operations. Dell said it will streamline the RAG process for customers with a containerized solution built on the popular parameter-efficient fine-tuning approaches LoRA and QLoRA.
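The idea behind LoRA is to freeze the pretrained weights and train only a small low-rank correction. A minimal NumPy sketch of a single LoRA-adapted layer, assuming illustrative dimensions (Dell's containers are not public, so all names here are hypothetical):

```python
import numpy as np

# LoRA sketch: the frozen weight W gets a trainable low-rank update B @ A,
# so only d*r*2 parameters are trained instead of d*d.
rng = np.random.default_rng(0)
d, r = 8, 2                          # hidden size and low rank, r << d

W = rng.normal(size=(d, d))          # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x, alpha=1.0):
    # Output = frozen path + scaled low-rank correction (B @ A) applied to x.
    return x @ W.T + alpha * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
y = lora_forward(x)
```

Because B starts at zero, the adapter is initially a no-op and fine-tuning only nudges the model away from its pretrained behavior. QLoRA applies the same idea while holding W in quantized form to cut memory further.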
Through the Hugging Face Dell portal, customers will initially be able to choose from various Dell PowerEdge servers built for AI workloads. Dell will eventually add workstations as deployment options, along with its Apex portfolio of hardware provided to clients on an as-a-service basis.
Analyst Holger Mueller of Constellation Research Inc. stated that as AI workloads are no longer solely executed on cloud platforms, it is logical for Dell to collaborate with the provider of the broadest range of open-source LLMs available.
Mueller mentioned, “The move isn’t a surprise, but what is surprising is why it took Dell so long to arrange this partnership. Dell has made big investments in AI infrastructure, but how much hardware enterprises will buy for their on-premises AI workloads remains to be seen. Much may depend on how useful the preconfigured set of AI libraries for Dell’s platforms are.”
In addition to hosting thousands of open-source LLMs, Hugging Face has concentrated on forming alliances with AI hardware providers. The company said earlier this year that it was collaborating with Advanced Micro Devices Inc. to expand the number of LLMs compatible with the MI300, AMD's latest AI accelerator chip.