Highlights –
- The new partnership will allow developers to deploy neural networks on SageMaker in just a few clicks.
- Amazon SageMaker and AWS-designed chips will let the team and the larger machine learning community turn the latest research into openly reproducible models that anyone can build on.
Hugging Face Inc., the operator of a well-known platform that hosts machine learning models, is collaborating with Amazon Web Services Inc. to streamline the development of its artificial intelligence projects.
The two companies have been collaborating since early 2021; the partnership announced on February 21, 2023, expands that existing relationship.
Adam Selipsky, AWS Chief Executive Officer, said, “Generative AI has the potential to transform entire industries, but its cost and the required expertise puts the technology out of reach for all but a select few companies. Hugging Face and AWS are making it easier for customers to access popular machine learning models to create their own generative AI applications with the highest performance and lowest costs.”
New York-based Hugging Face has raised more than USD 160 million, including a recent funding round. It operates a GitHub-like platform that lets developers host open-source AI models along with technical assets such as training datasets. The platform stores code for more than 100,000 neural networks.
Additionally, under the new collaboration, Hugging Face will use AWS as its preferred public cloud. The company is also launching a new integration with Amazon SageMaker, AWS's machine learning platform, whose suite of cloud services lets developers build, train, and deploy AI models.
Thanks to the new integration, developers will be able to deploy neural networks hosted by Hugging Face on SageMaker with just a few clicks. After uploading an AI model to SageMaker, they can train it on cloud instances powered by AWS Trainium chips, which are designed specifically for AI training workloads.
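As a rough sketch of what this integration looks like in code, the SageMaker Python SDK's `HuggingFaceModel` class can deploy a Hub-hosted model to a real-time endpoint. Everything below (the model ID, IAM role, container versions, and instance type) is an illustrative assumption, not a detail from the announcement.

```python
# Sketch: deploying a Hugging Face Hub model to a SageMaker endpoint.
# Model ID, role ARN, versions, and instance types are placeholders.

def build_hub_env(model_id: str, task: str) -> dict:
    """Environment variables that tell the SageMaker serving container
    which Hub-hosted model to pull and which pipeline task to run."""
    return {"HF_MODEL_ID": model_id, "HF_TASK": task}

def deploy_hub_model(role_arn: str):
    # Deferred import: requires the `sagemaker` SDK and AWS credentials.
    from sagemaker.huggingface import HuggingFaceModel

    model = HuggingFaceModel(
        env=build_hub_env(
            "distilbert-base-uncased-finetuned-sst-2-english",
            "text-classification",
        ),
        role=role_arn,
        transformers_version="4.26",  # container versions are assumptions
        pytorch_version="1.13",
        py_version="py39",
    )
    # Creates a real-time endpoint. Inference-optimized instance types
    # (e.g. an Inferentia-backed "ml.inf1.xlarge") could be substituted.
    return model.deploy(initial_instance_count=1,
                        instance_type="ml.m5.xlarge")
```

The returned predictor would then accept requests such as `predictor.predict({"inputs": "..."})`, with the heavy lifting of container selection and endpoint provisioning handled by SageMaker.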
Neural networks deployed from Hugging Face to AWS work with many types of cloud instances, including those powered by the AWS Inferentia accelerator series. These chips are optimized for inference, the task of running AI models once training is complete.
Clement Delangue, CEO of Hugging Face, said, “The future of AI is here, but it’s not evenly distributed. Amazon SageMaker and AWS-designed chips will enable our team and the larger machine learning community to convert the latest research into openly reproducible models that anyone can build on.”
The partnership also builds on the Hugging Face AWS Deep Learning Containers that the company already offers to developers. The containers package Hugging Face's AI models in a prebuilt format that is easy to deploy in public cloud environments.