Hugging Face teams up with Google to accelerate open AI development

Key Points:

  • The collaboration between Google and Hugging Face aims to provide a streamlined way for developers to leverage Google Cloud services, accelerating the development of open generative AI applications.
  • Hugging Face, often described as the GitHub of AI, hosts more than 500,000 AI models and 250,000 datasets and is used by over 50,000 organizations. Google Cloud, for its part, has been actively building out AI-centric infrastructure and tools for enterprises.
  • The integration will let Hugging Face users train, fine-tune, and deploy models using Vertex AI, and deploy and scale those models with Hugging Face-specific deep learning containers on GKE, backed by Google Cloud’s advanced hardware.

Summary:

The AI landscape is abuzz with the latest strategic collaboration between Google and Hugging Face. The partnership aims to give developers seamless access to Google Cloud services, fostering the rapid development of open generative AI applications.

This partnership enables teams utilizing Hugging Face’s open-source models to harness the resources of Google Cloud, including the state-of-the-art Vertex AI and advanced hardware like TPUs and GPUs. With Hugging Face emerging as a central repository for AI models and datasets, and Google Cloud’s dedication to serving enterprises with AI-centric infrastructure, this union holds tremendous promise for the AI community.

For users, the integration means the ability to train, fine-tune, and deploy models directly through Vertex AI. In addition, integration with Google Kubernetes Engine (GKE) will let developers deploy and scale their models with ease, leveraging Hugging Face-specific deep learning containers.

As the partnership unfolds, developers can look forward to tapping into Google Cloud’s hardware capabilities, from TPU v5e to Nvidia H100 Tensor Core GPUs and Intel Sapphire Rapids CPUs. Notably, deploying models to production on Google Cloud with inference endpoints, as well as managing usage and billing, are poised to become seamless processes.

While the full suite of new experiences, such as Vertex AI and GKE deployment options, is not yet available, both companies are committed to bringing these capabilities to Hugging Face Hub users in the first half of 2024.

©2024 The Horizon