Highlights:

  • Couchbase first introduced Capella AI Model Services in December as part of its Capella AI Services launch, aiming to streamline AI agent development.
  • By integrating Capella AI Model Services with NIM, Couchbase enables customers to position their AI agents closer to both their underlying data and the GPUs that power them.

Database company Couchbase Inc. has enhanced its AI agent-building tools, Capella AI Model Services, by integrating them with Nvidia Corp.’s NIM microservices.

This enhancement will simplify the development of agentic AI applications for Couchbase Capella database users. These applications go beyond traditional chatbots by autonomously performing tasks with minimal supervision.

Couchbase first introduced Capella AI Model Services in December as part of its Capella AI Services launch, aiming to streamline AI agent development. The tools provide managed endpoints for large language models and embedding models, ensuring those models meet organizations’ performance, latency, and privacy needs.

Couchbase Capella, the cloud-based version of the open-source Couchbase NoSQL database, offers a key advantage over traditional relational databases such as Oracle: it can handle both structured and unstructured data. That makes it particularly well suited for AI applications that require access to diverse data types. It can also serve as a data cache.
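
To illustrate, a single JSON document in Capella can mix structured fields with free-form text in one record. The sketch below uses the Couchbase Python SDK; the connection string, credentials, bucket, and document contents are hypothetical placeholders, not details from Couchbase’s announcement.

```python
# Minimal sketch: one Couchbase document holding structured fields alongside
# unstructured text. All names and credentials below are placeholders.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbases://cb.example.cloud.couchbase.com",  # placeholder Capella endpoint
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
cluster.wait_until_ready(timedelta(seconds=5))

collection = cluster.bucket("support").scope("tickets").collection("cases")

# Structured attributes and free-form text live in the same JSON document,
# which an AI agent could later embed and search over.
collection.upsert(
    "case::1001",
    {
        "customer_id": 42,        # structured field
        "status": "open",         # structured field
        "transcript": "Customer reports login failures after the upgrade...",  # unstructured text
    },
)
```
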

NIM, short for “Nvidia Inference Microservices,” bundles optimized inference engines, industry-standard APIs, and pretested large language models into software containers for seamless deployment. It also includes tools like Nvidia’s NeMo Guardrails, which help enforce policies and prevent AI hallucinations. Additionally, NIM allows organizations to integrate their proprietary data to implement retrieval-augmented generation, enhancing the knowledge and accuracy of the models they deploy.
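
Because NIM containers expose an OpenAI-compatible API, calling a deployed model looks like an ordinary chat-completion request. The sketch below assumes a locally running NIM container; the host, port, and model name are placeholders for whatever an organization actually deploys.

```python
# Minimal sketch: querying a self-hosted NIM container through its
# OpenAI-compatible endpoint. Host, API key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # typical local NIM endpoint
    api_key="not-needed-for-local-nim",   # placeholder; hosted endpoints require a real key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize this support ticket in one sentence."}
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```
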

By integrating Capella AI Model Services with NIM, Couchbase enables customers to position their AI agents closer to both their underlying data and the GPUs that power them. This results in higher-throughput AI applications with increased model flexibility.

According to Matt McDonough, Couchbase’s Senior Vice President of Product and Partners, AI agents require a unified, high-performance data platform that supports every stage of the application development lifecycle.

“We’re giving customers the flexibility to run their preferred AI models in a secure and governed way, while providing better performance for AI workloads and seamless integration of AI with transactional and analytical data,” he explained.

McDonough emphasized the importance of reliability and compliance in AI agents, noting that enterprises often struggle in these areas. Ensuring AI agents function correctly is crucial, especially for customer-facing applications, as unreliable responses can significantly harm a brand’s reputation. He also stressed the need for strong security measures to prevent data leaks that could lead to privacy regulation violations.

Over the past year, Couchbase has continuously enhanced Capella’s capabilities to position it as the preferred database for AI developers. In February 2024, it introduced vector search and retrieval-augmented generation features while integrating AI frameworks such as LlamaIndex and LangChain to support generative AI applications. Then, in September, the company expanded Capella’s AI capabilities with Capella Columnar, enabling developers to build more advanced generative AI applications that analyze real-time data to deliver personalized user experiences.
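
For developers, the vector search and LangChain integration described above typically follows a pattern like the sketch below. It assumes the langchain-couchbase package’s CouchbaseVectorStore class, placeholder credentials, and a pre-created Capella search index; none of these specifics come from Couchbase’s announcement.

```python
# Minimal sketch of the vector-search / retrieval pattern with the
# langchain-couchbase integration. All names and credentials are placeholders.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_couchbase.vectorstores import CouchbaseVectorStore
from langchain_openai import OpenAIEmbeddings

cluster = Cluster(
    "couchbases://cb.example.cloud.couchbase.com",
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
cluster.wait_until_ready(timedelta(seconds=5))

vector_store = CouchbaseVectorStore(
    cluster=cluster,
    bucket_name="support",
    scope_name="tickets",
    collection_name="cases",
    embedding=OpenAIEmbeddings(),    # any embedding model LangChain supports
    index_name="case-vector-index",  # a pre-created Capella search index
)

# Index a few snippets, then retrieve the most relevant one for a query.
vector_store.add_texts(["Login fails after upgrade", "Billing page times out"])
docs = vector_store.similarity_search("Why can't users sign in?", k=1)
print(docs[0].page_content)
```
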