Highlights:
- Simplismart’s inference engine offers enterprises a standardized language for building generative AI applications.
- Simplismart’s declarative language, akin to Terraform, assists software teams in fine-tuning, deploying, and monitoring generative AI models at scale.
AI inference startup Simplismart recently secured USD 7 million in funding to expand its infrastructure platform and simplify AI model deployment for businesses.
The Series A round was led by Accel, with participation from Shastra VC, Titan Capital, and prominent angel investors, including Akshay Kothari, co-founder of Notion Inc.
Simplismart has developed a “fast inference engine” designed to help companies optimize the performance of AI model deployments. The startup aims to position itself as a key player in bringing AI into mainstream enterprise operations. To achieve this, it focuses on addressing several challenges that hinder enterprise AI adoption, including the performance trade-offs many businesses currently face.
In a blog post, Simplismart Co-founder and CEO Amritanshu Jain noted that while more enterprises are eager to adopt AI, they often struggle to derive significant value from it. One challenge is that deploying AI independently can be difficult for companies. While using third-party APIs is an option, Jain pointed out that they tend to be costly and inflexible, and they raise concerns about data security.
“Every company has different inference needs, and one size does not fit all. APIs are not tailored to scale for bursty workloads and cannot tweak performance to suit needs. Businesses need to control their cost vs performance tradeoffs. This will be the primary reason for a shift toward open-source models, as companies prefer smaller niche models trained on relevant datasets over large generalist models to justify ROI,” Jain said.
Jain contends that few enterprises want to “rent their AI,” but many are compelled to do so due to the challenges of owning AI. Deploying large language models internally presents major obstacles, such as scaling infrastructure, establishing continuous integration and deployment pipelines, accessing compute resources, optimizing models, and managing costs efficiently.
Currently, most companies rely on one of two off-the-shelf AI solutions, both of which have limitations. MLOps platforms allow for orchestration and model serving, but they lack an optimized environment for AI in production, leading to significant performance constraints. The alternative is using generative AI cloud platforms, or “GPU brokers,” which offer optimized APIs and better performance, but raise serious concerns around data privacy and high costs.
Simplismart’s inference engine offers enterprises a new solution by providing a standardized language for software engineers to use when building generative AI applications. Its key advantage lies in significantly reducing the time models take to respond to queries.
Simplismart highlights benchmarks showing that it can run the open-source Llama 3.1 8B model with a throughput exceeding 440 tokens per second, marking a significant speed breakthrough. This capability is bundled with a comprehensive MLOps platform specifically designed for on-premises AI deployments.
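To put that throughput figure in rough context, the short calculation below is our own back-of-the-envelope illustration, not a number from Simplismart’s benchmarks; the response length and the ignored overheads are assumptions:

```python
# Back-of-the-envelope illustration of the quoted throughput figure.
# The response length is an assumption; prompt processing and network
# latency are ignored.
throughput_tokens_per_sec = 440     # figure cited for Llama 3.1 8B
response_length_tokens = 300        # assumed length of a typical chat answer

generation_time_sec = response_length_tokens / throughput_tokens_per_sec
print(f"~{generation_time_sec:.2f} s to generate a {response_length_tokens}-token reply")
# prints roughly ~0.68 s
```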
Jain sees a significant market opportunity for the startup’s solution, referencing data that shows nearly 90% of enterprises’ machine learning projects never reach production.
“The adoption of generative AI is far behind the rate of new developments. It’s because enterprises struggle with four bottlenecks: lack of standardized workflows, high costs leading to poor ROI, data privacy, and the need to control and customize the system to avoid downtime and limits from other services,” the CEO said.
Simplismart’s declarative language, similar to Terraform, assists software teams in tasks like fine-tuning, deploying, and monitoring generative AI models at scale. The platform streamlines and standardizes these workflows, enabling teams to optimize their models for better performance.
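Simplismart has not published its configuration syntax in this announcement, but a Terraform-style declarative workflow generally means the engineer states the desired end state (model, hardware, scaling policy) and the platform reconciles the running infrastructure to match. The Python sketch below is a hypothetical illustration of that idea; every class, field, and method name here is our own assumption, not Simplismart’s actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a declarative model-deployment spec, in the spirit of
# the Terraform-like workflow described in the article. None of these names
# come from Simplismart's product.

@dataclass
class AutoscalePolicy:
    min_replicas: int = 1
    max_replicas: int = 8
    target_latency_ms: int = 200    # scale out when p95 latency exceeds this

@dataclass
class ModelDeployment:
    name: str
    base_model: str                 # e.g. an open-source checkpoint
    gpu_type: str                   # desired accelerator class
    quantization: str = "fp16"
    autoscale: AutoscalePolicy = field(default_factory=AutoscalePolicy)

    def plan(self) -> dict:
        """Return the desired end state; a real engine would diff this
        against the current infrastructure and apply only the changes."""
        return {
            "deployment": self.name,
            "model": self.base_model,
            "hardware": self.gpu_type,
            "quantization": self.quantization,
            "replicas": (self.autoscale.min_replicas,
                         self.autoscale.max_replicas),
        }

if __name__ == "__main__":
    spec = ModelDeployment(
        name="support-assistant",
        base_model="llama-3.1-8b-instruct",
        gpu_type="nvidia-a100",
    )
    # The declarative "desired state" a platform would reconcile against.
    print(spec.plan())
```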
Simplismart was founded in 2022 by Amritanshu Jain and Chief Technology Officer Devansh Ghatak. Jain brings expertise in cloud infrastructure, largely from his experience at Oracle Corp., while Ghatak specializes in search algorithms, a skill he developed during his tenure at Google LLC.
In just two years and with under USD 1 million in capital, Simplismart has built a robust MLOps platform, featuring what its founders claim is the world’s fastest inference engine. This platform allows companies to create, fine-tune, deploy, and run AI models on-premises at impressive speeds, enhancing performance while avoiding the cost and security issues typically associated with other solutions.
Simplismart aims to empower companies to deploy custom generative AI applications with complete control. It envisions offering the essential building blocks that allow businesses to create their own tailored inference and deployment environments.
So far, Simplismart has secured around 30 customers, generating a total annual revenue run rate of USD 1 million. With the new funding from this round, Jain anticipates the company can grow that figure to USD 5 million by the first quarter of next year.
The funds from this round will give Simplismart a significant boost; the company plans to allocate them toward product development, hiring, and strengthening its sales and marketing efforts.
Accel Partner Anand Daniel noted that an increasing number of companies are recognizing the benefits of deploying and customizing AI models on their own infrastructure, such as greater control over performance, cost, data security, privacy, and other factors.
“What blew us away was how their tiny team had already begun serving some of the fastest-growing generative AI companies in production. It furthered our belief that Simplismart has a shot at winning in the massive but fiercely competitive global AI infrastructure market,” he said.