Enkrypt raises seed round to create a ‘control layer’ for generative AI safety

Key Points:

  • Enkrypt AI raised $2.35 million in a seed round of funding led by Boldcap.
  • The company offers Sentry, a comprehensive solution for secure and compliant deployment of generative AI models.
  • Sentry is being trialed by mid- to large-sized enterprises in regulated industries such as finance and life sciences, reducing risks and accelerating AI adoption.


Boston-based startup Enkrypt AI has secured $2.35 million in a seed funding round led by Boldcap. Co-founded by Yale PhDs Sahil Agarwal and Prashanth Harshangi, the company provides a control layer for safe generative AI usage, aiming to ensure private, secure, and compliant deployment of AI models. Enkrypt claims its technology can accelerate enterprise adoption of generative AI by as much as ten times by handling the safety hurdles that often stall AI projects.


In the seed round, Enkrypt also attracted investments from Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, and angel investors from the AI, healthcare, and enterprise sectors. The startup’s product, Sentry, offers visibility and oversight of AI model usage and performance across business functions. At the same time, it safeguards sensitive information, mitigates security threats, and ensures compliance with regulatory requirements through automated monitoring and strict access controls.


CEO Sahil Agarwal explained that Sentry serves as a secure enterprise gateway, managing model access controls, data privacy, and model security to prevent breaches and ensure reliability. Leveraging proprietary guardrails, the solution can defend against prompt injection attacks, protect privacy, test generative AI APIs for corner cases, and filter out harmful content. Enkrypt’s technology is currently being trialed by mid- to large-sized enterprises in regulated industries like finance and life sciences.
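To make the gateway idea concrete, here is a minimal sketch of the guardrail pattern the article describes: screening prompts for suspected injection attempts and redacting sensitive data before anything reaches a model. All names, patterns, and rules below are illustrative assumptions, not Enkrypt’s actual implementation.

```python
import re

# Hypothetical guardrail rules -- real systems use far richer detectors.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
]
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Gateway check: return (allowed, sanitized_prompt)."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, ""  # block suspected prompt injection outright
    sanitized = prompt
    for label, pattern in PII_PATTERNS.items():
        # Redact sensitive data instead of forwarding it to the model
        sanitized = re.sub(pattern, f"[{label}]", sanitized)
    return True, sanitized
```

In a gateway deployment, a check like this would run on every request between the enterprise application and the model API, with blocked or redacted prompts logged for the compliance monitoring the article mentions.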


Enkrypt’s success in reducing jailbreak vulnerabilities for a Fortune 500 enterprise using Meta’s Llama2-7B model showcases the effectiveness of the Sentry technology, enabling faster adoption and deployment of AI models across departments. The startup plans to further develop and expand its solution to cater to a broader range of enterprises, positioning safety as a critical component in the development and deployment of generative AI models. Amidst rising concerns around AI safety, Enkrypt aims to differentiate itself by offering a comprehensive and unique solution to address various security and compliance challenges.


As the startup works with design partners to refine its product, it faces competition from players like Protect AI, which is also expanding its security and compliance offerings. Additionally, the U.S. National Institute of Standards and Technology (NIST) has launched an AI safety consortium involving over 200 firms to establish standards for AI safety. Enkrypt’s focus on critical safety issues positions it at the forefront of providing solutions for enterprises navigating the complexities of generative AI technologies.


©2024 The Horizon