If you have not just woken up from a year-long coma, you are aware of the unexpected AI boom that started with the image generators last summer, and shifted to a higher gear with the release of ChatGPT in November of last year. Since then, new companies have formed and raised huge amounts of money. AI influencers have taken over social networks, and forecasters make all sorts of wild predictions. For some of us who have been around for other hype cycles, this is not new. However, we are seeing a race to dominate what might be the largest technology market so far this century (emphasis on might).

 

The word race implies speed. We have seen companies like LangChain, LlamaIndex, Pinecone, Chroma, Anthropic and of course OpenAI (to name a few) release software at a ridiculous pace. We have contributed to some AI-related open source projects, and spend a significant amount of time hanging out at online communities and real-life events. One of the comments that we hear repeatedly is that developers never know when a release of a library will break their working code. Or when a new model from OpenAI may kill finely-tuned prompts that resulted from hours of trial and error. Many of the examples released by these companies are proof-of-concept demos that show that it is possible for AI agents to interact with traditional software such as search engines, e-commerce APIs and SQL databases. There is rarely a mention of security in these examples. We have shown how trivial it is to make a language model generate malicious SQL that will modify a database, or read information that would normally not be exposed to user applications. We could write many more blog posts along those lines, but we can do much more to improve matters.

A fresh start

Over the past few months, Futo, Pablo and I have been discussing what it would take to help companies build secure systems around AI technologies. The three of us are veterans of the software industry. Pablo and Futo have been working on information security for decades, and my career has been in information retrieval (mostly web search). One insight that is obvious to us is that many people worry about security after something bad has happened. A password database leak costs a company hundreds of millions of dollars, and then they create an internal security initiative. Or a crypto exchange is compromised and put out of business, and they never fix the problem because they are dead. Our mission is to prevent all these unnecessary losses.


How we can help

Here are a few of the things we can do to help you build a secure AI system:

  • First off, we can help you with all the standard security review practices such as threat modeling, secure architecture assessment. This includes reviewing your code as well as your design practices and operational processes. For example, what level of trust are you giving to the language model? What attack vectors could a malicious user take advantage of?
  • We can help you fine-tune models with confidence, and be your development / design partner towards a robust and secure production system.

So, do not wait for the inevitable catastrophe that will make you want to travel back in time and think about these issues. Contact us now, and let’s build.