{"id":2845,"date":"2024-03-27T02:52:00","date_gmt":"2024-03-27T02:52:00","guid":{"rendered":"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/"},"modified":"2024-03-27T02:52:00","modified_gmt":"2024-03-27T02:52:00","slug":"securing-the-llm-stack-on-march-26-2024-at-838-pm","status":"publish","type":"post","link":"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/","title":{"rendered":"Securing the LLM Stack on March 26, 2024 at 8:38 pm"},"content":{"rendered":"


A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post. In this blog post, I will continue the discussion on the critical importance of learning how to secure AI systems, with a special focus on current LLM implementations and the "LLM stack."

I also recently published two books. In the first, "The AI Revolution in Networking, Cybersecurity, and Emerging Technologies," my co-authors and I cover how AI is already revolutionizing networking, cybersecurity, and emerging technologies. The second, "Beyond the Algorithm: AI, Security, Privacy, and Ethics," co-authored with Dr. Petar Radanliev of Oxford University, presents an in-depth exploration of critical subjects including red teaming AI models, monitoring AI deployments, AI supply chain security, and the application of privacy-enhancing methodologies such as federated learning and homomorphic encryption. It also discusses strategies for identifying and mitigating bias within AI systems.

For now, let's explore some of the key factors in securing AI implementations and the LLM stack.

## What is the LLM Stack?

The "LLM stack" generally refers to the set of technologies and components built around Large Language Models (LLMs). It can include a wide range of technologies and methodologies aimed at leveraging the capabilities of LLMs (e.g., vector databases, embedding models, APIs, plugins, orchestration libraries like LangChain, guardrail tools, etc.).

Many organizations are now implementing Retrieval-Augmented Generation (RAG) because it significantly enhances the accuracy of LLMs by combining the generative capabilities of these models with the retrieval of relevant information from a database or knowledge base. I introduced RAG in an earlier article, but in short, RAG works by first querying a database with a question or prompt to retrieve relevant information. This information is then fed into an LLM, which generates a response based on both the input prompt and the retrieved documents. The result is a more accurate, informed, and contextually relevant output than the LLM could achieve alone.

Let's go over the typical "LLM stack" components that make RAG and other applications work. The following figure illustrates the LLM stack.

[Figure: the LLM stack and its components]

## Vectorizing Data and Security

Vectorizing data and creating embeddings are crucial steps in preparing your dataset for effective use with RAG and the underlying tools. Vector embedding, also known as vectorization, involves transforming words and other types of data into numerical values, where each piece of data is represented as a vector within a high-dimensional space. OpenAI offers several embedding models that can be used via its API, and you can also use open-source embedding models from Hugging Face. The following is an example of how the text "Example from Omar for this blog" was converted into "numbers" (embeddings) using the text-embedding-3-small model from OpenAI.

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        0.051343333,
        0.004879803,
        -0.06099363,
        -0.0071908776,
        0.020674748,
        -0.00012919278,
        0.014209986,
        0.0034705158,
        -0.005566879,
        0.02899774,
        0.03065297,
        -0.034541197,
        <output omitted for brevity>
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 6,
    "total_tokens": 6
  }
}
```
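If you want to produce output like the one above yourself, a minimal sketch using the OpenAI Python SDK (v1.x) could look like the following; the client construction and the use of the OPENAI_API_KEY environment variable are assumptions about your setup rather than part of the original example:

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Example from Omar for this blog",
)

vector = response.data[0].embedding
print(len(vector))   # dimensionality of the embedding
print(vector[:5])    # first few values, similar to the output shown above
```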

The first step (even before you start creating embeddings) is data collection and ingestion. Gather and ingest the raw data from different sources (e.g., databases, PDFs, JSON, log files and other information from Splunk, etc.) into a centralized data storage system called a vector database.

Note: Depending on the type of data, you will need to clean and normalize it to remove noise such as irrelevant information and duplicates.
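As a rough illustration of that cleaning step, a hypothetical chunking and de-duplication pass might look like the sketch below; the chunk size, overlap, and hashing approach are arbitrary choices for demonstration, not recommendations from the post:

```python
import hashlib


def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split raw text into overlapping chunks suitable for embedding."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def deduplicate(chunks: list[str]) -> list[str]:
    """Drop exact duplicates using a hash of the normalized chunk text."""
    seen, unique = set(), []
    for chunk in chunks:
        digest = hashlib.sha256(chunk.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(chunk)
    return unique
```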

Ensuring the security of the embedding creation process requires a multi-faceted approach, spanning from the selection of embedding models to the handling and storage of the generated embeddings. Let's start with some security considerations in the embedding creation process.

Use well-known commercial or open-source embedding models that have been thoroughly vetted by the community. Opt for models that are widely used and have strong community support. Like any software, embedding models and their dependencies can have vulnerabilities that are discovered over time, and some embedding models could be manipulated by threat actors. This is why supply chain security is so important.

You should also validate and sanitize input data. The data used to create embeddings may contain sensitive or personal information that must be protected to comply with data protection regulations (e.g., GDPR, CCPA). Apply data anonymization or pseudonymization techniques where possible, and ensure that data processing is performed in a secure environment, using encryption for data at rest and in transit.
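One way to approach this, sketched below under the assumption that you pre-process text before sending it to an embedding model, is to redact obvious PII with placeholder tokens. The regex patterns here are deliberately simplistic and hypothetical; a production system should rely on a vetted PII-detection library or service:

```python
import re

# Hypothetical, intentionally simple patterns -- not a complete PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact_pii(text: str) -> str:
    """Replace obvious PII with placeholder tokens before creating embeddings."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(redact_pii("Contact Omar at omar@example.com or 555-123-4567"))
# -> Contact Omar at [EMAIL] or [PHONE]
```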

Unauthorized access to embedding models and the data they process can lead to data exposure and other security issues. Use strong authentication and access-control mechanisms to restrict access to embedding models and data.

## Indexing and Storage of Embeddings

Once the data is vectorized, the next step is to store these vectors in a searchable database or a vector database such as ChromaDB, pgvector, MongoDB Atlas, FAISS (Facebook AI Similarity Search), or Pinecone. These systems allow for efficient retrieval of similar vectors.
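As a small illustration, here is a sketch of storing and querying documents with ChromaDB's in-process client; the collection name, sample documents, and reliance on Chroma's default embedding function are assumptions made for demonstration only:

```python
import chromadb

# In-process client for experimentation; a production deployment would use a
# persistent or client/server configuration with authentication and encryption.
client = chromadb.Client()
collection = client.create_collection(name="blog_chunks")

collection.add(
    ids=["chunk-1", "chunk-2"],
    documents=[
        "RAG retrieves relevant context before generation.",
        "Embeddings represent text as numeric vectors.",
    ],
)

results = collection.query(query_texts=["How does RAG find context?"], n_results=1)
print(results["documents"])
```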

Did you know that some vector databases do not support encryption? Make sure that the solution you use supports encryption.

## Orchestration Libraries and Frameworks like LangChain

In the diagram I used earlier, you can see a reference to libraries like LangChain and LlamaIndex. LangChain is a framework for developing applications powered by LLMs. It enables context-aware and reasoning applications, providing libraries, templates, and a developer platform for building, testing, and deploying applications. LangChain consists of several parts, including libraries, templates, LangServe for deploying chains as a REST API, and LangSmith for debugging and monitoring chains. It also offers the LangChain Expression Language (LCEL) for composing chains and provides standard interfaces and integrations for modules like model I/O, retrieval, and AI agents. I wrote an article about numerous LangChain resources and related tools, which are also available in one of my GitHub repositories.

Many organizations use LangChain for use cases such as personal assistants, question answering, chatbots, querying tabular data, and more. It also provides example code for building applications, with an emphasis on more applied and end-to-end examples.

LangChain can interact with external APIs to fetch or send data in real time to and from other applications. This capability allows LLMs to access up-to-date information, perform actions like booking appointments, or retrieve specific data from web services. The framework can dynamically construct API requests based on the context of a conversation or query, thereby extending the functionality of LLMs beyond static knowledge bases. When integrating with external APIs, it's crucial to use secure authentication methods and encrypt data in transit using protocols like HTTPS. API keys and tokens should be stored securely and never hard-coded into the application code.
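The sketch below illustrates both points with LangChain's expression language: the API key is read from the environment rather than hard-coded, and the prompt, model, and parser are composed with LCEL. The model name and exact import paths are assumptions that depend on your LangChain version:

```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Read the key from the environment (or a secrets manager) -- never hard-code it.
api_key = os.environ["OPENAI_API_KEY"]

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the provided context.\n"
    "Context: {context}\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini", api_key=api_key, temperature=0)

chain = prompt | llm | StrOutputParser()
answer = chain.invoke({
    "context": "RAG combines retrieval with generation.",
    "question": "What does RAG combine?",
})
print(answer)
```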

## AI Front-end Applications

AI front-end applications are the user-facing part of AI systems, where interaction between humans and the machine takes place. These applications leverage AI technologies to provide intelligent, responsive, and personalized experiences to users. The front end for chatbots, virtual assistants, personalized recommendation systems, and many other AI-driven applications can be easily created with libraries and platforms like Streamlit, Vercel, Streamship, and others.
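For example, a minimal chat-style front end in Streamlit might look like the sketch below; `answer_question()` is a hypothetical placeholder for whatever RAG chain or model call backs the application:

```python
import streamlit as st

st.title("Example RAG assistant")

# Keep the conversation across Streamlit reruns.
if "history" not in st.session_state:
    st.session_state.history = []

user_msg = st.chat_input("Ask a question")
if user_msg:
    st.session_state.history.append(("user", user_msg))
    # answer_question() is a hypothetical helper wrapping your RAG chain.
    reply = answer_question(user_msg)
    st.session_state.history.append(("assistant", reply))

for role, message in st.session_state.history:
    with st.chat_message(role):
        st.write(message)
```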

Implementing traditional web application security practices is essential to protect against a wide range of vulnerabilities, such as broken access control, cryptographic failures, injection vulnerabilities like cross-site scripting (XSS), server-side request forgery (SSRF), and many others.

## LLM Caching

LLM caching is a technique used to improve the efficiency and performance of LLM interactions. You can use implementations like SQLite Cache, Redis, and GPTCache. LangChain provides examples of how these caching methods can be leveraged.

The basic idea behind LLM caching is to store previously computed model outputs so that if the same or similar inputs are encountered again, the model can quickly retrieve the stored output instead of recomputing it from scratch. This can significantly reduce computational overhead, making the model more responsive and cost-effective, especially for frequently repeated queries or common patterns of interaction.
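As one example, LangChain can be pointed at a local SQLite cache with a couple of lines; the exact module paths vary across LangChain versions, so treat this as a sketch rather than a canonical recipe:

```python
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

# Identical prompts are served from the local SQLite cache instead of
# triggering a new (billable) call to the model provider.
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))
```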

Caching strategies must be carefully designed to ensure they do not compromise the model's ability to generate relevant and up-to-date responses, especially in scenarios where the input context or external world knowledge changes over time. Moreover, effective cache invalidation strategies are crucial to prevent outdated or irrelevant information from being served, which can be challenging given the dynamic nature of knowledge and language.

## LLM Monitoring and Policy Enforcement Tools

Monitoring is one of the most important elements of LLM stack security. There are many open-source and commercial LLM monitoring tools, such as MLflow. There are also several tools that can help protect against prompt injection attacks, such as Rebuff. Many of these work in isolation. Cisco recently announced Motific.ai.

Motific enhances your ability to implement both predefined and tailored controls over personally identifiable information (PII), toxicity, hallucination, topics, token limits, prompt injection, and data poisoning. It provides comprehensive visibility into operational metrics, policy flags, and audit trails, ensuring that you have clear oversight of your system's performance and security. Additionally, by analyzing user prompts, Motific enables you to grasp user intents more accurately, optimizing the utilization of foundation models for improved outcomes.

Cisco also provides an LLM security protection suite inside Panoptica, Cisco's cloud application security solution for code to cloud. It provides seamless scalability across clusters and multi-cloud environments.
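Returning to the open-source side of monitoring, a simple, hypothetical pattern is to log per-request metadata to MLflow so that latency, token usage, and policy flags can be reviewed over time; the parameter names and metric values below are illustrative only:

```python
import mlflow

# Record basic telemetry for each LLM request so that drift, abuse, or cost
# anomalies can be reviewed later.
with mlflow.start_run(run_name="llm-request"):
    mlflow.log_param("model", "text-embedding-3-small")
    mlflow.log_param("prompt_tokens", 6)
    mlflow.log_metric("latency_ms", 142.0)
    mlflow.log_metric("prompt_injection_flag", 0)
```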

## AI Bill of Materials and Supply Chain Security

The need for transparency and traceability in AI development has never been more crucial, and supply chain security is top of mind for many in the industry. This is why AI Bills of Materials (AI BOMs) are so important. But what exactly are AI BOMs, and how do they differ from Software Bills of Materials (SBOMs)? SBOMs serve a crucial role in the software development industry by providing a detailed inventory of all components within a software application. This documentation is essential for understanding a software product's composition, including its libraries, packages, and any third-party code. AI BOMs, on the other hand, cater specifically to artificial intelligence implementations. They offer comprehensive documentation of an AI system's many elements, including model specifications, model architecture, intended applications, training datasets, and additional pertinent information. This distinction highlights the specialized nature of AI BOMs in addressing the unique complexities and requirements of AI systems, compared with the broader scope of SBOMs in software documentation.

I published a paper with Oxford University, titled "Toward Trustworthy AI: An Analysis of Artificial Intelligence (AI) Bill of Materials (AI BOMs)," that explains the concept of AI BOMs. Dr. Allan Friedman (CISA), Daniel Bardenstein, and I presented a webinar describing the role of AI BOMs. Since then, the Linux Foundation SPDX and OWASP CycloneDX projects have started working on AI BOMs (otherwise known as AI profile SBOMs).

Securing the LLM stack is essential not only for protecting data and preserving user trust but also for ensuring the operational integrity, reliability, and ethical use of these powerful AI models. As LLMs become increasingly integrated into various aspects of society and industry, their security becomes paramount to prevent potential negative impacts on individuals, organizations, and society at large.

Sign up for Cisco U. | Join the Cisco Learning Network.

## Follow Cisco Learning & Certifications

Twitter | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.

\n\t\tShare\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n
Share:<\/div>\n
\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n

“}]]\u00a0\u00a0Learn how to secure the LLM stack, which is essential to protecting data and preserving user trust, as well as ensuring the operational integrity, reliability, and ethical use of these powerful AI models.\u00a0\u00a0Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>","protected":false},"excerpt":{"rendered":"

<\/p>\n

A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post<\/a>. In this blog post, I will\u2026 Read more on Cisco Blogs<\/a><\/p>\n

\u200b[[{“value”:”<\/p>\n

A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post<\/a>. In this blog post, I will continue the discussion on the critical importance of learning how to secure AI systems, with a special focus on current LLM implementations and the \u201cLLM stack.\u201d<\/p>\n

I also recently published two books. The first book is titled \u201cThe AI Revolution in Networking, Cybersecurity, and Emerging Technologies\u201d<\/a> where my co-authors and I cover the way AI is already revolutionizing networking, cybersecurity, and emerging technologies. The second book, \u201cBeyond the Algorithm: AI, Security, Privacy, and Ethics,\u201d<\/a> co-authored with Dr. Petar Radanliev of Oxford University<\/a>, presents an in-depth exploration of critical subjects including red teaming AI models, monitoring AI deployments, AI supply chain security, and the application of privacy-enhancing methodologies such as federated learning and homomorphic encryption. Additionally, it discusses strategies for identifying and mitigating bias within AI systems.<\/p>\n

For now, let\u2019s explore some of the key factors in securing AI implementations and the LLM Stack.<\/p>\n

What is the LLM Stack?<\/strong><\/h2>\n

The \u201cLLM stack\u201d generally refers to a stack of technologies or components centered around Large Language Models (LLMs). This \u201cstack\u201d can include a wide range of technologies and methodologies aimed at leveraging the capabilities of LLMs (e.g., vector databases, embedding models, APIs, plugins, orchestration libraries like LangChain, guardrail tools, etc.).<\/p>\n

Many organizations are trying to implement Retrieval-Augmented Generation (RAG)<\/a> nowadays. This is because RAG significantly enhances the accuracy of LLMs by combining the generative capabilities of these models with the retrieval of relevant information from a database or knowledge base. I introduced RAG in this article<\/a>, but in short, RAG works by first querying a database with a question or prompt to retrieve relevant information. This information is then fed into an LLM, which generates a response based on both the input prompt and the retrieved documents. The result is a more accurate, informed, and contextually relevant output than what could be achieved by the LLM alone.<\/p>\n

Let\u2019s go over the typical \u201cLLM stack\u201d components that make RAG and other applications work. The following figure illustrates the LLM stack.<\/p>\n

Vectorizing Data and Security<\/strong><\/h2>\n

Vectorizing data and creating embeddings are crucial steps in preparing your dataset for effective use with RAG and underlying tools. Vector embeddings, also known as vectorization, involve transforming words and different types of data into numerical values, where each piece of data is depicted as a vector within a high-dimensional space. \u00a0OpenAI offers different embedding models<\/a> that can be used via their API. \u00a0You can also use open source embedding models from Hugging Face<\/a>. The following is an example of how the text \u201cExample from Omar for this blog<\/em>\u201d was converted into \u201cnumbers\u201d (embeddings<\/em>) using the text-embedding-3-small<\/a> model from OpenAI.<\/p>\n

“object”: “list”,
\n “data”: [
\n \u00a0\u00a0 {
\n \u00a0\u00a0\u00a0\u00a0 “object”: “embedding”,
\n \u00a0\u00a0\u00a0\u00a0 “index”: 0,
\n \u00a0\u00a0\u00a0\u00a0 “embedding”: [
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.051343333,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.004879803,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.06099363,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.0071908776,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.020674748,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.00012919278,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.014209986,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.0034705158,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.005566879,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.02899774,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.03065297,
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.034541197,
\n<output omitted for brevity>
\n \u00a0\u00a0\u00a0\u00a0 ]
\n \u00a0\u00a0 }
\n ],
\n “model”: “text-embedding-3-small”,
\n “usage”: {
\n \u00a0\u00a0 “prompt_tokens”: 6,
\n \u00a0\u00a0 “total_tokens”: 6
\n }
\n}<\/p>\n

The first step (even before you start creating embeddings) is data collection and ingestion. Gather and ingest the raw data from different sources (e.g., databases, PDFs, JSON, log files and other information from Splunk, etc.) into a centralized data storage system called a vector database<\/a>.<\/p>\n

Note<\/strong>: Depending on the type of data you will need to clean and normalize the data to remove noise, such as irrelevant information and duplicates.<\/p>\n

Ensuring the security of the embedding creation process involves a multi-faceted approach that spans from the selection of embedding models to the handling and storage of the generated embeddings. Let\u2019s start discussing some security considerations in the embedding creation process.<\/p>\n

Use well-known, commercial or open-source embedding models that have been thoroughly vetted by the community. Opt for models that are widely used and have a strong community support. Like any software, embedding models and their dependencies can have vulnerabilities that are discovered over time. Some embedding models could be manipulated by threat actors. This is why supply chain security is so important.<\/p>\n

You should also validate and sanitize input data. The data used to create embeddings may contain sensitive or personal information that needs to be protected to comply with data protection regulations (e.g., GDPR, CCPA). Apply data anonymization or pseudonymization techniques where possible. Ensure that data processing is performed in a secure environment, using encryption for data at rest and in transit.<\/p>\n

Unauthorized access to embedding models and the data they process can lead to data exposure and other security issues. Use strong authentication and access control mechanisms to restrict access to embedding models and data.<\/p>\n

Indexing and Storage of Embeddings<\/strong><\/h2>\n

Once the data is vectorized, the next step is to store these vectors in a searchable database or a vector database such as ChromaDB, pgvector, MongoDB Atlas, FAISS (Facebook AI Similarity Search), or Pinecone. These systems allow for efficient retrieval of similar vectors.<\/p>\n

Did you know that some vector databases do not support encryption? Make sure that the solution you use supports encryption.<\/p>\n

Orchestration Libraries and Frameworks like LangChain<\/strong><\/h2>\n

In the diagram I used earlier, you can see a reference to libraries like LangChain and LlamaIndex. LangChain is a framework for developing applications powered by LLMs. It enables context-aware and reasoning applications, providing libraries, templates, and a developer platform for building, testing, and deploying applications. LangChain consists of several parts, including libraries, templates, LangServe for deploying chains as a REST API, and LangSmith for debugging and monitoring chains. It also offers a LangChain Expression Language (LCEL)<\/a> for composing chains and provides standard interfaces and integrations for modules like model I\/O, retrieval, and AI agents. I wrote an article<\/a> about numerous LangChain resources and related tools that are also available at one of my GitHub repositories<\/a>.<\/p>\n

Many organizations use LangChain supports many use cases, such as personal assistants, question answering, chatbots, querying tabular data, and more. It also provides example code for building applications with an emphasis on more applied and end-to-end examples.<\/p>\n

Langchain can interact with external APIs to fetch or send data in real-time to and from other applications. This capability allows LLMs to access up-to-date information, perform actions like booking appointments, or retrieve specific data from web services. The framework can dynamically construct API requests based on the context of a conversation or query, thereby extending the functionality of LLMs beyond static knowledge bases. When integrating with external APIs, it\u2019s crucial to use secure authentication methods and encrypt data in transit using protocols like HTTPS. API keys and tokens should be stored securely and never hard-coded into the application code.<\/p>\n

AI Front-end Applications<\/strong><\/h2>\n

AI front-end applications refer to the user-facing part of AI systems where interaction between the machine and humans takes place. These applications leverage AI technologies to provide intelligent, responsive, and personalized experiences to users. The front end for chatbots, virtual assistants, personalized recommendation systems, and many other AI-driven applications can be easily created with libraries like Streamlit<\/a>, Vercel<\/a>, Streamship<\/a>, and others.<\/p>\n

The implementation of traditional web application security practices is essential to protect against a wide range of vulnerabilities, such as broken access control<\/a>, cryptographic failures<\/a>, injection vulnerabilities<\/a> like cross-site scripting (XSS)<\/a>, server-side request forgery (SSRF),<\/a> and many other vulnerabilities.<\/p>\n

LLM Caching<\/strong><\/h2>\n

LLM caching is a technique used to improve the efficiency and performance of LLM interactions. You can use implementations like SQLite Cache, Redis, and GPTCache. LangChain provides examples<\/a> of how these caching methods could be leveraged.<\/p>\n

The basic idea behind LLM caching is to store previously computed results of the model\u2019s outputs so that if the same or similar inputs are encountered again, the model can quickly retrieve the stored output instead of recomputing it from scratch. This can significantly reduce the computational overhead, making the model more responsive and cost-effective, especially for frequently repeated queries or common patterns of interaction.<\/p>\n

Caching strategies must be carefully designed to ensure they do not compromise the model\u2019s ability to generate relevant and updated responses, especially in scenarios where the input context or the external world knowledge changes over time. Moreover, effective cache invalidation strategies are crucial to prevent outdated or irrelevant information from being served, which can be challenging given the dynamic nature of knowledge and language.<\/p>\n

LLM Monitoring and Policy Enforcement Tools<\/strong><\/h2>\n

Monitoring is one of the most important elements of LLM stack security. There are many open source and commercial LLM monitoring tools such as MLFlow.<\/a> \u00a0There are also several tools that can help protect against prompt injection attacks, such as Rebuff. Many of these work in isolation. Cisco recently announced Motific.ai<\/a>.<\/p>\n

Motific enhances your ability to implement both predefined and tailored controls over Personally Identifiable Information (PII), toxicity, hallucination, topics, token limits, prompt injection, and data poisoning. It provides comprehensive visibility into operational metrics, policy flags, and audit trails, ensuring that you have a clear oversight of your system\u2019s performance and security. Additionally, by analyzing user prompts, Motific enables you to grasp user intents more accurately, optimizing the utilization of foundation models for improved outcomes.<\/p>\n

Cisco also provides an LLM security protection suite inside Panoptica<\/a>. \u00a0Panoptica is Cisco\u2019s cloud application security solution for code to cloud. It provides seamless scalability across clusters and multi-cloud environments.<\/p>\n

AI Bill of Materials and Supply Chain Security<\/strong><\/h2>\n

The need for transparency, and traceability in AI development has never been more crucial. Supply chain security is top-of-mind for many individuals in the industry. This is why AI Bill of Materials (AI BOMs) are so important. But what exactly are AI BOMs, and why are they so important? How do Software Bills of Materials (SBOMs) differ from AI Bills of Materials (AI BOMs)? SBOMs serve a crucial role in the software development industry by providing a detailed inventory of all components within a software application. This documentation is essential for understanding the software\u2019s composition, including its libraries, packages, and any third-party code. On the other hand, AI BOMs cater specifically to artificial intelligence implementations. They offer comprehensive documentation of an AI system\u2019s many elements, including model specifications, model architecture, intended applications, training datasets, and additional pertinent information. This distinction highlights the specialized nature of AI BOMs in addressing the unique complexities and requirements of AI systems, compared to the broader scope of SBOMs in software documentation.<\/p>\n

I published a paper<\/a> with Oxford University, titled \u201cToward Trustworthy AI: An Analysis of Artificial Intelligence (AI) Bill of Materials (AI BOMs)\u201d, that explains the concept of AI BOMs. Dr. Allan Friedman (CISA), Daniel Bardenstein, and I presented in a webinar<\/a> describing the role of AI BOMs. Since then, the Linux Foundation SPDX<\/a> and OWASP CycloneDX<\/a> have started working on AI BOMs (otherwise known as AI profile SBOMs).<\/p>\n

Securing the LLM stack is essential not only for protecting data and preserving user trust but also for ensuring the operational integrity, reliability, and ethical use of these powerful AI models. As LLMs become increasingly integrated into various aspects of society and industry, their security becomes paramount to prevent potential negative impacts on individuals, organizations, and society at large.<\/p>\n

Sign up for\u00a0Cisco U.<\/a>\u00a0| Join the\u202fCisco Learning Network<\/a>.<\/p>\n

Follow Cisco Learning & Certifications<\/strong><\/h2>\n

Twitter<\/a>\u202f|\u202fFacebook<\/a>\u202f|\u202fLinkedIn<\/a>\u202f|\u202fInstagram<\/a><\/strong>\u202f|\u202fYouTube<\/a><\/strong><\/h3>\n

Use\u00a0#CiscoU<\/strong>\u00a0and\u202f#CiscoCert<\/strong>\u202fto join the conversation.<\/p>\n

\n\t\tShare<\/p>\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n
Share:<\/div>\n
\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n

“}]]\u00a0\u00a0Learn how to secure the LLM stack, which is essential to protecting data and preserving user trust, as well as ensuring the operational integrity, reliability, and ethical use of these powerful AI models.\u00a0\u00a0Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>\n

<\/p>\n","protected":false},"author":0,"featured_media":2846,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-2845","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cisco-learning"],"yoast_head":"\nSecuring the LLM Stack on March 26, 2024 at 8:38 pm - JHC<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Securing the LLM Stack on March 26, 2024 at 8:38 pm\" \/>\n<meta property=\"og:description\" content=\"A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post. In this blog post, I will\u2026 Read more on Cisco Blogs \u200b[[{"value":" A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post. In this blog post, I will continue the discussion on the critical importance of learning how to secure AI systems, with a special focus on current LLM implementations and the \u201cLLM stack.\u201d I also recently published two books. The first book is titled \u201cThe AI Revolution in Networking, Cybersecurity, and Emerging Technologies\u201d where my co-authors and I cover the way AI is already revolutionizing networking, cybersecurity, and emerging technologies. The second book, \u201cBeyond the Algorithm: AI, Security, Privacy, and Ethics,\u201d co-authored with Dr. Petar Radanliev of Oxford University, presents an in-depth exploration of critical subjects including red teaming AI models, monitoring AI deployments, AI supply chain security, and the application of privacy-enhancing methodologies such as federated learning and homomorphic encryption. Additionally, it discusses strategies for identifying and mitigating bias within AI systems. For now, let\u2019s explore some of the key factors in securing AI implementations and the LLM Stack. What is the LLM Stack? The \u201cLLM stack\u201d generally refers to a stack of technologies or components centered around Large Language Models (LLMs). This \u201cstack\u201d can include a wide range of technologies and methodologies aimed at leveraging the capabilities of LLMs (e.g., vector databases, embedding models, APIs, plugins, orchestration libraries like LangChain, guardrail tools, etc.). Many organizations are trying to implement Retrieval-Augmented Generation (RAG) nowadays. This is because RAG significantly enhances the accuracy of LLMs by combining the generative capabilities of these models with the retrieval of relevant information from a database or knowledge base. I introduced RAG in this article, but in short, RAG works by first querying a database with a question or prompt to retrieve relevant information. This information is then fed into an LLM, which generates a response based on both the input prompt and the retrieved documents. The result is a more accurate, informed, and contextually relevant output than what could be achieved by the LLM alone. 
Let\u2019s go over the typical \u201cLLM stack\u201d components that make RAG and other applications work. The following figure illustrates the LLM stack. Vectorizing Data and Security Vectorizing data and creating embeddings are crucial steps in preparing your dataset for effective use with RAG and underlying tools. Vector embeddings, also known as vectorization, involve transforming words and different types of data into numerical values, where each piece of data is depicted as a vector within a high-dimensional space. \u00a0OpenAI offers different embedding models that can be used via their API. \u00a0You can also use open source embedding models from Hugging Face. The following is an example of how the text \u201cExample from Omar for this blog\u201d was converted into \u201cnumbers\u201d (embeddings) using the text-embedding-3-small model from OpenAI. "object": "list", "data": [ \u00a0\u00a0 { \u00a0\u00a0\u00a0\u00a0 "object": "embedding", \u00a0\u00a0\u00a0\u00a0 "index": 0, \u00a0\u00a0\u00a0\u00a0 "embedding": [ \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.051343333, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.004879803, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.06099363, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.0071908776, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.020674748, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.00012919278, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.014209986, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.0034705158, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.005566879, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.02899774, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0.03065297, \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -0.034541197, <output omitted for brevity> \u00a0\u00a0\u00a0\u00a0 ] \u00a0\u00a0 } ], "model": "text-embedding-3-small", "usage": { \u00a0\u00a0 "prompt_tokens": 6, \u00a0\u00a0 "total_tokens": 6 } } The first step (even before you start creating embeddings) is data collection and ingestion. Gather and ingest the raw data from different sources (e.g., databases, PDFs, JSON, log files and other information from Splunk, etc.) into a centralized data storage system called a vector database. Note: Depending on the type of data you will need to clean and normalize the data to remove noise, such as irrelevant information and duplicates. Ensuring the security of the embedding creation process involves a multi-faceted approach that spans from the selection of embedding models to the handling and storage of the generated embeddings. Let\u2019s start discussing some security considerations in the embedding creation process. Use well-known, commercial or open-source embedding models that have been thoroughly vetted by the community. Opt for models that are widely used and have a strong community support. Like any software, embedding models and their dependencies can have vulnerabilities that are discovered over time. Some embedding models could be manipulated by threat actors. This is why supply chain security is so important. You should also validate and sanitize input data. The data used to create embeddings may contain sensitive or personal information that needs to be protected to comply with data protection regulations (e.g., GDPR, CCPA). Apply data anonymization or pseudonymization techniques where possible. Ensure that data processing is performed in a secure environment, using encryption for data at rest and in transit. Unauthorized access to embedding models and the data they process can lead to data exposure and other security issues. 
Use strong authentication and access control mechanisms to restrict access to embedding models and data. Indexing and Storage of Embeddings Once the data is vectorized, the next step is to store these vectors in a searchable database or a vector database such as ChromaDB, pgvector, MongoDB Atlas, FAISS (Facebook AI Similarity Search), or Pinecone. These systems allow for efficient retrieval of similar vectors. Did you know that some vector databases do not support encryption? Make sure that the solution you use supports encryption. Orchestration Libraries and Frameworks like LangChain In the diagram I used earlier, you can see a reference to libraries like LangChain and LlamaIndex. LangChain is a framework for developing applications powered by LLMs. It enables context-aware and reasoning applications, providing libraries, templates, and a developer platform for building, testing, and deploying applications. LangChain consists of several parts, including libraries, templates, LangServe for deploying chains as a REST API, and LangSmith for debugging and monitoring chains. It also offers a LangChain Expression Language (LCEL) for composing chains and provides standard interfaces and integrations for modules like model I\/O, retrieval, and AI agents. I wrote an article about numerous LangChain resources and related tools that are also available at one of my GitHub repositories. Many organizations use LangChain supports many use cases, such as personal assistants, question answering, chatbots, querying tabular data, and more. It also provides example code for building applications with an emphasis on more applied and end-to-end examples. Langchain can interact with external APIs to fetch or send data in real-time to and from other applications. This capability allows LLMs to access up-to-date information, perform actions like booking appointments, or retrieve specific data from web services. The framework can dynamically construct API requests based on the context of a conversation or query, thereby extending the functionality of LLMs beyond static knowledge bases. When integrating with external APIs, it\u2019s crucial to use secure authentication methods and encrypt data in transit using protocols like HTTPS. API keys and tokens should be stored securely and never hard-coded into the application code. AI Front-end Applications AI front-end applications refer to the user-facing part of AI systems where interaction between the machine and humans takes place. These applications leverage AI technologies to provide intelligent, responsive, and personalized experiences to users. The front end for chatbots, virtual assistants, personalized recommendation systems, and many other AI-driven applications can be easily created with libraries like Streamlit, Vercel, Streamship, and others. The implementation of traditional web application security practices is essential to protect against a wide range of vulnerabilities, such as broken access control, cryptographic failures, injection vulnerabilities like cross-site scripting (XSS), server-side request forgery (SSRF), and many other vulnerabilities. LLM Caching LLM caching is a technique used to improve the efficiency and performance of LLM interactions. You can use implementations like SQLite Cache, Redis, and GPTCache. LangChain provides examples of how these caching methods could be leveraged. 
The basic idea behind LLM caching is to store previously computed results of the model\u2019s outputs so that if the same or similar inputs are encountered again, the model can quickly retrieve the stored output instead of recomputing it from scratch. This can significantly reduce the computational overhead, making the model more responsive and cost-effective, especially for frequently repeated queries or common patterns of interaction. Caching strategies must be carefully designed to ensure they do not compromise the model\u2019s ability to generate relevant and updated responses, especially in scenarios where the input context or the external world knowledge changes over time. Moreover, effective cache invalidation strategies are crucial to prevent outdated or irrelevant information from being served, which can be challenging given the dynamic nature of knowledge and language. LLM Monitoring and Policy Enforcement Tools Monitoring is one of the most important elements of LLM stack security. There are many open source and commercial LLM monitoring tools such as MLFlow. \u00a0There are also several tools that can help protect against prompt injection attacks, such as Rebuff. Many of these work in isolation. Cisco recently announced Motific.ai. Motific enhances your ability to implement both predefined and tailored controls over Personally Identifiable Information (PII), toxicity, hallucination, topics, token limits, prompt injection, and data poisoning. It provides comprehensive visibility into operational metrics, policy flags, and audit trails, ensuring that you have a clear oversight of your system\u2019s performance and security. Additionally, by analyzing user prompts, Motific enables you to grasp user intents more accurately, optimizing the utilization of foundation models for improved outcomes. Cisco also provides an LLM security protection suite inside Panoptica. \u00a0Panoptica is Cisco\u2019s cloud application security solution for code to cloud. It provides seamless scalability across clusters and multi-cloud environments. AI Bill of Materials and Supply Chain Security The need for transparency, and traceability in AI development has never been more crucial. Supply chain security is top-of-mind for many individuals in the industry. This is why AI Bill of Materials (AI BOMs) are so important. But what exactly are AI BOMs, and why are they so important? How do Software Bills of Materials (SBOMs) differ from AI Bills of Materials (AI BOMs)? SBOMs serve a crucial role in the software development industry by providing a detailed inventory of all components within a software application. This documentation is essential for understanding the software\u2019s composition, including its libraries, packages, and any third-party code. On the other hand, AI BOMs cater specifically to artificial intelligence implementations. They offer comprehensive documentation of an AI system\u2019s many elements, including model specifications, model architecture, intended applications, training datasets, and additional pertinent information. This distinction highlights the specialized nature of AI BOMs in addressing the unique complexities and requirements of AI systems, compared to the broader scope of SBOMs in software documentation. I published a paper with Oxford University, titled \u201cToward Trustworthy AI: An Analysis of Artificial Intelligence (AI) Bill of Materials (AI BOMs)\u201d, that explains the concept of AI BOMs. Dr. 
Allan Friedman (CISA), Daniel Bardenstein, and I presented in a webinar describing the role of AI BOMs. Since then, the Linux Foundation SPDX and OWASP CycloneDX have started working on AI BOMs (otherwise known as AI profile SBOMs). Securing the LLM stack is essential not only for protecting data and preserving user trust but also for ensuring the operational integrity, reliability, and ethical use of these powerful AI models. As LLMs become increasingly integrated into various aspects of society and industry, their security becomes paramount to prevent potential negative impacts on individuals, organizations, and society at large. Sign up for\u00a0Cisco U.\u00a0| Join the\u202fCisco Learning Network. Follow Cisco Learning & Certifications Twitter\u202f|\u202fFacebook\u202f|\u202fLinkedIn\u202f|\u202fInstagram\u202f|\u202fYouTube Use\u00a0#CiscoU\u00a0and\u202f#CiscoCert\u202fto join the conversation. Share Share: "}]]\u00a0\u00a0Learn how to secure the LLM stack, which is essential to protecting data and preserving user trust, as well as ensuring the operational integrity, reliability, and ethical use of these powerful AI models.\u00a0\u00a0Read More\u00a0Cisco Blogs\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\" \/>\n<meta property=\"og:site_name\" content=\"JHC\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-27T02:52:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16632169-Ii34T5.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"1\" \/>\n\t<meta property=\"og:image:height\" content=\"1\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\"},\"author\":{\"name\":\"\",\"@id\":\"\"},\"headline\":\"Securing the LLM Stack on March 26, 2024 at 8:38 pm\",\"datePublished\":\"2024-03-27T02:52:00+00:00\",\"dateModified\":\"2024-03-27T02:52:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\"},\"wordCount\":1947,\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16632169-Ii34T5.gif\",\"articleSection\":[\"Cisco: Learning\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\",\"name\":\"Securing the LLM Stack on March 26, 2024 at 8:38 pm - JHC\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16632169-Ii34T5.gif\",\"datePublished\":\"2024-03-27T02:52:00+00:00\",\"dateModified\":\"2024-03-27T02:52:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#primaryimage\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16632169-Ii34T5.gif\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16632169-Ii34T5.gif\",\"width\":1,\"height\":1},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/jacksonholdingcompany.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Securing the LLM Stack on March 26, 2024 at 8:38 pm\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"name\":\"JHC\",\"description\":\"Your Business Is Our 
Business\",\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/jacksonholdingcompany.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\",\"name\":\"JHC\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"width\":452,\"height\":149,\"caption\":\"JHC\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Securing the LLM Stack on March 26, 2024 at 8:38 pm - JHC","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/jacksonholdingcompany.com\/securing-the-llm-stack-on-march-26-2024-at-838-pm\/","og_locale":"en_US","og_type":"article","og_title":"Securing the LLM Stack on March 26, 2024 at 8:38 pm","og_description":"A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post. In this blog post, I will\u2026 Read more on Cisco Blogs \u200b[[{\"value\":\" A few months ago, I wrote about the security of AI models, fine-tuning techniques, and the use of Retrieval-Augmented Generation (RAG) in a Cisco Security Blog post. In this blog post, I will continue the discussion on the critical importance of learning how to secure AI systems, with a special focus on current LLM implementations and the \u201cLLM stack.\u201d I also recently published two books. The first book is titled \u201cThe AI Revolution in Networking, Cybersecurity, and Emerging Technologies\u201d where my co-authors and I cover the way AI is already revolutionizing networking, cybersecurity, and emerging technologies. The second book, \u201cBeyond the Algorithm: AI, Security, Privacy, and Ethics,\u201d co-authored with Dr. Petar Radanliev of Oxford University, presents an in-depth exploration of critical subjects including red teaming AI models, monitoring AI deployments, AI supply chain security, and the application of privacy-enhancing methodologies such as federated learning and homomorphic encryption. Additionally, it discusses strategies for identifying and mitigating bias within AI systems. For now, let\u2019s explore some of the key factors in securing AI implementations and the LLM Stack. What is the LLM Stack? The \u201cLLM stack\u201d generally refers to a stack of technologies or components centered around Large Language Models (LLMs). This \u201cstack\u201d can include a wide range of technologies and methodologies aimed at leveraging the capabilities of LLMs (e.g., vector databases, embedding models, APIs, plugins, orchestration libraries like LangChain, guardrail tools, etc.). Many organizations are trying to implement Retrieval-Augmented Generation (RAG) nowadays. 
Let's go over the typical "LLM stack" components that make RAG and other applications work. The following figure illustrates the LLM stack.

Vectorizing Data and Security

Vectorizing data and creating embeddings are crucial steps in preparing your dataset for effective use with RAG and the underlying tools. Vector embeddings, also known as vectorization, involve transforming words and other types of data into numerical values, where each piece of data is depicted as a vector within a high-dimensional space. OpenAI offers different embedding models that can be used via its API. You can also use open-source embedding models from Hugging Face. The following is an example of how the text "Example from Omar for this blog" was converted into "numbers" (embeddings) using the text-embedding-3-small model from OpenAI:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        0.051343333,
        0.004879803,
        -0.06099363,
        -0.0071908776,
        0.020674748,
        -0.00012919278,
        0.014209986,
        0.0034705158,
        -0.005566879,
        0.02899774,
        0.03065297,
        -0.034541197,
        <output omitted for brevity>
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 6,
    "total_tokens": 6
  }
}

The first step (even before you start creating embeddings) is data collection and ingestion. Gather and ingest the raw data from different sources (e.g., databases, PDFs, JSON, log files and other information from Splunk, etc.) into a centralized data storage system called a vector database.

Note: Depending on the type of data, you will need to clean and normalize it to remove noise, such as irrelevant information and duplicates.

Ensuring the security of the embedding creation process involves a multi-faceted approach that spans from the selection of embedding models to the handling and storage of the generated embeddings. Let's start discussing some security considerations in the embedding creation process. Use well-known, commercial or open-source embedding models that have been thoroughly vetted by the community, and opt for models that are widely used and have strong community support. Like any software, embedding models and their dependencies can have vulnerabilities that are discovered over time, and some embedding models could be manipulated by threat actors. This is why supply chain security is so important. A brief sketch of creating embeddings with a widely used open-source model is shown below.
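As an illustration of the open-source option mentioned above, here is a minimal sketch using the sentence-transformers library from Hugging Face. The model name and the simple input-length check are illustrative assumptions, not part of the original post.

# Minimal embedding sketch with an open-source Hugging Face model via sentence-transformers.
# Assumptions: the sentence-transformers package is installed; "all-MiniLM-L6-v2" is an
# illustrative, widely used model choice. Pin model and package versions in your dependency
# management as part of supply chain hygiene.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def embed(texts, max_chars=2000):
    # Basic input validation: drop empty or oversized inputs before embedding.
    cleaned = [t.strip() for t in texts if t and len(t) <= max_chars]
    return model.encode(cleaned)

vectors = embed(["Example from Omar for this blog"])
print(len(vectors[0]))  # dimensionality of the embedding vector (384 for this model)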
You should also validate and sanitize input data. The data used to create embeddings may contain sensitive or personal information that needs to be protected to comply with data protection regulations (e.g., GDPR, CCPA). Apply data anonymization or pseudonymization techniques where possible. Ensure that data processing is performed in a secure environment, using encryption for data at rest and in transit. Unauthorized access to embedding models and the data they process can lead to data exposure and other security issues, so use strong authentication and access control mechanisms to restrict access to embedding models and data.

Indexing and Storage of Embeddings

Once the data is vectorized, the next step is to store these vectors in a searchable database or a vector database such as ChromaDB, pgvector, MongoDB Atlas, FAISS (Facebook AI Similarity Search), or Pinecone. These systems allow for efficient retrieval of similar vectors. Did you know that some vector databases do not support encryption? Make sure that the solution you use supports encryption.

Orchestration Libraries and Frameworks like LangChain

In the diagram I used earlier, you can see a reference to libraries like LangChain and LlamaIndex. LangChain is a framework for developing applications powered by LLMs. It enables context-aware and reasoning applications, providing libraries, templates, and a developer platform for building, testing, and deploying applications. LangChain consists of several parts, including libraries, templates, LangServe for deploying chains as a REST API, and LangSmith for debugging and monitoring chains. It also offers the LangChain Expression Language (LCEL) for composing chains and provides standard interfaces and integrations for modules like model I/O, retrieval, and AI agents. I wrote an article about numerous LangChain resources and related tools that are also available in one of my GitHub repositories.

LangChain supports many use cases, such as personal assistants, question answering, chatbots, querying tabular data, and more. It also provides example code for building applications, with an emphasis on applied, end-to-end examples. LangChain can interact with external APIs to fetch or send data in real time to and from other applications. This capability allows LLMs to access up-to-date information, perform actions like booking appointments, or retrieve specific data from web services. The framework can dynamically construct API requests based on the context of a conversation or query, thereby extending the functionality of LLMs beyond static knowledge bases. When integrating with external APIs, it's crucial to use secure authentication methods and to encrypt data in transit using protocols like HTTPS. API keys and tokens should be stored securely and never hard-coded into the application code.

AI Front-end Applications

AI front-end applications refer to the user-facing part of AI systems where the interaction between the machine and humans takes place. These applications leverage AI technologies to provide intelligent, responsive, and personalized experiences to users. The front end for chatbots, virtual assistants, personalized recommendation systems, and many other AI-driven applications can be easily created with libraries like Streamlit, Vercel, Streamship, and others. A minimal front-end sketch, with the API key read from the environment rather than hard-coded, is shown below.
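As a simple illustration of such a front end, here is a minimal Streamlit sketch; it is not from the original post. It assumes Streamlit and the OpenAI Python SDK (v1.x) are installed, the model name is illustrative, and the key is read from the OPENAI_API_KEY environment variable as recommended above.

# Minimal AI front-end sketch with Streamlit (run with: streamlit run app.py).
# Assumptions: streamlit and openai (v1.x) installed; model name is illustrative.
import os
import streamlit as st
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key from the environment, never hard-coded

st.title("Example assistant")
question = st.text_input("Ask a question")

if st.button("Submit") and question:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    st.write(completion.choices[0].message.content)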
The implementation of traditional web application security practices is essential to protect against a wide range of vulnerabilities, such as broken access control, cryptographic failures, injection vulnerabilities like cross-site scripting (XSS), server-side request forgery (SSRF), and many others.

LLM Caching

LLM caching is a technique used to improve the efficiency and performance of LLM interactions. You can use implementations like SQLite Cache, Redis, and GPTCache, and LangChain provides examples of how these caching methods can be leveraged. The basic idea behind LLM caching is to store previously computed results of the model's outputs so that if the same or similar inputs are encountered again, the model can quickly retrieve the stored output instead of recomputing it from scratch. This can significantly reduce the computational overhead, making the model more responsive and cost-effective, especially for frequently repeated queries or common patterns of interaction. Caching strategies must be carefully designed to ensure they do not compromise the model's ability to generate relevant and updated responses, especially in scenarios where the input context or external world knowledge changes over time. Moreover, effective cache invalidation strategies are crucial to prevent outdated or irrelevant information from being served, which can be challenging given the dynamic nature of knowledge and language. The sketch below illustrates the basic idea of an exact-match cache.
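Here is a minimal, illustrative sketch of exact-match LLM caching backed by SQLite from the Python standard library. The call_llm function is a placeholder for your actual model call, and a real deployment would also need the cache invalidation and expiry discussed above.

# Minimal exact-match LLM caching sketch backed by SQLite.
# Assumptions: call_llm is a placeholder for the real model call; no expiry or invalidation shown.
import hashlib
import sqlite3

conn = sqlite3.connect("llm_cache.db")
conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)")

def call_llm(prompt: str) -> str:
    # Placeholder for the actual LLM call (OpenAI API, local model, etc.).
    return f"response to: {prompt}"

def cached_llm(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    row = conn.execute("SELECT response FROM cache WHERE key = ?", (key,)).fetchone()
    if row:
        return row[0]                      # cache hit: return the stored output
    response = call_llm(prompt)            # cache miss: compute, store, and return
    conn.execute("INSERT OR REPLACE INTO cache (key, response) VALUES (?, ?)", (key, response))
    conn.commit()
    return response

print(cached_llm("What is the LLM stack?"))
print(cached_llm("What is the LLM stack?"))  # second call is served from the cache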
LLM Monitoring and Policy Enforcement Tools

Monitoring is one of the most important elements of LLM stack security. There are many open-source and commercial LLM monitoring tools, such as MLflow. There are also several tools that can help protect against prompt injection attacks, such as Rebuff. Many of these work in isolation. Cisco recently announced Motific.ai. Motific enhances your ability to implement both predefined and tailored controls over Personally Identifiable Information (PII), toxicity, hallucination, topics, token limits, prompt injection, and data poisoning. It provides comprehensive visibility into operational metrics, policy flags, and audit trails, ensuring that you have clear oversight of your system's performance and security. Additionally, by analyzing user prompts, Motific enables you to grasp user intents more accurately, optimizing the utilization of foundation models for improved outcomes. Cisco also provides an LLM security protection suite inside Panoptica. Panoptica is Cisco's cloud application security solution for code to cloud, and it provides seamless scalability across clusters and multi-cloud environments.

AI Bill of Materials and Supply Chain Security

The need for transparency and traceability in AI development has never been more crucial. Supply chain security is top of mind for many individuals in the industry. This is why AI Bills of Materials (AI BOMs) are so important. But what exactly are AI BOMs, and why do they matter? How do Software Bills of Materials (SBOMs) differ from AI Bills of Materials (AI BOMs)? SBOMs serve a crucial role in the software development industry by providing a detailed inventory of all components within a software application. This documentation is essential for understanding the software's composition, including its libraries, packages, and any third-party code. AI BOMs, on the other hand, cater specifically to artificial intelligence implementations. They offer comprehensive documentation of an AI system's many elements, including model specifications, model architecture, intended applications, training datasets, and additional pertinent information. This distinction highlights the specialized nature of AI BOMs in addressing the unique complexities and requirements of AI systems, compared to the broader scope of SBOMs in software documentation.

I published a paper with Oxford University, titled "Toward Trustworthy AI: An Analysis of Artificial Intelligence (AI) Bill of Materials (AI BOMs)," that explains the concept of AI BOMs. Dr. Allan Friedman (CISA), Daniel Bardenstein, and I presented a webinar describing the role of AI BOMs. Since then, the Linux Foundation SPDX and OWASP CycloneDX projects have started working on AI BOMs (otherwise known as AI profile SBOMs).

Securing the LLM stack is essential not only for protecting data and preserving user trust but also for ensuring the operational integrity, reliability, and ethical use of these powerful AI models. As LLMs become increasingly integrated into various aspects of society and industry, their security becomes paramount to prevent potential negative impacts on individuals, organizations, and society at large.

Sign up for Cisco U. | Join the Cisco Learning Network.

Follow Cisco Learning & Certifications
Twitter | Facebook | LinkedIn | Instagram | YouTube

Use #CiscoU and #CiscoCert to join the conversation.