Using the Power of Artificial Intelligence to Augment Network Automation<\/h2>\n

John Capobianco, March 6, 2024<\/p>\n


Talking to your Network<\/h2>\n

Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In 2015, after attending Cisco Live in San Diego, I gained a new appreciation of the realm of the possible. Leveraging tools like Ansible<\/a> and Cisco pyATS<\/a>, I began to streamline processes and enhance efficiencies within network operations, setting a foundation for what would become a career-long pursuit of innovation. This initial foray into automation was not just about simplifying repetitive tasks; it was about envisioning a future where networks could be more resilient, adaptable, and intelligent. As I navigated through the complexities of network systems, these technologies became indispensable allies, helping me to not only manage but also to anticipate the needs of increasingly sophisticated networks.<\/p>\n

In recent years, my exploration has taken a pivotal turn with the advent of generative AI, marking a new chapter in the story of network automation. The integration of artificial intelligence into network operations has opened up unprecedented possibilities, allowing for even greater levels of efficiency, predictive analysis, and decision-making capabilities. This blog, accompanying the CiscoU Tutorial, delves into the cutting-edge intersection of AI and network automation, highlighting my experiences with Docker, LangChain, Streamlit, and, of course, Cisco pyATS. It\u2019s a reflection on how the landscape of network engineering is being reshaped by AI, transforming not just how we manage networks, but how we envision their growth and potential in the digital age. Through this narrative, I aim to share insights and practical knowledge on harnessing the power of AI to augment the capabilities of network automation, offering a glimpse into the future of network operations.<\/p>\n

In the spirit of modern software deployment practices, the solution I architected is encapsulated within Docker, a platform that packages an application and all its dependencies in a virtual container that can run on any Linux server. This encapsulation ensures that it works seamlessly in different computing environments. The heart of this dockerized solution lies within three key files: the Dockerfile, the startup script, and the docker-compose.yml.<\/p>\n

The Dockerfile serves as the blueprint for building the application\u2019s Docker image. It starts with a base image, ubuntu:latest, ensuring that all the operations have a solid foundation. From there, it outlines a series of commands that prepare the environment:<\/p>\n

FROM ubuntu:latest<\/strong><\/p>\n

# Set the noninteractive frontend (useful for automated builds)
ARG DEBIAN_FRONTEND=noninteractive
# A series of RUN commands to install necessary packages
RUN apt-get update && apt-get install -y wget sudo …
# Python, pip, and essential tools are installed
RUN apt-get install python3 -y && apt-get install python3-pip -y …
# Specific Python packages are installed, including pyATS[full]
RUN pip install "pyats[full]"
# Other utilities like dos2unix for script compatibility adjustments
# (no sudo needed; Docker builds run as root)
RUN apt-get install dos2unix -y
# Installation of LangChain and related packages
RUN pip install -U langchain-openai langchain-community …
# Install Streamlit, the web framework
RUN pip install streamlit<\/p>\n

Each command is preceded by an echo statement that prints out the action being taken, which is incredibly helpful for debugging and understanding the build process as it happens.<\/p>\n

The startup.sh script is a simple yet crucial component that dictates what happens when the Docker container starts:<\/p>\n

cd streamlit_langchain_pyats
streamlit run chat_with_routing_table.py<\/p>\n

It navigates into the directory containing the Streamlit app and starts the app using streamlit run. This is the command that actually gets our app up and running within the container.<\/p>\n

Lastly, the docker-compose.yml file orchestrates the deployment of our Dockerized application. It defines the services, volumes, and networks to run our containerized application:<\/p>\n

version: '3'
services:
  streamlit_langchain_pyats:
    image: [Docker Hub image]
    container_name: streamlit_langchain_pyats
    restart: always
    build:
      context: ./
      dockerfile: ./Dockerfile
    ports:
      - "8501:8501"<\/p>\n

This docker-compose.yml file makes it incredibly easy to manage the application lifecycle, from starting and stopping to rebuilding the application. It binds the host\u2019s port 8501 to the container\u2019s port 8501, which is the default port for Streamlit applications.<\/p>\n

Together, these files create a robust framework that ensures the Streamlit application \u2014 enhanced with the AI capabilities of LangChain and the powerful testing features of Cisco pyATS \u2014 is containerized, making deployment and scaling consistent and efficient.<\/p>\n

The journey into the realm of automated testing begins with the creation of the testbed.yaml file. This YAML file is not just a configuration file; it\u2019s the cornerstone of our automated testing strategy. It contains all the essential information about the devices in our network: hostnames, IP addresses, device types, and credentials. But why is it so crucial? The testbed.yaml file serves as the single source of truth for the pyATS framework to understand the network it will be interacting with. It\u2019s the map that guides the automation tools to the right devices, ensuring that our scripts don\u2019t get lost in the vast sea of the network topology.<\/p>\n

Sample testbed.yaml<\/h2>\n


devices:
  cat8000v:
    alias: "Sandbox Router"
    type: "router"
    os: "iosxe"
    platform: Cat8000v
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.48
        port: 22
        arguments:
          connection_timeout: 360<\/p>\n

With our testbed defined, we then turn our attention to the _job file. This is the conductor of our automation orchestra, the control file that orchestrates the entire testing process. It loads the testbed and the Python test script into the pyATS framework, setting the stage for the execution of our automated tests. It tells pyATS not only what devices to test but also how to test them, and in what order. This level of control is indispensable for running complex test sequences across a range of network devices.<\/p>\n

Sample _job.py pyATS Job<\/h2>\n

import os
from genie.testbed import load

def main(runtime):
    # ----------------
    # Load the testbed
    # ----------------
    if not runtime.testbed:
        # If no testbed is provided, load the default one
        # from the default location
        testbedfile = os.path.join('testbed.yaml')
        testbed = load(testbedfile)
    else:
        # Use the one provided
        testbed = runtime.testbed

    # Find the location of the script in relation to the job file
    testscript = os.path.join(os.path.dirname(__file__), 'show_ip_route_langchain.py')

    # Run the script
    runtime.tasks.run(testscript=testscript, testbed=testbed)<\/p>\n

Then comes the pi\u00e8ce de r\u00e9sistance, the Python test script \u2014 let\u2019s call it capture_routing_table.py<\/em>. This script embodies the intelligence of our automated testing process. It\u2019s where we\u2019ve distilled our network expertise into a series of commands and parsers that interact with the Cisco IOS XE devices to retrieve the routing table information. But it doesn\u2019t stop there; this script is designed to capture the output and elegantly transform it into a JSON structure. Why JSON, you ask? Because JSON is the lingua franca for data interchange, making the output from our devices readily available for any number of downstream applications or interfaces that might need to consume it. In doing so, we\u2019re not just automating a task; we\u2019re future-proofing it.<\/p>\n

Excerpt from the pyATS script<\/h2>\n

    @aetest.test
    def get_raw_config(self):
        raw_json = self.device.parse("show ip route")
        self.parsed_json = {"info": raw_json}

    @aetest.test
    def create_file(self):
        with open('Show_IP_Route.json', 'w') as f:
            f.write(json.dumps(self.parsed_json, indent=4, sort_keys=True))<\/p>\n

By focusing solely on pyATS in this phase, we lay a strong foundation for network automation. The testbed.yaml file ensures that our script knows where to go, the _job file gives it the instructions on what to do, and the capture_routing_table.py script does the heavy lifting, turning raw data into structured knowledge. This approach streamlines our processes, making it possible to conduct comprehensive, repeatable, and reliable network testing at scale.<\/p>\n\n
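To illustrate why the JSON output is so consumable downstream, here is a minimal sketch of a consumer. The nested layout under the "info" key is an assumption modeled on the excerpt above; the exact genie parser schema may differ:

```python
import json

# Fabricated sample mirroring the assumed layout of Show_IP_Route.json:
# parsed "show ip route" data nested under an "info" key. Illustration only.
sample = {
    "info": {
        "vrf": {
            "default": {
                "address_family": {
                    "ipv4": {
                        "routes": {
                            "0.0.0.0/0": {"active": True, "source_protocol": "static"},
                            "10.10.20.0/24": {"active": True, "source_protocol": "connected"},
                        }
                    }
                }
            }
        }
    }
}

with open('Show_IP_Route.json', 'w') as f:
    f.write(json.dumps(sample, indent=4, sort_keys=True))

# Any downstream tool can now load the structured data and query it
with open('Show_IP_Route.json') as f:
    data = json.load(f)

routes = data["info"]["vrf"]["default"]["address_family"]["ipv4"]["routes"]
active = {prefix: attrs["source_protocol"]
          for prefix, attrs in routes.items()
          if attrs.get("active")}
print(active)
```

Because the data is plain JSON, the same file can feed a dashboard, a compliance check, or, as in the next section, a retrieval pipeline.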

Enhancing AI Conversational Models with RAG and Network JSON: A Guide<\/h2>\n

In the ever-evolving field of AI, conversational models have come a long way. From simple rule-based systems to advanced neural networks, these models can now mimic human-like conversations with a remarkable degree of fluency. However, despite the leaps in generative capabilities, AI can sometimes stumble, providing answers that are nonsensical or \u201challucinated\u201d \u2014 a term used when AI produces information that isn\u2019t grounded in reality. One way to mitigate this is by integrating Retrieval-Augmented Generation (RAG) into the AI pipeline, especially in conjunction with structured data sources like network JSON.<\/p>\n

What is Retrieval-Augmented Generation (RAG)?<\/h2>\n

Retrieval-Augmented Generation is a cutting-edge technique in AI language processing that combines the best of two worlds: the generative power of models like GPT (Generative Pre-trained Transformer) and the precision of retrieval-based systems. Essentially, RAG enhances a language model\u2019s responses by first consulting a database of information. The model retrieves relevant documents or data and then uses this context to inform its generated output.<\/p>\n

The RAG Process<\/h2>\n

The process typically involves several key steps:<\/p>\n

Retrieval<\/strong>: When the model receives a query, it searches through a database to find relevant information.
Augmentation<\/strong>: The retrieved information is then fed into the generative model as additional context.
Generation<\/strong>: Armed with this context, the model generates a response that\u2019s not only fluent but also factually grounded in the retrieved data.<\/p>\n
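The three steps can be sketched with a toy example. The documents, keyword-overlap retrieval, and template "generation" are all invented stand-ins for a real vector store and LLM, but the flow is the same:

```python
# Minimal retrieval-augmented generation loop. The "database" is a list of
# facts, retrieval is naive keyword overlap, and generation is a template.
documents = [
    "Route 0.0.0.0/0 is a static default route via 10.10.20.254.",
    "Route 10.10.20.0/24 is directly connected on GigabitEthernet1.",
    "OSPF is not configured on this router.",
]

def retrieve(query, docs, k=1):
    # Retrieval: rank documents by how many query words they share
    words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]

def generate(query, context):
    # Augmentation + generation: ground the answer in the retrieved context
    return f"Based on the routing table: {' '.join(context)} (asked: {query})"

query = "What is the default route?"
context = retrieve(query, documents)
print(generate(query, context))
```

A production system swaps the keyword match for vector similarity search and the template for an LLM call, as the LangChain code later in this post does.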

The Role of Network JSON in RAG<\/h2>\n

Network JSON refers to structured data in the JSON (JavaScript Object Notation) format, often used in network communications. Integrating network JSON with RAG serves as a bridge between the generative model and the vast amounts of structured data available on networks. This integration can be critical for several reasons:<\/p>\n

Data-Driven Responses<\/strong>: By pulling in network JSON data, the AI can ground its responses in real, up-to-date information, reducing the risk of \u201challucinations.\u201d
Enhanced Accuracy<\/strong>: Access to a wide array of structured data means the AI\u2019s answers can be more accurate and informative.
Contextual Relevance<\/strong>: RAG can use network JSON to understand the context better, leading to more relevant and precise answers.<\/p>\n

Why Use RAG with Network JSON?<\/h2>\n

Let\u2019s explore why one might choose to use RAG in tandem with network JSON through a simplified example using Python code:<\/p>\n

Source and Load<\/strong>: The AI model begins by sourcing data, which could be network JSON files containing information from various databases or the internet.
Transform<\/strong>: The data might undergo a transformation to make it suitable for the AI to process \u2014 for example, splitting a large document into manageable chunks.
Embed<\/strong>: Next, the system converts the transformed data into embeddings, which are numerical representations that encapsulate the semantic meaning of the text.
Store<\/strong>: These embeddings are then stored in a retrievable format.
Retrieve<\/strong>: When a new query arrives, the AI uses RAG to retrieve the most relevant embeddings to inform its response, thus ensuring that the answer is grounded in factual data.<\/p>\n

By following these steps, the AI model can drastically improve the quality of the output, providing responses that are not only coherent but also factually correct and highly relevant to the user\u2019s query.<\/p>\n
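Before the full LangChain implementation, the Embed, Store, and Retrieve steps can be sketched without any dependencies. Bag-of-words counts stand in for real embeddings (such as OpenAIEmbeddings) and a plain list stands in for the Chroma vector store; the route strings are invented for illustration:

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Embed: a word-count vector stands in for a semantic embedding
    return Counter(text.lower().split())

def cosine(a, b):
    # Similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "0.0.0.0/0 static route via 10.10.20.254",
    "10.10.20.0/24 connected GigabitEthernet1",
]
# Store: keep each chunk alongside its embedding
store = [(chunk, embed(chunk)) for chunk in chunks]

# Retrieve: embed the query and return the closest stored chunk
query_vec = embed("which static route is configured")
best = max(store, key=lambda item: cosine(query_vec, item[1]))
print(best[0])
```

The class below performs exactly this loop, with JSONLoader sourcing the data, RecursiveCharacterTextSplitter transforming it, and Chroma handling the embed/store/retrieve machinery.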

class ChatWithRoutingTable:
    def __init__(self):
        self.conversation_history = []
        self.load_text()
        self.split_into_chunks()
        self.store_in_chroma()
        self.setup_conversation_memory()
        self.setup_conversation_retrieval_chain()

    def load_text(self):
        self.loader = JSONLoader(
            file_path='Show_IP_Route.json',
            jq_schema=".info[]",
            text_content=False
        )
        self.pages = self.loader.load_and_split()

    def split_into_chunks(self):
        # Create a text splitter
        self.text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=1000,
            chunk_overlap=100,
            length_function=len,
        )
        self.docs = self.text_splitter.split_documents(self.pages)

    def store_in_chroma(self):
        embeddings = OpenAIEmbeddings()
        self.vectordb = Chroma.from_documents(self.docs, embedding=embeddings)
        self.vectordb.persist()

    def setup_conversation_memory(self):
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def setup_conversation_retrieval_chain(self):
        self.qa = ConversationalRetrievalChain.from_llm(llm, self.vectordb.as_retriever(search_kwargs={"k": 10}), memory=self.memory)

    def chat(self, question):
        # Format the user's prompt and add it to the conversation history
        user_prompt = f"User: {question}"
        self.conversation_history.append({"text": user_prompt, "sender": "user"})
        # Format the entire conversation history for context, excluding the current prompt
        conversation_context = self.format_conversation_history(include_current=False)
        # Concatenate the current question with conversation context
        combined_input = f"Context: {conversation_context}\nQuestion: {question}"
        # Generate a response using the ConversationalRetrievalChain
        response = self.qa.invoke(combined_input)
        # Extract the answer from the response
        answer = response.get('answer', 'No answer found.')
        # Format the AI's response
        ai_response = f"Cisco IOS XE: {answer}"
        self.conversation_history.append({"text": ai_response, "sender": "bot"})
        # Update the Streamlit session state by appending new history with both user prompt and AI response
        st.session_state['conversation_history'] += f"\n{user_prompt}\n{ai_response}"
        # Return the formatted AI response for immediate display
        return ai_response<\/p>\n

Conclusion<\/h2>\n

The integration of RAG with network JSON is a powerful way to supercharge conversational AI. It leads to more accurate, reliable, and contextually aware interactions that users can trust. By leveraging the vast amounts of available structured data, AI models can step beyond the limitations of pure generation and towards a more informed and intelligent conversational experience.<\/p>\n

Related resources<\/h2>\n

This\u00a0open source repo<\/a>\u00a0contains this solution in full.\u00a0Try it for yourself<\/a>!
\nCheck out my conversation with Adrian Iliesiu on his NetGRU live stream, \u201c
Automating Your Network with ChatGPT<\/a>\u201c
\n\u00a0If you want a
deeper dive \/ live demo<\/a>, check out my session from Cisco Live Amsterdam 2024<\/a><\/p>\n

\n\t\tShare\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n
Share:<\/div>\n
\n
\n
<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t\t<\/a>\n\t<\/div>\n<\/div>\n
\n
\n\t <\/a>\n\t<\/div>\n<\/div>\n<\/div>\n

“]]\u00a0\u00a0The integration of artificial intelligence into network operations has opened up unprecedented levels of efficiency, predictive analysis, and decision-making capabilities. Get a glimpse into the future of network operation, with practical knowledge on harnessing the power of AI to augment network automation.\u00a0\u00a0Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>","protected":false},"excerpt":{"rendered":"

<\/p>\n

Talking to your Network<\/h2>\n

Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In\u2026 Read more on Cisco Blogs<\/a><\/p>\n

\u200b[[“value”:”<\/p>\n

Talking to your Network<\/h2>\n

Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In 2015, after attending Cisco Live in San Diego, I gained a new appreciation of the realm of the possible. Leveraging tools like Ansible<\/a> and Cisco pyATS<\/a>, I began to streamline processes and enhance efficiencies within network operations, setting a foundation for what would become a career-long pursuit of innovation. This initial foray into automation was not just about simplifying repetitive tasks; it was about envisioning a future where networks could be more resilient, adaptable, and intelligent. As I navigated through the complexities of network systems, these technologies became indispensable allies, helping me to not only manage but also to anticipate the needs of increasingly sophisticated networks.<\/p>\n

In recent years, my exploration has taken a pivotal turn with the advent of generative AI, marking a new chapter in the story of network automation. The integration of artificial intelligence into network operations has opened up unprecedented possibilities, allowing for even greater levels of efficiency, predictive analysis, and decision-making capabilities. This blog, accompanying the CiscoU Tutorial, delves into the cutting-edge intersection of AI and network automation, highlighting my experiences with Docker, LangChain, Streamlit, and, of course, Cisco pyATS. It\u2019s a reflection on how the landscape of network engineering is being reshaped by AI, transforming not just how we manage networks, but how we envision their growth and potential in the digital age. Through this narrative, I aim to share insights and practical knowledge on harnessing the power of AI to augment the capabilities of network automation, offering a glimpse into the future of network operations.<\/p>\n

In the spirit of modern software deployment practices, the solution I architected is encapsulated within Docker, a platform that packages an application and all its dependencies in a virtual container that can run on any Linux server. This encapsulation ensures that it works seamlessly in different computing environments. The heart of this dockerized solution lies within three key files: the Dockerfile, the startup script, and the docker-compose.yml.<\/p>\n

The Dockerfile serves as the blueprint for building the application\u2019s Docker image. It starts with a base image, ubuntu:latest, ensuring that all the operations have a solid foundation. From there, it outlines a series of commands that prepare the environment:<\/p>\n

FROM ubuntu:latest<\/strong><\/p>\n

# Set the noninteractive frontend (useful for automated builds)
\nARG DEBIAN_FRONTEND=noninteractive
\n# A series of RUN commands to install necessary packages
\nRUN apt-get update && apt-get install -y wget sudo …
\n# Python, pip, and essential tools are installed
\nRUN apt-get install python3 -y && apt-get install python3-pip -y …
\n# Specific Python packages are installed, including pyATS[full]
\nRUN pip install pyats[full]
\n# Other utilities like dos2unix for script compatibility adjustments
\nRUN sudo apt-get install dos2unix -y
\n# Installation of LangChain and related packages
\nRUN pip install -U langchain-openai langchain-community …
\n# Install Streamlit, the web framework
\nRUN pip install streamlit<\/p>\n

Each command is preceded by an echo statement that prints out the action being taken, which is incredibly helpful for debugging and understanding the build process as it happens.<\/p>\n

The startup.sh script is a simple yet crucial component that dictates what happens when the Docker container starts:<\/p>\n

cd streamlit_langchain_pyats
\nstreamlit run chat_with_routing_table.py<\/p>\n

It navigates into the directory containing the Streamlit app and starts the app using streamlit run. This is the command that actually gets our app up and running within the container.<\/p>\n

Lastly, the docker-compose.yml file orchestrates the deployment of our Dockerized application. It defines the services, volumes, and networks to run our containerized application:<\/p>\n

version: ‘3’
\nservices:
\n streamlit_langchain_pyats:
\n image: [Docker Hub image]
\n container_name: streamlit_langchain_pyats
\n restart: always
\n build:
\n context: .\/
\n dockerfile: .\/Dockerfile
\n ports:
\n – “8501:8501”<\/p>\n

This docker-compose.yml file makes it incredibly easy to manage the application lifecycle, from starting and stopping to rebuilding the application. It binds the host\u2019s port 8501 to the container\u2019s port 8501, which is the default port for Streamlit applications.<\/p>\n

Together, these files create a robust framework that ensures the Streamlit application \u2014 enhanced with the AI capabilities of LangChain and the powerful testing features of Cisco pyATS \u2014 is containerized, making deployment and scaling consistent and efficient.<\/p>\n

The journey into the realm of automated testing begins with the creation of the testbed.yaml file. This YAML file is not just a configuration file; it\u2019s the cornerstone of our automated testing strategy. It contains all the essential information about the devices in our network: hostnames, IP addresses, device types, and credentials. But why is it so crucial? The testbed.yaml file serves as the single source of truth for the pyATS framework to understand the network it will be interacting with. It\u2019s the map that guides the automation tools to the right devices, ensuring that our scripts don\u2019t get lost in the vast sea of the network topology.<\/p>\n

Sample testbed.yaml<\/h2>\n


\ndevices:
\n cat8000v:
\n alias: “Sandbox Router”
\n type: “router”
\n os: “iosxe”
\n platform: Cat8000v
\n credentials:
\n default:
\n username: developer
\n password: C1sco12345
\n connections:
\n cli:
\n protocol: ssh
\n ip: 10.10.20.48
\n port: 22
\n arguments:
\n connection_timeout: 360<\/p>\n

With our testbed defined, we then turn our attention to the _job file. This is the conductor of our automation orchestra, the control file that orchestrates the entire testing process. It loads the testbed and the Python test script into the pyATS framework, setting the stage for the execution of our automated tests. It tells pyATS not only what devices to test but also how to test them, and in what order. This level of control is indispensable for running complex test sequences across a range of network devices.<\/p>\n

Sample _job.py pyATS Job<\/h2>\n

import os
\nfrom genie.testbed import load
\ndef main(runtime):
\n # —————-
\n # Load the testbed
\n # —————-
\n if not runtime.testbed:
\n # If no testbed is provided, load the default one.
\n # Load default location of Testbed
\n testbedfile = os.path.join(‘testbed.yaml’)
\n testbed = load(testbedfile)
\n else:
\n # Use the one provided
\n testbed = runtime.testbed
\n # Find the location of the script in relation to the job file
\n testscript = os.path.join(os.path.dirname(__file__), ‘show_ip_route_langchain.py’)
\n # run script
\n runtime.tasks.run(testscript=testscript, testbed=testbed)<\/p>\n

Then comes the pi\u00e8ce de r\u00e9sistance, the Python test script \u2014 let\u2019s call it capture_routing_table.py<\/em>. This script embodies the intelligence of our automated testing process. It\u2019s where we\u2019ve distilled our network expertise into a series of commands and parsers that interact with the Cisco IOS XE devices to retrieve the routing table information. But it doesn\u2019t stop there; this script is designed to capture the output and elegantly transform it into a JSON structure. Why JSON, you ask? Because JSON is the lingua franca for data interchange, making the output from our devices readily available for any number of downstream applications or interfaces that might need to consume it. In doing so, we\u2019re not just automating a task; we\u2019re future-proofing it.<\/p>\n

Excerpt from the pyATS script<\/h2>\n

@aetest.test
\n def get_raw_config(self):
\n raw_json = self.device.parse(“show ip route”)
\n self.parsed_json = “info”: raw_json
\n @aetest.test
\n def create_file(self):
\n with open(‘Show_IP_Route.json’, ‘w’) as f:
\n f.write(json.dumps(self.parsed_json, indent=4, sort_keys=True))<\/p>\n

By focusing solely on pyATS in this phase, we lay a strong foundation for network automation. The testbed.yaml file ensures that our script knows where to go, the _job file gives it the instructions on what to do, and the capture_routing_table.py script does the heavy lifting, turning raw data into structured knowledge. This approach streamlines our processes, making it possible to conduct comprehensive, repeatable, and reliable network testing at scale.<\/p>\n

Enhancing AI Conversational Models with RAG and Network JSON: A Guide<\/h2>\n

In the ever-evolving field of AI, conversational models have come a long way. From simple rule-based systems to advanced neural networks, these models can now mimic human-like conversations with a remarkable degree of fluency. However, despite the leaps in generative capabilities, AI can sometimes stumble, providing answers that are nonsensical or \u201challucinated\u201d \u2014 a term used when AI produces information that isn\u2019t grounded in reality. One way to mitigate this is by integrating Retrieval-Augmented Generation (RAG) into the AI pipeline, especially in conjunction with structured data sources like network JSON.<\/p>\n

What is Retrieval-Augmented Generation (RAG)?<\/h2>\n

Retrieval-Augmented Generation is a cutting-edge technique in AI language processing that combines the best of two worlds: the generative power of models like GPT (Generative Pre-trained Transformer) and the precision of retrieval-based systems. Essentially, RAG enhances a language model\u2019s responses by first consulting a database of information. The model retrieves relevant documents or data and then uses this context to inform its generated output.<\/p>\n

The RAG Process<\/h2>\n

The process typically involves several key steps:<\/p>\n

Retrieval<\/strong>: When the model receives a query, it searches through a database to find relevant information.
\nAugmentation<\/strong>: The retrieved information is then fed into the generative model as additional context.
\nGeneration<\/strong>: Armed with this context, the model generates a response that\u2019s not only fluent but also factually grounded in the retrieved data.<\/p>\n

The Role of Network JSON in RAG<\/h2>\n

Network JSON refers to structured data in the JSON (JavaScript Object Notation) format, often used in network communications. Integrating network JSON with RAG serves as a bridge between the generative model and the vast amounts of structured data available on networks. This integration can be critical for several reasons:<\/p>\n

Data-Driven Responses: By pulling in network JSON data, the AI can ground its responses in real, up-to-date information, reducing the risk of “hallucinations.”
Enhanced Accuracy: Access to a wide array of structured data means the AI’s answers can be more accurate and informative.
Contextual Relevance: RAG can use network JSON to understand the context better, leading to more relevant and precise answers.
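For a concrete (and entirely hypothetical) example of what such network JSON might look like, and how it can be flattened into retrievable text, consider this sketch; the schema shown is invented for illustration and is only loosely modeled on parsed routing-table output:

```python
import json

# Hypothetical miniature of "network JSON" from a routing-table parse.
network_json = json.loads("""
{
  "info": [
    {"prefix": "0.0.0.0/0", "protocol": "static", "next_hop": "10.10.20.254"},
    {"prefix": "10.10.20.0/24", "protocol": "connected", "interface": "GigabitEthernet1"}
  ]
}
""")

def routes_to_documents(data):
    """Flatten each route entry into one retrievable text line."""
    docs = []
    for route in data["info"]:
        parts = [f"{key}={value}" for key, value in route.items()]
        docs.append("; ".join(parts))
    return docs

documents = routes_to_documents(network_json)
```

Once each route is a self-contained line of text, it can be embedded and retrieved like any other document.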

Why Use RAG with Network JSON?

Let’s explore why one might choose to use RAG in tandem with network JSON through a simplified example using Python code:

1. Source and Load: The AI model begins by sourcing data, which could be network JSON files containing information from various databases or the internet.
2. Transform: The data might undergo a transformation to make it suitable for the AI to process, for example splitting a large document into manageable chunks.
3. Embed: Next, the system converts the transformed data into embeddings, which are numerical representations that encapsulate the semantic meaning of the text.
4. Store: These embeddings are then stored in a retrievable format.
5. Retrieve: When a new query arrives, the AI uses RAG to retrieve the most relevant embeddings to inform its response, thus ensuring that the answer is grounded in factual data.

By following these steps, the AI model can drastically improve the quality of the output, providing responses that are not only coherent but also factually correct and highly relevant to the user’s query.
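The five steps above can be sketched end to end with toy bag-of-words "embeddings" standing in for real ones (such as those produced by an embedding model); everything here is illustrative only:

```python
import math
from collections import Counter

# Toy sketch of source -> transform -> embed -> store -> retrieve.
# Bag-of-words Counters stand in for real embedding vectors.

def embed(text):
    """'Embed' text as a bag-of-words vector (illustrative only)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Source/transform: chunks already split from a larger document.
chunks = [
    "route 192.168.1.0/24 via next hop 10.10.20.254",
    "ospf neighbor 10.10.20.48 state full",
]

# Embed and store: pair each chunk with its vector.
store = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query, store, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

A real system would swap `embed()` for a learned embedding model and `store` for a vector database, but the flow is the same.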

# NOTE: the imports, the llm definition, and format_conversation_history below
# are assumptions added for completeness (typical LangChain module paths);
# the original excerpt omitted them.
import streamlit as st
from langchain_community.document_loaders import JSONLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

llm = ChatOpenAI()  # assumed; the excerpt uses an llm defined elsewhere

class ChatWithRoutingTable:
    def __init__(self):
        self.conversation_history = []
        self.load_text()
        self.split_into_chunks()
        self.store_in_chroma()
        self.setup_conversation_memory()
        self.setup_conversation_retrieval_chain()

    def load_text(self):
        # Each element of the top-level "info" array becomes one document
        self.loader = JSONLoader(
            file_path='Show_IP_Route.json',
            jq_schema='.info[]',
            text_content=False
        )
        self.pages = self.loader.load_and_split()

    def split_into_chunks(self):
        # Create a text splitter
        self.text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=1000,
            chunk_overlap=100,
            length_function=len,
        )
        self.docs = self.text_splitter.split_documents(self.pages)

    def store_in_chroma(self):
        embeddings = OpenAIEmbeddings()
        self.vectordb = Chroma.from_documents(self.docs, embedding=embeddings)
        self.vectordb.persist()

    def setup_conversation_memory(self):
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def setup_conversation_retrieval_chain(self):
        self.qa = ConversationalRetrievalChain.from_llm(
            llm,
            self.vectordb.as_retriever(search_kwargs={"k": 10}),
            memory=self.memory
        )

    def format_conversation_history(self, include_current=True):
        # Not shown in the original excerpt; a minimal reconstruction
        history = self.conversation_history if include_current else self.conversation_history[:-1]
        return "\n".join(entry["text"] for entry in history)

    def chat(self, question):
        # Format the user's prompt and add it to the conversation history
        user_prompt = f"User: {question}"
        self.conversation_history.append({"text": user_prompt, "sender": "user"})
        # Format the entire conversation history for context, excluding the current prompt
        conversation_context = self.format_conversation_history(include_current=False)
        # Concatenate the current question with conversation context
        combined_input = f"Context: {conversation_context}\nQuestion: {question}"
        # Generate a response using the ConversationalRetrievalChain
        response = self.qa.invoke(combined_input)
        # Extract the answer from the response
        answer = response.get('answer', 'No answer found.')
        # Format the AI's response
        ai_response = f"Cisco IOS XE: {answer}"
        self.conversation_history.append({"text": ai_response, "sender": "bot"})
        # Update the Streamlit session state with both the user prompt and the AI response
        st.session_state['conversation_history'] += f"\n{user_prompt}\n{ai_response}"
        # Return the formatted AI response for immediate display
        return ai_response
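As an aside on the loader above: the `jq_schema=".info[]"` argument tells `JSONLoader` to treat each element of the top-level `info` array as a separate document. A rough pure-Python equivalent of that selection (not using jq itself, with the sample data invented for illustration) looks like this:

```python
import json

def select_info_items(raw_json_text):
    """Approximate what the jq filter '.info[]' yields: one string per
    element of the top-level 'info' array."""
    data = json.loads(raw_json_text)
    return [json.dumps(item) for item in data.get("info", [])]

sample = '{"info": [{"prefix": "0.0.0.0/0"}, {"prefix": "10.0.0.0/8"}]}'
pages = select_info_items(sample)
```

Splitting at this level keeps each route as its own document, so retrieval can return individual routes rather than the whole table.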

Conclusion

The integration of RAG with network JSON is a powerful way to supercharge conversational AI. It leads to more accurate, reliable, and contextually aware interactions that users can trust. By leveraging the vast amounts of available structured data, AI models can step beyond the limitations of pure generation and towards a more informed and intelligent conversational experience.

Related resources

This open source repo contains this solution in full. Try it for yourself!
Check out my conversation with Adrian Iliesiu on his NetGRU live stream, “Automating Your Network with ChatGPT”
If you want a deeper dive / live demo, check out my session from Cisco Live Amsterdam 2024


“]]\u00a0\u00a0The integration of artificial intelligence into network operations has opened up unprecedented levels of efficiency, predictive analysis, and decision-making capabilities. Get a glimpse into the future of network operation, with practical knowledge on harnessing the power of AI to augment network automation.\u00a0\u00a0Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>\n

<\/p>\n","protected":false},"author":0,"featured_media":2673,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-2672","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cisco-learning"],"yoast_head":"\nUsing the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm - JHC<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm\" \/>\n<meta property=\"og:description\" content=\"Talking to your Network Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In\u2026 Read more on Cisco Blogs \u200b[["value":" Talking to your Network Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In 2015, after attending Cisco Live in San Diego, I gained a new appreciation of the realm of the possible. Leveraging tools like Ansible and Cisco pyATS, I began to streamline processes and enhance efficiencies within network operations, setting a foundation for what would become a career-long pursuit of innovation. 
This initial foray into automation was not just about simplifying repetitive tasks; it was about envisioning a future where networks could be more resilient, adaptable, and intelligent. As I navigated through the complexities of network systems, these technologies became indispensable allies, helping me to not only manage but also to anticipate the needs of increasingly sophisticated networks. In recent years, my exploration has taken a pivotal turn with the advent of generative AI, marking a new chapter in the story of network automation. The integration of artificial intelligence into network operations has opened up unprecedented possibilities, allowing for even greater levels of efficiency, predictive analysis, and decision-making capabilities. This blog, accompanying the CiscoU Tutorial, delves into the cutting-edge intersection of AI and network automation, highlighting my experiences with Docker, LangChain, Streamlit, and, of course, Cisco pyATS. It\u2019s a reflection on how the landscape of network engineering is being reshaped by AI, transforming not just how we manage networks, but how we envision their growth and potential in the digital age. Through this narrative, I aim to share insights and practical knowledge on harnessing the power of AI to augment the capabilities of network automation, offering a glimpse into the future of network operations. In the spirit of modern software deployment practices, the solution I architected is encapsulated within Docker, a platform that packages an application and all its dependencies in a virtual container that can run on any Linux server. This encapsulation ensures that it works seamlessly in different computing environments. The heart of this dockerized solution lies within three key files: the Dockerfile, the startup script, and the docker-compose.yml. The Dockerfile serves as the blueprint for building the application\u2019s Docker image. 
It starts with a base image, ubuntu:latest, ensuring that all the operations have a solid foundation. From there, it outlines a series of commands that prepare the environment: FROM ubuntu:latest # Set the noninteractive frontend (useful for automated builds) ARG DEBIAN_FRONTEND=noninteractive # A series of RUN commands to install necessary packages RUN apt-get update && apt-get install -y wget sudo ... # Python, pip, and essential tools are installed RUN apt-get install python3 -y && apt-get install python3-pip -y ... # Specific Python packages are installed, including pyATS[full] RUN pip install pyats[full] # Other utilities like dos2unix for script compatibility adjustments RUN sudo apt-get install dos2unix -y # Installation of LangChain and related packages RUN pip install -U langchain-openai langchain-community ... # Install Streamlit, the web framework RUN pip install streamlit Each command is preceded by an echo statement that prints out the action being taken, which is incredibly helpful for debugging and understanding the build process as it happens. The startup.sh script is a simple yet crucial component that dictates what happens when the Docker container starts: cd streamlit_langchain_pyats streamlit run chat_with_routing_table.py It navigates into the directory containing the Streamlit app and starts the app using streamlit run. This is the command that actually gets our app up and running within the container. Lastly, the docker-compose.yml file orchestrates the deployment of our Dockerized application. It defines the services, volumes, and networks to run our containerized application: version: '3' services: streamlit_langchain_pyats: image: [Docker Hub image] container_name: streamlit_langchain_pyats restart: always build: context: .\/ dockerfile: .\/Dockerfile ports: - "8501:8501" This docker-compose.yml file makes it incredibly easy to manage the application lifecycle, from starting and stopping to rebuilding the application. 
It binds the host\u2019s port 8501 to the container\u2019s port 8501, which is the default port for Streamlit applications. Together, these files create a robust framework that ensures the Streamlit application \u2014 enhanced with the AI capabilities of LangChain and the powerful testing features of Cisco pyATS \u2014 is containerized, making deployment and scaling consistent and efficient. The journey into the realm of automated testing begins with the creation of the testbed.yaml file. This YAML file is not just a configuration file; it\u2019s the cornerstone of our automated testing strategy. It contains all the essential information about the devices in our network: hostnames, IP addresses, device types, and credentials. But why is it so crucial? The testbed.yaml file serves as the single source of truth for the pyATS framework to understand the network it will be interacting with. It\u2019s the map that guides the automation tools to the right devices, ensuring that our scripts don\u2019t get lost in the vast sea of the network topology. Sample testbed.yaml --- devices: cat8000v: alias: "Sandbox Router" type: "router" os: "iosxe" platform: Cat8000v credentials: default: username: developer password: C1sco12345 connections: cli: protocol: ssh ip: 10.10.20.48 port: 22 arguments: connection_timeout: 360 With our testbed defined, we then turn our attention to the _job file. This is the conductor of our automation orchestra, the control file that orchestrates the entire testing process. It loads the testbed and the Python test script into the pyATS framework, setting the stage for the execution of our automated tests. It tells pyATS not only what devices to test but also how to test them, and in what order. This level of control is indispensable for running complex test sequences across a range of network devices. 
Sample _job.py pyATS Job import os from genie.testbed import load def main(runtime): # ---------------- # Load the testbed # ---------------- if not runtime.testbed: # If no testbed is provided, load the default one. # Load default location of Testbed testbedfile = os.path.join('testbed.yaml') testbed = load(testbedfile) else: # Use the one provided testbed = runtime.testbed # Find the location of the script in relation to the job file testscript = os.path.join(os.path.dirname(__file__), 'show_ip_route_langchain.py') # run script runtime.tasks.run(testscript=testscript, testbed=testbed) Then comes the pi\u00e8ce de r\u00e9sistance, the Python test script \u2014 let\u2019s call it capture_routing_table.py. This script embodies the intelligence of our automated testing process. It\u2019s where we\u2019ve distilled our network expertise into a series of commands and parsers that interact with the Cisco IOS XE devices to retrieve the routing table information. But it doesn\u2019t stop there; this script is designed to capture the output and elegantly transform it into a JSON structure. Why JSON, you ask? Because JSON is the lingua franca for data interchange, making the output from our devices readily available for any number of downstream applications or interfaces that might need to consume it. In doing so, we\u2019re not just automating a task; we\u2019re future-proofing it. Excerpt from the pyATS script @aetest.test def get_raw_config(self): raw_json = self.device.parse("show ip route") self.parsed_json = "info": raw_json @aetest.test def create_file(self): with open('Show_IP_Route.json', 'w') as f: f.write(json.dumps(self.parsed_json, indent=4, sort_keys=True)) By focusing solely on pyATS in this phase, we lay a strong foundation for network automation. 
The testbed.yaml file ensures that our script knows where to go, the _job file gives it the instructions on what to do, and the capture_routing_table.py script does the heavy lifting, turning raw data into structured knowledge. This approach streamlines our processes, making it possible to conduct comprehensive, repeatable, and reliable network testing at scale. Enhancing AI Conversational Models with RAG and Network JSON: A Guide In the ever-evolving field of AI, conversational models have come a long way. From simple rule-based systems to advanced neural networks, these models can now mimic human-like conversations with a remarkable degree of fluency. However, despite the leaps in generative capabilities, AI can sometimes stumble, providing answers that are nonsensical or \u201challucinated\u201d \u2014 a term used when AI produces information that isn\u2019t grounded in reality. One way to mitigate this is by integrating Retrieval-Augmented Generation (RAG) into the AI pipeline, especially in conjunction with structured data sources like network JSON. What is Retrieval-Augmented Generation (RAG)? Retrieval-Augmented Generation is a cutting-edge technique in AI language processing that combines the best of two worlds: the generative power of models like GPT (Generative Pre-trained Transformer) and the precision of retrieval-based systems. Essentially, RAG enhances a language model\u2019s responses by first consulting a database of information. The model retrieves relevant documents or data and then uses this context to inform its generated output. The RAG Process The process typically involves several key steps: Retrieval: When the model receives a query, it searches through a database to find relevant information. Augmentation: The retrieved information is then fed into the generative model as additional context. Generation: Armed with this context, the model generates a response that\u2019s not only fluent but also factually grounded in the retrieved data. 
The Role of Network JSON in RAG Network JSON refers to structured data in the JSON (JavaScript Object Notation) format, often used in network communications. Integrating network JSON with RAG serves as a bridge between the generative model and the vast amounts of structured data available on networks. This integration can be critical for several reasons: Data-Driven Responses: By pulling in network JSON data, the AI can ground its responses in real, up-to-date information, reducing the risk of \u201challucinations.\u201d Enhanced Accuracy: Access to a wide array of structured data means the AI\u2019s answers can be more accurate and informative. Contextual Relevance: RAG can use network JSON to understand the context better, leading to more relevant and precise answers. Why Use RAG with Network JSON? Let\u2019s explore why one might choose to use RAG in tandem with network JSON through a simplified example using Python code: Source and Load: The AI model begins by sourcing data, which could be network JSON files containing information from various databases or the internet. Transform: The data might undergo a transformation to make it suitable for the AI to process \u2014 for example, splitting a large document into manageable chunks. Embed: Next, the system converts the transformed data into embeddings, which are numerical representations that encapsulate the semantic meaning of the text. Store: These embeddings are then stored in a retrievable format. Retrieve: When a new query arrives, the AI uses RAG to retrieve the most relevant embeddings to inform its response, thus ensuring that the answer is grounded in factual data. By following these steps, the AI model can drastically improve the quality of the output, providing responses that are not only coherent but also factually correct and highly relevant to the user\u2019s query. 
class ChatWithRoutingTable: def __init__(self): self.conversation_history = [] self.load_text() self.split_into_chunks() self.store_in_chroma() self.setup_conversation_memory() self.setup_conversation_retrieval_chain() def load_text(self): self.loader = JSONLoader( file_path='Show_IP_Route.json', jq_schema=".info[]", text_content=False ) self.pages = self.loader.load_and_split() def split_into_chunks(self): # Create a text splitter self.text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=100, length_function=len, ) self.docs = self.text_splitter.split_documents(self.pages) def store_in_chroma(self): embeddings = OpenAIEmbeddings() self.vectordb = Chroma.from_documents(self.docs, embedding=embeddings) self.vectordb.persist() def setup_conversation_memory(self): self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) def setup_conversation_retrieval_chain(self): self.qa = ConversationalRetrievalChain.from_llm(llm, self.vectordb.as_retriever(search_kwargs="k": 10), memory=self.memory) def chat(self, question): # Format the user's prompt and add it to the conversation history user_prompt = f"User: question" self.conversation_history.append("text": user_prompt, "sender": "user") # Format the entire conversation history for context, excluding the current prompt conversation_context = self.format_conversation_history(include_current=False) # Concatenate the current question with conversation context combined_input = f"Context: conversation_contextnQuestion: question" # Generate a response using the ConversationalRetrievalChain response = self.qa.invoke(combined_input) # Extract the answer from the response answer = response.get('answer', 'No answer found.') # Format the AI's response ai_response = f"Cisco IOS XE: answer" self.conversation_history.append("text": ai_response, "sender": "bot") # Update the Streamlit session state by appending new history with both user prompt and AI response 
st.session_state['conversation_history'] += f"nuser_promptnai_response" # Return the formatted AI response for immediate display return ai_response Conclusion The integration of RAG with network JSON is a powerful way to supercharge conversational AI. It leads to more accurate, reliable, and contextually aware interactions that users can trust. By leveraging the vast amounts of available structured data, AI models can step beyond the limitations of pure generation and towards a more informed and intelligent conversational experience. Related resources This\u00a0open source repo\u00a0contains this solution in full.\u00a0Try it for yourself! Check out my conversation with Adrian Iliesiu on his NetGRU live stream, \u201cAutomating Your Network with ChatGPT\u201c \u00a0If you want a deeper dive \/ live demo, check out my session from Cisco Live Amsterdam 2024 Share Share: "]]\u00a0\u00a0The integration of artificial intelligence into network operations has opened up unprecedented levels of efficiency, predictive analysis, and decision-making capabilities. 
Get a glimpse into the future of network operation, with practical knowledge on harnessing the power of AI to augment network automation.\u00a0\u00a0Read More\u00a0Cisco Blogs\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\" \/>\n<meta property=\"og:site_name\" content=\"JHC\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-07T03:54:13+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16603865-a0ceoB.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"1\" \/>\n\t<meta property=\"og:image:height\" content=\"1\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\"},\"author\":{\"name\":\"\",\"@id\":\"\"},\"headline\":\"Using the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 
pm\",\"datePublished\":\"2024-03-07T03:54:13+00:00\",\"dateModified\":\"2024-03-07T03:54:13+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\"},\"wordCount\":2307,\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16603865-a0ceoB.gif\",\"articleSection\":[\"Cisco: Learning\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\",\"name\":\"Using the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm - 
JHC\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16603865-a0ceoB.gif\",\"datePublished\":\"2024-03-07T03:54:13+00:00\",\"dateModified\":\"2024-03-07T03:54:13+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#primaryimage\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16603865-a0ceoB.gif\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16603865-a0ceoB.gif\",\"width\":1,\"height\":1},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/jacksonholdingcompany.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Using the Power of Artificial 
Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"name\":\"JHC\",\"description\":\"Your Business Is Our Business\",\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/jacksonholdingcompany.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\",\"name\":\"JHC\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"width\":452,\"height\":149,\"caption\":\"JHC\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Using the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm - JHC","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/jacksonholdingcompany.com\/using-the-power-of-artificial-intelligence-to-augment-network-automation-john-capobianco-on-march-6-2024-at-525-pm\/","og_locale":"en_US","og_type":"article","og_title":"Using the Power of Artificial Intelligence to Augment Network Automation John Capobianco on March 6, 2024 at 5:25 pm","og_description":"Talking to your Network Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In\u2026 Read more on Cisco Blogs \u200b[[\"value\":\" Talking to your Network Embarking on my journey as a network engineer nearly two decades ago, I was among the early adopters who recognized the transformative potential of network automation. In 2015, after attending Cisco Live in San Diego, I gained a new appreciation of the realm of the possible. Leveraging tools like Ansible and Cisco pyATS, I began to streamline processes and enhance efficiencies within network operations, setting a foundation for what would become a career-long pursuit of innovation. This initial foray into automation was not just about simplifying repetitive tasks; it was about envisioning a future where networks could be more resilient, adaptable, and intelligent. As I navigated through the complexities of network systems, these technologies became indispensable allies, helping me to not only manage but also to anticipate the needs of increasingly sophisticated networks. In recent years, my exploration has taken a pivotal turn with the advent of generative AI, marking a new chapter in the story of network automation. 
The integration of artificial intelligence into network operations has opened up unprecedented possibilities, allowing for even greater levels of efficiency, predictive analysis, and decision-making capabilities. This blog, accompanying the CiscoU Tutorial, delves into the cutting-edge intersection of AI and network automation, highlighting my experiences with Docker, LangChain, Streamlit, and, of course, Cisco pyATS. It\u2019s a reflection on how the landscape of network engineering is being reshaped by AI, transforming not just how we manage networks, but how we envision their growth and potential in the digital age. Through this narrative, I aim to share insights and practical knowledge on harnessing the power of AI to augment the capabilities of network automation, offering a glimpse into the future of network operations. In the spirit of modern software deployment practices, the solution I architected is encapsulated within Docker, a platform that packages an application and all its dependencies in a virtual container that can run on any Linux server. This encapsulation ensures that it works seamlessly in different computing environments. The heart of this dockerized solution lies within three key files: the Dockerfile, the startup script, and the docker-compose.yml. The Dockerfile serves as the blueprint for building the application\u2019s Docker image. It starts with a base image, ubuntu:latest, ensuring that all the operations have a solid foundation. From there, it outlines a series of commands that prepare the environment: FROM ubuntu:latest # Set the noninteractive frontend (useful for automated builds) ARG DEBIAN_FRONTEND=noninteractive # A series of RUN commands to install necessary packages RUN apt-get update && apt-get install -y wget sudo ... # Python, pip, and essential tools are installed RUN apt-get install python3 -y && apt-get install python3-pip -y ... 
# Specific Python packages are installed, including pyATS[full] RUN pip install pyats[full] # Other utilities like dos2unix for script compatibility adjustments RUN sudo apt-get install dos2unix -y # Installation of LangChain and related packages RUN pip install -U langchain-openai langchain-community ... # Install Streamlit, the web framework RUN pip install streamlit Each command is preceded by an echo statement that prints out the action being taken, which is incredibly helpful for debugging and understanding the build process as it happens. The startup.sh script is a simple yet crucial component that dictates what happens when the Docker container starts: cd streamlit_langchain_pyats streamlit run chat_with_routing_table.py It navigates into the directory containing the Streamlit app and starts the app using streamlit run. This is the command that actually gets our app up and running within the container. Lastly, the docker-compose.yml file orchestrates the deployment of our Dockerized application. It defines the services, volumes, and networks to run our containerized application: version: '3' services: streamlit_langchain_pyats: image: [Docker Hub image] container_name: streamlit_langchain_pyats restart: always build: context: .\/ dockerfile: .\/Dockerfile ports: - \"8501:8501\" This docker-compose.yml file makes it incredibly easy to manage the application lifecycle, from starting and stopping to rebuilding the application. It binds the host\u2019s port 8501 to the container\u2019s port 8501, which is the default port for Streamlit applications. Together, these files create a robust framework that ensures the Streamlit application \u2014 enhanced with the AI capabilities of LangChain and the powerful testing features of Cisco pyATS \u2014 is containerized, making deployment and scaling consistent and efficient. The journey into the realm of automated testing begins with the creation of the testbed.yaml file. 
This YAML file is not just a configuration file; it's the cornerstone of our automated testing strategy. It contains all the essential information about the devices in our network: hostnames, IP addresses, device types, and credentials. But why is it so crucial? The testbed.yaml file serves as the single source of truth for the pyATS framework to understand the network it will be interacting with. It's the map that guides the automation tools to the right devices, ensuring that our scripts don't get lost in the vast sea of the network topology.

Sample testbed.yaml

```yaml
---
devices:
  cat8000v:
    alias: "Sandbox Router"
    type: "router"
    os: "iosxe"
    platform: Cat8000v
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.48
        port: 22
        arguments:
          connection_timeout: 360
```

With our testbed defined, we then turn our attention to the _job file. This is the conductor of our automation orchestra, the control file that orchestrates the entire testing process. It loads the testbed and the Python test script into the pyATS framework, setting the stage for the execution of our automated tests. It tells pyATS not only what devices to test but also how to test them, and in what order. This level of control is indispensable for running complex test sequences across a range of network devices.

Sample _job.py pyATS Job

```python
import os
from genie.testbed import load

def main(runtime):
    # ----------------
    # Load the testbed
    # ----------------
    if not runtime.testbed:
        # If no testbed is provided, load the default one
        testbedfile = os.path.join('testbed.yaml')
        testbed = load(testbedfile)
    else:
        # Use the one provided
        testbed = runtime.testbed

    # Find the location of the script in relation to the job file
    testscript = os.path.join(os.path.dirname(__file__), 'show_ip_route_langchain.py')

    # Run the script
    runtime.tasks.run(testscript=testscript, testbed=testbed)
```

Then comes the pièce de résistance, the Python test script; let's call it capture_routing_table.py. This script embodies the intelligence of our automated testing process. It's where we've distilled our network expertise into a series of commands and parsers that interact with the Cisco IOS XE devices to retrieve the routing table information. But it doesn't stop there; this script is designed to capture the output and elegantly transform it into a JSON structure. Why JSON, you ask? Because JSON is the lingua franca for data interchange, making the output from our devices readily available for any number of downstream applications or interfaces that might need to consume it. In doing so, we're not just automating a task; we're future-proofing it.

Excerpt from the pyATS script

```python
@aetest.test
def get_raw_config(self):
    raw_json = self.device.parse("show ip route")
    self.parsed_json = {"info": raw_json}

@aetest.test
def create_file(self):
    with open('Show_IP_Route.json', 'w') as f:
        f.write(json.dumps(self.parsed_json, indent=4, sort_keys=True))
```

By focusing solely on pyATS in this phase, we lay a strong foundation for network automation. The testbed.yaml file ensures that our script knows where to go, the _job file gives it the instructions on what to do, and the capture_routing_table.py script does the heavy lifting, turning raw data into structured knowledge. This approach streamlines our processes, making it possible to conduct comprehensive, repeatable, and reliable network testing at scale.
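To make the hand-off from the pyATS phase to the AI phase concrete, here is a minimal, dependency-free sketch of how the `{"info": ...}` JSON written by the script can be flattened into one text document per route entry, roughly what a jq schema of `.info[]` selects later in the pipeline. The sample routing data and the `flatten_info` helper are invented for illustration; real parsed "show ip route" output is far richer.

```python
import json

# Invented sample resembling the {"info": ...} wrapper the pyATS script writes
show_ip_route = {
    "info": {
        "0.0.0.0/0": {"next_hop": "10.10.20.254", "protocol": "static"},
        "10.10.20.0/24": {"next_hop": "GigabitEthernet1", "protocol": "connected"},
    }
}

def flatten_info(parsed: dict) -> list[str]:
    """Serialize each value under 'info' as its own text document,
    mimicking what a jq expression like '.info[]' would select."""
    return [json.dumps(value) for value in parsed["info"].values()]

docs = flatten_info(show_ip_route)
print(len(docs))  # one document per route entry
```

Each small document can then be embedded and retrieved independently, which is exactly what makes the JSON output so convenient downstream.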
Enhancing AI Conversational Models with RAG and Network JSON: A Guide

In the ever-evolving field of AI, conversational models have come a long way. From simple rule-based systems to advanced neural networks, these models can now mimic human-like conversations with a remarkable degree of fluency. However, despite the leaps in generative capabilities, AI can sometimes stumble, providing answers that are nonsensical or "hallucinated", a term used when AI produces information that isn't grounded in reality. One way to mitigate this is by integrating Retrieval-Augmented Generation (RAG) into the AI pipeline, especially in conjunction with structured data sources like network JSON.

What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation is a cutting-edge technique in AI language processing that combines the best of two worlds: the generative power of models like GPT (Generative Pre-trained Transformer) and the precision of retrieval-based systems. Essentially, RAG enhances a language model's responses by first consulting a database of information. The model retrieves relevant documents or data and then uses this context to inform its generated output.

The RAG Process

The process typically involves several key steps:

Retrieval: When the model receives a query, it searches through a database to find relevant information.
Augmentation: The retrieved information is then fed into the generative model as additional context.
Generation: Armed with this context, the model generates a response that's not only fluent but also factually grounded in the retrieved data.

The Role of Network JSON in RAG

Network JSON refers to structured data in the JSON (JavaScript Object Notation) format, often used in network communications. Integrating network JSON with RAG serves as a bridge between the generative model and the vast amounts of structured data available on networks.
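The three-step loop above can be sketched in a few lines of plain Python, with retrieval reduced to naive keyword overlap and the LLM call stubbed out. Every name and document here is illustrative; a real pipeline would use embeddings for retrieval and an actual model for generation.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    # Step 1 (Retrieval): score each document by word overlap with the query
    query_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment(query: str, context: list[str]) -> str:
    # Step 2 (Augmentation): prepend the retrieved context to the prompt
    return "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Step 3 (Generation): stand-in for a call to an LLM
    return f"[LLM answer grounded in]\n{prompt}"

docs = [
    "Route 0.0.0.0/0 via next hop 10.10.20.254",
    "Interface GigabitEthernet1 is connected to 10.10.20.0/24",
]
prompt = augment("What is the next hop for the default route?",
                 retrieve("next hop default route", docs))
print(generate(prompt))
```

The point of the sketch is the shape of the data flow: the query never reaches the generator alone, it always arrives bundled with retrieved facts.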
This integration can be critical for several reasons:

Data-Driven Responses: By pulling in network JSON data, the AI can ground its responses in real, up-to-date information, reducing the risk of "hallucinations."
Enhanced Accuracy: Access to a wide array of structured data means the AI's answers can be more accurate and informative.
Contextual Relevance: RAG can use network JSON to understand the context better, leading to more relevant and precise answers.

Why Use RAG with Network JSON?

Let's explore why one might choose to use RAG in tandem with network JSON through a simplified example using Python code:

Source and Load: The AI model begins by sourcing data, which could be network JSON files containing information from various databases or the internet.
Transform: The data might undergo a transformation to make it suitable for the AI to process; for example, splitting a large document into manageable chunks.
Embed: Next, the system converts the transformed data into embeddings, which are numerical representations that encapsulate the semantic meaning of the text.
Store: These embeddings are then stored in a retrievable format.
Retrieve: When a new query arrives, the AI uses RAG to retrieve the most relevant embeddings to inform its response, thus ensuring that the answer is grounded in factual data.

By following these steps, the AI model can drastically improve the quality of the output, providing responses that are not only coherent but also factually correct and highly relevant to the user's query.
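Of these five steps, the transform step is the easiest to demystify without libraries. Below is a toy character-based splitter with overlap, in the spirit of (but far simpler than) a recursive text splitter; `split_text` and its parameters are illustrative and not any library's actual API.

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks, where each chunk repeats the
    tail of the previous one so context is not lost at chunk boundaries."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    # Stop before len(text) - chunk_overlap so the final chunk is never pure overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = split_text("a" * 2500, chunk_size=1000, chunk_overlap=100)
print([len(c) for c in chunks])
```

With chunk_size=1000 and chunk_overlap=100, consecutive chunks share their last/first 100 characters, which is why a route entry that straddles a boundary can still be retrieved intact.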
```python
class ChatWithRoutingTable:
    def __init__(self):
        self.conversation_history = []
        self.load_text()
        self.split_into_chunks()
        self.store_in_chroma()
        self.setup_conversation_memory()
        self.setup_conversation_retrieval_chain()

    def load_text(self):
        self.loader = JSONLoader(
            file_path='Show_IP_Route.json',
            jq_schema=".info[]",
            text_content=False
        )
        self.pages = self.loader.load_and_split()

    def split_into_chunks(self):
        # Create a text splitter
        self.text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=1000,
            chunk_overlap=100,
            length_function=len,
        )
        self.docs = self.text_splitter.split_documents(self.pages)

    def store_in_chroma(self):
        embeddings = OpenAIEmbeddings()
        self.vectordb = Chroma.from_documents(self.docs, embedding=embeddings)
        self.vectordb.persist()

    def setup_conversation_memory(self):
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def setup_conversation_retrieval_chain(self):
        # llm is defined elsewhere in the app
        self.qa = ConversationalRetrievalChain.from_llm(
            llm,
            self.vectordb.as_retriever(search_kwargs={"k": 10}),
            memory=self.memory
        )

    def chat(self, question):
        # Format the user's prompt and add it to the conversation history
        user_prompt = f"User: {question}"
        self.conversation_history.append({"text": user_prompt, "sender": "user"})

        # Format the entire conversation history for context, excluding the current prompt
        conversation_context = self.format_conversation_history(include_current=False)

        # Concatenate the current question with conversation context
        combined_input = f"Context: {conversation_context}\nQuestion: {question}"

        # Generate a response using the ConversationalRetrievalChain
        response = self.qa.invoke(combined_input)

        # Extract the answer from the response
        answer = response.get('answer', 'No answer found.')

        # Format the AI's response
        ai_response = f"Cisco IOS XE: {answer}"
        self.conversation_history.append({"text": ai_response, "sender": "bot"})

        # Update the Streamlit session state by appending new history with both user prompt and AI response
        st.session_state['conversation_history'] += f"\n{user_prompt}\n{ai_response}"

        # Return the formatted AI response for immediate display
        return ai_response
```

Conclusion

The integration of RAG with network JSON is a powerful way to supercharge conversational AI. It leads to more accurate, reliable, and contextually aware interactions that users can trust. By leveraging the vast amounts of available structured data, AI models can step beyond the limitations of pure generation and towards a more informed and intelligent conversational experience.

Related resources

This open source repo contains this solution in full. Try it for yourself!
Check out my conversation with Adrian Iliesiu on his NetGRU live stream, "Automating Your Network with ChatGPT"
If you want a deeper dive / live demo, check out my session from Cisco Live Amsterdam 2024