action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home/mother99/jacksonholdingcompany.com/wp-includes/functions.php on line 6114If you\u2019re familiar with observability, you know most teams have a \u201cdata problem.\u201d That is, observability data has exploded as teams have modernized their a\u2026 Read more on Cisco Blogs<\/a><\/p>\n \u200b<\/p>\n If you\u2019re familiar with observability, you know most teams have a \u201cdata problem.\u201d That is, observability data has exploded as teams have modernized their application stacks and embraced microservices architectures.<\/p>\n If you had unlimited storage, it\u2019d be feasible to ingest all your metrics, events, logs, and traces (MELT data) in a centralized observability platform\u00a0. However, that is simply not the case. Instead, teams index large volumes of data \u2013 some portions being regularly used and others not. Then, teams have to decide whether datasets are worth keeping or should be discarded altogether.<\/p>\n For the past few months I\u2019ve been playing with a tool called Edge Delta<\/a> to see how it might help IT and DevOps teams to solve this problem by providing a new way to collect, transform, and route your data before<\/em> it is indexed in a downstream platform, like AppDynamics<\/a> or Cisco Full-Stack Observability<\/a>.<\/p>\n You can use Edge Delta to create observability pipelines or analyze your data from their backend. Typically, observability starts by shipping all your raw data to central service before you begin analysis. In essence, Edge Delta helps you flip this model on its head. Said another way, Edge Delta analyzes your data as it\u2019s created at the source. From there, you can create observability pipelines that route processed data and lightweight analytics to your observability platform.<\/p>\n Why might this approach be advantageous? Today, teams don\u2019t have a ton of clarity into their data before it\u2019s ingested in an observability platform. Nor do they have control over how that data is treated or flexibility over where the data lives.<\/p>\n By pushing data processing upstream, Edge Delta enables a new kind of architecture where teams can have\u2026<\/p>\n Transparency into their data: \u201cHow valuable is this dataset, and how do we use it?\u201d The net benefit here is that you\u2019re allocating your resources towards the right data in its optimal shape and location based on your use case.<\/p>\n Over the past few weeks, I\u2019ve explored a couple different use cases with Edge Delta.<\/p>\n Analyzing NGINX log data from the Edge Delta interface<\/strong><\/p>\n First, I wanted to use the Edge Delta console to analyze my log data. To do so, deployed the Edge Delta agent on a Kubernetes cluster running NGINX. From here, I sent both valid and invalid http requests to generate log data and observed the output via Edge Delta\u2019s pre-built dashboards.<\/p>\n Among the most useful screens was \u201cPatterns.\u201d This feature clusters together repetitive loglines, so I can easily interpret each unique log message, understand how frequently it occurs, and whether I should investigate it further.<\/p>\n Edge Delta\u2019s Patterns feature makes it easy to interpret data by clustering Second, I wanted to manipulate data in flight using Edge Delta observability pipelines. Here, I installed the Edge Delta agent on my Mac OS. 
Among the most useful screens was "Patterns." This feature clusters repetitive loglines together, so I can easily interpret each unique log message, understand how frequently it occurs, and decide whether I should investigate it further.

[Figure: Edge Delta's Patterns feature makes it easy to interpret data by clustering together repetitive log messages and providing analytics around each event.]
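Edge Delta doesn't publish the clustering algorithm behind Patterns, but the general idea behind log patterning is easy to demonstrate: normalize away the variable tokens in each line, then count identical templates. The following is a toy illustration of that idea, not Edge Delta's implementation.

```python
import re
from collections import Counter

def to_pattern(line: str) -> str:
    """Collapse variable tokens so similar loglines share one template."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<ip>", line)  # IPv4 addresses
    line = re.sub(r"\b\d+\b", "<num>", line)                # remaining numbers
    return line

loglines = [
    '192.168.1.10 "GET /index.html" 200',
    '192.168.1.11 "GET /index.html" 200',
    '10.0.0.5 "GET /admin.php" 404',
]

patterns = Counter(to_pattern(line) for line in loglines)
for pattern, count in patterns.most_common():
    print(count, pattern)
# 2 <ip> "GET /index.html" <num>
# 1 <ip> "GET /admin.php" <num>
```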
Creating pipelines with Syslog data

Second, I wanted to manipulate data in flight using Edge Delta observability pipelines. Here, I installed the Edge Delta agent on my Mac. Then I exported Syslog data from my Cisco ISR1100 to the Mac.

From within the Edge Delta interface, I configured the agent to listen on the appropriate TCP and UDP ports.
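The agent handles this listening for you, but to make the step concrete, here is a bare-bones UDP syslog receiver using only Python's standard library. Port 5514 is an arbitrary unprivileged stand-in: the syslog default is 514, which requires elevated privileges, and the agent's actual ports come from its pipeline configuration.

```python
import socketserver

class SyslogUDPHandler(socketserver.BaseRequestHandler):
    """Print each raw syslog datagram as it arrives."""

    def handle(self):
        data = self.request[0].strip()  # for UDP servers, request is (data, socket)
        print(f"{self.client_address[0]}: {data.decode(errors='replace')}")

if __name__ == "__main__":
    # 0.0.0.0 accepts datagrams from any device on the network,
    # e.g. a router configured to forward its syslog to this host.
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogUDPHandler) as server:
        server.serve_forever()
```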
Now I can apply processor nodes to transform (and otherwise manipulate) my data before it hits my downstream analytics platform. Specifically, I applied the following processors (a vendor-neutral sketch of the first three appears after the list):

- Mask node to obfuscate sensitive data. Here, I replaced social security numbers in my log data with the string 'REDACTED'.
- Regex filter node, which passes along or discards data based on a regex pattern. For this example, I wanted to exclude DEBUG-level logs from downstream storage.
- Log to metric node for extracting metrics from my log data. The metrics can be ingested downstream in lieu of raw data to support real-time monitoring use cases. I captured metrics to track the rate of errors, exceptions, and negative-sentiment logs.
- Log to pattern node, which I alluded to in the section above. This creates "patterns" from my data by grouping similar loglines together for easier interpretation and less noise.

[Figure: Through Edge Delta's Pipelines interface, you can apply processors to your data and route it to different destinations.]
For now, all of this is routed to the Edge Delta backend. However, Edge Delta is vendor-agnostic, and I can route processed data to different destinations, like AppDynamics or Cisco Full-Stack Observability, in a matter of clicks.

Conclusion

If you're interested in learning more about Edge Delta, you can visit their website (edgedelta.com). From there, you can deploy your own agent and ingest up to 10 GB per day for free. Also, check out our video on the YouTube DevNet channel to see the steps above in action. Feel free to post your questions about my configuration below.

Related resources

- Learn more about Cisco Full-Stack Observability
- Learn more about AppDynamics