{"id":2649,"date":"2024-03-05T12:58:15","date_gmt":"2024-03-05T12:58:15","guid":{"rendered":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/"},"modified":"2024-03-05T12:58:15","modified_gmt":"2024-03-05T12:58:15","slug":"data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm","status":"publish","type":"post","link":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/","title":{"rendered":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm"},"content":{"rendered":"

Process Vast Amounts of MELT Data

Cisco Observability Platform is designed to ingest and process vast amounts of MELT (Metrics, Events, Logs, and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability.

What sets it apart is its extension model, which lets our partners and customers tailor every facet of its functionality to their unique needs. This post focuses on the customizations available for data processing. It assumes you are familiar with the platform basics, such as the Flexible Metadata Model (FMM) and solution development. Let's dive in!

Understanding Data Processing Stages

The data processing pipeline has various stages that lead to data storage. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and eventually lands in the data store, where it can be queried with the Unified Query Language (UQL).

Each stage marked with a gear icon in the pipeline diagram allows customization of specific logic. Furthermore, the platform enables the creation of entirely custom post-processing logic once data can no longer be altered.

To streamline customization while maintaining flexibility, we are embracing a new approach: workflows, taps, and plugins, built on the CNCF Serverless Workflow specification with JSONata as the default expression language. Since Serverless Workflows are designed around open standards, we also make extensive use of the CloudEvents and OpenAPI specifications. Leveraging these open standards ensures compatibility and ease of development.
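To make this concrete, a workflow in this model is a small YAML document that combines event subscriptions, states, and expression functions. The following is only a bare skeleton of that shape; the actual pieces are filled in step by step later in this post:

id: example-workflow        # unique identifier of the workflow
version: '1.0.0'
specVersion: '0.8'          # CNCF Serverless Workflow specification version
name: Example Workflow
events: []                  # CloudEvent types the workflow consumes or produces
functions: []               # JSONata expression functions used in conditions and filters
states: []                  # event states that wire subscriptions to actions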

Data processing stages that allow data mutation are called taps, and their customizations are called plugins. Each tap declares an input and output JSON schema for its plugins. Plugins are expected to produce output that adheres to the tap's output schema. A tap is responsible for merging the outputs from all of its plugins and producing a new event, which is a modified version of the original event. Taps can only be authored by the platform, while plugins can be created by any solution as well as by regular users of the platform.

Workflows are meant for post-processing and can therefore only subscribe to triggers (see below). Workflow use cases range from simple event counting to sophisticated machine learning model inference. Anyone can author workflows.

This abstraction allows developers to reason in terms of a single event, without exposing the complexity of the underlying stream processing, and to use familiar, well-documented standards, both of which lower the barrier to entry.

Events as the Connecting Tissue

Each data processing stage communicates with other stages via events, which allows us to decouple consumers and producers and seamlessly rearrange the stages should the need arise. Each event has an associated category, which determines whether a specific stage can subscribe to or publish that event. There are two public categories for data-related events:

data:observation – a category of publish-only events that can be thought of as side effects of processing the original event, for example, an entity derived from resource attributes in an OpenTelemetry metric packet. Observations are indicated with upward 'publish' arrows in the pipeline diagram. Taps, workflows, and plugins can all produce observations; only specific taps can subscribe to them.
data:trigger – subscribe-only events that are emitted after all mutations have completed. Triggers are indicated with a lightning 'trigger' icon in the pipeline diagram. Only workflows (post-processing logic) can subscribe to triggers, and only specific taps can publish them.

There are five observation event types in the platform:

entity.observed – an FMM entity was discovered while processing some data. It can be a new entity or an update to an existing entity. Each update from the same source fully replaces the previous one.
association.observed – an FMM association was discovered while processing some data. The update logic differs depending on the cardinality of the association.
extension.observed – FMM extension attributes were discovered while processing some data. The target entity must already exist.
measurement.received – a measurement that contributes to a specific FMM metric. These measurements are aggregated into a metric in the Metric aggregation tap; the aggregation logic depends on the metric's content type.
event.received – raises a new FMM event. This event is also processed by the Event processing tap, just like externally ingested events.

There are three trigger event types in the platform, one for each data kind: metric.enriched, event.enriched, and trace.enriched. All three are emitted from the final 'Tag enrichment' tap.

Each event is registered in the platform's knowledge store, so they are easily discoverable. To list all available events, simply use fsoc to query them; for example, to get all triggers:

fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:trigger'" --layer-type=TENANT
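The same pattern should work for the other category as well; for example, to list all observation events (assuming the filter syntax shown above):

fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:observation'" --layer-type=TENANT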

Note that all event types are versioned to allow for evolution and are qualified with the platform solution identifier for isolation. For example, the fully qualified id of the measurement.received event is platform:measurement.received.v1.

Authoring Workflows: A Practical Example

Let's illustrate the above concepts with a straightforward example. Consider a workflow designed to count health rule violations for Kubernetes workloads and APM services. The logic of the workflow can be broken down into several steps:

1. Subscribe to the trigger event
2. Validate event type and entity relevance
3. Publish a measurement event counting violations while retaining severity

Development Tools

Developers can use various tools to aid in workflow development, such as web-based editors or IDEs:

A web-based editor
VS Code with the Kogito editor or the default extension
Any IDE that integrates with the Serverless Workflow org tooling, e.g. IntelliJ IDEA

It's crucial to ensure that expressions and logic are valid, through unit tests and validation against the defined schemas.

To help with that, you can write unit tests using stated; see an example for this workflow. The online JSONata editor can also be a helpful tool for writing your expressions.
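For example, you can paste a sample event payload into the JSONata editor and iterate on an expression such as the following (a hypothetical check; the entity type name is a placeholder, not a confirmed platform identifier):

$count(data.entities[type = 'k8s:workload']) > 0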

A blog post on workflow testing is coming soon!

Step-by-Step Guide

Create the workflow DSL

Provide a unique identifier and a name for your workflow:

id: violations-counter
version: '1.0.0'
specVersion: '0.8'
name: Violations Counter

Find the trigger event

Let's query our trigger using fsoc:

fsoc knowledge get --type=contracts:cloudevent --object-id=platform:event.enriched.v1 --layer-type=TENANT

Output:

type: event.enriched.v1
description: Indicates that an event was enriched with topology tags
dataschema: contracts:jsonSchema/platform:event.v1
category: data:trigger
extensions:
  - contracts:cloudeventExtension/platform:entitytypes
  - contracts:cloudeventExtension/platform:source
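The extensions listed here are themselves knowledge objects, so the same command pattern should let you inspect them, for example (following the object-id convention used above):

fsoc knowledge get --type=contracts:cloudeventExtension --object-id=platform:entitytypes --layer-type=TENANT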

Subscribe to the event

To subscribe to this event, you need to add an event definition and an event state referencing that definition (note that the reference to the event must be qualified with its knowledge type):

events:
  - name: EventReceived
    type: contracts:cloudevent/platform:event.enriched.v1
    kind: consumed
    dataOnly: false
    source: platform
states:
  - name: event-received
    type: event
    onEvents:
      - eventRefs:
          - EventReceived
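The last step of this example publishes a measurement; a produced event can be declared alongside the consumed one in the same events list. Here is a minimal sketch, assuming the platform:measurement.received.v1 id mentioned earlier:

  - name: MeasurementReceived
    type: contracts:cloudevent/platform:measurement.received.v1
    kind: produced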

Inspect the event

Since workflows receive data in JSON format, event data is described with JSON Schema.

Let's look at the JSON schema of this event (referenced in dataschema), so you know what to expect in the workflow:

fsoc knowledge get --type=contracts:jsonSchema --object-id=platform:event.v1 --layer-type=TENANT

Result:

$schema: http://json-schema.org/draft-07/schema#
title: Event
$id: event.v1
type: object
required:
  - entities
  - type
  - timestamp
properties:
  entities:
    type: array
    minItems: 1
    items:
      $ref: '#/definitions/EntityReference'
  type:
    $ref: '#/definitions/TypeReference'
  timestamp:
    type: integer
    description: The timestamp in milliseconds
  spanId:
    type: string
    description: Span id
  traceId:
    type: string
    description: Trace id
  raw:
    type: string
    description: The raw body of the event record
  attributes:
    $ref: '#/definitions/Attributes'
  tags:
    $ref: '#/definitions/Tags'
additionalProperties: false
definitions:
  Tags:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type: string
  Attributes:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type:
        - string
        - number
        - boolean
        - object
        - array
  EntityReference:
    type: object
    required:
      - id
      - type
    properties:
      id:
        type: string
      type:
        $ref: '#/definitions/TypeReference'
    additionalProperties: false
  TypeReference:
    type: string
    description: A fully qualified FMM type reference
    example: k8s:pod

It's straightforward: a single event with one or more entity references. Since dataOnly=false, the payload of the event will be enclosed in the data field, and extension attributes will also be available to the workflow.
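Putting the schema and the dataOnly=false behavior together, the event consumed by the workflow state might look roughly like this (an illustrative sketch in YAML form; the entity id, attribute names, and tag values are made up, and the FMM type reference is assumed from the query below):

data:
  type: alerting:healthrule.violation    # assumed fully qualified FMM event type
  timestamp: 1709582355000
  entities:
    - id: frontend-6f7d9                 # made-up entity id
      type: k8s:workload                 # placeholder entity type
  attributes:
    severity: WARNING                    # hypothetical attribute of the violation
  tags:
    k8s.namespace.name: default          # hypothetical enrichment tag
# ...plus the CloudEvent extension attributes (entitytypes, source) at the top level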
Since we know the exact FMM event type we are interested in, you can also query its definition to understand the attributes the workflow will receive and their semantics:

fsoc knowledge get --type=fmm:event --filter="data.name eq 'healthrule.violation' and data.namespace.name eq 'alerting'" --layer-type=TENANT

Validate event relevance

You'll need to ensure that the event you receive is of the correct FMM event type, and that the referenced entities are relevant. To do this, you can write an expression in JSONata and then use it in an action condition:

functions:
  - name: checkType
    type: expression
    operation:
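The operation holds the JSONata expression itself, which is not included in this excerpt. Purely as an illustration, a check of the FMM event type and the referenced entity types could look like the following (the type names are placeholders, not the platform's confirmed identifiers):

functions:
  - name: checkType
    type: expression
    # Illustrative only: verifies the FMM event type and that at least one
    # referenced entity is a Kubernetes workload or an APM service.
    operation: >-
      data.type = 'alerting:healthrule.violation' and
      $count(data.entities[type = 'k8s:workload' or type = 'apm:service']) > 0

Per the Serverless Workflow specification, such an expression function can then be referenced from an action or state condition (for example as ${ fn:checkType }) so that the rest of the workflow only runs for relevant events.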
Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>","protected":false},"excerpt":{"rendered":"

<\/p>\n

Process Vast Amounts of MELT Data<\/h2>\n

Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like\u2026 Read more on Cisco Blogs<\/a><\/p>\n

\u200b[[“value”:”<\/p>\n

Process Vast Amounts of MELT Data<\/h2>\n

Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry<\/a> to ensure interoperability.<\/p>\n

What sets it apart is its provision of extensions, empowering our partners and customers to tailor every facet of its functionality to their unique needs. Our focus today is unveiling the intricacies of customizations specifically tailored for data processing. It is expected that you have an understanding of the platform basics<\/a>, like Flexible Metadata Model (FMM) and solution development. Let\u2019s dive in!<\/p>\n

Understanding Data Processing Stages<\/h1>\n

The data processing pipeline has various stages that lead to data storage. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and eventually lands in the data store where it can be queried with Unified Query Language (UQL):<\/p>\n

Each stage marked with a gear icon allows customization of specific logic. Furthermore, the platform enables the creation of entirely custom post-processing logic when data can no longer be altered.<\/p>\n

To streamline customization while maintaining flexibility, we are embracing a new approach: workflows, taps, and plugins, utilizing the CNCF Serverless Workflow specification<\/a> with JSONata<\/a> as the default expression language. Since Serverless Workflows are designed using open standards, we are extensively utilizing CloudEvents<\/a> and OpenAPI<\/a> specifications. By leveraging these open standards, we ensure compatibility and ease of development.<\/p>\n

Data processing stages that allow data mutation are called taps<\/em>, and their customizations plugins. <\/em>Each tap declares an input and output JSON schema<\/a> for its plugins. Plugins are expected to produce an output that adheres to the tap\u2019s output schema. A tap is responsible for merging outputs from all its plugins and producing a new event, which is a modified version of an original event. Taps can only be authored by the platform, while plugins can be created by any solution as well as regular users of the platform.<\/p>\n

Workflows <\/em>are meant for post-processing and thus can only subscribe to triggers <\/em>(see below). Workflow use cases range from simple event counting to sophisticated machine learning model inferences. Anyone can author workflows.<\/p>\n

This abstraction allows developers to reason in terms of a single event, without exposing the complexity of the underlying stream processing, and use familiar well documented standards, both of which lower the barrier of entry.<\/p>\n

Events as a connecting tissue<\/h1>\n

Each data processing stage communicates with other stages via events, which allows us to decouple consumers and producers and seamlessly rearrange the stages should the need arise.
\nEach event has an associated category, which determines whether a specific stage can subscribe to or publish that event. There are two public categories for data-related events:<\/p>\n

data:observation<\/strong> \u2013 a category of events with publish-only permissions which can be thought of as side-effects of processing the original event, for example, an entity derived from resource attributes in OpenTelemetry metric packet. Observations are indicated with upward \u2018publish\u2019 arrows in the above diagram. Taps, workflows and plugins can all produce observations. Observations can only be subscribed to by specific taps.
\ndata:trigger<\/strong> \u2013 subscribe-only events that are emitted after all the mutations have completed. Triggers are indicated with a lightning \u2018trigger\u2019 icon in the above diagram. Only workflows (post-processing logic) can subscribe to triggers, and only specific taps can publish them.<\/p>\n

There are five observation event types in the platform:<\/p>\n

entity.observed<\/strong> \u2013 FMM entity was discovered while processing some data. It can be a new entity or an update to an existing entity. Each update from the same source fully replaces the previous one.
\nassociation.observed<\/strong> \u2013 FMM association was discovered while processing some data. Depending on the cardinality of the association the update logic differs
\nextension.observed<\/strong> \u2013 FMM extension attributes were discovered while processing some data. A target entity must already exist.
\nmeasurement.received<\/strong> \u2013 a measurement event which contributes to a specific FMM metric. These measurements will be aggregated into a metric in Metric aggregation tap. Aggregation logic depends on the metric\u2019s content type.
\nevent.received<\/strong> \u2013 raises a new FMM event. This event will also be processed by the Event processing tap, just like externally ingested events.<\/p>\n

There are 3 trigger event types in the platform, one for each data kind: metric.enriched<\/strong>, event.enriched<\/strong>, trace.encriched<\/strong>. All three events are emitted from the final \u2018Tag enrichment\u2019 tap.<\/p>\n

Each event is registered in a platform\u2019s knowledge store, so that they are easily discoverable. To list all available events, simply use fsoc to query them, i.e., to get all triggers:<\/p>\n

fsoc knowledge get –type=contracts:cloudevent –filter=”data.category eq ‘data:trigger'” –layer-type=TENANT<\/p>\n

Note that all event types are versioned to allow for evolution and are qualified with platform solution identifier for isolation. For example, a fully qualified id of measurement.received<\/strong> event is platform:measurement.received.v1<\/strong><\/p>\n

Authoring Workflows: A Practical Example<\/h1>\n

Let\u2019s illustrate the above concepts with a straightforward example. Consider a workflow designed to count health rule violations for Kubernetes workloads and APM services. The logic of the workflow can be broken down into several steps:<\/p>\n

Subscribe to the trigger event
\nValidate event type and entity relevance
\nPublish a measurement event counting violations while retaining severity<\/p>\n

Development Tools<\/h2>\n

Developers can utilize various tools to aid in workflow development, such as web-based editors or IDEs.<\/p>\n

web-based editor<\/a>
\nVS Code with
Kogito editor<\/a> or default extension<\/a>
\nany IDE that integrates with
org<\/a>, i.e. Intellij IDEA<\/p>\n

It\u2019s crucial to ensure expressions and logic are valid through unit tests and validation against defined schemas.<\/p>\n

To aid in that, you can write unit tests by utilizing stated<\/a>, see an example for this workflow<\/a>.Online JSONata editor<\/a> can also be a helpful tool in writing your expressions.<\/p>\n

A blog on workflow testing is coming soon!<\/p>\n

Step by Step Guide<\/h2>\n

Create the workflow DSL<\/h3>\n

Provide a unique identifier and a name for your workflow:<\/p>\n

id: violations-counter
\nversion: ‘1.0.0’
\nspecVersion: ‘0.8’
\nname: Violations Counter<\/p>\n

Find the trigger event<\/h3>\n

Let\u2019s query our trigger using fsoc:<\/p>\n

fsoc knowledge get –type=contracts:cloudevent –object-id=platform:event.enriched.v1<\/strong> –layer-type=TENANT<\/p>\n

Output:<\/p>\n

type: event.enriched.v1
\ndescription: Indicates that an event was enriched with topology tags
\ndataschema<\/strong>: contracts:jsonSchema\/platform:event.v1<\/strong>
\ncategory: data:trigger
\nextensions:
\n – contracts:cloudeventExtension\/platform:entitytypes
\n\u00a0 – contracts:cloudeventExtension\/platform:source <\/p>\n

Subscribe to the event<\/h3>\n

To subscribe to this event, you need to add an event definition<\/a> and event state<\/a> referencing this definition (note a nature of the reference to the event \u2013 it must be qualified with its knowledge type):<\/p>\n

events:
\n\u00a0 – name: EventReceived<\/strong>
\n\u00a0\u00a0\u00a0 type: contracts:cloudevent\/platform:event.enriched.v1<\/strong>
\n \u00a0\u00a0 kind: consumed
\n \u00a0\u00a0 dataOnly: false
\n \u00a0\u00a0 source: platform
\nstates:
\n – name: event-received
\n \u00a0\u00a0 type: event
\n \u00a0\u00a0 onEvents:
\n \u00a0\u00a0\u00a0\u00a0 – eventRefs:
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – EventReceived<\/strong><\/p>\n

Inspect the event<\/h3>\n

Since the data in workflows is received in JSON format, event data is described in JSON schema.<\/p>\n

Let\u2019s look at the JSON schema of this event (referenced in dataschema)<\/strong>, so you know what to expect in our workflow:<\/p>\n

fsoc knowledge get –type=contracts:jsonSchema –object-id=platform:event.v1<\/strong> –layer-type=TENANT
\nResult:
\n$schema: http:\/\/json-schema.org\/draft-07\/schema#
\ntitle: Event
\n$id: event.v1
\ntype: object
\nrequired:
\n – entities
\n – type
\n – timestamp
\nproperties:
\n entities:
\n \u00a0\u00a0 type: array
\n \u00a0\u00a0 minItems: 1
\n \u00a0\u00a0 items:
\n \u00a0\u00a0\u00a0\u00a0 $ref: ‘#\/definitions\/EntityReference’
\n type:
\n \u00a0\u00a0 $ref: ‘#\/definitions\/TypeReference’
\n timestamp:
\n \u00a0\u00a0 type: integer
\n \u00a0\u00a0 description: The timestamp in milliseconds
\n spanId:
\n \u00a0\u00a0 type: string
\n \u00a0\u00a0 description: Span id
\n traceId:
\n \u00a0\u00a0 type: string
\n \u00a0\u00a0 description: Trace id
\n raw:
\n \u00a0\u00a0 type: string
\n \u00a0\u00a0 description: The raw body of the event record
\n attributes:
\n \u00a0\u00a0 $ref: ‘#\/definitions\/Attributes’
\n tags:
\n \u00a0\u00a0 $ref: ‘#\/definitions\/Tags’
\nadditionalProperties: false
\ndefinitions:
\n Tags:
\n \u00a0\u00a0 type: object
\n \u00a0\u00a0 propertyNames:
\n \u00a0\u00a0\u00a0\u00a0 minLength: 1
\n \u00a0\u00a0\u00a0\u00a0 maxLength: 256
\n \u00a0\u00a0 additionalProperties:
\n \u00a0\u00a0\u00a0\u00a0 type: string
\n Attributes:
\n \u00a0\u00a0 type: object
\n \u00a0\u00a0 propertyNames:
\n \u00a0\u00a0\u00a0\u00a0 minLength: 1
\n \u00a0\u00a0\u00a0\u00a0 maxLength: 256
\n \u00a0\u00a0 additionalProperties:
\n \u00a0\u00a0\u00a0\u00a0 type:
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – string
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – number
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – boolean
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – object
\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 – array
\n EntityReference:
\n \u00a0\u00a0 type: object
\n \u00a0\u00a0 required:
\n \u00a0\u00a0\u00a0\u00a0 – id
\n \u00a0\u00a0\u00a0\u00a0 – type
\n \u00a0\u00a0 properties:
\n \u00a0\u00a0\u00a0\u00a0 id:
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 type: string
\n\u00a0\u00a0\u00a0\u00a0\u00a0 type:
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ref: ‘#\/definitions\/TypeReference’
\n\u00a0\u00a0\u00a0\u00a0\u00a0 additionalProperties: false
\n\u00a0 TypeReference:
\n\u00a0\u00a0\u00a0 type: string
\n\u00a0\u00a0\u00a0 description: A fully qualified FMM type reference
\n\u00a0\u00a0\u00a0 example: k8s:pod<\/p>\n

It\u2019s straightforward \u2013 a single event, with one or more entity references. Since dataOnly=false<\/a>, the payload of the event will be enclosed in the data field, and extension attributes will also be available to the workflow.
\nSince we know the exact FMM event type we are interested in, you can also query its definition to understand the attributes that the workflow will be receiving and their semantics:<\/p>\n

fsoc knowledge get –type=fmm:event –filter=”data.name eq “healthrule.violation” and data.namespace.name eq “alerting”” –layer-type=TENANT<\/p>\n

Validate event relevance<\/h3>\n

You\u2019ll need to ensure that the event you receive is of the correct FMM event type, and that referenced entities are relevant. To do this, you can write an expression<\/a> in JSONata and then use it in an action condition:<\/p>\n

functions:
\n\u00a0 – name: checkType<\/strong>
\n \u00a0\u00a0 type: expression
\n \u00a0\u00a0 operation: ]]\u00a0\u00a0Cisco Observability Platform is designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability. See how its provision of extensions let you tailor every facet of its functionality to your unique needs.\u00a0\u00a0
Read More<\/a>\u00a0Cisco Blogs\u00a0<\/p>\n

<\/p>\n","protected":false},"author":0,"featured_media":2650,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-2649","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cisco-learning"],"yoast_head":"\nData Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm - JHC<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm\" \/>\n<meta property=\"og:description\" content=\"Process Vast Amounts of MELT Data Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like\u2026 Read more on Cisco Blogs \u200b[["value":" Process Vast Amounts of MELT Data Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability. What sets it apart is its provision of extensions, empowering our partners and customers to tailor every facet of its functionality to their unique needs. Our focus today is unveiling the intricacies of customizations specifically tailored for data processing. It is expected that you have an understanding of the platform basics, like Flexible Metadata Model (FMM) and solution development. Let\u2019s dive in! Understanding Data Processing Stages The data processing pipeline has various stages that lead to data storage. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and eventually lands in the data store where it can be queried with Unified Query Language (UQL): Each stage marked with a gear icon allows customization of specific logic. Furthermore, the platform enables the creation of entirely custom post-processing logic when data can no longer be altered. To streamline customization while maintaining flexibility, we are embracing a new approach: workflows, taps, and plugins, utilizing the CNCF Serverless Workflow specification with JSONata as the default expression language. Since Serverless Workflows are designed using open standards, we are extensively utilizing CloudEvents and OpenAPI specifications. By leveraging these open standards, we ensure compatibility and ease of development. Data processing stages that allow data mutation are called taps, and their customizations plugins. Each tap declares an input and output JSON schema for its plugins. Plugins are expected to produce an output that adheres to the tap\u2019s output schema. A tap is responsible for merging outputs from all its plugins and producing a new event, which is a modified version of an original event. Taps can only be authored by the platform, while plugins can be created by any solution as well as regular users of the platform. 
Workflows are meant for post-processing and thus can only subscribe to triggers (see below). Workflow use cases range from simple event counting to sophisticated machine learning model inferences. Anyone can author workflows. This abstraction allows developers to reason in terms of a single event, without exposing the complexity of the underlying stream processing, and use familiar well documented standards, both of which lower the barrier of entry. Events as a connecting tissue Each data processing stage communicates with other stages via events, which allows us to decouple consumers and producers and seamlessly rearrange the stages should the need arise. Each event has an associated category, which determines whether a specific stage can subscribe to or publish that event. There are two public categories for data-related events: data:observation \u2013 a category of events with publish-only permissions which can be thought of as side-effects of processing the original event, for example, an entity derived from resource attributes in OpenTelemetry metric packet. Observations are indicated with upward \u2018publish\u2019 arrows in the above diagram. Taps, workflows and plugins can all produce observations. Observations can only be subscribed to by specific taps. data:trigger \u2013 subscribe-only events that are emitted after all the mutations have completed. Triggers are indicated with a lightning \u2018trigger\u2019 icon in the above diagram. Only workflows (post-processing logic) can subscribe to triggers, and only specific taps can publish them. There are five observation event types in the platform: entity.observed \u2013 FMM entity was discovered while processing some data. It can be a new entity or an update to an existing entity. Each update from the same source fully replaces the previous one. association.observed \u2013 FMM association was discovered while processing some data. Depending on the cardinality of the association the update logic differs extension.observed \u2013 FMM extension attributes were discovered while processing some data. A target entity must already exist. measurement.received \u2013 a measurement event which contributes to a specific FMM metric. These measurements will be aggregated into a metric in Metric aggregation tap. Aggregation logic depends on the metric\u2019s content type. event.received \u2013 raises a new FMM event. This event will also be processed by the Event processing tap, just like externally ingested events. There are 3 trigger event types in the platform, one for each data kind: metric.enriched, event.enriched, trace.encriched. All three events are emitted from the final \u2018Tag enrichment\u2019 tap. Each event is registered in a platform\u2019s knowledge store, so that they are easily discoverable. To list all available events, simply use fsoc to query them, i.e., to get all triggers: fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:trigger'" --layer-type=TENANT Note that all event types are versioned to allow for evolution and are qualified with platform solution identifier for isolation. For example, a fully qualified id of measurement.received event is platform:measurement.received.v1 Authoring Workflows: A Practical Example Let\u2019s illustrate the above concepts with a straightforward example. Consider a workflow designed to count health rule violations for Kubernetes workloads and APM services. 
The logic of the workflow can be broken down into several steps: Subscribe to the trigger event Validate event type and entity relevance Publish a measurement event counting violations while retaining severity Development Tools Developers can utilize various tools to aid in workflow development, such as web-based editors or IDEs. web-based editor VS Code with Kogito editor or default extension any IDE that integrates with org, i.e. Intellij IDEA It\u2019s crucial to ensure expressions and logic are valid through unit tests and validation against defined schemas. To aid in that, you can write unit tests by utilizing stated, see an example for this workflow.Online JSONata editor can also be a helpful tool in writing your expressions. A blog on workflow testing is coming soon! Step by Step Guide Create the workflow DSL Provide a unique identifier and a name for your workflow: id: violations-counter version: '1.0.0' specVersion: '0.8' name: Violations Counter Find the trigger event Let\u2019s query our trigger using fsoc: fsoc knowledge get --type=contracts:cloudevent --object-id=platform:event.enriched.v1 --layer-type=TENANT Output: type: event.enriched.v1 description: Indicates that an event was enriched with topology tags dataschema: contracts:jsonSchema\/platform:event.v1 category: data:trigger extensions: - contracts:cloudeventExtension\/platform:entitytypes \u00a0 - contracts:cloudeventExtension\/platform:source Subscribe to the event To subscribe to this event, you need to add an event definition and event state referencing this definition (note a nature of the reference to the event \u2013 it must be qualified with its knowledge type): events: \u00a0 - name: EventReceived \u00a0\u00a0\u00a0 type: contracts:cloudevent\/platform:event.enriched.v1 \u00a0\u00a0 kind: consumed \u00a0\u00a0 dataOnly: false \u00a0\u00a0 source: platform states: - name: event-received \u00a0\u00a0 type: event \u00a0\u00a0 onEvents: \u00a0\u00a0\u00a0\u00a0 - eventRefs: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - EventReceived Inspect the event Since the data in workflows is received in JSON format, event data is described in JSON schema. 
Let\u2019s look at the JSON schema of this event (referenced in dataschema), so you know what to expect in our workflow: fsoc knowledge get --type=contracts:jsonSchema --object-id=platform:event.v1 --layer-type=TENANT Result: $schema: http:\/\/json-schema.org\/draft-07\/schema# title: Event $id: event.v1 type: object required: - entities - type - timestamp properties: entities: \u00a0\u00a0 type: array \u00a0\u00a0 minItems: 1 \u00a0\u00a0 items: \u00a0\u00a0\u00a0\u00a0 $ref: '#\/definitions\/EntityReference' type: \u00a0\u00a0 $ref: '#\/definitions\/TypeReference' timestamp: \u00a0\u00a0 type: integer \u00a0\u00a0 description: The timestamp in milliseconds spanId: \u00a0\u00a0 type: string \u00a0\u00a0 description: Span id traceId: \u00a0\u00a0 type: string \u00a0\u00a0 description: Trace id raw: \u00a0\u00a0 type: string \u00a0\u00a0 description: The raw body of the event record attributes: \u00a0\u00a0 $ref: '#\/definitions\/Attributes' tags: \u00a0\u00a0 $ref: '#\/definitions\/Tags' additionalProperties: false definitions: Tags: \u00a0\u00a0 type: object \u00a0\u00a0 propertyNames: \u00a0\u00a0\u00a0\u00a0 minLength: 1 \u00a0\u00a0\u00a0\u00a0 maxLength: 256 \u00a0\u00a0 additionalProperties: \u00a0\u00a0\u00a0\u00a0 type: string Attributes: \u00a0\u00a0 type: object \u00a0\u00a0 propertyNames: \u00a0\u00a0\u00a0\u00a0 minLength: 1 \u00a0\u00a0\u00a0\u00a0 maxLength: 256 \u00a0\u00a0 additionalProperties: \u00a0\u00a0\u00a0\u00a0 type: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - string \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - number \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - boolean \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - object \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - array EntityReference: \u00a0\u00a0 type: object \u00a0\u00a0 required: \u00a0\u00a0\u00a0\u00a0 - id \u00a0\u00a0\u00a0\u00a0 - type \u00a0\u00a0 properties: \u00a0\u00a0\u00a0\u00a0 id: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 type: string \u00a0\u00a0\u00a0\u00a0\u00a0 type: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 $ref: '#\/definitions\/TypeReference' \u00a0\u00a0\u00a0\u00a0\u00a0 additionalProperties: false \u00a0 TypeReference: \u00a0\u00a0\u00a0 type: string \u00a0\u00a0\u00a0 description: A fully qualified FMM type reference \u00a0\u00a0\u00a0 example: k8s:pod It\u2019s straightforward \u2013 a single event, with one or more entity references. Since dataOnly=false, the payload of the event will be enclosed in the data field, and extension attributes will also be available to the workflow. Since we know the exact FMM event type we are interested in, you can also query its definition to understand the attributes that the workflow will be receiving and their semantics: fsoc knowledge get --type=fmm:event --filter="data.name eq "healthrule.violation" and data.namespace.name eq "alerting"" --layer-type=TENANT Validate event relevance You\u2019ll need to ensure that the event you receive is of the correct FMM event type, and that referenced entities are relevant. To do this, you can write an expression in JSONata and then use it in an action condition: functions: \u00a0 - name: checkType \u00a0\u00a0 type: expression \u00a0\u00a0 operation: ]]\u00a0\u00a0Cisco Observability Platform is designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability. 
See how its provision of extensions let you tailor every facet of its functionality to your unique needs.\u00a0\u00a0Read More\u00a0Cisco Blogs\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\" \/>\n<meta property=\"og:site_name\" content=\"JHC\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-05T12:58:15+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"1\" \/>\n\t<meta property=\"og:image:height\" content=\"1\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\"},\"author\":{\"name\":\"\",\"@id\":\"\"},\"headline\":\"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm\",\"datePublished\":\"2024-03-05T12:58:15+00:00\",\"dateModified\":\"2024-03-05T12:58:15+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\"},\"wordCount\":1545,\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif\",\"articleSection\":[\"Cisco: Learning\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\",\"name\":\"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm - 
JHC\",\"isPartOf\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif\",\"datePublished\":\"2024-03-05T12:58:15+00:00\",\"dateModified\":\"2024-03-05T12:58:15+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif\",\"width\":1,\"height\":1},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/jacksonholdingcompany.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#website\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"name\":\"JHC\",\"description\":\"Your Business Is Our Business\",\"publisher\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/jacksonholdingcompany.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#organization\",\"name\":\"JHC\",\"url\":\"https:\/\/jacksonholdingcompany.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"contentUrl\":\"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png\",\"width\":452,\"height\":149,\"caption\":\"JHC\"},\"image\":{\"@id\":\"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm - JHC","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/","og_locale":"en_US","og_type":"article","og_title":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm","og_description":"Process Vast Amounts of MELT Data Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like\u2026 Read more on Cisco Blogs \u200b[[\"value\":\" Process Vast Amounts of MELT Data Cisco Observability Platform s designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability. What sets it apart is its provision of extensions, empowering our partners and customers to tailor every facet of its functionality to their unique needs. Our focus today is unveiling the intricacies of customizations specifically tailored for data processing. It is expected that you have an understanding of the platform basics, like Flexible Metadata Model (FMM) and solution development. Let\u2019s dive in! Understanding Data Processing Stages The data processing pipeline has various stages that lead to data storage. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and eventually lands in the data store where it can be queried with Unified Query Language (UQL): Each stage marked with a gear icon allows customization of specific logic. Furthermore, the platform enables the creation of entirely custom post-processing logic when data can no longer be altered. To streamline customization while maintaining flexibility, we are embracing a new approach: workflows, taps, and plugins, utilizing the CNCF Serverless Workflow specification with JSONata as the default expression language. Since Serverless Workflows are designed using open standards, we are extensively utilizing CloudEvents and OpenAPI specifications. By leveraging these open standards, we ensure compatibility and ease of development. Data processing stages that allow data mutation are called taps, and their customizations plugins. Each tap declares an input and output JSON schema for its plugins. Plugins are expected to produce an output that adheres to the tap\u2019s output schema. A tap is responsible for merging outputs from all its plugins and producing a new event, which is a modified version of an original event. Taps can only be authored by the platform, while plugins can be created by any solution as well as regular users of the platform. Workflows are meant for post-processing and thus can only subscribe to triggers (see below). Workflow use cases range from simple event counting to sophisticated machine learning model inferences. Anyone can author workflows. This abstraction allows developers to reason in terms of a single event, without exposing the complexity of the underlying stream processing, and use familiar well documented standards, both of which lower the barrier of entry. 
Events as a connecting tissue Each data processing stage communicates with other stages via events, which allows us to decouple consumers and producers and seamlessly rearrange the stages should the need arise. Each event has an associated category, which determines whether a specific stage can subscribe to or publish that event. There are two public categories for data-related events: data:observation \u2013 a category of events with publish-only permissions which can be thought of as side-effects of processing the original event, for example, an entity derived from resource attributes in OpenTelemetry metric packet. Observations are indicated with upward \u2018publish\u2019 arrows in the above diagram. Taps, workflows and plugins can all produce observations. Observations can only be subscribed to by specific taps. data:trigger \u2013 subscribe-only events that are emitted after all the mutations have completed. Triggers are indicated with a lightning \u2018trigger\u2019 icon in the above diagram. Only workflows (post-processing logic) can subscribe to triggers, and only specific taps can publish them. There are five observation event types in the platform: entity.observed \u2013 FMM entity was discovered while processing some data. It can be a new entity or an update to an existing entity. Each update from the same source fully replaces the previous one. association.observed \u2013 FMM association was discovered while processing some data. Depending on the cardinality of the association the update logic differs extension.observed \u2013 FMM extension attributes were discovered while processing some data. A target entity must already exist. measurement.received \u2013 a measurement event which contributes to a specific FMM metric. These measurements will be aggregated into a metric in Metric aggregation tap. Aggregation logic depends on the metric\u2019s content type. event.received \u2013 raises a new FMM event. This event will also be processed by the Event processing tap, just like externally ingested events. There are 3 trigger event types in the platform, one for each data kind: metric.enriched, event.enriched, trace.encriched. All three events are emitted from the final \u2018Tag enrichment\u2019 tap. Each event is registered in a platform\u2019s knowledge store, so that they are easily discoverable. To list all available events, simply use fsoc to query them, i.e., to get all triggers: fsoc knowledge get --type=contracts:cloudevent --filter=\"data.category eq 'data:trigger'\" --layer-type=TENANT Note that all event types are versioned to allow for evolution and are qualified with platform solution identifier for isolation. For example, a fully qualified id of measurement.received event is platform:measurement.received.v1 Authoring Workflows: A Practical Example Let\u2019s illustrate the above concepts with a straightforward example. Consider a workflow designed to count health rule violations for Kubernetes workloads and APM services. The logic of the workflow can be broken down into several steps: Subscribe to the trigger event Validate event type and entity relevance Publish a measurement event counting violations while retaining severity Development Tools Developers can utilize various tools to aid in workflow development, such as web-based editors or IDEs. web-based editor VS Code with Kogito editor or default extension any IDE that integrates with org, i.e. 
Intellij IDEA It\u2019s crucial to ensure expressions and logic are valid through unit tests and validation against defined schemas. To aid in that, you can write unit tests by utilizing stated, see an example for this workflow.Online JSONata editor can also be a helpful tool in writing your expressions. A blog on workflow testing is coming soon! Step by Step Guide Create the workflow DSL Provide a unique identifier and a name for your workflow: id: violations-counter version: '1.0.0' specVersion: '0.8' name: Violations Counter Find the trigger event Let\u2019s query our trigger using fsoc: fsoc knowledge get --type=contracts:cloudevent --object-id=platform:event.enriched.v1 --layer-type=TENANT Output: type: event.enriched.v1 description: Indicates that an event was enriched with topology tags dataschema: contracts:jsonSchema\/platform:event.v1 category: data:trigger extensions: - contracts:cloudeventExtension\/platform:entitytypes \u00a0 - contracts:cloudeventExtension\/platform:source Subscribe to the event To subscribe to this event, you need to add an event definition and event state referencing this definition (note a nature of the reference to the event \u2013 it must be qualified with its knowledge type): events: \u00a0 - name: EventReceived \u00a0\u00a0\u00a0 type: contracts:cloudevent\/platform:event.enriched.v1 \u00a0\u00a0 kind: consumed \u00a0\u00a0 dataOnly: false \u00a0\u00a0 source: platform states: - name: event-received \u00a0\u00a0 type: event \u00a0\u00a0 onEvents: \u00a0\u00a0\u00a0\u00a0 - eventRefs: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - EventReceived Inspect the event Since the data in workflows is received in JSON format, event data is described in JSON schema. Let\u2019s look at the JSON schema of this event (referenced in dataschema), so you know what to expect in our workflow: fsoc knowledge get --type=contracts:jsonSchema --object-id=platform:event.v1 --layer-type=TENANT Result: $schema: http:\/\/json-schema.org\/draft-07\/schema# title: Event $id: event.v1 type: object required: - entities - type - timestamp properties: entities: \u00a0\u00a0 type: array \u00a0\u00a0 minItems: 1 \u00a0\u00a0 items: \u00a0\u00a0\u00a0\u00a0 $ref: '#\/definitions\/EntityReference' type: \u00a0\u00a0 $ref: '#\/definitions\/TypeReference' timestamp: \u00a0\u00a0 type: integer \u00a0\u00a0 description: The timestamp in milliseconds spanId: \u00a0\u00a0 type: string \u00a0\u00a0 description: Span id traceId: \u00a0\u00a0 type: string \u00a0\u00a0 description: Trace id raw: \u00a0\u00a0 type: string \u00a0\u00a0 description: The raw body of the event record attributes: \u00a0\u00a0 $ref: '#\/definitions\/Attributes' tags: \u00a0\u00a0 $ref: '#\/definitions\/Tags' additionalProperties: false definitions: Tags: \u00a0\u00a0 type: object \u00a0\u00a0 propertyNames: \u00a0\u00a0\u00a0\u00a0 minLength: 1 \u00a0\u00a0\u00a0\u00a0 maxLength: 256 \u00a0\u00a0 additionalProperties: \u00a0\u00a0\u00a0\u00a0 type: string Attributes: \u00a0\u00a0 type: object \u00a0\u00a0 propertyNames: \u00a0\u00a0\u00a0\u00a0 minLength: 1 \u00a0\u00a0\u00a0\u00a0 maxLength: 256 \u00a0\u00a0 additionalProperties: \u00a0\u00a0\u00a0\u00a0 type: \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - string \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - number \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - boolean \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - object \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 - array EntityReference: \u00a0\u00a0 type: object \u00a0\u00a0 required: \u00a0\u00a0\u00a0\u00a0 - id \u00a0\u00a0\u00a0\u00a0 - type 
Inspect the event

Since workflows receive data in JSON format, event data is described with a JSON schema. Let's look at the JSON schema of this event (referenced in dataschema), so we know what to expect in our workflow:

fsoc knowledge get --type=contracts:jsonSchema --object-id=platform:event.v1 --layer-type=TENANT

Result:

$schema: http://json-schema.org/draft-07/schema#
title: Event
$id: event.v1
type: object
required:
  - entities
  - type
  - timestamp
properties:
  entities:
    type: array
    minItems: 1
    items:
      $ref: '#/definitions/EntityReference'
  type:
    $ref: '#/definitions/TypeReference'
  timestamp:
    type: integer
    description: The timestamp in milliseconds
  spanId:
    type: string
    description: Span id
  traceId:
    type: string
    description: Trace id
  raw:
    type: string
    description: The raw body of the event record
  attributes:
    $ref: '#/definitions/Attributes'
  tags:
    $ref: '#/definitions/Tags'
additionalProperties: false
definitions:
  Tags:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type: string
  Attributes:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type:
        - string
        - number
        - boolean
        - object
        - array
  EntityReference:
    type: object
    required:
      - id
      - type
    properties:
      id:
        type: string
      type:
        $ref: '#/definitions/TypeReference'
    additionalProperties: false
  TypeReference:
    type: string
    description: A fully qualified FMM type reference
    example: k8s:pod

It's straightforward: a single event, with one or more entity references. Since dataOnly=false, the payload of the event will be enclosed in the data field, and extension attributes will also be available to the workflow. Since we know the exact FMM event type we are interested in, we can also query its definition to understand the attributes the workflow will receive and their semantics:

fsoc knowledge get --type=fmm:event --filter='data.name eq "healthrule.violation" and data.namespace.name eq "alerting"' --layer-type=TENANT

Validate event relevance

You'll need to ensure that the event you receive is of the correct FMM event type, and that the referenced entities are relevant. To do this, you can write an expression in JSONata and then use it in an action condition:

functions:
  - name: checkType
    type: expression
    operation:
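As a rough sketch of what the checkType expression could look like, assuming the fully qualified FMM type of the health rule violation event is alerting:healthrule.violation (namespace:name, as in the k8s:pod example above) and that the input matches the schema shown earlier, the JSONata might be along these lines:

data.type = "alerting:healthrule.violation" and $count(data.entities) > 0

Pasting this expression together with a small sample event into the online JSONata editor mentioned earlier is a quick way to sanity-check it before wiring it into the workflow.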
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#article","isPartOf":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/"},"author":{"name":"","@id":""},"headline":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm","datePublished":"2024-03-05T12:58:15+00:00","dateModified":"2024-03-05T12:58:15+00:00","mainEntityOfPage":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/"},"wordCount":1545,"publisher":{"@id":"https:\/\/jacksonholdingcompany.com\/#organization"},"image":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage"},"thumbnailUrl":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif","articleSection":["Cisco: Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/","url":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/","name":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm - 
JHC","isPartOf":{"@id":"https:\/\/jacksonholdingcompany.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage"},"image":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage"},"thumbnailUrl":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif","datePublished":"2024-03-05T12:58:15+00:00","dateModified":"2024-03-05T12:58:15+00:00","breadcrumb":{"@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#primaryimage","url":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif","contentUrl":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2024\/03\/16601628-DNDmaM.gif","width":1,"height":1},{"@type":"BreadcrumbList","@id":"https:\/\/jacksonholdingcompany.com\/data-processing-in-cisco-observability-platform-a-step-by-step-guide-anna-bokhan-dilawari-on-march-4-2024-at-659-pm\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/jacksonholdingcompany.com\/"},{"@type":"ListItem","position":2,"name":"Data Processing in Cisco Observability Platform \u2013 A Step-by-Step Guide Anna Bokhan-Dilawari on March 4, 2024 at 6:59 pm"}]},{"@type":"WebSite","@id":"https:\/\/jacksonholdingcompany.com\/#website","url":"https:\/\/jacksonholdingcompany.com\/","name":"JHC","description":"Your Business Is Our Business","publisher":{"@id":"https:\/\/jacksonholdingcompany.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/jacksonholdingcompany.com\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/jacksonholdingcompany.com\/#organization","name":"JHC","url":"https:\/\/jacksonholdingcompany.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/","url":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png","contentUrl":"https:\/\/jacksonholdingcompany.com\/wp-content\/uploads\/2023\/07\/cropped-cropped-jHC-white-500-\u00d7-200-px-1-1.png","width":452,"height":149,"caption":"JHC"},"image":{"@id":"https:\/\/jacksonholdingcompany.com\/#\/schema\/logo\/image\/"}}]}},"_links":{"self":[{"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/posts\/2649","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/comments?post=2649"}],"version-history":[{"count":0,"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/posts\/2649\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/media\/2650"}],"wp:attachment":[{"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/media?parent=2649"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/categories?post=2649"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jacksonholdingcompany.com\/wp-json\/wp\/v2\/tags?post=2649"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}