Cisco Observability Platform is designed to ingest and process vast amounts of MELT (Metrics, Events, Logs and Traces) data. It is built on top of open standards like OpenTelemetry to ensure interoperability.

What sets it apart is its provision of extensions, empowering our partners and customers to tailor every facet of its functionality to their unique needs. Our focus today is unveiling the intricacies of the customizations tailored for data processing. It is expected that you have an understanding of the platform basics, like the Flexible Metadata Model (FMM) and solution development. Let's dive in!

The data processing pipeline has various stages that lead to data storage. As MELT data moves through the pipeline, it is processed, transformed, and enriched, and it eventually lands in the data store, where it can be queried with Unified Query Language (UQL).

Each stage marked with a gear icon allows customization of specific logic. Furthermore, the platform enables the creation of entirely custom post-processing logic once data can no longer be altered.

To streamline customization while maintaining flexibility, we are embracing a new approach: workflows, taps, and plugins, built on the CNCF Serverless Workflow specification with JSONata as the default expression language. Since Serverless Workflows are designed around open standards, we make extensive use of the CloudEvents and OpenAPI specifications.
By leveraging these open standards, we ensure compatibility and ease of development.

Data processing stages that allow data mutation are called taps, and their customizations are called plugins. Each tap declares an input and an output JSON Schema for its plugins. Plugins are expected to produce output that adheres to the tap's output schema. A tap is responsible for merging the outputs from all of its plugins and producing a new event, a modified version of the original. Taps can only be authored by the platform, while plugins can be created by any solution as well as by regular users of the platform.

Workflows are meant for post-processing and thus can only subscribe to triggers (see below). Workflow use cases range from simple event counting to sophisticated machine-learning model inference. Anyone can author workflows.

This abstraction allows developers to reason in terms of a single event, without being exposed to the complexity of the underlying stream processing, and to use familiar, well-documented standards; both lower the barrier to entry.

Each data processing stage communicates with other stages via events, which allows us to decouple consumers and producers and to seamlessly rearrange the stages should the need arise.

data:observation – a category of events with publish-only permissions, which can be thought of as side effects of processing the original event; for example, an entity derived from resource attributes in an OpenTelemetry metric packet. Observations are indicated with upward 'publish' arrows in the diagram above. Taps, workflows and plugins can all produce observations. Observations can only be subscribed to by specific taps. There are five observation event types in the platform:

entity.observed – an FMM entity was discovered while processing some data. It can be a new entity or an update to an existing entity.
Each update from the same source fully replaces the previous one.

There are three trigger event types in the platform, one for each data kind: metric.enriched, event.enriched, and trace.enriched. All three are emitted from the final 'Tag enrichment' tap.

Each event is registered in the platform's knowledge store, so that it is easily discoverable. To list all available events, query them with fsoc; for example, to get all triggers:

fsoc knowledge get --type=contracts:cloudevent --filter="data.category eq 'data:trigger'" --layer-type=TENANT

Note that all event types are versioned to allow for evolution, and are qualified with the platform solution identifier for isolation. For example, the fully qualified id of the measurement.received event is platform:measurement.received.v1.

Let's illustrate the above concepts with a straightforward example. Consider a workflow designed to count health rule violations for Kubernetes workloads and APM services. The logic of the workflow can be broken down into several steps:

- Subscribe to the trigger event

Developers can utilize various tools to aid in workflow development, such as web-based editors or IDEs:

- web-based editor

It's crucial to ensure expressions and logic are valid, through unit tests and validation against the defined schemas. To aid in that, you can write unit tests utilizing stated; see an example for this workflow. The online JSONata editor can also be a helpful tool for writing your expressions.

A blog on workflow testing is coming soon!

Provide a unique identifier and a name for your workflow:

id: violations-counter
Let's query our trigger using fsoc:

fsoc knowledge get --type=contracts:cloudevent --object-id=platform:event.enriched.v1 --layer-type=TENANT

Output:

type: event.enriched.v1
To subscribe to this event, you need to add an event definition and an event state referencing that definition (note the nature of the reference to the event: it must be qualified with its knowledge type):

events:
Since workflows receive data in JSON format, event data is described with JSON Schema. Let's look at the JSON schema of this event (referenced in dataschema), so you know what to expect in the workflow:

fsoc knowledge get --type=contracts:jsonSchema --object-id=platform:event.v1 --layer-type=TENANT
It's straightforward: a single event, with one or more entity references. Since dataOnly=false, the payload of the event will be enclosed in the data field, and extension attributes will also be available to the workflow.

fsoc knowledge get --type=fmm:event --filter="data.name eq 'healthrule.violation' and data.namespace.name eq 'alerting'" --layer-type=TENANT

You'll need to ensure that the event you receive is of the correct FMM event type, and that the referenced entities are relevant. To do this, you can write an expression in JSONata and then use it in an action condition:

functions:
<\/p>\n","protected":false},"author":0,"featured_media":2650,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-2649","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cisco-learning"],"yoast_head":"\nProcess Vast Amounts of MELT Data<\/h2>\n
Understanding Data Processing Stages

Events as connective tissue

Each event has an associated category, which determines whether a specific stage can subscribe to or publish that event. There are two public categories for data-related events:

data:trigger – subscribe-only events that are emitted after all the mutations have completed. Triggers are indicated with a lightning 'trigger' icon in the diagram above. Only workflows (post-processing logic) can subscribe to triggers, and only specific taps can publish them.

association.observed – an FMM association was discovered while processing some data. The update logic differs depending on the cardinality of the association.

extension.observed – FMM extension attributes were discovered while processing some data. The target entity must already exist.

measurement.received – a measurement event that contributes to a specific FMM metric. These measurements are aggregated into a metric in the Metric aggregation tap. Aggregation logic depends on the metric's content type.

event.received – raises a new FMM event. This event will also be processed by the Event processing tap, just like externally ingested events.

Authoring Workflows: A Practical Example
- Validate event type and entity relevance
- Publish a measurement event counting violations while retaining severity

Development Tools

- VS Code with the Kogito editor or the default extension
- any IDE that integrates with org, e.g. IntelliJ IDEA

Step by Step Guide
Create the workflow DSL

id: violations-counter
version: '1.0.0'
specVersion: '0.8'
name: Violations Counter

Find the trigger event
description: Indicates that an event was enriched with topology tags
dataschema: contracts:jsonSchema/platform:event.v1
category: data:trigger
extensions:
  - contracts:cloudeventExtension/platform:entitytypes
  - contracts:cloudeventExtension/platform:source
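Putting this metadata together, an incoming event.enriched.v1 CloudEvent might look roughly like the following. All concrete values are invented for illustration, and the exact envelope shape and extension encoding are assumptions, not an actual platform payload:

```yaml
# Illustrative CloudEvent envelope for platform:event.enriched.v1 (values invented)
specversion: '1.0'
id: 0f0e0d-example
source: platform
type: platform:event.enriched.v1
dataschema: contracts:jsonSchema/platform:event.v1
entitytypes: k8s:workload       # extension attribute; exact encoding is assumed
data:                           # payload is wrapped in data because dataOnly=false
  type: alerting:healthrule.violation
  timestamp: 1700000000000
  entities:
    - id: example-workload-id   # hypothetical entity reference
      type: k8s:workload
```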
Subscribe to the event
events:
  - name: EventReceived
    type: contracts:cloudevent/platform:event.enriched.v1
    kind: consumed
    dataOnly: false
    source: platform
states:
  - name: event-received
    type: event
    onEvents:
      - eventRefs:
          - EventReceived

Inspect the event
Result:

$schema: http://json-schema.org/draft-07/schema#
title: Event
$id: event.v1
type: object
required:
  - entities
  - type
  - timestamp
properties:
  entities:
    type: array
    minItems: 1
    items:
      $ref: '#/definitions/EntityReference'
  type:
    $ref: '#/definitions/TypeReference'
  timestamp:
    type: integer
    description: The timestamp in milliseconds
  spanId:
    type: string
    description: Span id
  traceId:
    type: string
    description: Trace id
  raw:
    type: string
    description: The raw body of the event record
  attributes:
    $ref: '#/definitions/Attributes'
  tags:
    $ref: '#/definitions/Tags'
additionalProperties: false
definitions:
  Tags:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type: string
  Attributes:
    type: object
    propertyNames:
      minLength: 1
      maxLength: 256
    additionalProperties:
      type:
        - string
        - number
        - boolean
        - object
        - array
  EntityReference:
    type: object
    required:
      - id
      - type
    properties:
      id:
        type: string
      type:
        $ref: '#/definitions/TypeReference'
    additionalProperties: false
  TypeReference:
    type: string
    description: A fully qualified FMM type reference
    example: k8s:pod
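As an illustration, here is an invented event payload that conforms to this schema; the attribute and tag names below are hypothetical, not part of the schema itself:

```yaml
# Invented payload conforming to event.v1 (attribute/tag names are illustrative)
entities:
  - id: example-workload-id
    type: k8s:workload
type: alerting:healthrule.violation
timestamp: 1700000000000
attributes:
  severity: critical
tags:
  env: prod
```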
Since we know the exact FMM event type we are interested in, you can also query its definition to understand the attributes the workflow will receive and their semantics:

Validate event relevance
functions:
  - name: checkType
    type: expression
    operation:
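The original operation expression is not shown in this capture. Purely as an illustration (this is not the article's actual expression; the entity type names are assumptions), a JSONata relevance check against the event.v1 payload could look like:

```yaml
functions:
  - name: checkType
    type: expression
    # Illustrative JSONata predicate: true only for healthrule.violation events
    # that reference at least one k8s:workload or apm:service entity.
    operation: >-
      data.type = 'alerting:healthrule.violation' and
      $count(data.entities[type in ['k8s:workload', 'apm:service']]) > 0
```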