diff --git a/stream-connectors/CONTRIBUTE.md b/stream-connectors/CONTRIBUTE.md index 523558243b9..22abb20d32d 100644 --- a/stream-connectors/CONTRIBUTE.md +++ b/stream-connectors/CONTRIBUTE.md @@ -12,7 +12,7 @@ You can work on Stream Connectors - Update an existing stream connector - [Fix issues](https://github.com/centreon/centreon-stream-connector-scripts/issues) -You can improve our Lua modules +You can improve our Lua modules - Add a new module - Comment it @@ -22,7 +22,7 @@ You can improve our Lua modules - Update the documentation (if it changes the input and/or output of a method) - Update usage examples if there are any and if they are impacted by the change -### For everybody +### For everybody Since we are not all found of code, there are still ways to be part of this project @@ -35,11 +35,11 @@ If you want to work on our LUA modules, you must follow the coding style provide [Coding style guidelines](https://github.com/luarocks/lua-style-guide) While it is mandatory to follow those guidelines for modules, they will not be enforced on community powered Stream Connectors scripts. -It is however recommened to follow them as much as possible. +It is however recommended to follow them as much as possible. ## Documentations -When creating a module you must comment your methods as follow +When creating a module you must comment your methods as follows ```lua --- This is a local function that does things @@ -51,6 +51,6 @@ local function get_age(first_name, last_name) end ``` -You should comment complicated or long code blocks to help people review your code. +You should comment complicated or long code blocks to help people review your code. 
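The CONTRIBUTE.md hunk above mandates LDoc-style comments (`@param`/`@return` tags) on module methods. As a slightly fuller, self-contained sketch of that convention (the function and its names are illustrative, not part of the repository):

```lua
--- Build a lowercase "first.last" display name for a contact.
-- @param first_name (string) the first name
-- @param last_name (string) the last name
-- @return display_name (string) the concatenated display name
local function build_display_name(first_name, last_name)
  -- normalize case so the generated name is predictable
  return string.lower(first_name) .. "." .. string.lower(last_name)
end

print(build_display_name("Jane", "Doe")) -- jane.doe
```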
It is also required to create or update the module documentation for a more casual reading to help people use your module in their Stream Connector diff --git a/stream-connectors/README.md b/stream-connectors/README.md index 62589178b3b..0bfd9219ab8 100644 --- a/stream-connectors/README.md +++ b/stream-connectors/README.md @@ -1,46 +1,45 @@ +# Centreon Stream Connectors + [![Contributors][contributors-shield]][contributors-url] [![Stars][stars-shield]][stars-url] [![Forks][forks-shield]][forks-url] [![Issues][issues-shield]][issues-url] - -# Centreon Stream Connectors # - Centreon stream connectors are LUA scripts that help you send your Centreon monitoring datas to your favorite tools -# Stream connectors +## Stream connectors Available scripts Here is a list of the Centreon powered scripts: -| Software | Connectors | Documentations | -| -------- | ---------- | -------------- | -| BSM | [BSM Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/bsm) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-hp-bsm.html) | +| Software | Connectors | Documentations | +| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| BSM | [BSM Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/bsm) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-hp-bsm.html) | | ElasticSearch | [ElasticSearch Stream Connectors](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/elasticsearch) | [Events 
Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-elastic-events.html), [Metrics Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-elastic-metrics.html) | -| InfluxDB | [InfluxDB Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/influxdb) | WIP | -| NDO | [NDO Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/ndo) | [Documentation](https://docs.centreon.com/current/en/integrations/stream-connectors/ndo.html) | -| OMI | [OMI Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/omi) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-hp-omi.html) | -| Opsgenie | [Opsgenie Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/opsgenie) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-opsgenie.html) | -| PagerDuty | [PagerDuty Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/pagerduty) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-pagerduty-events.html) | -| Prometheus | [Prometheus Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/prometheus) | WIP | -| ServiceNow | [ServiceNow Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/servicenow) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-service-now-events.html) | -| Signl4 | [Signl4 Stream Connectors](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/signl4) | [Events 
Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-signl4-events.html) | -| Splunk | [Splunk Stream Connectors](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/splunk) | [Events Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-splunk-events.html), [Metrics Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-splunk-metrics.html) | -| Warp10 | [Warp10 Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/warp10) | [Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-warp10.html) | +| InfluxDB | [InfluxDB Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/influxdb) | WIP | +| NDO | [NDO Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/ndo) | [Documentation](https://docs.centreon.com/current/en/integrations/stream-connectors/ndo.html) | +| OMI | [OMI Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/omi) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-hp-omi.html) | +| Opsgenie | [Opsgenie Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/opsgenie) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-opsgenie.html) | +| PagerDuty | [PagerDuty Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/pagerduty) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-pagerduty-events.html) | +| Prometheus | [Prometheus Stream 
Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/prometheus) | WIP | +| ServiceNow | [ServiceNow Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/servicenow) | [Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-service-now-events.html) | +| Signl4 | [Signl4 Stream Connectors](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/signl4) | [Events Documentation](https://docs.centreon.com/current/en/integrations/event-management/sc-signl4-events.html) | +| Splunk | [Splunk Stream Connectors](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/splunk) | [Events Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-splunk-events.html), [Metrics Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-splunk-metrics.html) | +| Warp10 | [Warp10 Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/warp10) | [Documentation](https://docs.centreon.com/current/en/integrations/data-analytics/sc-warp10.html) | +| Kafka | [Kafka stream connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/centreon-certified/kafka) | [Documentation](https://docs.centreon.com/docs/integrations/data-analytics/sc-kafka-events/) | Here is a list of the Community powered scripts -| Software | Connectors | Documentations | Contributors | Organizations | -| -------- | ---------- | -------------- | ------------ | ------------- | +| Software | Connectors | Documentations | Contributors | Organizations | +| -------- | --------------------------------------------------------------------------------------------------------------------------------- | 
------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- | --------------------------------------- | | Canopsis | [Canopsis Stream Connector](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/community-powered/canopsis) | [Documentation](https://github.com/centreon/centreon-stream-connector-scripts/tree/master/community-powered/canopsis/README.md) | [ppremont-capensis](https://github.com/ppremont-capensis) | [Capensis](https://www.capensis.fr/en/) | -# Contribute +## Contribute If you wish to help us improve this project, feel free to read the [Contribute.md](https://github.com/centreon/centreon-stream-connector-scripts/blob/master/CONTRIBUTE.md) file. - [contributors-shield]: https://img.shields.io/github/contributors/centreon/centreon-stream-connector-scripts?color=%2384BD00&label=CONTRIBUTORS&style=for-the-badge [stars-shield]: https://img.shields.io/github/stars/centreon/centreon-stream-connector-scripts?color=%23433b02a&label=STARS&style=for-the-badge diff --git a/stream-connectors/centreon-certified/datadog/datadog-metrics-apiv2.lua b/stream-connectors/centreon-certified/datadog/datadog-metrics-apiv2.lua new file mode 100644 index 00000000000..44c6a781499 --- /dev/null +++ b/stream-connectors/centreon-certified/datadog/datadog-metrics-apiv2.lua @@ -0,0 +1,405 @@ +#!/usr/bin/lua +-------------------------------------------------------------------------------- +-- Centreon Broker Datadog Connector Events +-------------------------------------------------------------------------------- + + +-- Libraries +local curl = require "cURL" +local sc_common = require("centreon-stream-connectors-lib.sc_common") +local sc_logger = require("centreon-stream-connectors-lib.sc_logger") +local sc_broker = require("centreon-stream-connectors-lib.sc_broker") +local sc_event = 
require("centreon-stream-connectors-lib.sc_event") +local sc_params = require("centreon-stream-connectors-lib.sc_params") +local sc_macros = require("centreon-stream-connectors-lib.sc_macros") +local sc_flush = require("centreon-stream-connectors-lib.sc_flush") +local sc_metrics = require("centreon-stream-connectors-lib.sc_metrics") + +-------------------------------------------------------------------------------- +-- EventQueue class +-------------------------------------------------------------------------------- + +local EventQueue = {} +EventQueue.__index = EventQueue + +-------------------------------------------------------------------------------- +---- Constructor +---- @param conf The table given by the init() function and returned from the GUI +---- @return the new EventQueue +---------------------------------------------------------------------------------- + +function EventQueue.new(params) + local self = {} + + local mandatory_parameters = { + "api_key" + } + + self.fail = false + + -- set up log configuration + local logfile = params.logfile or "/var/log/centreon-broker/datadog-metrics.log" + local log_level = params.log_level or 3 + + -- initiate mandatory objects + self.sc_logger = sc_logger.new(logfile, log_level) + self.sc_common = sc_common.new(self.sc_logger) + self.sc_broker = sc_broker.new(self.sc_logger) + self.sc_params = sc_params.new(self.sc_common, self.sc_logger) + + -- checking mandatory parameters and setting a fail flag + if not self.sc_params:is_mandatory_config_set(mandatory_parameters, params) then + self.fail = true + end + + --params.max_buffer_size = 1 + + -- overriding default parameters for this stream connector if the default values don't suit the basic needs + self.sc_params.params.api_key = params.api_key + 
self.sc_params.params.datadog_centreon_url = params.datadog_centreon_url or "http://yourcentreonaddress.local" + self.sc_params.params.datadog_metric_endpoint = params.datadog_metric_endpoint or "/api/v1/series" + self.sc_params.params.http_server_url = params.http_server_url or "https://api.datadoghq.com" + self.sc_params.params.accepted_categories = params.accepted_categories or "neb" + self.sc_params.params.accepted_elements = params.accepted_elements or "host_status,service_status" + self.sc_params.params.max_buffer_size = params.max_buffer_size or 30 + self.sc_params.params.hard_only = params.hard_only or 0 + self.sc_params.params.enable_host_status_dedup = params.enable_host_status_dedup or 0 + self.sc_params.params.enable_service_status_dedup = params.enable_service_status_dedup or 0 + self.sc_params.params.metric_name_regex = params.metric_name_regex or "[^a-zA-Z0-9_%.]" + self.sc_params.params.metric_replacement_character = params.metric_replacement_character or "_" + + -- apply users params and check syntax of standard ones + self.sc_params:param_override(params) + self.sc_params:check_params() + + self.sc_macros = sc_macros.new(self.sc_params.params, self.sc_logger) + self.format_template = self.sc_params:load_event_format_file(true) + self.sc_params:build_accepted_elements_info() + self.sc_flush = sc_flush.new(self.sc_params.params, self.sc_logger) + + local categories = self.sc_params.params.bbdo.categories + local elements = self.sc_params.params.bbdo.elements + + self.format_event = { + [categories.neb.id] = { + [elements.host_status.id] = function () return self:format_event_host() end, + [elements.service_status.id] = function () return self:format_event_service() end + } + } + + self.format_metric = { + [categories.neb.id] = { + [elements.host_status.id] = function (metric) return self:format_metric_host(metric) end, + [elements.service_status.id] = function (metric) return self:format_metric_service(metric) end + } + } + + self.send_data_method = 
{ + [1] = function (payload) return self:send_data(payload) end + } + + self.build_payload_method = { + [1] = function (payload, event) return self:build_payload(payload, event) end + } + + -- return EventQueue object + setmetatable(self, { __index = EventQueue }) + return self +end + +-------------------------------------------------------------------------------- +---- EventQueue:format_accepted_event method +-------------------------------------------------------------------------------- +function EventQueue:format_accepted_event() + local category = self.sc_event.event.category + local element = self.sc_event.event.element + + self.sc_logger:debug("[EventQueue:format_event]: starting format event") + + -- can't format event if stream connector is not handling this kind of event and that it is not handled with a template file + if not self.format_event[category][element] then + self.sc_logger:error("[format_event]: You are trying to format an event with category: " + .. tostring(self.sc_params.params.reverse_category_mapping[category]) .. " and element: " + .. tostring(self.sc_params.params.reverse_element_mapping[category][element]) + .. ". 
If it is not a misconfiguration, you should create a format file to handle this kind of element") + else + self.format_event[category][element]() + end + + self.sc_logger:debug("[EventQueue:format_event]: event formatting is finished") +end + +-------------------------------------------------------------------------------- +---- EventQueue:format_event_host method +-------------------------------------------------------------------------------- +function EventQueue:format_event_host() + local event = self.sc_event.event + self.sc_logger:debug("[EventQueue:format_event_host]: call build_metric ") + self.sc_metrics:build_metric(self.format_metric[event.category][event.element]) +end + +-------------------------------------------------------------------------------- +---- EventQueue:format_event_service method +-------------------------------------------------------------------------------- +function EventQueue:format_event_service() + self.sc_logger:debug("[EventQueue:format_event_service]: call build_metric ") + local event = self.sc_event.event + self.sc_metrics:build_metric(self.format_metric[event.category][event.element]) +end + +-------------------------------------------------------------------------------- +---- EventQueue:format_metric_host method +-- @param metric {table} a single metric data +-------------------------------------------------------------------------------- +function EventQueue:format_metric_host(metric) + self.sc_logger:debug("[EventQueue:format_metric_host]: call format_metric ") + self:format_metric_event(metric) +end + +-------------------------------------------------------------------------------- +---- EventQueue:format_metric_service method +-- @param metric {table} a single metric data +-------------------------------------------------------------------------------- +function EventQueue:format_metric_service(metric) + self.sc_logger:debug("[EventQueue:format_metric_service]: call format_metric ") + self:format_metric_event(metric) 
+end + +-------------------------------------------------------------------------------- +---- EventQueue:format_metric_event method +-- @param metric {table} a single metric data +------------------------------------------------------------------------------- +function EventQueue:format_metric_event(metric) + self.sc_logger:debug("[EventQueue:format_metric]: start real format metric ") + local event = self.sc_event.event + self.sc_event.event.formated_event = { + host = tostring(event.cache.host.name), + metric = metric.metric_name, + points = {{event.last_check, metric.value}}, + tags = self:build_metadata(metric) + } + + self:add() + self.sc_logger:debug("[EventQueue:format_metric]: end real format metric ") +end + +-------------------------------------------------------------------------------- +---- EventQueue:build_metadata method +-- @param metric {table} a single metric data +-- @return tags {table} a table with formatted metadata +-------------------------------------------------------------------------------- +function EventQueue:build_metadata(metric) + local tags = {} + + -- add service name in tags + if self.sc_event.event.cache.service.description then + table.insert(tags, "service:" .. self.sc_event.event.cache.service.description) + end + + -- add metric instance in tags + if metric.instance ~= "" then + table.insert(tags, "instance:" .. metric.instance) + end + + -- add metric subinstances in tags + if metric.subinstance[1] then + for _, subinstance in ipairs(metric.subinstance) do + table.insert(tags, "subinstance:" .. 
subinstance) + end + end + + return tags +end + +-------------------------------------------------------------------------------- +-- EventQueue:add, add an event to the sending queue +-------------------------------------------------------------------------------- +function EventQueue:add() + -- store event in self.events lists + local category = self.sc_event.event.category + local element = self.sc_event.event.element + + self.sc_logger:debug("[EventQueue:add]: add event in queue category: " .. tostring(self.sc_params.params.reverse_category_mapping[category]) + .. " element: " .. tostring(self.sc_params.params.reverse_element_mapping[category][element])) + + self.sc_logger:debug("[EventQueue:add]: queue size before adding event: " .. tostring(#self.sc_flush.queues[category][element].events)) + self.sc_flush.queues[category][element].events[#self.sc_flush.queues[category][element].events + 1] = self.sc_event.event.formated_event + + self.sc_logger:info("[EventQueue:add]: queue size is now: " .. tostring(#self.sc_flush.queues[category][element].events) + .. " max is: " .. tostring(self.sc_params.params.max_buffer_size)) +end + +-------------------------------------------------------------------------------- +-- EventQueue:build_payload, concatenate data so it is ready to be sent +-- @param payload {table} the payload being built (nil on the first call) +-- @param event {table} the event that is going to be added to the payload +-- @return payload {table} the payload with the new event appended +-------------------------------------------------------------------------------- +function EventQueue:build_payload(payload, event) + if not payload then + payload = { + series = {event} + } + else + table.insert(payload.series, event) + end + + return payload +end + +function EventQueue:send_data(payload) + self.sc_logger:debug("[EventQueue:send_data]: Starting to send data") + + local url = self.sc_params.params.http_server_url .. 
tostring(self.sc_params.params.datadog_metric_endpoint) + local payload_json = broker.json_encode(payload) + + -- write payload in the logfile for test purposes + if self.sc_params.params.send_data_test == 1 then + self.sc_logger:notice("[send_data]: " .. tostring(payload_json)) + return true + end + + self.sc_logger:info("[EventQueue:send_data]: Going to send the following json " .. tostring(payload_json)) + self.sc_logger:info("[EventQueue:send_data]: Datadog address is: " .. tostring(url)) + + local http_response_body = "" + local http_request = curl.easy() + :setopt_url(url) + :setopt_writefunction( + function (response) + http_response_body = http_response_body .. tostring(response) + end + ) + :setopt(curl.OPT_TIMEOUT, self.sc_params.params.connection_timeout) + :setopt(curl.OPT_SSL_VERIFYPEER, self.sc_params.params.allow_insecure_connection) + :setopt( + curl.OPT_HTTPHEADER, + { + "content-type: application/json", + "DD-API-KEY:" .. self.sc_params.params.api_key + } + ) + + -- set proxy address configuration + if (self.sc_params.params.proxy_address ~= '') then + if (self.sc_params.params.proxy_port ~= '') then + http_request:setopt(curl.OPT_PROXY, self.sc_params.params.proxy_address .. ':' .. self.sc_params.params.proxy_port) + else + self.sc_logger:error("[EventQueue:send_data]: proxy_port parameter is not set but proxy_address is used") + end + end + + -- set proxy user configuration + if (self.sc_params.params.proxy_username ~= '') then + if (self.sc_params.params.proxy_password ~= '') then + http_request:setopt(curl.OPT_PROXYUSERPWD, self.sc_params.params.proxy_username .. ':' .. 
self.sc_params.params.proxy_password) + else + self.sc_logger:error("[EventQueue:send_data]: proxy_password parameter is not set but proxy_username is used") + end + end + + -- adding the HTTP POST data + http_request:setopt_postfields(payload_json) + + -- performing the HTTP request + http_request:perform() + + -- collecting results + local http_response_code = http_request:getinfo(curl.INFO_RESPONSE_CODE) + + http_request:close() + + -- Handling the return code + local retval = false + -- https://docs.datadoghq.com/fr/api/latest/events/ any return code other than 202 is an error + if http_response_code == 202 then + self.sc_logger:info("[EventQueue:send_data]: HTTP POST request successful: return code is " .. tostring(http_response_code)) + retval = true + else + self.sc_logger:error("[EventQueue:send_data]: HTTP POST request FAILED, return code is " .. tostring(http_response_code) .. ". Message is: " .. tostring(http_response_body)) + end + + return retval +end + +-------------------------------------------------------------------------------- +-- Required functions for Broker StreamConnector +-------------------------------------------------------------------------------- + +local queue + +-- init() function +function init(conf) + queue = EventQueue.new(conf) +end + +-- -------------------------------------------------------------------------------- +-- write, +-- @param {table} event, the event from broker +-- @return {boolean} +-------------------------------------------------------------------------------- +function write (event) + -- skip event if a mandatory parameter is missing + if queue.fail then + queue.sc_logger:error("Skipping event because a mandatory parameter is not set") + return false + end + + -- initiate event object + queue.sc_metrics = sc_metrics.new(event, queue.sc_params.params, queue.sc_common, queue.sc_broker, queue.sc_logger) + queue.sc_event = queue.sc_metrics.sc_event + + if queue.sc_event:is_valid_category() then + if 
queue.sc_metrics:is_valid_bbdo_element() then + -- format event if it is validated + if queue.sc_metrics:is_valid_metric_event() then + queue:format_accepted_event() + end + --- log why the event has been dropped + else + queue.sc_logger:debug("dropping event because element is not valid. Event element is: " + .. tostring(queue.sc_params.params.reverse_element_mapping[queue.sc_event.event.category][queue.sc_event.event.element])) + end + else + queue.sc_logger:debug("dropping event because category is not valid. Event category is: " + .. tostring(queue.sc_params.params.reverse_category_mapping[queue.sc_event.event.category])) + end + + return flush() +end + + +-- flush method is called by broker every now and then (more often when broker has nothing else to do) +function flush() + local queues_size = queue.sc_flush:get_queues_size() + + -- nothing to flush + if queues_size == 0 then + return true + end + + -- flush all queues because last global flush is too old + if queue.sc_flush.last_global_flush < os.time() - queue.sc_params.params.max_all_queues_age then + if not queue.sc_flush:flush_all_queues(queue.build_payload_method[1], queue.send_data_method[1]) then + return false + end + + return true + end + + -- flush queues because too many events are stored in them + if queues_size > queue.sc_params.params.max_buffer_size then + if not queue.sc_flush:flush_all_queues(queue.build_payload_method[1], queue.send_data_method[1]) then + return false + end + + return true + end + + -- there are events in the queue but they were not ready to be sent + return false +end diff --git a/stream-connectors/centreon-certified/servicenow/servicenow-events-apiv2.lua b/stream-connectors/centreon-certified/servicenow/servicenow-em-events-apiv2.lua similarity index 99% rename from stream-connectors/centreon-certified/servicenow/servicenow-events-apiv2.lua rename to stream-connectors/centreon-certified/servicenow/servicenow-em-events-apiv2.lua index 6eefecca634..9ba33d61405 100644 --- 
a/stream-connectors/centreon-certified/servicenow/servicenow-events-apiv2.lua +++ b/stream-connectors/centreon-certified/servicenow/servicenow-em-events-apiv2.lua @@ -47,7 +47,7 @@ function EventQueue.new (params) self.events = {} self.fail = false - local logfile = params.logfile or "/var/log/centreon-broker/servicenow-stream-connector.log" + local logfile = params.logfile or "/var/log/centreon-broker/servicenow-em-stream-connector.log" local log_level = params.log_level or 1 -- initiate mandatory objects diff --git a/stream-connectors/centreon-certified/servicenow/servicenow-incident-events-apiv2.lua b/stream-connectors/centreon-certified/servicenow/servicenow-incident-events-apiv2.lua new file mode 100644 index 00000000000..d5873edb5b6 --- /dev/null +++ b/stream-connectors/centreon-certified/servicenow/servicenow-incident-events-apiv2.lua @@ -0,0 +1,500 @@ +#!/usr/bin/lua + +-------------------------------------------------------------------------------- +-- Centreon Broker Service Now connector +-- documentation: https://docs.centreon.com/current/en/integrations/stream-connectors/servicenow.html +-------------------------------------------------------------------------------- + + +-- libraries +local curl = require "cURL" +local sc_common = require("centreon-stream-connectors-lib.sc_common") +local sc_logger = require("centreon-stream-connectors-lib.sc_logger") +local sc_broker = require("centreon-stream-connectors-lib.sc_broker") +local sc_event = require("centreon-stream-connectors-lib.sc_event") +local sc_params = require("centreon-stream-connectors-lib.sc_params") +local sc_macros = require("centreon-stream-connectors-lib.sc_macros") +local sc_flush = require("centreon-stream-connectors-lib.sc_flush") + +-------------------------------------------------------------------------------- +-- EventQueue class +-------------------------------------------------------------------------------- + +local EventQueue = {} +EventQueue.__index = EventQueue + 
+-------------------------------------------------------------------------------- +-- Constructor +-- @param conf The table given by the init() function and returned from the GUI +-- @return the new EventQueue +-------------------------------------------------------------------------------- + +function EventQueue.new (params) + local self = {} + local mandatory_parameters = { + [1] = "instance", + [2] = "client_id", + [3] = "client_secret", + [4] = "username", + [5] = "password" + } + + self.tokens = {} + self.tokens.authToken = nil + self.tokens.refreshToken = nil + + + self.events = {} + self.fail = false + + local logfile = params.logfile or "/var/log/centreon-broker/servicenow-incident-stream-connector.log" + local log_level = params.log_level or 1 + + -- initiate mandatory objects + self.sc_logger = sc_logger.new(logfile, log_level) + self.sc_common = sc_common.new(self.sc_logger) + self.sc_broker = sc_broker.new(self.sc_logger) + self.sc_params = sc_params.new(self.sc_common, self.sc_logger) + + self.sc_params.params.instance = params.instance + self.sc_params.params.client_id = params.client_id + self.sc_params.params.client_secret = params.client_secret + self.sc_params.params.username = params.username + self.sc_params.params.password = params.password + self.sc_params.params.http_server_url = params.http_server_url or "service-now.com" + self.sc_params.params.incident_table = params.incident_table or "incident" + self.sc_params.params.source = params.source or "centreon" + + self.sc_params.params.accepted_categories = params.accepted_categories or "neb" + self.sc_params.params.accepted_elements = params.accepted_elements or "host_status,service_status" + -- this is an automatic ticketing stream connector, by default we only open ticket on warning/critical/unknown/down/unreachable states + self.sc_params.params.host_status = params.host_status or "1,2" + self.sc_params.params.service_status = params.service_status or "1,2,3" + + -- checking mandatory 
parameters and setting a fail flag + if not self.sc_params:is_mandatory_config_set(mandatory_parameters, params) then + self.fail = true + end + + -- force max_buffer_size to 1, we can't send bulk events + params.max_buffer_size = 1 + -- apply user params and check syntax of standard ones + self.sc_params:param_override(params) + self.sc_params:check_params() + self.sc_params.params.http_server_url = self.sc_common:if_wrong_type(self.sc_params.params.http_server_url, "string", "service-now.com") + self.sc_params.params.incident_table = self.sc_common:if_wrong_type(self.sc_params.params.incident_table, "string", "incident") + self.sc_params.params.source = self.sc_common:if_wrong_type(self.sc_params.params.source, "string", "centreon") + + self.sc_macros = sc_macros.new(self.sc_params.params, self.sc_logger) + self.format_template = self.sc_params:load_event_format_file(true) + self.sc_params:build_accepted_elements_info() + self.sc_flush = sc_flush.new(self.sc_params.params, self.sc_logger) + + local categories = self.sc_params.params.bbdo.categories + local elements = self.sc_params.params.bbdo.elements + + self.format_event = { + [categories.neb.id] = { + [elements.host_status.id] = function () return self:format_event_host() end, + [elements.service_status.id] = function () return self:format_event_service() end + }, + [categories.bam.id] = {} + } + + self.send_data_method = { + [1] = function (payload) return self:send_data(payload) end + } + + self.build_payload_method = { + [1] = function (payload, event) return self:build_payload(payload, event) end + } + + setmetatable(self, { __index = EventQueue }) + return self +end + +-------------------------------------------------------------------------------- +-- getAuthToken: obtain an auth token +-- @return {string} self.tokens.authToken.token, the auth token +-------------------------------------------------------------------------------- +function EventQueue:getAuthToken () + if not self:refreshTokenIsValid() 
then + self:authToken() + end + + if not self:accessTokenIsValid() then + self:refreshToken(self.tokens.refreshToken.token) + end + + return self.tokens.authToken.token +end + +-------------------------------------------------------------------------------- +-- authToken: obtain auth token +-------------------------------------------------------------------------------- +function EventQueue:authToken () + local data = "grant_type=password&client_id=" .. self.sc_params.params.client_id .. "&client_secret=" .. self.sc_params.params.client_secret .. "&username=" .. self.sc_params.params.username .. "&password=" .. self.sc_params.params.password + + local res = self:call( + "oauth_token.do", + "POST", + data + ) + + if not res.access_token then + broker_log:error(1, "EventQueue:authToken: Authentication failed, couldn't get tokens") + return false + end + + self.tokens.authToken = { + token = res.access_token, + expTime = os.time(os.date("!*t")) + 1700 + } + + self.tokens.refreshToken = { + token = res.refresh_token, + expTime = os.time(os.date("!*t")) + 360000 + } +end + +-------------------------------------------------------------------------------- +-- refreshToken: refresh auth token +-------------------------------------------------------------------------------- +function EventQueue:refreshToken (token) + local data = "grant_type=refresh_token&client_id=" .. self.sc_params.params.client_id .. "&client_secret=" .. self.sc_params.params.client_secret .. "&username=" .. self.sc_params.params.username .. "&password=" .. self.sc_params.params.password .. "&refresh_token=" .. 
token
+
+  local res = self:call(
+    "oauth_token.do",
+    "POST",
+    data
+  )
+
+  if not res.access_token then
+    broker_log:error(1, 'EventQueue:refreshToken Bad access token')
+    return false
+  end
+
+  self.tokens.authToken = {
+    token = res.access_token,
+    expTime = os.time(os.date("!*t")) + 1700
+  }
+end
+
+--------------------------------------------------------------------------------
+-- refreshTokenIsValid: check whether the refresh token is still valid
+--------------------------------------------------------------------------------
+function EventQueue:refreshTokenIsValid ()
+  if not self.tokens.refreshToken then
+    return false
+  end
+
+  if os.time(os.date("!*t")) > self.tokens.refreshToken.expTime then
+    self.tokens.refreshToken = nil
+    return false
+  end
+
+  return true
+end
+
+--------------------------------------------------------------------------------
+-- accessTokenIsValid: check whether the access token is still valid
+--------------------------------------------------------------------------------
+function EventQueue:accessTokenIsValid ()
+  if not self.tokens.authToken then
+    return false
+  end
+
+  if os.time(os.date("!*t")) > self.tokens.authToken.expTime then
+    self.tokens.authToken = nil
+    return false
+  end
+
+  return true
+end
+
+--------------------------------------------------------------------------------
+-- EventQueue:call: run an API call
+-- @param {string} url, the service now instance url
+-- @param {string} method, the HTTP method that is used
+-- @param {string} data, the data we want to send to service now
+-- @param {string} authToken, the api auth token
+-- @return {array} decoded output
+-- @throw exception if http call fails or response is empty
+--------------------------------------------------------------------------------
+function EventQueue:call (url, method, data, authToken)
+  method = method or "GET"
+  data = data or nil
+  authToken = authToken or nil
+
+  local endpoint = "https://" .. tostring(self.sc_params.params.instance) .. "." .. self.sc_params.params.http_server_url .. "/" .. tostring(url)
+  self.sc_logger:debug("EventQueue:call: Prepare url " .. endpoint)
+
+  -- write payload in the logfile for test purpose
+  if self.sc_params.params.send_data_test == 1 then
+    self.sc_logger:notice("[send_data]: " .. tostring(data) .. " to endpoint: " .. tostring(endpoint))
+    return true
+  end
+
+  local res = ""
+  local request = curl.easy()
+    :setopt_url(endpoint)
+    :setopt_writefunction(function (response)
+      res = res .. tostring(response)
+    end)
+    :setopt(curl.OPT_TIMEOUT, self.sc_params.params.connection_timeout)
+
+  self.sc_logger:debug("EventQueue:call: Request initialized")
+
+  -- set proxy address configuration
+  if (self.sc_params.params.proxy_address ~= '') then
+    if (self.sc_params.params.proxy_port ~= '') then
+      request:setopt(curl.OPT_PROXY, self.sc_params.params.proxy_address .. ':' .. self.sc_params.params.proxy_port)
+    else
+      self.sc_logger:error("EventQueue:call: proxy_port parameter is not set but proxy_address is used")
+    end
+  end
+
+  -- set proxy user configuration
+  if (self.sc_params.params.proxy_username ~= '') then
+    if (self.sc_params.params.proxy_password ~= '') then
+      request:setopt(curl.OPT_PROXYUSERPWD, self.sc_params.params.proxy_username .. ':' .. self.sc_params.params.proxy_password)
+    else
+      self.sc_logger:error("EventQueue:call: proxy_password parameter is not set but proxy_username is used")
+    end
+  end
+
+  if not authToken then
+    if method ~= "GET" then
+      self.sc_logger:debug("EventQueue:call: Add form header")
+      request:setopt(curl.OPT_HTTPHEADER, { "Content-Type: application/x-www-form-urlencoded" })
+    end
+  else
+    broker_log:info(3, "Add JSON header")
+    request:setopt(
+      curl.OPT_HTTPHEADER,
+      {
+        "Accept: application/json",
+        "Content-Type: application/json",
+        "Authorization: Bearer " .. authToken
+      }
+    )
+  end
+
+  if method ~= "GET" then
+    self.sc_logger:debug("EventQueue:call: Add post data")
+    request:setopt_postfields(data)
+  end
+
+  self.sc_logger:debug("EventQueue:call: request body " .. tostring(data))
+  self.sc_logger:debug("EventQueue:call: request header " .. tostring(authToken))
+  self.sc_logger:warning("EventQueue:call: Call url " .. endpoint)
+  request:perform()
+
+  local respCode = request:getinfo(curl.INFO_RESPONSE_CODE)
+  self.sc_logger:debug("EventQueue:call: HTTP Code : " .. respCode)
+  self.sc_logger:debug("EventQueue:call: Response body : " .. tostring(res))
+
+  request:close()
+
+  if respCode >= 300 then
+    self.sc_logger:error("EventQueue:call: HTTP Code : " .. respCode)
+    self.sc_logger:error("EventQueue:call: HTTP Error : " .. res)
+    return false
+  end
+
+  if res == "" then
+    self.sc_logger:warning("EventQueue:call: HTTP Error : empty response body")
+    return false
+  end
+
+  return broker.json_decode(res)
+end
+
+function EventQueue:format_accepted_event()
+  local category = self.sc_event.event.category
+  local element = self.sc_event.event.element
+  local template = self.sc_params.params.format_template[category][element]
+
+  self.sc_logger:debug("[EventQueue:format_event]: starting format event")
+  self.sc_event.event.formated_event = {}
+
+  if self.format_template and template ~= nil and template ~= "" then
+    self.sc_event.event.formated_event = self.sc_macros:replace_sc_macro(template, self.sc_event.event, true)
+  else
+    -- can't format the event if the stream connector does not handle this kind of event and no template file handles it either
+    if not self.format_event[category][element] then
+      self.sc_logger:error("[format_event]: You are trying to format an event with category: "
+        .. tostring(self.sc_params.params.reverse_category_mapping[category]) .. " and element: "
+        .. tostring(self.sc_params.params.reverse_element_mapping[category][element])
+        .. ". If it is not a misconfiguration, you should create a format file to handle this kind of element")
+    else
+      self.format_event[category][element]()
+    end
+  end
+
+  self:add()
+  self.sc_logger:debug("[EventQueue:format_event]: event formatting is finished")
+end
+
+function EventQueue:format_event_host()
+  local event = self.sc_event.event
+
+  self.sc_event.event.formated_event = {
+    source = self.sc_params.params.source,
+    short_description = self.sc_params.params.status_mapping[event.category][event.element][event.state] .. " " .. tostring(event.cache.host.name) .. " " .. tostring(event.short_output),
+    cmdb_ci = tostring(event.cache.host.name),
+    comments = "HOST: " .. tostring(event.cache.host.name) .. "\n"
+      .. "OUTPUT: " .. tostring(event.output) .. "\n"
+  }
+end
+
+function EventQueue:format_event_service()
+  local event = self.sc_event.event
+
+  self.sc_event.event.formated_event = {
+    source = self.sc_params.params.source,
+    short_description = self.sc_params.params.status_mapping[event.category][event.element][event.state] .. " " .. tostring(event.cache.host.name) .. " " .. tostring(event.cache.service.description) .. " " .. tostring(event.short_output),
+    cmdb_ci = tostring(event.cache.host.name),
+    comments = "HOST: " .. tostring(event.cache.host.name) .. "\n"
+      .. "SERVICE: " .. tostring(event.cache.service.description) .. "\n"
+      .. "OUTPUT: " .. tostring(event.output) .. "\n"
+  }
+end
+
+local queue
+
+--------------------------------------------------------------------------------
+-- init: initiate the stream connector with parameters from the configuration file
+-- @param {table} conf, the table with all the configuration parameters
+--------------------------------------------------------------------------------
+function init(conf)
+  queue = EventQueue.new(conf)
+end
+
+function EventQueue:add()
+  -- store event in self.events lists
+  local category = self.sc_event.event.category
+  local element = self.sc_event.event.element
+
+  self.sc_logger:debug("[EventQueue:add]: add event in queue category: " .. tostring(self.sc_params.params.reverse_category_mapping[category])
+    .. " element: " .. tostring(self.sc_params.params.reverse_element_mapping[category][element]))
+
+  self.sc_logger:debug("[EventQueue:add]: queue size before adding event: " .. tostring(#self.sc_flush.queues[category][element].events))
+  self.sc_flush.queues[category][element].events[#self.sc_flush.queues[category][element].events + 1] = self.sc_event.event.formated_event
+
+  self.sc_logger:info("[EventQueue:add]: queue size is now: " .. tostring(#self.sc_flush.queues[category][element].events)
+    .. ", max is: " .. tostring(self.sc_params.params.max_buffer_size))
+end
+
+--------------------------------------------------------------------------------
+-- EventQueue:build_payload, concatenate data so it is ready to be sent
+-- @param payload {string} json encoded string
+-- @param event {table} the event that is going to be added to the payload
+-- @return payload {string} json encoded string
+--------------------------------------------------------------------------------
+function EventQueue:build_payload(payload, event)
+  if not payload then
+    payload = broker.json_encode(event)
+  else
+    payload = payload .. ',' ..
broker.json_encode(event) + end + + return payload +end + +-------------------------------------------------------------------------------- +-- EventQueue:send_data, send data to external tool +-- @return {boolean} +-------------------------------------------------------------------------------- +function EventQueue:send_data(payload) + local authToken + local counter = 0 + + -- generate a fake token for test purpose or use a real one if not testing + if self.sc_params.params.send_data_test == 1 then + authToken = "fake_token" + else + authToken = self:getAuthToken() + end + + local http_post_data = payload + self.sc_logger:info('EventQueue:send_data: creating json: ' .. http_post_data) + + if self:call( + "api/now/table/" .. self.sc_params.params.incident_table, + "POST", + http_post_data, + authToken + ) then + return true + end + + return false +end + +-------------------------------------------------------------------------------- +-- write, +-- @param {table} event, the event from broker +-- @return {boolean} +-------------------------------------------------------------------------------- +function write (event) + -- skip event if a mandatory parameter is missing + if queue.fail then + queue.sc_logger:error("Skipping event because a mandatory parameter is not set") + return false + end + + -- initiate event object + queue.sc_event = sc_event.new(event, queue.sc_params.params, queue.sc_common, queue.sc_logger, queue.sc_broker) + + if queue.sc_event:is_valid_category() then + if queue.sc_event:is_valid_element() then + -- format event if it is validated + if queue.sc_event:is_valid_event() then + queue:format_accepted_event() + end + --- log why the event has been dropped + else + queue.sc_logger:debug("dropping event because element is not valid. Event element is: " + .. 
tostring(queue.sc_params.params.reverse_element_mapping[queue.sc_event.event.category][queue.sc_event.event.element]))
+    end
+  else
+    queue.sc_logger:debug("dropping event because category is not valid. Event category is: "
+      .. tostring(queue.sc_params.params.reverse_category_mapping[queue.sc_event.event.category]))
+  end
+
+  return flush()
+end
+
+-- flush method is called by broker every now and then (more often when broker has nothing else to do)
+function flush()
+  local queues_size = queue.sc_flush:get_queues_size()
+
+  -- nothing to flush
+  if queues_size == 0 then
+    return true
+  end
+
+  -- flush all queues because the last global flush is too old
+  if queue.sc_flush.last_global_flush < os.time() - queue.sc_params.params.max_all_queues_age then
+    if not queue.sc_flush:flush_all_queues(queue.build_payload_method[1], queue.send_data_method[1]) then
+      return false
+    end
+
+    return true
+  end
+
+  -- flush queues because too many events are stored in them
+  if queues_size > queue.sc_params.params.max_buffer_size then
+    if not queue.sc_flush:flush_all_queues(queue.build_payload_method[1], queue.send_data_method[1]) then
+      return false
+    end
+
+    return true
+  end
+
+  -- there are events in the queue but they were not ready to be sent
+  return false
+end
+
diff --git a/stream-connectors/centreon-certified/splunk/splunk-metrics-apiv2.lua b/stream-connectors/centreon-certified/splunk/splunk-metrics-apiv2.lua
index f6202d686c5..565b2b906e4 100644
--- a/stream-connectors/centreon-certified/splunk/splunk-metrics-apiv2.lua
+++ b/stream-connectors/centreon-certified/splunk/splunk-metrics-apiv2.lua
@@ -37,7 +37,7 @@ function EventQueue.new(params)
 
   -- set up log configuration
   local logfile = params.logfile or "/var/log/centreon-broker/splunk-metrics.log"
-  local log_level = params.log_level or 1
+  local log_level = params.log_level or 3
 
   -- initiate mandatory objects
   self.sc_logger = sc_logger.new(logfile, log_level)
@@ -57,9 +57,12 @@ function EventQueue.new(params)
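The hunk below introduces the `metric_name_regex` (default `"[^a-zA-Z0-9_]"`) and `metric_replacement_character` (default `"_"`) parameters that `ScMetrics:build_metric` later passes to `string.gsub`. One point worth keeping in mind: `string.gsub` interprets its second argument as a Lua pattern, not a POSIX regex. A minimal standalone sketch of the sanitization step — the `sanitize` helper is illustrative only, not part of the modules:

```lua
-- Sketch of metric name sanitization via string.gsub with the Splunk
-- default pattern from this diff. "sanitize" is a hypothetical helper.
local metric_name_regex = "[^a-zA-Z0-9_]"      -- Lua pattern: any char that is not alnum or "_"
local metric_replacement_character = "_"

local function sanitize(metric_name)
  -- the extra parentheses drop gsub's second return value (the match count)
  return (string.gsub(metric_name, metric_name_regex, metric_replacement_character))
end

print(sanitize("used cpu #0"))  --> used_cpu__0
print(sanitize("traffic_in"))   --> traffic_in
```

Because the parameter is a Lua pattern, magic characters in a user-supplied `metric_name_regex` must be escaped Lua-style (`%%` for a literal `%`, `%-` for a literal `-` outside a class edge), not regex-style.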
self.sc_params.params.splunk_host = params.splunk_host or "Central" self.sc_params.params.accepted_categories = params.accepted_categories or "neb" self.sc_params.params.accepted_elements = params.accepted_elements or "host_status,service_status" + self.sc_params.params.max_buffer_size = params.max_buffer_size or 30 self.sc_params.params.hard_only = params.hard_only or 0 self.sc_params.params.enable_host_status_dedup = params.enable_host_status_dedup or 0 self.sc_params.params.enable_service_status_dedup = params.enable_service_status_dedup or 0 + self.sc_params.params.metric_name_regex = params.metric_name_regex or "[^a-zA-Z0-9_]" + self.sc_params.params.metric_replacement_character = params.metric_replacement_character or "_" -- apply users params and check syntax of standard ones self.sc_params:param_override(params) @@ -75,8 +78,14 @@ function EventQueue.new(params) [categories.neb.id] = { [elements.host_status.id] = function () return self:format_metrics_host() end, [elements.service_status.id] = function () return self:format_metrics_service() end - }, - [categories.bam.id] = {} + } + } + + self.format_metric = { + [categories.neb.id] = { + [elements.host_status.id] = function (metric) return self:format_metric_host(metric) end, + [elements.service_status.id] = function (metric) return self:format_metric_service(metric) end + } } self.send_data_method = { @@ -99,7 +108,6 @@ function EventQueue:format_accepted_event() local category = self.sc_event.event.category local element = self.sc_event.event.element self.sc_logger:debug("[EventQueue:format_event]: starting format event") - self.sc_event.event.formated_event = {} -- can't format event if stream connector is not handling this kind of event if not self.format_event[category][element] then @@ -108,38 +116,89 @@ function EventQueue:format_accepted_event() .. tostring(self.sc_params.params.reverse_element_mapping[category][element]) .. ". 
If it is not a misconfiguration, you can open an issue at https://github.com/centreon/centreon-stream-connector-scripts/issues")
   else
+    self.sc_logger:debug("[EventQueue:format_event]: going to format it")
     self.format_event[category][element]()
-
-    -- add metrics in the formated event
-    for metric_name, metric_data in pairs(self.sc_metrics.metrics) do
-      metric_name = string.gsub(metric_name, "[^a-zA-Z0-9_]", "_")
-      self.sc_event.event.formated_event["metric_name:" .. tostring(metric_name)] = metric_data.value
-    end
   end
 
-  self:add()
   self.sc_logger:debug("[EventQueue:format_event]: event formatting is finished")
 end
 
-function EventQueue:format_metrics_host()
+
+--------------------------------------------------------------------------------
+---- EventQueue:format_event_host method
+--------------------------------------------------------------------------------
+function EventQueue:format_event_host()
+  local event = self.sc_event.event
+
   self.sc_event.event.formated_event = {
     event_type = "host",
-    state = self.sc_event.event.state,
-    state_type = self.sc_event.event.state_type,
-    hostname = self.sc_event.event.cache.host.name,
-    ctime = self.sc_event.event.last_check
+    state = event.state,
+    state_type = event.state_type,
+    hostname = event.cache.host.name,
+    ctime = event.last_check
   }
+
+  self.sc_logger:debug("[EventQueue:format_event_host]: call build_metric ")
+  self.sc_metrics:build_metric(self.format_metric[event.category][event.element])
 end
 
-function EventQueue:format_metrics_service()
+--------------------------------------------------------------------------------
+---- EventQueue:format_event_service method
+--------------------------------------------------------------------------------
+function EventQueue:format_event_service()
+  local event = self.sc_event.event
+
   self.sc_event.event.formated_event = {
     event_type = "service",
-    state = self.sc_event.event.state,
-    state_type = self.sc_event.event.state_type,
-    hostname = self.sc_event.event.cache.host.name,
-    service_description = self.sc_event.event.cache.service.description,
-    ctime = self.sc_event.event.last_check
+    state = event.state,
+    state_type = event.state_type,
+    hostname = event.cache.host.name,
+    service_description = event.cache.service.description,
+    ctime = event.last_check
   }
+
+  self.sc_logger:debug("[EventQueue:format_event_service]: call build_metric ")
+  self.sc_metrics:build_metric(self.format_metric[event.category][event.element])
+end
+
+--------------------------------------------------------------------------------
+---- EventQueue:format_metric_host method
+-- @param metric {table} a single metric data
+--------------------------------------------------------------------------------
+function EventQueue:format_metric_host(metric)
+  self.sc_logger:debug("[EventQueue:format_metric_host]: call format_metric ")
+  self:format_metric_event(metric)
+end
+
+--------------------------------------------------------------------------------
+---- EventQueue:format_metric_service method
+-- @param metric {table} a single metric data
+--------------------------------------------------------------------------------
+function EventQueue:format_metric_service(metric)
+  self.sc_logger:debug("[EventQueue:format_metric_service]: call format_metric ")
+  self:format_metric_event(metric)
+end
+
+--------------------------------------------------------------------------------
+---- EventQueue:format_metric_event method
+-- @param metric {table} a single metric data
+--------------------------------------------------------------------------------
+function EventQueue:format_metric_event(metric)
+  self.sc_logger:debug("[EventQueue:format_metric]: start real format metric ")
+  self.sc_event.event.formated_event["metric_name:" ..
tostring(metric.metric_name)] = metric.value + + -- add metric instance in tags + if metric.instance ~= "" then + self.sc_event.event.formated_event["instance"] = metric.instance + end + + if metric.subinstance[1] then + self.sc_event.event.formated_event["subinstances"] = metric.subinstance + end + + self:add() + self.sc_logger:debug("[EventQueue:format_metric]: end real format metric ") end -------------------------------------------------------------------------------- @@ -279,12 +338,13 @@ function write (event) end -- initiate event object - queue.sc_event = sc_event.new(event, queue.sc_params.params, queue.sc_common, queue.sc_logger, queue.sc_broker) + queue.sc_metrics = sc_metrics.new(event, queue.sc_params.params, queue.sc_common, queue.sc_broker, queue.sc_logger) + queue.sc_event = queue.sc_metrics.sc_event if queue.sc_event:is_valid_category() then - if queue.sc_event:is_valid_element() then + if queue.sc_metrics:is_valid_bbdo_element() then -- format event if it is validated - if queue.sc_event:is_valid_event() then + if queue.sc_metrics:is_valid_metric_event() then queue:format_accepted_event() end --- log why the event has been dropped diff --git a/stream-connectors/modules/centreon-stream-connectors-lib/sc_common.lua b/stream-connectors/modules/centreon-stream-connectors-lib/sc_common.lua index 3af30a814a5..72c6c6ce5e3 100644 --- a/stream-connectors/modules/centreon-stream-connectors-lib/sc_common.lua +++ b/stream-connectors/modules/centreon-stream-connectors-lib/sc_common.lua @@ -247,6 +247,9 @@ function ScCommon:json_escape(string) return string end +--- xml_escape: escape xml special characters in a string +-- @param string (string) the string that must be escaped +-- @return string (string) the string with escaped characters function ScCommon:xml_escape(string) local type = type(string) @@ -271,4 +274,53 @@ function ScCommon:xml_escape(string) return string end +--- dumper: dump variables for debug purpose +-- @param variable (any) the variable 
that must be dumped +-- @param result (string) [opt] the string that contains the dumped variable. ONLY USED INTERNALLY FOR RECURSIVE PURPOSE +-- @param tab_char (string) [opt] the string that contains the tab character. ONLY USED INTERNALLY FOR RECURSIVE PURPOSE (and design) +-- @return result (string) the dumped variable +function ScCommon:dumper(variable, result, tab_char) + -- tabulation handling + if not tab_char then + tab_char = "" + else + tab_char = tab_char .. "\t" + end + + -- non table variables handling + if type(variable) ~= "table" then + if result then + result = result .. "\n" .. tab_char .. "[" .. type(variable) .. "]: " .. tostring(variable) + else + result = "\n[" .. type(variable) .. "]: " .. tostring(variable) + end + else + if not result then + result = "\n[table]" + tab_char = "\t" + end + + -- recursive looping through each tables in the table + for index, value in pairs(variable) do + if type(value) ~= "table" then + if result then + result = result .. "\n" .. tab_char .. "[" .. type(value) .. "] " .. tostring(index) .. ": " .. tostring(value) + else + result = "\n" .. tostring(index) .. " [" .. type(value) .. "]: " .. tostring(value) + end + else + if result then + result = result .. "\n" .. tab_char .. "[" .. type(value) .. "] " .. tostring(index) .. ": " + else + result = "\n[" .. type(value) .. "] " .. tostring(index) .. 
": " + end + result = self:dumper(value, result, tab_char) + end + end + end + + return result +end + + return sc_common diff --git a/stream-connectors/modules/centreon-stream-connectors-lib/sc_event.lua b/stream-connectors/modules/centreon-stream-connectors-lib/sc_event.lua index c7d5dc562d7..bf122625783 100644 --- a/stream-connectors/modules/centreon-stream-connectors-lib/sc_event.lua +++ b/stream-connectors/modules/centreon-stream-connectors-lib/sc_event.lua @@ -371,8 +371,8 @@ end -- @return true|false (boolean) function ScEvent:is_valid_event_downtime_state() if not self.sc_common:compare_numbers(self.params.in_downtime, self.event.scheduled_downtime_depth, ">=") then - self.sc_logger:warning("[sc_event:is_valid_event_downtime_state]: event is not in an valid ack state. Event ack state must be above or equal to " .. tostring(self.params.acknowledged) - .. ". Current ack state: " .. tostring(self.sc_common:boolean_to_number(self.event.acknowledged))) + self.sc_logger:warning("[sc_event:is_valid_event_downtime_state]: event is not in an valid downtime state. Event downtime state must be above or equal to " .. tostring(self.params.in_downtime) + .. ". Current downtime state: " .. 
tostring(self.sc_common:boolean_to_number(self.event.scheduled_downtime_depth))) return false end @@ -1112,6 +1112,8 @@ function ScEvent:build_outputs() local short_output = string.match(self.event.output, "^(.*)\n") if short_output then self.event.short_output = short_output + else + self.event.short_output = self.event.output end -- use shortoutput if it exists diff --git a/stream-connectors/modules/centreon-stream-connectors-lib/sc_metrics.lua b/stream-connectors/modules/centreon-stream-connectors-lib/sc_metrics.lua index e6574c47367..453260f8c26 100644 --- a/stream-connectors/modules/centreon-stream-connectors-lib/sc_metrics.lua +++ b/stream-connectors/modules/centreon-stream-connectors-lib/sc_metrics.lua @@ -49,6 +49,20 @@ function sc_metrics.new(event, params, common, broker, logger) } } +-- open metric (prometheus) : metric name = [a-zA-Z0-9_:], labels [a-zA-Z0-9_] https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#protocol-negotiation +-- datadog : metric_name = [a-zA-Z0-9_.] https://docs.datadoghq.com/fr/metrics/custom_metrics/#naming-custom-metrics +-- dynatrace matric name [a-zA-Z0-9-_.] 
https://dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/metric-ingestion-protocol#metric-key +-- metric 2.0 (carbon/grafite/grafana) [a-zA-Z0-9-_./] http://metrics20.org/spec/ (see Data Model section) +-- splunk [^a-zA-Z0-9_] + + if self.params.metrics_name_custom_regex and self.params.metrics_name_custom_regex ~= "" then + self.metrics_name_operations.custom.regex = self.params.metrics_custom_regex + end + + if self.params.metrics_name_custom_replacement_character then + self.metrics_name_operations.custom.replacement_character = self.params.metrics_name_custom_replacement_character + end + -- initiate metrics table self.metrics = {} -- initiate sc_event object @@ -70,6 +84,7 @@ function ScMetrics:is_valid_bbdo_element() -- drop event if event category is not accepted if not self.sc_event:find_in_mapping(self.params.category_mapping, self.params.accepted_categories, event_category) then + self.sc_logger:debug("[sc_metrics:is_valid_bbdo_element] event with category: " .. tostring(event_category) .. " is not an accepted category") return false else -- drop event if accepted category is not supposed to be used for a metric stream connector @@ -80,17 +95,16 @@ function ScMetrics:is_valid_bbdo_element() else -- drop event if element is not accepted if not self.sc_event:find_in_mapping(self.params.element_mapping[event_category], self.params.accepted_elements, event_element) then + self.sc_logger:debug("[sc_metrics:is_valid_bbdo_element] event with element: " .. tostring(event_element) .. " is not an accepted element") return false else -- drop event if element is not an element that carries perfdata - if event_element ~= elements.host.id - and event_element ~= elements.host_status.id - and event_element ~= elements.service.id + if event_element ~= elements.host_status.id and event_element ~= elements.service_status.id and event_element ~= elements.kpi_event.id then self.sc_logger:warning("[sc_metrics:is_valid_bbdo_element] Configuration error. 
accepted elements from paramters are: " - .. tostring(self.params.accepted_elements) .. ". Only host, host_status, service, service_status and kpi_event can be used for metrics") + .. tostring(self.params.accepted_elements) .. ". Only host_status, service_status and kpi_event can be used for metrics") return false end end @@ -138,7 +152,7 @@ function ScMetrics:is_valid_host_metric_event() return false end - -- return false if there is no perfdata or they it can't be parsed + -- return false if there is no perfdata or it can't be parsed if not self:is_valid_perfdata(self.sc_event.event.perfdata) then self.sc_logger:warning("[sc_metrics:is_vaild_host_metric_event]: host_id: " .. tostring(self.sc_event.event.host_id) .. " is not sending valid perfdata. Received perfdata: " .. tostring(self.sc_event.event.perf_data)) @@ -239,12 +253,28 @@ function ScMetrics:is_valid_perfdata(perfdata) end -- store data from parsed perfdata inside a metrics table - for metric_name, metric_data in pairs(metrics_info) do - self.metrics[metric_name] = metric_data - self.metrics[metric_name].name = metric_name - end + self.metrics_info = metrics_info return true end +-- to name a few : +-- open metric (prometheus) : metric name = [a-zA-Z0-9_:], labels [a-zA-Z0-9_] https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#protocol-negotiation +-- datadog : metric_name = [a-zA-Z0-9_.] https://docs.datadoghq.com/fr/metrics/custom_metrics/#naming-custom-metrics +-- dynatrace matric name [a-zA-Z0-9-_.] 
https://dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/metric-ingestion-protocol#metric-key +-- metric 2.0 (carbon/grafite/grafana) [a-zA-Z0-9-_./] http://metrics20.org/spec/ (see Data Model section) + +--- build_metric: use the stream connector format method to parse every metric in the event +-- @param format_metric (function) the format method from the stream connector +function ScMetrics:build_metric(format_metric) + local metrics_info = self.metrics_info + self.sc_logger:debug("perfdata: " .. self.sc_common:dumper(metrics_info)) + + for metric, metric_data in pairs(self.metrics_info) do + metrics_info[metric].metric_name = string.gsub(metric_data.metric_name, self.params.metric_name_regex, self.params.metric_replacement_character) + -- use stream connector method to format the metric event + format_metric(metrics_info[metric]) + end +end + return sc_metrics \ No newline at end of file diff --git a/stream-connectors/modules/centreon-stream-connectors-lib/sc_params.lua b/stream-connectors/modules/centreon-stream-connectors-lib/sc_params.lua index 9b626c63710..87d6185f94d 100644 --- a/stream-connectors/modules/centreon-stream-connectors-lib/sc_params.lua +++ b/stream-connectors/modules/centreon-stream-connectors-lib/sc_params.lua @@ -101,6 +101,10 @@ function sc_params.new(common, logger) logfile = "", log_level = "", + -- metric + metric_name_regex = "", + metric_replacement_character = "_", + -- initiate mappings element_mapping = {}, status_mapping = {}, @@ -109,7 +113,7 @@ function sc_params.new(common, logger) [1] = "HARD" }, validatedEvents = {}, - + -- FIX BROKER ISSUE max_stored_events = 10 -- do not use values above 100 } @@ -647,10 +651,10 @@ function ScParams:check_params() self.params.service_severity_threshold = self.common:if_wrong_type(self.params.service_severity_threshold, "number", nil) self.params.host_severity_operator = self.common:if_wrong_type(self.params.host_severity_operator, "string", ">=") 
self.params.service_severity_operator = self.common:if_wrong_type(self.params.service_severity_operator, "string", ">=") - self.params.ack_host_status = self.common:ifnil_or_empty(self.params.ack_host_status,self.params.host_status) - self.params.ack_service_status = self.common:ifnil_or_empty(self.params.ack_service_status,self.params.service_status) - self.params.dt_host_status = self.common:ifnil_or_empty(self.params.dt_host_status,self.params.host_status) - self.params.dt_service_status = self.common:ifnil_or_empty(self.params.dt_service_status,self.params.service_status) + self.params.ack_host_status = self.common:ifnil_or_empty(self.params.ack_host_status, self.params.host_status) + self.params.ack_service_status = self.common:ifnil_or_empty(self.params.ack_service_status, self.params.service_status) + self.params.dt_host_status = self.common:ifnil_or_empty(self.params.dt_host_status, self.params.host_status) + self.params.dt_service_status = self.common:ifnil_or_empty(self.params.dt_service_status, self.params.service_status) self.params.enable_host_status_dedup = self.common:check_boolean_number_option_syntax(self.params.enable_host_status_dedup, 0) self.params.enable_service_status_dedup = self.common:check_boolean_number_option_syntax(self.params.enable_service_status_dedup, 0) self.params.send_data_test = self.common:check_boolean_number_option_syntax(self.params.send_data_test, 0) @@ -665,6 +669,8 @@ function ScParams:check_params() self.params.use_long_output = self.common:check_boolean_number_option_syntax(self.params.use_longoutput, 1) self.params.remove_line_break_in_output = self.common:check_boolean_number_option_syntax(self.params.remove_line_break_in_output, 1) self.params.output_line_break_replacement_character = self.common:if_wrong_type(self.params.output_line_break_replacement_character, "string", " ") + self.params.metric_name_regex = self.common:if_wrong_type(self.params.metric_name_regex, "string", "") + 
self.params.metric_replacement_character = self.common:ifnil_or_empty(self.params.metric_replacement_character, "_") end --- get_kafka_params: retrieve the kafka parameters and store them in the self.params.kafka table diff --git a/stream-connectors/modules/docs/README.md b/stream-connectors/modules/docs/README.md index 5dfd6c1d2c1..4b2339b410f 100644 --- a/stream-connectors/modules/docs/README.md +++ b/stream-connectors/modules/docs/README.md @@ -43,6 +43,7 @@ | load_json_file | method loads a json file and parse it | [Documentation](sc_common.md#load_json_file-method) | | json_escape | escape json characters in a string | [Documentation](sc_common.md#json_escape-method) | | xml_escape | escape xml characters in a string | [Documentation](sc_common.md#xml_escape-method) | +| dumper | dump any variable for debugging purposes | [Documentation](sc_common.md#dumper-method) | ## sc_logger methods @@ -155,6 +156,7 @@ | is_valid_service_metric_event | makes sure that the metric event is valid service metric event | [Documentation](sc_metrics.md#is_valid_service_metric_event-method) | | is_valid_kpi_metric_event | makes sure that the metric event is valid KPI metric event | [Documentation](sc_metrics.md#is_valid_kpi_metric_event-method) | | is_valid_perfdata | makes sure that the performance data is valid | [Documentation](sc_metrics.md#is_valid_perfdata-method) | +| build_metric | use the stream connector format method to parse every metric in the event | [Documentation](sc_metrics.md#build_metric-method) | ## google.bigquery.bigquery methods diff --git a/stream-connectors/modules/docs/sc_common.md b/stream-connectors/modules/docs/sc_common.md index 0f877f83c44..e0e0a89519a 100644 --- a/stream-connectors/modules/docs/sc_common.md +++ b/stream-connectors/modules/docs/sc_common.md @@ -49,6 +49,10 @@ - [xml_escape: parameters](#xml_escape-parameters) - [xml_escape: returns](#xml_escape-returns) - [xml_escape: example](#xml_escape-example) + - [dumper method](#dumper-method) + -
[dumper: parameters](#dumper-parameters) + - [dumper: returns](#dumper-returns) + - [dumper: example](#dumper-example) ## Introduction @@ -429,3 +433,44 @@ local string = 'string with " and < and >' local result = test_common:xml_escape(string) --> result is 'string with &quot; and &lt; and &gt;' ``` + +## dumper method + +The **dumper** method dumps a variable for debugging purposes + +### dumper: parameters + +| parameter | type | optional | default value | | --------------------------------------------------------------------------------------------------- | ------ | -------- | ------------- | +| the variable that must be dumped | any | no | | +| the string that contains the dumped variable. ONLY USED INTERNALLY FOR RECURSIVE PURPOSE | string | yes | | +| the string that contains the tab character. ONLY USED INTERNALLY FOR RECURSIVE PURPOSE (and design) | string | yes | | + +### dumper: returns + +| return | type | always | condition | +| ------------------- | ------ | ------ | --------- | +| the dumped variable | string | yes | | + +### dumper: example + +```lua +local best_city = { + name = "mont-de-marsan", + geocoord = { + lat = 43.89446, + lon = -0.4964242 + } +} + +local result = "best city info: " ..
test_common:dumper(best_city) +--> result is +--[[ + best city info: + [table] + [string] name: mont-de-marsan + [table] geocoord: + [number] lon: -0.4964242 + [number] lat: 43.89446 +]]-- +``` diff --git a/stream-connectors/modules/docs/sc_metrics.md b/stream-connectors/modules/docs/sc_metrics.md index a7f3e51f4ef..ef183c49ba3 100644 --- a/stream-connectors/modules/docs/sc_metrics.md +++ b/stream-connectors/modules/docs/sc_metrics.md @@ -24,6 +24,9 @@ - [is_valid_perfdata parameters](#is_valid_perfdata-parameters) - [is_valid_perfdata: returns](#is_valid_perfdata-returns) - [is_valid_perfdata: example](#is_valid_perfdata-example) + - [build_metric method](#build_metric-method) + - [build_metric parameters](#build_metric-parameters) + - [build_metric: example](#build_metric-example) ## Introduction @@ -215,7 +218,7 @@ The **is_valid_perfdata** method makes sure that the performance data is valid. ```lua local perfdata = "pl=45%;40;80;0;100" -local result = test_metrics:is_valid_perfdata() +local result = test_metrics:is_valid_perfdata(perfdata) --> result is true or false --> test_metrics.metrics is now --[[ @@ -236,3 +239,24 @@ local result = test_metrics:is_valid_perfdata() } ]]-- ``` + +## build_metric method + +The **build_metric** method uses the provided stream connector format method to parse every metric in the event + +### build_metric parameters + +| parameter | type | optional | default value | +| -------------------------------------------- | -------- | -------- | ------------- | +| the format method from the stream connector | function | no | | + +### build_metric: example + +```lua +local function my_format_method(metric_data) + -- your code here +end + +local stored_method = function(metric_data) return my_format_method(metric_data) end +test_metrics:build_metric(stored_method) +``` diff --git a/stream-connectors/modules/docs/sc_param.md b/stream-connectors/modules/docs/sc_param.md index bd7ebdd1d83..4aa4dae92cf 100644 ---
a/stream-connectors/modules/docs/sc_param.md +++ b/stream-connectors/modules/docs/sc_param.md @@ -31,50 +31,54 @@ The sc_param module provides methods to help you handle parameters for your stre ### Default parameters -| Parameter name | type | default value | description | default scope | additionnal information | -| --------------------------------------- | ------ | ----------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| accepted_categories | string | neb,bam | each event is linked to a broker category that we can use to filter events | | it is a coma separated list, can use "neb", "bam", "storage". Storage is deprecated, use "neb" to get metrics data [more information](https://docs.centreon.com/current/en/developer/developer-broker-bbdo.html#event-categories) | -| accepted_elements | string | host_status,service_status,ba_status | | each event is linked to a broker element that we can use to filter events | it is a coma separated list, can use any type in the "neb", "bam" and "storage" tables [described here](https://docs.centreon.com/current/en/developer/developer-broker-bbdo.html#neb) (you must use lower case and replace blank space with underscore. 
"Host status" becomes "host_status") | -| host_status | string | 0,1,2 | coma separated list of accepted host status (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | | | -| service_status | string | 0,1,2,3 | coma separated list of accepted services status (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | | | -| ba_status | string | 0,1,2 | coma separated list of accepted BA status (0 = OK, 1 = WARNING, 2 = CRITICAL) | | | -| hard_only | number | 1 | accept only events that are in a HARD state (use 0 to accept SOFT state too) | host_status(neb), service_status(neb) | | -| acknowledged | number | 0 | accept only events that aren't acknowledged (use 1 to accept acknowledged events too) | host_status(neb), service_status(neb) | | -| in_downtime | number | 0 | accept only events that aren't in downtime (use 1 to accept events that are in downtime too) | host_status(neb), service_status(neb), ba_status(bam) | | -| accepted_hostgroups | string | | coma separated list of hostgroups that are accepted (for example: my_hostgroup_1,my_hostgroup_2) | host_status(neb), service_status(neb), acknowledgement(neb) | | -| accepted_servicegroups | string | | coma separated list of servicegroups that are accepted (for example: my_servicegroup_1,my_servicegroup_2) | service_status(neb), acknowledgement(neb) | | -| accepted_bvs | string | | coma separated list of BVs that are accepted (for example: my_bv_1,my_bv_2) | ba_status(bam) | | -| accepted_pollers | string | | coma separated list of pollers that are accepted (for example: my_poller_1,my_poller_2) | host_status(neb), service_status(neb),acknowledgement(neb) | | -| skip_anon_events | number | 1 | filter out events if their name can't be found in the broker cache (use 0 to accept them) | host_status(neb), service_status(neb), ba_status(bam), acknowledgement(neb) | | -| skip_nil_id | number | 1 | filter out events if their ID is nil (use 0 to accept them. 
YOU SHOULDN'T DO THAT) | host_status(neb), service_status(neb), ba_status(bam), acknowledgement(neb) | | -| max_buffer_size | number | 1 | this is the number of events the stream connector is going to store before sending them. (bulk send is made using a value above 1). | | | -| max_buffer_age | number | 5 | if no new event has been stored in the buffer in the past 5 seconds, all stored events are going to be sent even if the max_buffer_size hasn't been reached | | | -| max_all_queues_age | number | 300 | if last global flush date was 300 seconds ago, it will force a flush of each queue | | | -| send_mixed_events | number | 1 | when sending data, it will mix all sorts of events in every payload. It means that you can have events about hosts mixed with events about services when set to 1. Performance wise, it is **better** to set it to **1**. **Only** set it to **0** if the tool that you are sending events to **doesn't handle a payload with mixed events**. | | | -| service_severity_threshold | number | nil | the threshold that will be used to filter severity for services. it must be used with service_severity_operator option | service_status(neb), acknowledgement(neb) | | -| service_severity_operator | string | >= | the mathematical operator used to compare the accepted service severity threshold and the service severity (operation order is: threshold >= service severity) | service_status(neb), acknowledgement(neb) | | -| host_severity_threshold | number | nil | the threshold that will be used to filter severity for hosts. 
it must be used with host_severity_operator option | host_status(neb), service_status(neb) , acknowledgement(neb) | | -| host_severity_operator | string | >= | the mathematical operator used to compare the accepted host severity threshold and the host severity (operation order is: threshold >= host severity) | host_status(neb), service_status(neb), acknowledgement(neb) | | -| ack_host_status | string | | | coma separated list of accepted host status for an acknowledgement event. It uses the host_status parameter by default (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | acknowledgement(neb) | | -| ack_service_status | string | | | coma separated list of accepted service status for an acknowledgement event. It uses the service_status parameter by default (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | acknowledgement(neb) | | -| dt_host_status | string | | | coma separated list of accepted host status for a downtime event. It uses the host_status parameter by default (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | downtime(neb) | | -| dt_service_status | string | | | coma separated list of accepted service status for a downtime event. It uses the service_status parameter by default (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | downtime(neb) | | -| enable_host_status_dedup | number | 1 | | enable the deduplication of host status event when set to 1 | host_status(neb) | | -| enable_service_status_dedup | number | 1 | | enable the deduplication of service status event when set to 1 | service_status(neb) | | -| accepted_authors | string | | | coma separated list of accepted authors for a comment. It uses the alias (login) of the Centreon contacts | downtime(neb), acknowledgement(neb) | | -| local_time_diff_from_utc | number | default value is the time difference the centreon central server has from UTC | | the time difference from UTC in seconds | all | | -| timestamp_conversion_format | string | %Y-%m-%d %X | | the date format used to convert timestamps. 
Default value will print dates like this: 2021-06-11 10:43:38 | all | [date format information](https://www.lua.org/pil/22.1.html) | -| send_data_test | number | 0 | | When set to 1, send data in the logfile of the stream connector instead of sending it where the stream connector was designed to | all | | -| format_file | string | | | Path to a file that will be used as a template to format events instead of using default format | only usable for events stream connectors (\*-events-apiv2.lua) and not metrics stream connectors (\*-metrics-apiv2.lua) you should put the file in /etc/centreon-broker to keep your broker configuration in a single place. [**See documentation for more information**](templating.md) | -| proxy_address | string | | | address of the proxy | | -| proxy_port | number | | | port of the proxy | | -| proxy_username | string | | | user for the proxy | | -| proxy_password | string | | | pasword of the proxy user | | -| connection_timeout | number | 60 | | time to wait in second when opening connection | | -| allow_insecure_connection | number | 0 | | check the certificate validity of the peer host (0 = needs to be a valid certificate), use 1 if you are using self signed certificates | | -| use_long_output | number | 1 | | use the long output when sending an event (set to 0 to send the short output) | service_status(neb), host_status(neb) | -| remove_line_break_in_output | number | 1 | | replace all line breaks (\n) in the output with the character set in the output_line_break_replacement_character parameter | service_status(neb), host_status(neb) | -| output_line_break_replacement_character | string | " " | | replace all replace line break with this parameter value in the output (default value is a blank space) | service_status(neb), host_status(neb) | +| Parameter name | type | default value | description | default scope | additionnal information | +| --------------------------------------- | ------ | 
--- | --- | --- | --- | +| accepted_categories | string | neb,bam | each event is linked to a broker category that we can use to filter events | | it is a comma separated list, can use "neb", "bam", "storage". Storage is deprecated, use "neb" to get metrics data [more information](https://docs.centreon.com/current/en/developer/developer-broker-bbdo.html#event-categories) | +| accepted_elements | string | host_status,service_status,ba_status | each event is linked to a broker element that we can use to filter events | | it is a comma separated list, can use any type in the "neb", "bam" and "storage" tables [described here](https://docs.centreon.com/current/en/developer/developer-broker-bbdo.html#neb) (you must use lower case and replace blank space with underscore.
"Host status" becomes "host_status") | +| host_status | string | 0,1,2 | comma separated list of accepted host status (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | | | +| service_status | string | 0,1,2,3 | comma separated list of accepted service status (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | | | +| ba_status | string | 0,1,2 | comma separated list of accepted BA status (0 = OK, 1 = WARNING, 2 = CRITICAL) | | | +| hard_only | number | 1 | accept only events that are in a HARD state (use 0 to accept SOFT state too) | host_status(neb), service_status(neb) | | +| acknowledged | number | 0 | accept only events that aren't acknowledged (use 1 to accept acknowledged events too) | host_status(neb), service_status(neb) | | +| in_downtime | number | 0 | accept only events that aren't in downtime (use 1 to accept events that are in downtime too) | host_status(neb), service_status(neb), ba_status(bam) | | +| accepted_hostgroups | string | | comma separated list of hostgroups that are accepted (for example: my_hostgroup_1,my_hostgroup_2) | host_status(neb), service_status(neb), acknowledgement(neb) | | +| accepted_servicegroups | string | | comma separated list of servicegroups that are accepted (for example: my_servicegroup_1,my_servicegroup_2) | service_status(neb), acknowledgement(neb) | | +| accepted_bvs | string | | comma separated list of BVs that are accepted (for example: my_bv_1,my_bv_2) | ba_status(bam) | | +| accepted_pollers | string | | comma separated list of pollers that are accepted (for example: my_poller_1,my_poller_2) | host_status(neb), service_status(neb), acknowledgement(neb) | | +| skip_anon_events | number | 1 | filter out events if their name can't be found in the broker cache (use 0 to accept them) | host_status(neb), service_status(neb), ba_status(bam), acknowledgement(neb) | | +| skip_nil_id | number | 1 | filter out events if their ID is nil (use 0 to accept them.
YOU SHOULDN'T DO THAT) | host_status(neb), service_status(neb), ba_status(bam), acknowledgement(neb) | | +| max_buffer_size | number | 1 | the number of events the stream connector stores before sending them (a value above 1 enables bulk sending) | | | +| max_buffer_age | number | 5 | if no new event has been stored in the buffer in the past 5 seconds, all stored events are going to be sent even if max_buffer_size hasn't been reached | | | +| max_all_queues_age | number | 300 | if the last global flush happened more than 300 seconds ago, a flush of each queue is forced | | | +| send_mixed_events | number | 1 | when sending data, it will mix all sorts of events in every payload. It means that you can have events about hosts mixed with events about services when set to 1. Performance-wise, it is **better** to set it to **1**. **Only** set it to **0** if the tool that you are sending events to **doesn't handle a payload with mixed events**. | | | +| service_severity_threshold | number | nil | the threshold that will be used to filter severity for services. It must be used with the service_severity_operator option | service_status(neb), acknowledgement(neb) | | +| service_severity_operator | string | >= | the mathematical operator used to compare the accepted service severity threshold and the service severity (operation order is: threshold >= service severity) | service_status(neb), acknowledgement(neb) | | +| host_severity_threshold | number | nil | the threshold that will be used to filter severity for hosts.
It must be used with the host_severity_operator option | host_status(neb), service_status(neb), acknowledgement(neb) | | +| host_severity_operator | string | >= | the mathematical operator used to compare the accepted host severity threshold and the host severity (operation order is: threshold >= host severity) | host_status(neb), service_status(neb), acknowledgement(neb) | | +| ack_host_status | string | | comma separated list of accepted host status for an acknowledgement event. It uses the host_status parameter by default (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | acknowledgement(neb) | | +| ack_service_status | string | | comma separated list of accepted service status for an acknowledgement event. It uses the service_status parameter by default (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | acknowledgement(neb) | | +| dt_host_status | string | | comma separated list of accepted host status for a downtime event. It uses the host_status parameter by default (0 = UP, 1 = DOWN, 2 = UNREACHABLE) | downtime(neb) | | +| dt_service_status | string | | comma separated list of accepted service status for a downtime event. It uses the service_status parameter by default (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) | downtime(neb) | | +| enable_host_status_dedup | number | 1 | enable the deduplication of host status events when set to 1 | host_status(neb) | | +| enable_service_status_dedup | number | 1 | enable the deduplication of service status events when set to 1 | service_status(neb) | | +| accepted_authors | string | | comma separated list of accepted authors for a comment. It uses the alias (login) of the Centreon contacts | downtime(neb), acknowledgement(neb) | | +| local_time_diff_from_utc | number | default value is the time difference the Centreon central server has from UTC | the time difference from UTC in seconds | all | | +| timestamp_conversion_format | string | %Y-%m-%d %X | the date format used to convert timestamps.
Default value will print dates like this: 2021-06-11 10:43:38 | all | [date format information](https://www.lua.org/pil/22.1.html) | +| send_data_test | number | 0 | When set to 1, send data in the logfile of the stream connector instead of sending it to its intended destination | all | | +| format_file | string | | Path to a file that will be used as a template to format events instead of the default format | only usable for events stream connectors (\*-events-apiv2.lua) and not metrics stream connectors (\*-metrics-apiv2.lua). You should put the file in /etc/centreon-broker to keep your broker configuration in a single place. [**See documentation for more information**](templating.md) | | +| proxy_address | string | | address of the proxy | | | +| proxy_port | number | | port of the proxy | | | +| proxy_username | string | | user for the proxy | | | +| proxy_password | string | | password of the proxy user | | | +| connection_timeout | number | 60 | time to wait in seconds when opening the connection | | | +| allow_insecure_connection | number | 0 | check the certificate validity of the peer host (0 = needs to be a valid certificate), use 1 if you are using self-signed certificates | | | +| use_long_output | number | 1 | use the long output when sending an event (set to 0 to send the short output) | service_status(neb), host_status(neb) | | +| remove_line_break_in_output | number | 1 | replace all line breaks (\n) in the output with the character set in the output_line_break_replacement_character parameter | service_status(neb), host_status(neb) | | +| output_line_break_replacement_character | string | " " | replace all line breaks in the output with this parameter value (default value is a blank space) | service_status(neb), host_status(neb) | | +| metric_name_regex | string | "" | the regex that will be used to transform the metric name to a compatible name for the software that will receive the data | service_status(neb), host_status(neb) | |
+| metric_replacement_character | string | "_" | the character that will be used to replace invalid characters in the metric name | service_status(neb), host_status(neb) | | +| logfile | string | **check the stream connector documentation** | the logfile that will be used for the stream connector | any | | +| log_level | number | 1 | the verbosity level for the logs. 1 = error + notice, 2 = error + warning + notice, 3 = error + warning + notice + debug (you should avoid using level 3) | any | | ## Module initialization
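As a minimal sketch of how the `metric_name_regex` / `metric_replacement_character` pair documented above behaves, the sanitization boils down to the single `string.gsub` call used in `ScMetrics:build_metric`. The Lua pattern and the sample metric name below are illustrative assumptions, not values shipped with the module:

```lua
-- Illustrative standalone sketch of the metric name sanitization done in
-- ScMetrics:build_metric (assumed values, not the module defaults).
local params = {
  -- Lua pattern matching every character that is NOT in the allowed set
  metric_name_regex = "[^a-zA-Z0-9_.-]",
  -- character substituted for each disallowed character
  metric_replacement_character = "_",
}

-- replace every disallowed character in a metric name, as build_metric does
local function sanitize_metric_name(name)
  return (string.gsub(name, params.metric_name_regex, params.metric_replacement_character))
end

print(sanitize_metric_name("used memory %"))  --> used_memory__
```

With an empty `metric_name_regex` (the default), `string.gsub` matches the empty pattern and the name is left effectively unchanged, which is why the parameter is opt-in.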