
Add missing mongodb status fields #7613

Merged · 11 commits · Jul 23, 2018
Conversation

@a3dho3yn (Contributor) commented Jul 16, 2018

The status metricset does not reflect many of the fields reported by MongoDB, so we were missing information about locks and latencies.
I added these fields to the current metricset.
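For context, a minimal sketch of how new fields are declared in the status metricset's schema, using the libbeat schema helpers already aliased as `s` and `c` in this file (as in the diff excerpts in this review). The field names below are illustrative only; the exact additions in this PR may differ.

```go
package status

import (
	s "github.com/elastic/beats/libbeat/common/schema"
	c "github.com/elastic/beats/libbeat/common/schema/mapstriface"
)

// exampleSchema is a sketch, not the PR's actual schema: it shows the pattern
// of mapping serverStatus output into metricset fields.
var exampleSchema = s.Schema{
	// Plain field copied from the serverStatus document.
	"process": c.Str("process"),
	// Nested sub-document mapped with c.Dict; leaf values converted with c.Int.
	"global_lock": c.Dict("globalLock", s.Schema{
		"total_time": s.Object{
			"us": c.Int("totalTime"),
		},
	}),
}
```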

@elasticmachine (Collaborator)

Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?

@jsoriano (Member) left a comment

Thanks for this awesome contribution!

"delete": c.Int("delete"),
"getmore": c.Int("getmore"),
"command": c.Int("command"),
}),
Member:

What about having these three op objects under the same parent object? Something like ops.latencies, ops.counters, ops.replicated.
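To illustrate the suggestion, a quick sketch of the event layout it would produce. The sub-fields and values are illustrative only; the actual counter and latency fields come from MongoDB's opcounters/opLatencies output, and the PR's final field set may differ.

```go
package main

import (
	"fmt"

	"github.com/elastic/beats/libbeat/common"
)

func main() {
	// Hypothetical layout of the grouped fields in the reported event.
	suggested := common.MapStr{
		"ops": common.MapStr{
			"counters":   common.MapStr{"insert": 10, "query": 5, "command": 7},
			"replicated": common.MapStr{"insert": 2, "query": 1},
			"latencies": common.MapStr{
				"reads":  common.MapStr{"latency_us": 1200, "count": 4},
				"writes": common.MapStr{"latency_us": 800, "count": 2},
			},
		},
	}
	fmt.Println(suggested)
}
```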

// readOnly boolean
// persistent boolean
}),
// ToDo add `tcmalloc` field
Member:

Do you plan to fix these TODOs in this PR?

Contributor Author:

I plan to fix just one of them for now (the one related to replication).
But if leaving TODOs in the code is a problem, I think I can find time to fix the others as well.

Member:

No, it's OK, I just wanted to know whether we have to wait before approving and merging the PR 🙂

"r": c.Int("r", s.Optional),
"w": c.Int("w", s.Optional),
"R": c.Int("R", s.Optional),
"W": c.Int("W", s.Optional),
Member:

Could we give more human-readable names to these fields? 🙂

Contributor Author:

These are lock modes in MongoDB. At first I used their full names (shared (S), exclusive (X), intent shared (IS), and intent exclusive (IX)), but then I decided to leave them as they appear in the MongoDB output (to avoid confusing developers).
But if you think human readability is more important than sticking with MongoDB's output, I can rename them.

Member:

Ok, I'm fine with leaving them this way if this is how MongoDB users/devs would expect them 👍
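A sketch of how the per-lock-type fields could be declared while keeping MongoDB's own mode names. The serverStatus keys used here (acquireCount, acquireWaitCount, timeAcquiringMicros, deadlockCount) come from MongoDB's documentation; the exact code in the PR may be structured differently.

```go
package status

import (
	s "github.com/elastic/beats/libbeat/common/schema"
	c "github.com/elastic/beats/libbeat/common/schema/mapstriface"
)

// lockModes keeps MongoDB's own mode names: r/w are intent shared/exclusive,
// R/W are shared/exclusive. The modes are optional because serverStatus omits
// modes that were never used.
var lockModes = s.Schema{
	"r": c.Int("r", s.Optional),
	"w": c.Int("w", s.Optional),
	"R": c.Int("R", s.Optional),
	"W": c.Int("W", s.Optional),
}

// lockType sketches one lock <type> (global, database, ...), producing fields
// like locks.<type>.acquire.count.<mode>.
var lockType = s.Schema{
	"acquire": s.Object{
		"count": c.Dict("acquireCount", lockModes),
	},
	"wait": s.Object{
		"count": c.Dict("acquireWaitCount", lockModes),
		"us":    c.Dict("timeAcquiringMicros", lockModes),
	},
	"deadlock": s.Object{
		"count": c.Dict("deadlockCount", lockModes),
	},
}
```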

@@ -28,7 +28,6 @@ import (

/*
TODOs:
* add metricset for "locks" data
Member:

🎉

Contributor Author:

I don't know when I should stick to MongoDB's commands to define metricsets, and when I should define more organized, human-friendly metricsets. For example, MongoDB provides information about locks or replication in several commands. Should I define a metricset for replication or locks and gather all the information from those commands?
Is there any design guide for defining metricsets?

In #7611, I defined a new metricset (as the ToDo said) for the metrics subfield of MongoDB's serverStatus output. But here, I just added the locks fields under the existing status metricset.

Member:

There is no silver-bullet argument to decide whether a new metric should belong to an existing metricset or not. It is going to depend on the service and the metric. Some things you can consider to decide that are:

  • Is the mechanism to collect the metric the same as for other metrics?
  • Is the metric recommended to have the same collection period as other metrics?
  • Does the metric significantly increase the volume of data collected?
  • Can it be confusing for the user to have them together?
    ...

In principle I don't see any problem with your decisions regarding that in your PRs 🙂

@@ -304,6 +304,7 @@ https://github.com/elastic/beats/compare/v6.2.3...master[Check the HEAD diff]
- Add Elasticsearch ml_job metricsets. {pull}7196[7196]
- Add support for bearer token files to HTTP helper. {pull}7527[7527]
- Add Elasticsearch index recovery metricset. {pull}7225[7225]
- Add `locks`, `global_locks`, `oplatencies` and `process` fields to `status` metricset of MongoDB module
Member:

Could you add a reference to this PR here?

@@ -24,18 +24,84 @@ import (

var schema = s.Schema{
Member:

After adding all the new fields, could you also recreate the data.json file?

*`mongodb.status.locks.global.acquire.count.w`*::
+
--
type: long

Member:

Could you add a description for all these new fields?

Contributor Author:

For the fields under locks, there is a general description at the locks level:

A document that reports, for each lock <type>, data on lock <mode>s.
The possible lock <type>s are global, database, collection, metadata, and oplog.
The possible <mode>s are r, w, R, and W, which represent intent shared, intent exclusive, shared, and exclusive, respectively.

locks.<type>.acquire.count.<mode> shows the number of times the lock was acquired in the specified mode.
locks.<type>.wait.count.<mode> shows the number of those lock acquisitions that had to wait because the lock was held in a conflicting mode.
locks.<type>.wait.us.<mode> shows the cumulative wait time in microseconds for those lock acquisitions.
locks.<type>.deadlock.count.<mode> shows the number of times the lock acquisitions encountered deadlocks.

Considering this description, should I add a description for every field?

Member:

Ok, it's fine, we can revisit it in the future if needed.

@jsoriano (Member)

jenkins, test this please

@jsoriano (Member)

jenkins, test this again, please

@a3dho3yn (Contributor Author)

@jsoriano Is there anything that I should do for this PR?
Is it possible to have this, #7611, and #7604 in the v6.4 release?

@jsoriano (Member)

jenkins, test this again

@jsoriano merged commit 5544838 into elastic:master on Jul 23, 2018
tsg pushed a commit that referenced this pull request Jul 24, 2018
* Fix breaking change in monitoring data (#7563)

The prefix for the stats metrics was `metrics` but was renamed to `stats` by accident, as the name is now auto-generated. This reverts that change.

Closes #7562

* Add http.request.method to Kibana log fileset (#7607)

Take `http.request.method` from ECS and apply it to the Kibana fileset.

Additional logs are added to the example log files.

* Fix rename log message (#7614)

The `to` field was logged instead of the `from` field.

* Add tests to verify template content (#7606)

We recently started to move fields.yml into the Golang binary to be used internally. To make sure the import and loading of all the data into the binary works as expected for Metricbeat, this adds some basic tests. Related to #7605.

* Basic support of ES GC metrics for jvm9 (#7628)

GC log format for JVM9 is more detailed than for JVM8.

Differences and possible improvements:
* To get cpu_times.*, a correlation between log lines is required.
* Some GC metrics that are available in jvm8 are not in jvm9
  (class_unload_time_sec, weak_refs_processing_time_sec, ...)
* heap.used_kb is empty, but it can be calculated as young_gen.used_kb +
  old_gen.size_kb
* GC phase times are logged in milliseconds vs seconds in jvm8

* Improve fields.yml generator of modules (#7533)

From now on, when a user provides a type hint in an Ingest pipeline, it is added to the generated `fields.yml` instead of being guessed.

Closes #7472

* Fix filebeat registry meta being nil vs empty (#7632)

Filebeat introduced a meta field for registry entries in 6.3.1. The meta field is used to distinguish different log streams in Docker files. For other input types the meta field must be null. Unfortunately the input loader initialized the meta field with an empty dictionary. This leads to failing matches of old and new registry entries. Due to the match failing, old entries will not be removed, and filebeat will handle all files as new files on startup (old logs are sent again).

Users will observe duplicate entries in the registry file: one entry with "meta": null and one entry with "meta": {}. The entry with "meta": {} will be used by filebeat. The null entry will not be used by filebeat, but is kept in the registry file, because it has no active owner (yet).

Improvements provided by this PR:

* when matching states, consider an empty map and a null map to be equivalent (see the sketch below)
* update the input loader to create a null map for old states -> registry entries will be compatible on upgrade
* add checks in critical places replacing an empty map with a null map
* add support to fix registry entries on load: states from corrupted 6.3.1 files will be merged into one single state on load
* introduce unit tests for loading different registry formats
* introduce system tests validating output and registry when upgrading filebeat from an older version

Closes: #7634
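To make the first improvement concrete, here is a minimal standalone sketch of the nil-vs-empty equivalence rule. `metaEqual` is a hypothetical helper name for illustration, not Filebeat's actual function.

```go
package main

import (
	"fmt"
	"reflect"
)

// metaEqual treats a nil meta map and an empty meta map as equivalent, so an
// old registry entry ("meta": null) still matches a new one ("meta": {}).
// Hypothetical helper, for illustration only.
func metaEqual(a, b map[string]string) bool {
	if len(a) == 0 && len(b) == 0 {
		return true // nil and {} both mean "no meta"
	}
	return reflect.DeepEqual(a, b)
}

func main() {
	var oldMeta map[string]string  // registry entry written before 6.3.1: meta is null
	newMeta := map[string]string{} // entry initialized by the buggy loader: meta is {}
	fmt.Println(metaEqual(oldMeta, newMeta)) // prints: true
}
```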

* Heartbeat Job Validation + addition of libbeat/mapval (#7587)

This commit seeks to establish a pattern for testing heartbeat jobs. It currently tests the HTTP and TCP jobs. It also required some minor refactors of those tasks for HTTP/TCP.

To do this, it made sense to validate event maps with a sort of schema library. I couldn't find one that did exactly what I wanted here, so I wrote one called mapval. That turned out to be a large undertaking, and is now the majority of this commit. Further tests need to be written, but this commit is large enough as is.

One of the nicest things about the heartbeat architecture is the dialer chain behavior. It should be the case that any validated protocol using TCP (e.g. HTTP, TCP, Redis, etc.) has the exact same tcp metadata.

To help make testing these properties easy, mapval lets users compose portions of a schema into a bigger one. In other words, you can say "An HTTP response should be a TCP response, with the standard monitor data added in, and also the special HTTP fields". Even having only written a handful of tests, this has uncovered some inconsistencies there, where TCP jobs have a hostname, but HTTP ones do not.

* Only fetch shard metrics from master node (#7635)

This PR makes it so that the `elasticsearch/shard` metricset only fetches information from the Elasticsearch node if that node is the master node.

* Create (X-Pack Monitoring) stats metricset for Kibana module (#7525)

This PR takes the `stats` metricset of the `kibana` Metricbeat module and makes it ship documents to `.monitoring-kibana-6-mb-%{YYYY.MM.DD}` indices, while preserving the current format/mapping expected by docs in these indices. This will ensure that current consumers of the data in these indices, viz. the X-Pack Monitoring UI and the Telemetry shipping module in Kibana, will continue to work as-is.

* Add kubernetes specs for auditbeat file integrity monitoring (#7642)

* Release the rename processor as GA

* Fix log message for Kibana beta state (#7631)

Due to a copy-paste error, Kafka was mentioned in the log message instead of Kibana.

* Clean up experimental and beta messages (#7659)

Sometimes the old logging mechanism was used. If everything uses the new one, it is easier to find all the entries. In addition, some messages were inconsistent.

* Release raid and socket metricsets from system module as GA (#7658)

* Release raid and socket metricsets from system module as GA

* remove raid metricset title

* Update geoip config docs (#7640)

* Document breaking change in monitoring schema

Situation:

* Edit breaking changes statement about monitoring schema changes (#7666)

* Marking Elasticsearch module and its metricsets as beta (#7662)

This PR marks the `elasticsearch` module and all of its 8 existing metricsets as `beta`. Previously, only
2 metricsets were marked as `beta`, with the remaining 6 marked as `experimental`.

* Increase kafka version in tests to 1.1.1 (#7655)

* Add missing mongodb status fields (#7613)

Add `locks`, `global_locks`, `oplatencies` and `process` fields to `status` metricset of MongoDB module.

* Remove outdated vendor information. (#7676)

* Fix Filebeat tests with new region_iso_code field (#7678)

In elastic/elasticsearch#31669 the field `region_iso_code` was added to the geoip processor. Because of this, tests broke with the most recent release of Elasticsearch, as the events contain an undocumented field.

* Fix duplicated module headers (#7650)

* Fix duplicated module headers

Closes #7643

* fix metricset titles for munin and kvm

* fix missing kubernetes apiserver metricset doc

* remove headers from modules / metricset generator and clean up traefik title

* Release munin and traefik module as beta. (#7660)

* Release munin and traefik module as beta.

* fixes to munin module

* Report k8s pct metrics from enrichment process (#7677)

Instead of doing it from the `state_container` metricset. The problem with
the previous approach is that the `state_container` metricset is not run on all
nodes, but from a single point, making performance metrics unavailable
in some cases.

With this new approach, the enriching process will also collect
performance metrics, so they should be available everywhere the
module is run.

* Fix misspell in Beats repo (#7679)

Running `make misspell`.

* Update sarama (kafka client) to 1.17 (#7665)

- Update Sarama to 1.17. The Sarama testsuite tests kafka versions between 0.11 and 1.1.0.
- Update compatible versions in output docs
- Add compression_level setting for gzip compression

* Update github.com/OneOfOne/xxhash to fix mips

* Update boltdb to use github.com/coreos/bbolt fork

Closes #6052

* Generate fields.yml using Mage (#7670)

Make will now delegate to mage for generating fields.yml. Make will check if the mage command exists and go install it if not. The FIELDS_FILE_PATH make variable is no longer used because the path(s) are specified in magefile.go.

This allows fields.yml to be generated on Windows using Mage. The CI scripts for Windows have been updated so that fields.yml is generated for all Beats during testing.

This also adds a make.bat in each directory where building occurs to give Windows
users a starting point.

Some fixes were made to the generators because:
- rsync was excluding important source files contained in a directory
  named "build"
- the generated project needed to be initialized with `git init` before running certain
  magefile targets that detect the project's root dir and import path.

* Update go-ucfg to 0.6.1 (#7599)

The update fixes config unpacking when users overwrite settings from the CLI with
missing values. When using `-E key=` (e.g. in scripts defining potentially
empty defaults via env variables like `-E key=${MYVALUE}`), an untyped
`nil` value was inserted into the config. This untyped value makes
Unpack fail for most typed settings.

* Docs: Add deprecation check for dashboard loading. (#7675)

For APM Server, the recommended way of loading dashboards and the Kibana index pattern will be through the Kibana UI from 6.4 on. Since the docs are based on the libbeat docs, we need to add a deprecation flag for dashboard- and index-pattern-related documentation.

relates to elastic/apm-server#1142

* Update expected filebeat module files for geoip change