Only scrape one pod #635

Closed
Serrvosky opened this issue May 30, 2019 · 2 comments

@Serrvosky

Hello everyone,

I'm using Promtail to aggregate my Kubernetes cluster logs, but it hasn't been easy.

I'm using the DaemonSet method, but every time I run my daemon, Promtail starts scraping one pod, and one pod only, and it's a different one every time.

Can somebody help me?

Here are my Promtail configs:

    scrape_configs:
      - job_name: kubernetes
        kubernetes_sd_configs:
        - role: pod

        relabel_configs:

        - source_labels: [__meta_kubernetes_pod_controller_name]
          target_label: __path__
          replacement: '/var/log/pods/*/$1/*.log'

        - action: replace
          separator: _
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_pod_name
          - __meta_kubernetes_pod_uid
          target_label: __tmp_log_folder

        - replacement: /var/log/pods/$1/*.log
          separator: /
          source_labels:
          - __tmp_log_folder
          - __meta_kubernetes_pod_container_name
          target_label: __path__
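
For context, my understanding of how these rules compose, using illustrative values (note that the first rule's __path__ is immediately overwritten by the third rule, so only the last assignment takes effect):

    # Hypothetical pod: namespace=default, pod=myapp-6b4df57949-abcde,
    # uid=c036acfe-1234, container=app
    #
    # Rule 2 joins its three source labels with "_":
    #   __tmp_log_folder = default_myapp-6b4df57949-abcde_c036acfe-1234
    #
    # Rule 3 joins that folder and the container name with "/" and
    # substitutes the result into the replacement as $1:
    #   __path__ = /var/log/pods/default_myapp-6b4df57949-abcde_c036acfe-1234/app/*.log

This matches the paths Promtail actually tails in the logs below.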

Here are some logs from my DaemonSet:
RUN TIME #1

level=warn ts=2019-05-30T08:21:55.830484646Z caller=filetargetmanager.go:94 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=info ts=2019-05-30T08:21:55.831023422Z caller=kubernetes.go:191 component=discovery discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-05-30T08:21:55.83315885Z caller=server.go:120 http=[::]:9080 grpc=[::]:44011 msg="server listening on addresses"
level=info ts=2019-05-30T08:21:55.834482717Z caller=main.go:49 msg="Starting Promtail" version="(version=master-39bbd73, branch=master, revision=39bbd73)"
level=info ts=2019-05-30T08:22:00.835444134Z caller=filetargetmanager.go:243 msg="Adding target" key={}
2019/05/30 08:22:00 Seeked /var/log/pods/default_glartek-frontend-6b4df57949-8gkpr_c036acfe-7720-11e9-a97e-de04c70a5f39/glarboard/0.log - &{Offset:0 Whence:0}
level=info ts=2019-05-30T08:22:00.841660906Z caller=tailer.go:68 msg="start tailing file" path=/var/log/pods/default_glartek-frontend-6b4df57949-8gkpr_c036acfe-7720-11e9-a97e-de04c70a5f39/glarboard/0.log

RUN TIME #2

level=warn ts=2019-05-29T21:54:00.80305217Z caller=filetargetmanager.go:94 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=info ts=2019-05-29T21:54:00.804235119Z caller=kubernetes.go:191 component=discovery discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-05-29T21:54:00.80600984Z caller=server.go:120 http=[::]:9080 grpc=[::]:36687 msg="server listening on addresses"
level=info ts=2019-05-29T21:54:00.806389707Z caller=main.go:49 msg="Starting Promtail" version="(version=master-39bbd73, branch=master, revision=39bbd73)"
level=info ts=2019-05-29T21:54:05.807565101Z caller=filetargetmanager.go:243 msg="Adding target" key={}
2019/05/29 21:54:05 Seeked /var/log/pods/kube-system_coredns-5f44b47f5f-d6qmc_f87cd699-7b0d-11e9-a97e-de04c70a5f39/coredns/0.log - &{Offset:0 Whence:0}
level=info ts=2019-05-29T21:54:05.812863078Z caller=tailer.go:68 msg="start tailing file" path=/var/log/pods/kube-system_coredns-5f44b47f5f-d6qmc_f87cd699-7b0d-11e9-a97e-de04c70a5f39/coredns/0.log

RUN TIME #3

level=warn ts=2019-05-29T21:53:21.661706385Z caller=filetargetmanager.go:94 msg="WARNING!!! entry_parser config is deprecated, please change to pipeline_stages"
level=info ts=2019-05-29T21:53:21.662507345Z caller=kubernetes.go:191 component=discovery discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-05-29T21:53:21.664034838Z caller=server.go:120 http=[::]:9080 grpc=[::]:44241 msg="server listening on addresses"
level=info ts=2019-05-29T21:53:21.664467876Z caller=main.go:49 msg="Starting Promtail" version="(version=master-39bbd73, branch=master, revision=39bbd73)"
level=info ts=2019-05-29T21:53:26.66549849Z caller=filetargetmanager.go:243 msg="Adding target" key={}
2019/05/29 21:53:26 Seeked /var/log/pods/monitoring_prometheus-deployment-5bc6cf756-hblv5_3a512860-7bcb-11e9-a97e-de04c70a5f39/prometheus-pod/0.log - &{Offset:0 Whence:0}
level=info ts=2019-05-29T21:53:26.668528278Z caller=tailer.go:68 msg="start tailing file" path=/var/log/pods/monitoring_prometheus-deployment-5bc6cf756-hblv5_3a512860-7bcb-11e9-a97e-de04c70a5f39/prometheus-pod/0.log

As you can see, it's a different pod every time. Do I have to add a loop somewhere to pick up all the pods? Or are my Promtail configs wrong?

Thanks for your help.

@jaaanix commented May 12, 2020

Did you manage to solve your problem? I am encountering the same issue.

@cyriltovena (Contributor)

Probably a configuration issue, @jaaanix. Please open an issue with your config for more assistance.
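
For anyone hitting the same symptom: note the "Adding target" key={} lines in the logs above. One plausible cause (an assumption based on that empty label set, not a confirmed diagnosis) is that the relabel rules only build __path__ and keep no other labels, so every discovered pod ends up with the same empty label set and collapses into a single target. Below is a minimal sketch of keeping per-pod labels so each pod yields a distinct target; the label names (namespace, pod, container) are conventional choices, not required ones:

    relabel_configs:
    # Keep identifying labels so each pod produces a unique target
    # (assumption: distinct label sets are what distinguish targets here).
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: pod
    - source_labels: [__meta_kubernetes_pod_container_name]
      target_label: container

These rules would go alongside the existing __path__ rules shown in the config above.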
