
podman play assigns wrong environment vars. #5140

Closed
stxm opened this issue Feb 9, 2020 · 19 comments
Labels
Good First Issue This issue would be a good issue for a first time contributor to undertake. kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments


stxm commented Feb 9, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

podman play assigns environment vars to the wrong containers.

Steps to reproduce the issue:

podman run -dt --rm --name gitea_gitea  --pod new:gitea -p 3000:3000 -p 2222:22 gitea/gitea:latest
podman run -dt --rm --name gitea_postgres --pod gitea postgres:9.6
podman generate kube gitea -f gitea.yml
podman pod rm -f gitea
podman play kube gitea gitea.yml
podman generate kube gitea -f gitea_played.yml
diff -y gitea.yml gitea_played.yml
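A toy illustration of the kind of difference the round-trip surfaces (these files and values are invented for the sketch, not the real generated gitea YAML): after one play/generate cycle, the replayed file gains env entries that originate from the image itself, such as the postgres image's PGDATA default.

```shell
# Minimal stand-in files showing the symptom: the "after" YAML has picked
# up an image-provided env var that the "before" YAML never listed.
printf 'env:\n- name: POSTGRES_USER\n  value: gitea\n' > before.yml
printf 'env:\n- name: POSTGRES_USER\n  value: gitea\n- name: PGDATA\n  value: /var/lib/postgresql/data\n' > after.yml
diff before.yml after.yml || true
```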

Describe the results you received:
There is an unexpected difference between the two YAML files; the differences concern the environment variables.

Describe the results you expected:
I expected both files to be identical, or to differ only trivially.

Additional information you deem important (e.g. issue happens only occasionally):
always
Output of podman version:

[root@localhost ~]# podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.12.12
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.12
  podman version: 1.6.4
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.6-1.module_el8.1.0+272+3e64ee36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.6, commit: 7a4f0dd7b20a3d4bf9ef3e5cbfac05606b08eac0'
  Distribution:
    distribution: '"centos"'
    version: "8"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  MemFree: 614768640
  MemTotal: 2085666816
  OCIRuntime:
    name: runc
    package: runc-1.0.0-64.rc9.module_el8.1.0+272+3e64ee36.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 2218782720
  SwapTotal: 2218782720
  arch: amd64
  cpus: 2
  eventlogger: journald
  hostname: localhost.localdomain
  kernel: 4.18.0-147.5.1.el8_1.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.2-2.git21fdece.module_el8.1.0+272+3e64ee36.x86_64
    Version: |-
      slirp4netns version 0.4.2+dev
      commit: 21fdece2737dc24ffa3f01a341b8a6854f8b13b4
  uptime: 1h 21m 29.73s (Approximately 0.04 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - registry.fedoraproject.org
  - registry.centos.org
  - docker.io
store:
  ConfigFile: /home/sysops/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-1.module_el8.1.0+272+3e64ee36.x86_64
      Version: |-
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  GraphRoot: /home/sysops/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/sysops/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.6.4-2.module_el8.1.0+272+3e64ee36.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
see attached files
gitea.yml.txt
gitea_played.yml.txt

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Feb 9, 2020

rhatdan commented Feb 18, 2020

@haircommander Could you take a look at this one?

@github-actions

A friendly reminder that this issue had no activity for 30 days.


rhatdan commented Mar 20, 2020

@sujil02 PTAL


rhatdan commented Jun 9, 2020

@sujil02 Did you ever get a chance to look at this? If yes, could you hand it over to @ryanchpowell?


sujil02 commented Jun 9, 2020

@sujil02 Did you ever get a chance to look at this? If yes, could you hand it over to @ryanchpowell?

Yes, I did look into this a couple of times, but could not make a significant breakthrough. @ryanchpowell, shall we connect and try to knock this one out together?


rhatdan commented Sep 10, 2020

@haircommander Any chance you could look at this?

@haircommander

I will try to clean up these play/generate kube issues.


rhatdan commented Sep 11, 2020

You da man.

@haircommander

It turns out that when we do the initial generate kube we aren't taking the image's env into account, only the container's. Play kube does take it into account, so on the second generate kube we've picked up the image env. Unfortunately, I believe fixing this requires plumbing all of generate kube with the image.Runtime and ctx to actually be able to inspect the image and get the env out of it. As such, I believe this is more than I am able to take on right now.

@haircommander haircommander removed their assignment Sep 16, 2020
@zhangguanzhang

I don't understand exactly where this problem is...

@haircommander

If the image has environment variables, play kube takes them and adds them to the container's env structure. However, generate kube does not take the image env into account. So the first generate does not have the image env, the first play does have it, and the second generate does have it. Does that clear it up, @zhangguanzhang?
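The behavior described above can be sketched as follows (an illustrative model only, not Podman's actual code; the function and variable names mergeEnv, imageEnv, and ctrEnv are hypothetical): play kube applies the image's env defaults first, then lets the container spec's values override matching names, so the resulting container carries both sets.

```go
package main

import "fmt"

// mergeEnv models how play kube ends up with image env vars in the
// container: image defaults are applied first, then values from the
// container spec override any entry with the same name.
func mergeEnv(imageEnv, ctrEnv map[string]string) map[string]string {
	merged := make(map[string]string, len(imageEnv)+len(ctrEnv))
	for k, v := range imageEnv {
		merged[k] = v // defaults baked into the image
	}
	for k, v := range ctrEnv {
		merged[k] = v // container spec wins on conflict
	}
	return merged
}

func main() {
	imageEnv := map[string]string{"PGDATA": "/var/lib/postgresql/data"}
	ctrEnv := map[string]string{"POSTGRES_USER": "gitea"}
	// The merged map is what the second generate kube would see and emit.
	fmt.Println(mergeEnv(imageEnv, ctrEnv))
}
```

A subsequent generate kube reads this merged set back, which is why the second YAML is larger than the first.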

@zhangguanzhang

If the image has environment variables, play kube takes them and adds them to the container's env structure. However, generate kube does not take the image env into account. So the first generate does not have the image env, the first play does have it, and the second generate does have it. Does that clear it up, @zhangguanzhang?

So we need to get all the env vars from the container first, and then remove all the env vars that come from the image?


rhatdan commented Oct 10, 2020

Sure, except you need to be careful that they match exactly, i.e. both name and value. Otherwise there is a chance the user specified an environment variable that overrode the default. There is also the corner case where the user created the pod/container with an exact match, but I think we can live with that, as it is a big corner case.


rhatdan commented Oct 10, 2020

You might want to remove "container=", since Podman adds this itself. And we don't want OpenShift setting container=podman when a user runs the container on CRI-O.
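The filtering proposed in the last two comments could be sketched like this (a hypothetical illustration, not Podman's implementation; the names pruneImageEnv and EnvVar are invented here, with EnvVar loosely mirroring the Kubernetes corev1.EnvVar shape): drop env entries whose name AND value exactly match the image default, always drop "container", and keep everything else, since anything remaining is a user override or a user-added variable.

```go
package main

import "fmt"

// EnvVar is a minimal stand-in for the Kubernetes env entry (Name/Value).
type EnvVar struct{ Name, Value string }

// pruneImageEnv drops entries that exactly match the image's defaults
// (same name and same value) plus the "container" var, which the runtime
// sets itself. A var whose value differs from the image default is a
// deliberate user override and must be kept.
func pruneImageEnv(ctrEnv []EnvVar, imageEnv map[string]string) []EnvVar {
	var kept []EnvVar
	for _, e := range ctrEnv {
		if e.Name == "container" {
			continue // runtime-specific; CRI-O should set its own
		}
		if def, ok := imageEnv[e.Name]; ok && def == e.Value {
			continue // exact match with image default: omit from generated YAML
		}
		kept = append(kept, e)
	}
	return kept
}

func main() {
	imageEnv := map[string]string{"PGDATA": "/var/lib/postgresql/data"}
	ctrEnv := []EnvVar{
		{"container", "podman"},                // runtime-added, dropped
		{"PGDATA", "/var/lib/postgresql/data"}, // image default, dropped
		{"POSTGRES_USER", "gitea"},             // user-added, kept
	}
	fmt.Println(pruneImageEnv(ctrEnv, imageEnv))
}
```

As noted above, a user who sets a var to exactly the image's default value would lose it here; that corner case is accepted.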

@github-actions

A friendly reminder that this issue had no activity for 30 days.


rhatdan commented Nov 11, 2020

@zhangguanzhang Interested in taking this on?

@rhatdan rhatdan added Good First Issue This issue would be a good issue for a first time contributor to undertake. and removed stale-issue labels Nov 11, 2020
@zhangguanzhang

I am busy these days; if the problem is not solved later, I will give it a try.

@zhangguanzhang

I think this problem may be fixed by #8654, so could you try with the latest version?


rhatdan commented Dec 9, 2020

Reopen if it does not.

@rhatdan rhatdan closed this as completed Dec 9, 2020
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023

8 participants