
NetworkSettings missing when container is running in pod #8073

Closed
andrin55 opened this issue Oct 20, 2020 · 2 comments · Fixed by #8075
Assignees: mheon
Labels: In Progress · kind/bug · locked - please file new issue/PR

Comments

@andrin55
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When a container is started within a pod, its NetworkSettings in `podman inspect` are empty. When the same container is started standalone, the settings are populated as expected. Since all containers in a pod share the pod's network namespace, an in-pod container should report the same network settings as the pod's infra container.
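
A quick way to see the shared namespace (a sketch, not from the report; the container references are placeholders):

    # Both containers resolve to the same net:[...] namespace inode,
    # which is why their inspect output should agree.
    readlink /proc/$(podman inspect --format '{{.State.Pid}}' <infra-container>)/ns/net
    readlink /proc/$(podman inspect --format '{{.State.Pid}}' <in-pod-container>)/ns/net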

Steps to reproduce the issue:

  1. podman pod create --name=test

  2. podman run -d --pod=test alpine sleep 2000

  3. podman inspect <container from step 2>
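
Put together, the reproduction looks like this (a sketch; the container name `inpod` and the `--format` filter are illustrative additions, not part of the original report):

    podman pod create --name=test
    podman run -d --name=inpod --pod=test alpine sleep 2000

    # Before the fix this prints an empty string for the in-pod container,
    # even though the pod's infra container holds the CNI result.
    podman inspect --format '{{.NetworkSettings.IPAddress}}' inpod

    # Workaround: inspect the pod's infra container instead (its ID appears
    # in `podman pod inspect test` output); the address shown there is the
    # one shared by every container in the pod.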

Describe the results you received:
Inspect output when running in pod:

        "NetworkSettings": {
            "EndpointID": "",
            "Gateway": "",
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "",
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": ""

Describe the results you expected:
Inspect output when running "standalone":

        "NetworkSettings": {
            "EndpointID": "",
            "Gateway": "10.88.0.1",
            "IPAddress": "10.88.0.4",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "c2:00:53:6e:2a:98",
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/netns/cni-a8b7b063-8adb-d3d8-114f-eb6985dc0c78"

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.13.15
Built:        Fri Oct  2 16:30:39 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.21-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: fa5f92225c4c95759d10846106c1ebd325966f91-dirty'
  cpus: 2
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: somehost
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-193.19.1.el8_2.x86_64
  linkmode: dynamic
  memFree: 2895990784
  memTotal: 3956207616
  ociRuntime:
    name: runc
    package: runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4261408768
  swapTotal: 4261408768
  uptime: 21h 10m 7.09s (Approximately 0.88 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 4
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 5
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 1601649039
  BuiltTime: Fri Oct  2 16:30:39 2020
  GitCommit: ""
  GoVersion: go1.13.15
  OsArch: linux/amd64
  Version: 2.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.1.1-4.el8.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot added the kind/bug label Oct 20, 2020
@mheon (Member) commented Oct 20, 2020

I'll take this one.

@mheon self-assigned this Oct 20, 2020
@mheon added the In Progress label Oct 20, 2020
mheon added a commit to mheon/libpod that referenced this issue Oct 20, 2020
When a container either joins a pod that shares the network
namespace or uses `--net=container:` to share the network
namespace of another container, it does not have its own copy of
the CNI results used to generate `podman inspect` output. As
such, to inspect these containers, we should be going to the
container we share the namespace with for network info.

Fixes containers#8073

Signed-off-by: Matthew Heon <mheon@redhat.com>
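
For reference, the `--net=container:` case mentioned in the commit can be exercised the same way (a sketch; the container names `netsrc` and `joiner` are illustrative, not from the report):

    # `joiner` has no CNI result of its own, so before the fix its inspect
    # output showed empty NetworkSettings; with the fix it should report the
    # network info of the container whose namespace it joined.
    podman run -d --name=netsrc alpine sleep 2000
    podman run -d --name=joiner --net=container:netsrc alpine sleep 2000
    podman inspect --format '{{.NetworkSettings.IPAddress}}' joiner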

@mheon (Member) commented Oct 20, 2020

Fixed by #8075

mheon added a commit to mheon/libpod that referenced this issue Oct 20, 2020
cevich pushed a commit to cevich/podman that referenced this issue Oct 22, 2020
edsantiago pushed a commit to edsantiago/libpod that referenced this issue Nov 4, 2020
@github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023