
After podman 2 upgrade, systemd fails to start in containers on cgroups v1 hosts #6734

Closed
markstos opened this issue Jun 23, 2020 · 148 comments · Fixed by #7339
Labels: kind/bug, locked - please file new issue/PR

Comments

@markstos
Contributor

markstos commented Jun 23, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I was repeatedly building working containers with podman this morning when my OS (Ubuntu 20.04) notified me that podman 2.0 was available and I elected to install it.

Shortly afterward, I could no longer SSH into a newly built and launched container. This is the output of podman container list -a:

CONTAINER ID  IMAGE                        COMMAND                                       CREATED         STATUS             PORTS                                             NAMES
0e7692779754  k8s.gcr.io/pause:3.2                                                       21 seconds ago  Up 17 seconds ago  127.0.0.1:2222->22/tcp, 127.0.0.1:3000->3000/tcp  505f2a3b385a-infra
537b8ed4db9c  localhost/devenv-img:latest  -c exec /sbin/init --log-target=journal 3>&1  20 seconds ago  Up 17 seconds ago                                                    devenv

This is frustrating: I don't see any references to a container named "pause" anywhere in my setup, yet one is running and listening on the ports my container had published, while my container isn't listening on any ports at all.

I read the podman 2.0 release notes and don't see any notes about a related breaking change.

I searched the project for references to "infra containers" because I sometimes see that term mentioned in error messages. I found references to "infra containers" in the code, but I can't find any in the documentation.

They seem related to this issue, and it would be great if there were more accessible user documentation about "infra containers".

Steps to reproduce the issue:

  1. podman run --systemd=always -it -p "127.0.0.1:2222:22" solita/ubuntu-systemd-ssh

Describe the results you received:

Initializing machine ID from random generator.
Failed to create /user.slice/user-1000.slice/session-8.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object.

Describe the results you expected:

For this test, the container should boot to the point where this line appears:

  [  OK  ] Reached target Multi-User System.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 2.0.0

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.15.0
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.18, commit: '
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: file
  hostname: mark-x1
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.4.0-37-generic
  linkmode: dynamic
  memFree: 1065062400
  memTotal: 16527003648
  ociRuntime:
    name: runc
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.0.0
      commit: unknown
      libslirp: 4.2.0
  swapFree: 19345408
  swapTotal: 1027600384
  uptime: 72h 32m 43.91s (Approximately 3.00 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/mark/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/mark/.local/share/containers/storage
  graphStatus: {}
  imageStore:
    number: 122
  runRoot: /run/user/1000/containers
  volumePath: /home/mark/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.13.8
  OsArch: linux/amd64
  Version: 2.0.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 2.0.0~1 amd64 [installed]

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot added the kind/bug label Jun 23, 2020
@baude
Member

baude commented Jun 23, 2020

Can you make the image in question available?

@markstos
Contributor Author

No.

@markstos
Contributor Author

@baude Any idea why ports could be assigned to a second "pause" container instead of the intended one?

@mheon
Member

mheon commented Jun 23, 2020

How is the pod created? Can you provide the command that was used to launch the pod?

@mheon
Member

mheon commented Jun 23, 2020

Also, podman inspect output for both pod and container would be appreciated.

@baude
Member

baude commented Jun 23, 2020

@markstos when using pods, all of the ports are assigned to the infra container. That is normal. Each subsequent container in the pod then joins the infra container's namespaces. That is one of our definitions of a pod. As @mheon asked, can you provide the pod command used?
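
For illustration, a minimal sketch of that behavior (the pod and container names here are made up): the published port is bound to the pod's automatically created infra ("pause") container, and the application container joins its network namespace.

podman pod create --name demo -p 127.0.0.1:8080:80
podman run -d --pod demo --name web docker.io/library/nginx
# in the PORTS column, 127.0.0.1:8080->80/tcp shows up on the ...-infra container, not on "web"
podman ps -a --pod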

@markstos
Contributor Author

I used a docker-compose.yml file like this:

version: "3.8"
services:
  devenv:
    image: devenv-img
    build:
      context: ./docker/ubuntu-18.04
      args:
        GITHUB_USERS: "markstos"
    container_name: devenv
    security_opt:
      - seccomp:unconfined
    # Expose port 2222 so you can ssh -p 2222 root@localhost
    ports:
      - "127.0.0.1:2222:22"
      - "127.0.0.1:3000:3000"
    tmpfs:
      - /tmp
      - /run
      - /run/lock
    volumes:
      - "/sys/fs/cgroup:/sys/fs/cgroup:ro"
      - "./:/home/amigo/unity"

podman-compose was used, but had to be patched first:
containers/podman-compose@af83276

podman-compose up -d
using podman version: podman version 2.0.0
podman pod create --name=unity --share net -p 127.0.0.1:3000:3000 -p 127.0.0.1:2222:22
f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa
0
podman run --name=devenv -d --pod=unity --security-opt seccomp=unconfined --label io.podman.compose.config-hash=123 --label io.podman.compose.project=unity --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=devenv --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /home/mark/git/unity/./:/home/amigo/unity --add-host devenv:127.0.0.1 --add-host devenv:127.0.0.1 devenv-img
50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9

Here's the inspect output for the container:

 podman inspect devenv
[
    {
        "Id": "50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9",
        "Created": "2020-06-23T15:52:29.053978355-04:00",
        "Path": "/usr/bin/fish",
        "Args": [
            "-c",
            "exec /sbin/init --log-target=journal 3>&1"
        ],
        "State": {
            "OciVersion": "1.0.2-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 2457442,
            "ConmonPid": 2457430,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-06-23T15:52:32.468351379-04:00",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Healthcheck": {
                "Status": "",
                "FailingStreak": 0,
                "Log": null
            }
        },
        "Image": "471497bb87d25cf7d9a2df9acf516901e38c34d93732b628a42ce3e2a2fc5099",
        "ImageName": "localhost/devenv-img:latest",
        "Rootfs": "",
        "Pod": "f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa",
        "ResolvConfPath": "/run/user/1000/containers/vfs-containers/4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235/userdata/resolv.conf",
        "HostnamePath": "/run/user/1000/containers/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/hostname",
        "HostsPath": "/run/user/1000/containers/vfs-containers/4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235/userdata/hosts",
        "StaticDir": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata",
        "OCIConfigPath": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/config.json",
        "OCIRuntime": "runc",
        "LogPath": "/home/mark/.local/share/containers/storage/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/ctr.log",
        "LogTag": "",
        "ConmonPidFile": "/run/user/1000/containers/vfs-containers/50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9/userdata/conmon.pid",
        "Name": "devenv",
        "RestartCount": 0,
        "Driver": "vfs",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "EffectiveCaps": [
            "CAP_AUDIT_WRITE",
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_MKNOD",
            "CAP_NET_BIND_SERVICE",
            "CAP_NET_RAW",
            "CAP_SETFCAP",
            "CAP_SETGID",
            "CAP_SETPCAP",
            "CAP_SETUID",
            "CAP_SYS_CHROOT"
        ],
        "BoundingCaps": [
            "CAP_AUDIT_WRITE",
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_MKNOD",
            "CAP_NET_BIND_SERVICE",
            "CAP_NET_RAW",
            "CAP_SETFCAP",
            "CAP_SETGID",
            "CAP_SETPCAP",
            "CAP_SETUID",
            "CAP_SYS_CHROOT"
        ],
        "ExecIDs": [],
        "GraphDriver": {
            "Name": "vfs",
            "Data": null
        },
        "Mounts": [
            {
                "Type": "bind",
                "Name": "",
                "Source": "/sys/fs/cgroup",
                "Destination": "/sys/fs/cgroup",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "noexec",
                    "nosuid",
                    "nodev",
                    "rbind"
                ],
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Name": "",
                "Source": "/home/mark/Documents/RideAmigos/git/unity",
                "Destination": "/home/amigo/unity",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Dependencies": [
            "4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235"
        ],
        "NetworkSettings": {
            "EndpointID": "",
            "Gateway": "",
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "",
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": [],
            "SandboxKey": ""
        },
        "ExitCommand": [
            "/usr/bin/podman",
            "--root",
            "/home/mark/.local/share/containers/storage",
            "--runroot",
            "/run/user/1000/containers",
            "--log-level",
            "error",
            "--cgroup-manager",
            "cgroupfs",
            "--tmpdir",
            "/run/user/1000/libpod/tmp",
            "--runtime",
            "runc",
            "--storage-driver",
            "vfs",
            "--events-backend",
            "file",
            "container",
            "cleanup",
            "50edda8bf329296490f771a8785c605415ca3be36171b3970ecba71211a825b9"
        ],
        "Namespace": "",
        "IsInfra": false,
        "Config": {
            "Hostname": "50edda8bf329",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:~/.yarn/bin",
                "TERM=xterm",
                "container=podman",
                "YARN_VERSION=1.10.1",
                "MONGO_VERSION=4.2.8",
                "NODE_VERSION=12.15.0",
                "LANG=C.UTF-8",
                "MONGO_MAJOR=4.2",
                "GPG_KEYS=E162F504A20CDF15827F718D4B7C549A058F8B6B",
                "HOME=/root",
                "NPM_CONFIG_LOGLEVEL=info",
                "HOSTNAME=50edda8bf329"
            ],
            "Cmd": [
                "-c",
                "exec /sbin/init --log-target=journal 3>&1"
            ],
            "Image": "localhost/devenv-img:latest",
            "Volumes": null,
            "WorkingDir": "/unity",
            "Entrypoint": "/usr/bin/fish",
            "OnBuild": null,
            "Labels": {
                "com.docker.compose.container-number": "1",
                "com.docker.compose.service": "devenv",
                "io.podman.compose.config-hash": "123",
                "io.podman.compose.project": "unity",
                "io.podman.compose.version": "0.0.1",
                "maintainer": "mark@rideamigos.com"
            },
            "Annotations": {
                "io.container.manager": "libpod",
                "io.kubernetes.cri-o.ContainerType": "container",
                "io.kubernetes.cri-o.Created": "2020-06-23T15:52:29.053978355-04:00",
                "io.kubernetes.cri-o.SandboxID": "unity",
                "io.kubernetes.cri-o.TTY": "false",
                "io.podman.annotations.autoremove": "FALSE",
                "io.podman.annotations.init": "FALSE",
                "io.podman.annotations.privileged": "FALSE",
                "io.podman.annotations.publish-all": "FALSE",
                "io.podman.annotations.seccomp": "unconfined",
                "org.opencontainers.image.stopSignal": "37"
            },
            "StopSignal": 37,
            "CreateCommand": [
                "podman",
                "run",
                "--name=devenv",
                "-d",
                "--pod=unity",
                "--security-opt",
                "seccomp=unconfined",
                "--label",
                "io.podman.compose.config-hash=123",
                "--label",
                "io.podman.compose.project=unity",
                "--label",
                "io.podman.compose.version=0.0.1",
                "--label",
                "com.docker.compose.container-number=1",
                "--label",
                "com.docker.compose.service=devenv",
                "--tmpfs",
                "/tmp",
                "--tmpfs",
                "/run",
                "--tmpfs",
                "/run/lock",
                "-v",
                "/sys/fs/cgroup:/sys/fs/cgroup:ro",
                "-v",
                "/home/mark/Documents/RideAmigos/git/unity/./:/home/amigo/unity",
                "--add-host",
                "devenv:127.0.0.1",
                "--add-host",
                "devenv:127.0.0.1",
                "devenv-img"
            ]
        },
        "HostConfig": {
            "Binds": [
                "/sys/fs/cgroup:/sys/fs/cgroup:ro,rprivate,noexec,nosuid,nodev,rbind",
                "/home/mark/Documents/RideAmigos/git/unity:/home/amigo/unity:rw,rprivate,rbind"
            ],
            "CgroupMode": "host",
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "k8s-file",
                "Config": null
            },
            "NetworkMode": "container:4054570f5694e73f1297c76e4d59ec482b5e03cf006bc5ebfe63fe44362a6235",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [],
            "CapDrop": [],
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": [
                "devenv:127.0.0.1",
                "devenv:127.0.0.1"
            ],
            "GroupAdd": [],
            "IpcMode": "private",
            "Cgroup": "",
            "Cgroups": "default",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "private",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "Tmpfs": {
                "/run": "rw,rprivate,nosuid,nodev,tmpcopyup",
                "/run/lock": "rw,rprivate,nosuid,nodev,tmpcopyup",
                "/tmp": "rw,rprivate,nosuid,nodev,tmpcopyup"
            },
            "UTSMode": "private",
            "UsernsMode": "",
            "ShmSize": 65536000,
            "Runtime": "oci",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "/libpod_parent/f7829db54fc270e903fa55be97ae192d131c89a3c476ef0220a3942c8e1192fa",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": 0,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        }
    }
]

I don't see an option to run podman inspect on pods.

@baude
Member

baude commented Jun 23, 2020

podman pod inspect
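
For example, assuming the pod is still named unity as in the compose project above:

podman pod inspect unity   # full JSON describing the pod, including its infra container ID
podman pod ps              # one-line summary of each pod and its container count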

@baude
Member

baude commented Jun 23, 2020

Any chance we can sync up on IRC? freenode.net #podman

@baude
Member

baude commented Jun 23, 2020

btw, a couple of simple things we should have asked. Apologies if I missed the information.

  1. Can you see the ssh process running with ps?
  2. Can you ssh directly to the container without involving the port mapping, i.e. use :22? (See the sketch below for both checks.)
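
A rough sketch of both checks, assuming the container is still named devenv as above (the sshd path and whether an ssh client exists inside the image are assumptions):

podman top devenv            # list processes inside the container without entering it
podman exec devenv ps aux    # or look for an sshd process from inside
# bypass the published port entirely and talk to sshd on :22 from inside the container
podman exec -it devenv ssh -p 22 root@localhost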

@markstos
Contributor Author

all of the ports are assigned to the infra container.

Did I miss this in the docs? It's not intuitive to have port mappings appear on a container other than the one I started. I also wasn't thrilled to see the "pause" container pulled from a third-party registry on the internet that I had no intention of downloading content from.

@markstos
Contributor Author

can you see the ssh process running with ps

No. I presume that means I happened to break my own container about the time I also upgraded podman. I'm trying to get the container running under Docker now as a second point of reference.

@mheon
Member

mheon commented Jun 23, 2020

Network mode is set to another container, which I'm assuming is the infra container (I don't see the ID in question in your first podman ps, so perhaps you recreated it). The container config on the whole seems fine, so I no longer believe this is a network issue; it is probably related to the SSH daemon itself.

What init are you using in the container, systemd or something else?

@mheon
Member

mheon commented Jun 23, 2020

@baude One obvious thing: podman ps isn't displaying ports correctly.

1.9:

b4b47beefd3d  registry.fedoraproject.org/fedora:latest  bash     1 second ago    Up 1 second ago           0.0.0.0:2222->22/tcp  serene_tu
182529b785b3  registry.fedoraproject.org/fedora:latest  bash     15 seconds ago  Exited (0) 9 seconds ago  0.0.0.0:2222->22/tcp  pensive_chaum
64d111e06042  k8s.gcr.io/pause:3.2                               35 seconds ago  Up 15 seconds ago         0.0.0.0:2222->22/tcp  46ce3d0db44c-infra

2.0:

182529b785b3  registry.fedoraproject.org/fedora:latest  bash     20 seconds ago  Exited (0) 13 seconds ago                        pensive_chaum
3f4e33ba8a41  registry.fedoraproject.org/fedora:latest  bash     5 days ago      Exited (0) 5 days ago                            testctr1
64d111e06042  k8s.gcr.io/pause:3.2                               39 seconds ago  Up 19 seconds ago          0.0.0.0:2222->22/tcp  46ce3d0db44c-infra

@mheon
Member

mheon commented Jun 23, 2020

Hm. It's also ordering containers incorrectly... I'd expect sort to be by time of creation, not by ID.

@markstos
Contributor Author

I'm using systemd. I was ssh'ing in fine before the upgrade. But I also have been tweaking the configuration all day, so it could be something on my end.

@mheon
Member

mheon commented Jun 23, 2020

I built a test setup as close to yours as I could given the provided information (pod with port 2222 forwarded, container in that pod with systemd as init plus sshd, added a user, SSH'd in from another machine to the public port, all rootless) and everything worked locally, so I think this is either environmental or some detail of the pod that is not clear from what is given here.

@markstos
Contributor Author

I'm on the Kubernetes Slack server now. I forgot my IRC password.

@markstos
Contributor Author

@mheon Thanks for the attention. I'll test more with Docker as a control group reference and see if I can pinpoint some bug on my end that I introduced.

@markstos
Contributor Author

It booted fine with docker-compose up -d but not podman-compose up -d.

The plot thickens.

I'll see if I can put together a more useful test case for you to reproduce from.

@markstos
Contributor Author

I've temporarily posted my Dockerfile here:

https://gist.github.com/markstos/9f7b982bc73106e4bb5a73e5524a3ec6

Once you've grabbed it, I'm going to take down the Gist.

@markstos
Contributor Author

I believe the last two things I was changing before it broke were setting fish_user_paths, and looping over users to add their SSH keys to authorized_keys-- both happen in the last 20 lines of the file.

@mheon
Member

mheon commented Jun 23, 2020

Grabbed, thanks. It's a little late in the day here, but I'll pick this up tomorrow and see if I can chase something down.

Might be a compose-specific bug, or might be a result of upgrading an existing 1.9 system to 2.0.

@markstos
Contributor Author

I've reduced the test case a bit. Here's a script I successfully used to launch the container with 1.9 that fails with 2.0:

#!/bin/bash
podman run --detach \
  --name devenv \
  --security-opt "seccomp=unconfined" \
  --tmpfs /tmp \
  --tmpfs /run \
  --tmpfs /run/lock \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume '../../:/home/amigo/unity' \
  --publish "127.0.0.1:2222:22" \
  --publish "127.0.0.1:3000:3000" \
 devenv-img

The result is the same-- it starts without apparent error, but I can't SSH in. This eliminates anything to do with pods.

Using ps I can confirm that there's an init process running under the expected user account, but no sshd process.

I'm going to try to rollback recent changes to my Dockerfile assuming that my changes broke it, not podman.

@mheon
Member

mheon commented Jun 24, 2020

I'd recommend checking the journal within the container to see why sshd is failing. Also, checking if port forwarding works at all would be helpful - if you use 8080:80 with a simple nginx container, can you access it?
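
For example, a quick sanity check for port forwarding along those lines (throwaway names; exact output will vary):

podman run -d --name porttest -p 8080:80 docker.io/library/nginx
curl -I http://127.0.0.1:8080   # an HTTP response here means rootless port forwarding itself works
podman rm -f porttest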

@mheon
Member

mheon commented Jun 24, 2020

Partial fix for the podman ps issues I noticed in #6761

@markstos
Contributor Author

@mheon how can I check the journal in the container if I can't get into it?

I tried this to narrow down the issue: I rewrote my start command to give me an interactive shell instead of starting systemd. Then within the shell I started sshd manually with sshd -D, which is how systemd would start it. Then I tried to SSH in, and that worked. I double-checked that systemd is set to start SSH at boot. So something changed which results in sshd not running when the container is booted with systemd.
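
Roughly, that manual test looks like this (the entrypoint override, the sshd path, and the unit name are assumptions about this particular image):

# start the same image with an interactive shell instead of systemd
podman run -it --rm --entrypoint /bin/bash \
  --security-opt "seccomp=unconfined" \
  --publish "127.0.0.1:2222:22" \
  devenv-img

# inside the container: start sshd the way systemd would, then ssh -p 2222 root@localhost from the host
/usr/sbin/sshd -D &

# and confirm the unit is wired into the boot target (may be ssh.service or sshd.service)
ls /etc/systemd/system/multi-user.target.wants/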

I don't think port-forwarding is the issue, since ps shows no sshd process running.

@mheon
Member

mheon commented Jun 24, 2020

@markstos podman exec -t -i $CONTAINERNAME journalctl?

@vrothberg
Member

@giuseppe PTAL

@giuseppe
Member

giuseppe commented Aug 11, 2020

If reverting #6569 solves your issue, you can force a new scope by wrapping podman with systemd-run, i.e. systemd-run --user --scope podman ....

In your case it will be: systemd-run --user --scope podman run -it jrei/systemd-ubuntu:16.04

@c-goes

c-goes commented Aug 14, 2020

In your case it will be: systemd-run --user --scope podman run -it jrei/systemd-ubuntu:16.04

Thanks for this. It's useful for Molecule users with this problem. Molecule works again with Podman 2 on Ubuntu when running alias podman="systemd-run --user --scope podman".

@dustymabe
Contributor

Note that we've got this issue flagged as something to be fixed before we switch to podman 2.x in Fedora CoreOS. Is there any resolution or more information that we should be using to inform our decision here?

Context: coreos/fedora-coreos-tracker#575

@baude
Member

baude commented Aug 17, 2020

@giuseppe is this something you can look at?

giuseppe added a commit to giuseppe/libpod that referenced this issue Aug 17, 2020
create a scope everytime we don't own the current cgroup and we are
running on systemd, regardless of the config manager specified.

Closes: containers#6734

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@giuseppe
Member

PR: #7339

Can anyone who is on cgroup v1 please try it?
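
As a side note, a rough way to confirm a host is on cgroups v1 and to see whether the user session owns its current cgroup, which is the situation this PR targets (the mount paths here are the typical ones and may differ):

stat -fc %T /sys/fs/cgroup/   # cgroup2fs means cgroups v2; tmpfs means a v1/hybrid layout
# on v1, locate the name=systemd cgroup of the current shell and check its owner;
# if it is owned by root, rootless podman cannot create child cgroups there without a new scope
CG=$(awk -F: '/name=systemd/ {print $3}' /proc/self/cgroup)
ls -ld "/sys/fs/cgroup/systemd${CG}"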

@mheon
Member

mheon commented Aug 17, 2020

@dustymabe ^^ Mind testing this?

giuseppe added a commit to giuseppe/libpod that referenced this issue Aug 18, 2020
create a scope everytime we don't own the current cgroup and we are
running on systemd.

Closes: containers#6734

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@dustymabe
Contributor

I can test if someone gives me a link to an RPM. Sorry for the delayed response.

mheon pushed a commit to mheon/libpod that referenced this issue Aug 20, 2020
adelton added a commit to adelton/freeipa-container that referenced this issue Aug 26, 2020
adelton added a commit to adelton/freeipa-container that referenced this issue Aug 26, 2020
adelton added a commit to adelton/freeipa-container that referenced this issue Aug 26, 2020
@c-goes

c-goes commented Aug 27, 2020

I tried compiling podman today. This is the output of testing the podman master branch:

ubuntu@test:~$ podman version
Version:      2.1.0-dev
API Version:  1
Go Version:   go1.13.8
Git Commit:   f99954c7ca4428e501676fa47a63b5cecadd9454
Built:        Wed Aug 26 22:23:48 2020
OS/Arch:      linux/amd64

ubuntu@test:~$ podman run --name systemd1 --privileged  --security-opt=seccomp=unconfined -it --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro geerlingguy/docker-ubuntu2004-ansible:latest
systemd 244.3-1ubuntu1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization podman.
Detected architecture x86-64.

Welcome to Ubuntu Focal Fossa (development branch)!

Set hostname to <3c67712e3767>.
Failed to create /user.slice/user-1000.slice/session-34.scope/init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...

Rootless with systemd-run:

ubuntu@test:~$ systemd-run --user --scope podman run --name systemd1 --privileged  --security-opt=seccomp=unconfined -it --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro geerlingguy/docker-ubuntu2004-ansible:latest
Running scope as unit: run-r7a966731f3244da7995d2bd80fa9ae3c.scope
systemd 244.3-1ubuntu1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization podman.
Detected architecture x86-64.

Welcome to Ubuntu Focal Fossa (development branch)!

Set hostname to <bf428a64a7e5>.
Couldn't move remaining userspace processes, ignoring: Input/output error
/usr/lib/systemd/system-generators/systemd-crontab-generator failed with exit status 1.
/lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
Unnecessary job for /dev/sda1 was removed.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
proc-sys-fs-binfmt_misc.automount: Failed to initialize automounter: Operation not permitted
proc-sys-fs-binfmt_misc.automount: Failed with result 'resources'.
[FAILED] Failed to set up automount Arbitrary Executable File Formats File System Automount Point.
See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Reached target Slices.
[  OK  ] Reached target Swap.
[  OK  ] Listening on Syslog Socket.
[  OK  ] Listening on initctl Compatibility Named Pipe.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.
         Mounting Kernel Debug File System...
         Starting Journal Service...
         Starting Remount Root and Kernel File Systems...
         Starting Apply Kernel Variables...
sys-kernel-debug.mount: Mount process exited, code=exited, status=32/n/a
sys-kernel-debug.mount: Failed with result 'exit-code'.
[FAILED] Failed to mount Kernel Debug File System.
See 'systemctl status sys-kernel-debug.mount' for details.
[  OK  ] Started Remount Root and Kernel File Systems.
         Starting Create System Users...
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started Journal Service.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Flush Journal to Persistent Storage.
[  OK  ] Started Create System Users.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Started Create Static Device Nodes in /dev.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Create Volatile Files and Directories.
         Starting Network Name Resolution...
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target System Time Synchronized.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Reached target System Initialization.
[  OK  ] Started systemd-cron path monitor.
[  OK  ] Started Daily apt download activities.
[  OK  ] Started Daily apt upgrade and clean activities.
[  OK  ] Started systemd-cron daily timer.
[  OK  ] Started systemd-cron hourly timer.
[  OK  ] Started systemd-cron monthly timer.
[  OK  ] Started systemd-cron weekly timer.
[  OK  ] Started Periodic ext4 Online Metadata Check for All Filesystems.
[  OK  ] Started Message of the Day.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target systemd-cron.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Timers.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
[  OK  ] Started D-Bus System Message Bus.
[  OK  ] Started Save initial kernel messages after boot.
         Starting Remove Stale Online ext4 Metadata Check Snapshots...
         Starting System Logging Service...
         Starting Login Service...
         Starting Permit User Sessions...
[  OK  ] Started System Logging Service.
[  OK  ] Started Permit User Sessions.
[  OK  ] Started Network Name Resolution.
[  OK  ] Reached target Host and Network Name Lookups.
[  OK  ] Started Login Service.
[  OK  ] Reached target Multi-User System.
[  OK  ] Started Remove Stale Online ext4 Metadata Check Snapshots.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.

Rootful works, however:

ubuntu@test:~$ sudo podman run --name systemd1 --privileged  --security-opt=seccomp=unconfined -it --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro geerlingguy/docker-ubuntu2004-ansible:latest
systemd 244.3-1ubuntu1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization podman.
Detected architecture x86-64.

Welcome to Ubuntu Focal Fossa (development branch)!

Set hostname to <106593ccc268>.
/lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.
Unnecessary job for /dev/sda1 was removed.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Set up automount Arbitrary Executable File Formats File System Automount Point.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Reached target Slices.
[  OK  ] Reached target Swap.
[  OK  ] Listening on Syslog Socket.
[  OK  ] Listening on initctl Compatibility Named Pipe.
[  OK  ] Listening on Journal Audit Socket.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.
         Mounting Huge Pages File System...
         Mounting Kernel Debug File System...
         Starting Journal Service...
         Mounting FUSE Control File System...
         Starting Remount Root and Kernel File Systems...
         Starting Apply Kernel Variables...
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted Kernel Debug File System.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started Remount Root and Kernel File Systems.
         Starting Create System Users...
[  OK  ] Started Create System Users.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Started Create Static Device Nodes in /dev.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
[  OK  ] Started Journal Service.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Flush Journal to Persistent Storage.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Create Volatile Files and Directories.
         Starting Network Name Resolution...
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target System Time Synchronized.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Reached target System Initialization.
[  OK  ] Started systemd-cron path monitor.
[  OK  ] Started Daily apt download activities.
[  OK  ] Started Daily apt upgrade and clean activities.
[  OK  ] Started systemd-cron daily timer.
[  OK  ] Started systemd-cron hourly timer.
[  OK  ] Started systemd-cron monthly timer.
[  OK  ] Started systemd-cron weekly timer.
[  OK  ] Started Periodic ext4 Online Metadata Check for All Filesystems.
[  OK  ] Started Message of the Day.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target systemd-cron.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Timers.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
[  OK  ] Started D-Bus System Message Bus.
[  OK  ] Started Save initial kernel messages after boot.
         Starting Remove Stale Online ext4 Metadata Check Snapshots...
         Starting System Logging Service...
         Starting Login Service...
         Starting Permit User Sessions...
[  OK  ] Started System Logging Service.
[  OK  ] Started Permit User Sessions.
[  OK  ] Started Network Name Resolution.
[  OK  ] Reached target Host and Network Name Lookups.
[  OK  ] Started Login Service.
[  OK  ] Reached target Multi-User System.
[  OK  ] Started Remove Stale Online ext4 Metadata Check Snapshots.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.

@dustymabe
Contributor

@c-goes - see #7441. I think you're hitting that.

adelton added a commit to adelton/freeipa-container that referenced this issue Sep 5, 2020
adelton added a commit to adelton/freeipa-container that referenced this issue Sep 5, 2020
adelton added a commit to adelton/freeipa-container that referenced this issue Sep 5, 2020
edsantiago pushed a commit to edsantiago/libpod that referenced this issue Sep 14, 2020
rdoproject pushed a commit to rdo-infra/ansible-role-dlrn that referenced this issue Dec 11, 2020
Using rootless podman on CentOS 8.3 is failing for us, due to [1][2].
We could try to force molecule to use "systemd-run --user --scope podman"
instead of "podman", but running as root works as well.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1880987
[2] - containers/podman#6734

Change-Id: I30305b4396a849a4cefc4c080b3fa6be604adc79
@github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023