CRI: try to use "sudo podman load" instead of "docker load" #2757

Merged: 2 commits, Jan 30, 2019
28 changes: 25 additions & 3 deletions pkg/minikube/machine/cache_images.go
@@ -24,6 +24,7 @@ import (
 	"path/filepath"
 	"runtime"
 	"strings"
+	"sync"

 	"github.com/google/go-containerregistry/pkg/v1/tarball"

@@ -48,6 +49,8 @@ const tempLoadDir = "/tmp"

 var getWindowsVolumeName = getWindowsVolumeNameCmd

+var podmanLoad sync.Mutex
+
 func CacheImagesForBootstrapper(version string, clusterBootstrapper string) error {
 	images := bootstrapper.GetCachedImageList(version, clusterBootstrapper)

@@ -85,12 +88,17 @@ func CacheImages(images []string, cacheDir string) error {

 func LoadImages(cmd bootstrapper.CommandRunner, images []string, cacheDir string) error {
 	var g errgroup.Group
+	// Load profile cluster config from file
+	cc, err := config.Load()
+	if err != nil && !os.IsNotExist(err) {
+		glog.Errorln("Error loading profile config: ", err)
+	}
 	for _, image := range images {
 		image := image
 		g.Go(func() error {
 			src := filepath.Join(cacheDir, image)
 			src = sanitizeCacheDir(src)
-			if err := LoadFromCacheBlocking(cmd, src); err != nil {
+			if err := LoadFromCacheBlocking(cmd, cc.KubernetesConfig, src); err != nil {
 				return errors.Wrapf(err, "loading image %s", src)
 			}
 			return nil
@@ -190,7 +198,7 @@ func getWindowsVolumeNameCmd(d string) (string, error) {
 	return vname, nil
 }

-func LoadFromCacheBlocking(cmd bootstrapper.CommandRunner, src string) error {
+func LoadFromCacheBlocking(cmd bootstrapper.CommandRunner, k8s config.KubernetesConfig, src string) error {
 	glog.Infoln("Loading image from cache at ", src)
 	filename := filepath.Base(src)
 	for {
@@ -207,12 +215,26 @@ func LoadFromCacheBlocking(cmd bootstrapper.CommandRunner, src string) error {
 		return errors.Wrap(err, "transferring cached image")
 	}

-	dockerLoadCmd := "docker load -i " + dst
+	var dockerLoadCmd string
+	crio := k8s.ContainerRuntime == constants.CrioRuntime || k8s.ContainerRuntime == constants.Cri_oRuntime
+	if crio {
+		dockerLoadCmd = "sudo podman load -i " + dst
+	} else {
+		dockerLoadCmd = "docker load -i " + dst
+	}
+
+	if crio {
+		podmanLoad.Lock()
Contributor:

What type of failure is this lock trying to prevent? Unless it's something specifically terrible, I'd prefer as little state as possible here.

Collaborator (Author):

As per the commit message, it was running out of memory in the VM when running all the "load" commands in parallel. With docker, things are serialized / queued in the docker daemon, but with podman it will actually try to run all of them at once. So I had to introduce a lock for the command to succeed. It's mostly I/O-bound anyway, so you don't lose much time by it.
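The pattern being discussed (fan out work across goroutines, but force one branch of it through a mutex so only one load runs at a time) can be sketched in isolation. This is a hypothetical, stdlib-only illustration, not the minikube code itself: the PR uses errgroup and shells out to podman/docker, while the sketch below uses a plain WaitGroup and records image names in place of running the load command.

```go
package main

import (
	"fmt"
	"sync"
)

// loadSerialized fans the image loads out across goroutines, but when
// crio is true a shared mutex serializes them, mimicking how the PR
// avoids exhausting VM memory by running all "sudo podman load"
// invocations at once. Names here are illustrative stand-ins.
func loadSerialized(images []string, crio bool) []string {
	var wg sync.WaitGroup
	var podmanLoad sync.Mutex // serializes the podman-style loads
	var resultsMu sync.Mutex  // protects the shared results slice
	var loaded []string

	for _, image := range images {
		image := image // capture the loop variable for the closure
		wg.Add(1)
		go func() {
			defer wg.Done()
			if crio {
				podmanLoad.Lock()
				// defer releases the lock even if the load fails,
				// unlike the manual Unlock in the diff above.
				defer podmanLoad.Unlock()
			}
			// Stand-in for cmd.Run("sudo podman load -i " + dst).
			resultsMu.Lock()
			loaded = append(loaded, image)
			resultsMu.Unlock()
		}()
	}
	wg.Wait()
	return loaded
}

func main() {
	loaded := loadSerialized([]string{"pause", "etcd", "coredns"}, true)
	fmt.Println(len(loaded)) // prints 3
}
```

Since the work is mostly I/O-bound (as the author notes), serializing it costs little wall-clock time compared to letting the loads contend for memory.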

+	}
+
 	if err := cmd.Run(dockerLoadCmd); err != nil {
 		return errors.Wrapf(err, "loading docker image: %s", dst)
 	}

+	if crio {
+		podmanLoad.Unlock()
+	}
+
 	if err := cmd.Run("sudo rm -rf " + dst); err != nil {
 		return errors.Wrap(err, "deleting temp docker image location")
 	}