Unable to start minikube after latest Docker Desktop update on macOS M3 Pro #18106

Closed
geek-kb opened this issue Feb 5, 2024 · 8 comments
Labels
kind/bug, lifecycle/rotten

Comments

@geek-kb

geek-kb commented Feb 5, 2024

What Happened?

After upgrading Docker Desktop to the latest version, I tried to restart my minikube cluster, but it fails to start and the output is full of errors.

Attach the log file

➜ mysql git:(master) ✗ minikube start --alsologtostderr
I0205 16:52:18.637761 12547 out.go:296] Setting OutFile to fd 1 ...
I0205 16:52:18.638268 12547 out.go:348] isatty.IsTerminal(1) = true
I0205 16:52:18.638275 12547 out.go:309] Setting ErrFile to fd 2...
I0205 16:52:18.638278 12547 out.go:348] isatty.IsTerminal(2) = true
I0205 16:52:18.638665 12547 root.go:338] Updating PATH: /Users/itaig/.minikube/bin
I0205 16:52:18.640518 12547 out.go:303] Setting JSON to false
I0205 16:52:18.658035 12547 start.go:128] hostinfo: {"hostname":"Itais-MacBook-Pro.local","uptime":6513,"bootTime":1707138225,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"ee7053e6-1ace-5e10-b2c0-2e3927c9d011"}
W0205 16:52:18.658091 12547 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0205 16:52:18.662885 12547 out.go:177] 😄 minikube v1.32.0 on Darwin 14.1 (arm64)
😄 minikube v1.32.0 on Darwin 14.1 (arm64)
I0205 16:52:18.670451 12547 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0205 16:52:18.670588 12547 notify.go:220] Checking for updates...
W0205 16:52:18.670686 12547 preload.go:295] Failed to list preload files: open /Users/itaig/.minikube/cache/preloaded-tarball: no such file or directory
I0205 16:52:18.671301 12547 driver.go:378] Setting default libvirt URI to qemu:///system
I0205 16:52:18.778540 12547 docker.go:122] docker version: linux-25.0:Docker Desktop 4.27.1 (136059)
I0205 16:52:18.778767 12547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0205 16:52:19.028316 12547 info.go:266] docker info: {ID:c7561a57-5cfd-49bd-a9e0-d57423f0a831 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlayfs DriverStatus:[[driver-type io.containerd.snapshotter.v1]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:80 SystemTime:2024-02-05 14:52:19.013966292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:11 MemTotal:8221937664 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/itaig/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/itaig/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.3-desktop.1] map[Name:debug Path:/Users/itaig/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.22] map[Name:dev Path:/Users/itaig/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/itaig/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/itaig/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/itaig/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/itaig/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. 
Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/itaig/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/itaig/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/itaig/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.3.0]] Warnings:}}
I0205 16:52:19.034713 12547 out.go:177] ✨ Using the docker driver based on existing profile
✨ Using the docker driver based on existing profile
I0205 16:52:19.037637 12547 start.go:298] selected driver: docker
I0205 16:52:19.037643 12547 start.go:902] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0205 16:52:19.037709 12547 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0205 16:52:19.037816 12547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0205 16:52:19.114838 12547 info.go:266] docker info: {ID:c7561a57-5cfd-49bd-a9e0-d57423f0a831 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlayfs DriverStatus:[[driver-type io.containerd.snapshotter.v1]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:80 SystemTime:2024-02-05 14:52:19.09904225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:11 MemTotal:8221937664 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/itaig/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/itaig/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.3-desktop.1] map[Name:debug Path:/Users/itaig/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.22] map[Name:dev Path:/Users/itaig/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/itaig/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/itaig/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/itaig/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/itaig/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. 
Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/itaig/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/itaig/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/itaig/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.3.0]] Warnings:}}
W0205 16:52:19.115385 12547 out.go:239] ❗ docker is currently using the overlayfs storage driver, setting preload=false
❗ docker is currently using the overlayfs storage driver, setting preload=false
I0205 16:52:19.116398 12547 cni.go:84] Creating CNI manager for ""
I0205 16:52:19.116606 12547 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0205 16:52:19.116619 12547 start_flags.go:323] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0205 16:52:19.123713 12547 out.go:177] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0205 16:52:19.127142 12547 cache.go:121] Beginning downloading kic base image for docker with docker
I0205 16:52:19.129612 12547 out.go:177] 🚜 Pulling base image ...
🚜 Pulling base image ...
I0205 16:52:19.135744 12547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
I0205 16:52:19.135885 12547 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0205 16:52:19.136178 12547 profile.go:148] Saving config to /Users/itaig/.minikube/profiles/minikube/config.json ...
I0205 16:52:19.137458 12547 cache.go:107] acquiring lock: {Name:mk6f58ce39576adfe94db5a385b855fccda6e0bf Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137421 12547 cache.go:107] acquiring lock: {Name:mk9f33f94d510ff7982ca8a2c871b9f336d38be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137422 12547 cache.go:107] acquiring lock: {Name:mkff1c5db3496cf4077c09d2260dcc114611afff Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137478 12547 cache.go:107] acquiring lock: {Name:mkd34c7196f02c0170bce5bf00baa4073607a37a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137420 12547 cache.go:107] acquiring lock: {Name:mk6cd69b1a84c309516e6bcccc0b0f98eb6d53ab Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137461 12547 cache.go:107] acquiring lock: {Name:mk36543e6ad4e6aaf0ff3a1e9865fc5887433ab0 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137460 12547 cache.go:107] acquiring lock: {Name:mkbd82258ef73d1b3a82987ee2d246da60a82508 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137525 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0205 16:52:19.137564 12547 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/itaig/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 259.125µs
I0205 16:52:19.137579 12547 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/itaig/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0205 16:52:19.137459 12547 cache.go:107] acquiring lock: {Name:mkbbcc3d22f9e541ab46c842e4d64e3c3f418e97 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:19.137588 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
I0205 16:52:19.137592 12547 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3" took 274.416µs
I0205 16:52:19.137596 12547 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
I0205 16:52:19.137533 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
I0205 16:52:19.137546 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
I0205 16:52:19.137615 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 exists
I0205 16:52:19.137619 12547 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3" took 323.209µs
I0205 16:52:19.137622 12547 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
I0205 16:52:19.137625 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
I0205 16:52:19.137602 12547 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 255.5µs
I0205 16:52:19.137633 12547 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
I0205 16:52:19.137630 12547 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 289.792µs
I0205 16:52:19.137636 12547 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
I0205 16:52:19.137649 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 exists
I0205 16:52:19.137649 12547 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 325.375µs
I0205 16:52:19.137654 12547 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
I0205 16:52:19.137654 12547 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3" took 350.5µs
I0205 16:52:19.137660 12547 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
I0205 16:52:19.137723 12547 cache.go:115] /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 exists
I0205 16:52:19.137728 12547 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3" took 424.75µs
I0205 16:52:19.137731 12547 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /Users/itaig/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
I0205 16:52:19.137735 12547 cache.go:87] Successfully saved all images to host disk.
I0205 16:52:19.209676 12547 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
I0205 16:52:19.209950 12547 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
I0205 16:52:19.209973 12547 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
I0205 16:52:19.210108 12547 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
I0205 16:52:19.210116 12547 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
I0205 16:52:19.210119 12547 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
I0205 16:52:21.839815 12547 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
I0205 16:52:21.839876 12547 cache.go:194] Successfully downloaded all kic artifacts
I0205 16:52:21.841015 12547 start.go:365] acquiring machines lock for minikube: {Name:mk3e8bb36589856163dbc1e8e3012abcc10aaea1 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0205 16:52:21.841128 12547 start.go:369] acquired machines lock for "minikube" in 86.166µs
I0205 16:52:21.841326 12547 start.go:96] Skipping create...Using existing machine configuration
I0205 16:52:21.841331 12547 fix.go:54] fixHost starting:
I0205 16:52:21.841625 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:21.875278 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:21.875575 12547 fix.go:102] recreateIfNeeded on minikube: state= err=unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:21.875596 12547 fix.go:107] machineExists: false. err=machine does not exist
I0205 16:52:21.881533 12547 out.go:177] 🤷 docker "minikube" container is missing, will recreate.
🤷 docker "minikube" container is missing, will recreate.
I0205 16:52:21.884526 12547 delete.go:124] DEMOLISHING minikube ...
I0205 16:52:21.884636 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:21.917841 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
W0205 16:52:21.917891 12547 stop.go:75] unable to get state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:21.917898 12547 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:21.918156 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:21.952491 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:21.952536 12547 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:21.952627 12547 cli_runner.go:164] Run: docker container inspect -f {{.Id}} minikube
W0205 16:52:21.987565 12547 cli_runner.go:211] docker container inspect -f {{.Id}} minikube returned with exit code 1
I0205 16:52:21.987588 12547 kic.go:371] could not find the container minikube to remove it. will try anyways
I0205 16:52:21.987900 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:22.022283 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
W0205 16:52:22.022563 12547 oci.go:84] error getting container status, will try to delete anyways: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:22.022635 12547 cli_runner.go:164] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
W0205 16:52:22.055429 12547 cli_runner.go:211] docker exec --privileged -t minikube /bin/bash -c "sudo init 0" returned with exit code 1
I0205 16:52:22.055450 12547 oci.go:650] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:23.056912 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:23.100785 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:23.101651 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:23.101662 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:23.101691 12547 retry.go:31] will retry after 421.636747ms: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:23.524171 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:23.567484 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:23.567524 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:23.567528 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:23.567554 12547 retry.go:31] will retry after 564.521305ms: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:24.133122 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:24.175982 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:24.176023 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:24.176027 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:24.176044 12547 retry.go:31] will retry after 1.673375106s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:25.850635 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:25.893099 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:25.893138 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:25.893144 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:25.893160 12547 retry.go:31] will retry after 1.914816272s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:27.808445 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:27.851255 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:27.851305 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:27.851310 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:27.851326 12547 retry.go:31] will retry after 2.960104942s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:30.813020 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:30.877562 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:30.877602 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:30.877608 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:30.877636 12547 retry.go:31] will retry after 4.047690317s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:34.925701 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:34.975925 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:34.975958 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:34.975963 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:34.975982 12547 retry.go:31] will retry after 7.240972372s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:42.218558 12547 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
W0205 16:52:42.284966 12547 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0205 16:52:42.285017 12547 oci.go:662] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:42.285025 12547 oci.go:664] temporary error: container minikube status is but expect it to be exited
I0205 16:52:42.285050 12547 oci.go:88] couldn't shut down minikube (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube

I0205 16:52:42.285139 12547 cli_runner.go:164] Run: docker rm -f -v minikube
I0205 16:52:42.329363 12547 cli_runner.go:164] Run: docker container inspect -f {{.Id}} minikube
W0205 16:52:42.366648 12547 cli_runner.go:211] docker container inspect -f {{.Id}} minikube returned with exit code 1
I0205 16:52:42.366752 12547 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0205 16:52:42.402105 12547 cli_runner.go:164] Run: docker network rm minikube
I0205 16:52:42.501516 12547 fix.go:114] Sleeping 1 second for extra luck!
I0205 16:52:43.502724 12547 start.go:125] createHost starting for "" (driver="docker")
I0205 16:52:43.509579 12547 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=4600MB) ...
🔥 Creating docker container (CPUs=2, Memory=4600MB) ...
I0205 16:52:43.510425 12547 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0205 16:52:43.510783 12547 client.go:168] LocalClient.Create starting
I0205 16:52:43.511498 12547 main.go:141] libmachine: Reading certificate data from /Users/itaig/.minikube/certs/ca.pem
I0205 16:52:43.512732 12547 main.go:141] libmachine: Decoding PEM data...
I0205 16:52:43.512780 12547 main.go:141] libmachine: Parsing certificate...
I0205 16:52:43.513720 12547 main.go:141] libmachine: Reading certificate data from /Users/itaig/.minikube/certs/cert.pem
I0205 16:52:43.513974 12547 main.go:141] libmachine: Decoding PEM data...
I0205 16:52:43.514000 12547 main.go:141] libmachine: Parsing certificate...
I0205 16:52:43.514903 12547 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0205 16:52:43.574407 12547 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0205 16:52:43.574507 12547 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0205 16:52:43.574537 12547 cli_runner.go:164] Run: docker network inspect minikube
W0205 16:52:43.613908 12547 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0205 16:52:43.613945 12547 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0205 16:52:43.613954 12547 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0205 16:52:43.614066 12547 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0205 16:52:43.651573 12547 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x14002828ec0}
I0205 16:52:43.651727 12547 network_create.go:124] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0205 16:52:43.651781 12547 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0205 16:52:43.715262 12547 network_create.go:108] docker network minikube 192.168.49.0/24 created
I0205 16:52:43.715287 12547 kic.go:121] calculated static IP "192.168.49.2" for the "minikube" container
I0205 16:52:43.715369 12547 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0205 16:52:43.747061 12547 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0205 16:52:43.778932 12547 oci.go:103] Successfully created a docker volume minikube
I0205 16:52:43.779020 12547 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
W0205 16:52:44.014930 12547 cli_runner.go:211] docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib returned with exit code 1
I0205 16:52:44.015182 12547 client.go:171] LocalClient.Create took 504.16725ms
I0205 16:52:46.018279 12547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0205 16:52:46.018557 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:46.077391 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:46.077478 12547 retry.go:31] will retry after 307.551277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:46.386984 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:46.449625 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:46.449699 12547 retry.go:31] will retry after 295.162945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:46.745771 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:46.808492 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:46.808581 12547 retry.go:31] will retry after 768.159632ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:47.578515 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:47.639277 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0205 16:52:47.640103 12547 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube

W0205 16:52:47.640121 12547 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:47.640203 12547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0205 16:52:47.640261 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:47.677981 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:47.678046 12547 retry.go:31] will retry after 167.558709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:47.846846 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:47.895883 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:47.895951 12547 retry.go:31] will retry after 445.125107ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:48.342886 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:48.407061 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:48.407138 12547 retry.go:31] will retry after 656.96884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:49.065731 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:49.130397 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:49.130496 12547 retry.go:31] will retry after 435.955644ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:49.568104 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:49.632482 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0205 16:52:49.632593 12547 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube

W0205 16:52:49.632608 12547 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:49.632613 12547 start.go:128] duration metric: createHost completed in 6.129808875s
I0205 16:52:49.632703 12547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0205 16:52:49.632753 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:49.673193 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:49.673259 12547 retry.go:31] will retry after 183.183316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:49.859005 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:49.915756 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:49.915850 12547 retry.go:31] will retry after 343.267559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:50.261031 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:50.324890 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:50.324966 12547 retry.go:31] will retry after 761.051651ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:51.087120 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:51.152380 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0205 16:52:51.152470 12547 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube

W0205 16:52:51.152485 12547 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:51.152565 12547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0205 16:52:51.152613 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:51.191892 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:51.191961 12547 retry.go:31] will retry after 284.277024ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:51.477989 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:51.542962 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:51.543045 12547 retry.go:31] will retry after 237.344685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:51.782261 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:51.842332 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0205 16:52:51.842408 12547 retry.go:31] will retry after 623.971157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:52.467530 12547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0205 16:52:52.532430 12547 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0205 16:52:52.532525 12547 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube

W0205 16:52:52.532543 12547 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error response from daemon: No such container: minikube
I0205 16:52:52.532552 12547 fix.go:56] fixHost completed within 30.691287708s
I0205 16:52:52.532557 12547 start.go:83] releasing machines lock for "minikube", held for 30.691491292s
W0205 16:52:52.532574 12547 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: exit status 1
stdout:

stderr:
exec /usr/bin/test: exec format error
W0205 16:52:52.532640 12547 out.go:239] 🤦 StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: exit status 1
stdout:

stderr:
exec /usr/bin/test: exec format error

🤦 StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: exit status 1
stdout:

stderr:
exec /usr/bin/test: exec format error

I0205 16:52:52.532669 12547 start.go:706] Will try again in 5 seconds ...

Operating System

macOS (Default)

Driver

Docker

@medyagh added the kind/bug label Feb 5, 2024
@spowelljr
Member

Hi @geek-kb, I'm not sure how you ended up with a Docker storage driver of overlayfs. Would it be possible to provide a screenshot of your Docker Desktop settings? I'm not able to replicate your Docker Desktop config.

@medyagh
Member

medyagh commented Feb 5, 2024

@geek-kb thank you for creating the issue. Do you recall changing the storage driver for your Docker Desktop?

Why is it overlayfs, while overlay2 is the default?
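
For reference, the active storage driver is easy to check from the CLI; the log above already contains the answer for this machine. A minimal check, using only documented docker info template fields:

docker info --format '{{.Driver}} {{json .DriverStatus}}'
# On the affected setup this prints something like:
#   overlayfs [["driver-type","io.containerd.snapshotter.v1"]]
# i.e. the containerd snapshotter is in use instead of the classic overlay2 driver.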

@prezha
Contributor

prezha commented Feb 5, 2024

looks like the driver was changed after the cluster was created:

I0205 16:52:19.028316 12547 info.go:266] docker info: {ID:c7561a57-5cfd-49bd-a9e0-d57423f0a831 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlayfs DriverStatus:[[driver-type io.containerd.snapshotter.v1]]

ref: https://docs.docker.com/storage/storagedriver/select-storage-driver/#check-your-current-storage-driver

When you change the storage driver, any existing images and containers become inaccessible. This is because their layers can't be used by the new storage driver. If you revert your changes, you can access the old images and containers again, but any that you pulled or created using the new driver are then inaccessible.
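
That DriverStatus value, io.containerd.snapshotter.v1, is what Docker Desktop reports when its containerd image store is enabled, and it surfaces as the overlayfs "storage driver" seen above. A minimal way to confirm the toggle from a macOS shell is sketched below; the settings-file path and key name are assumptions based on typical Docker Desktop installs, so treat the GUI (Settings → General → "Use containerd for pulling and storing images") as authoritative:

grep -i containerd "$HOME/Library/Group Containers/group.com.docker/settings.json"
# Expect a flag such as "useContainerdSnapshotter": true while the containerd
# image store is on (key name is an assumption; verify against the GUI toggle).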

@CheriseCodes

CheriseCodes commented May 3, 2024

@geek-kb If you are okay with losing all your previous clusters, you can try this command: minikube delete --all --purge
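
If you go that route, a full reset might look like the sketch below. Note that this permanently removes every minikube profile plus the local ~/.minikube cache, so only run it if the clusters are disposable:

minikube delete --all --purge   # delete all profiles and purge the ~/.minikube directory
minikube start --driver=docker  # recreate the cluster from scratch with the docker driver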

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 31, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned Oct 1, 2024