
StartHost failed : container addresses should have 2 values, got 3 values #10079

Open
ggjulio opened this issue Jan 1, 2021 · 19 comments · May be fixed by #18943
Labels

  • area/networking: networking issues
  • co/docker-driver: Issues related to kubernetes in container
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@ggjulio

ggjulio commented Jan 1, 2021

I use a second network to give a range of IPs to MetalLB.
Everything works fine except at restart: minikube complains about having two networks.
I have to disconnect the second network before each restart, and then reconnect it afterwards.

I found nothing helpful in the docs or GitHub issues.

Steps to reproduce the issue:

  1. minikube start --vm-driver=docker
  2. docker network create my-net
  3. docker network connect my-net minikube
  4. minikube stop
  5. minikube start

Full output of failed command:

🏃  Updating the running docker "minikube" container ...
I0101 20:57:39.509944 3016595 machine.go:88] provisioning docker machine ...
I0101 20:57:39.509978 3016595 ubuntu.go:169] provisioning hostname "minikube"
I0101 20:57:39.510046 3016595 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0101 20:57:39.550282 3016595 main.go:119] libmachine: Using SSH client type: native
I0101 20:57:39.550473 3016595 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 49332 <nil> <nil>}
I0101 20:57:39.550495 3016595 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0101 20:57:39.694466 3016595 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0101 20:57:39.694552 3016595 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0101 20:57:39.733676 3016595 main.go:119] libmachine: Using SSH client type: native
I0101 20:57:39.733849 3016595 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 49332 <nil> <nil>}
I0101 20:57:39.733873 3016595 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0101 20:57:39.862874 3016595 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0101 20:57:39.862918 3016595 ubuntu.go:175] set auth options {CertDir:/home/ggjulio/.minikube CaCertPath:/home/ggjulio/.minikube/certs/ca.pem CaPrivateKeyPath:/home/ggjulio/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/ggjulio/.minikube/machines/server.pem ServerKeyPath:/home/ggjulio/.minikube/machines/server-key.pem ClientKeyPath:/home/ggjulio/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/ggjulio/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/ggjulio/.minikube}
I0101 20:57:39.863002 3016595 ubuntu.go:177] setting up certificates
I0101 20:57:39.863023 3016595 provision.go:83] configureAuth start
I0101 20:57:39.863092 3016595 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0101 20:57:39.904544 3016595 provision.go:86] duration metric: configureAuth took 41.505805ms
W0101 20:57:39.904583 3016595 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 3 values: [192.168.49.2 172.20.0.2 ]
I0101 20:57:39.904600 3016595 retry.go:31] will retry after 110.466µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 3 values: [192.168.49.2 172.20.0.2 ]
I0101 20:57:39.904801 3016595 provision.go:83] configureAuth start
~ docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
192.168.49.2,172.20.0.2,

Full output of minikube start command used, if not already included:

~ minikube start
😄  minikube v1.16.0 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🤦  StartHost failed, but will try again: container addresses should have 2 values, got 3 values: [192.168.49.2 172.20.0.2 ]
🏃  Updating the running docker "minikube" container ...
@ggjulio ggjulio changed the title StartHost failed container addresses should have 2 values, got 3 values StartHost failed : container addresses should have 2 values, got 3 values Jan 1, 2021
@afbjorklund
Collaborator

Minikube needs to handle multiple networks, and filter for only the network it created.

It actually got 4 values, but the output format fails to add a delimiter between networks.
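
To see why the parser then counts three values: the template emits "IP,IPv6" once per network with no separator between networks, so two networks whose IPv6 fields are empty collapse into three comma-separated fields. Below is a rough sketch of both the field count and a template that sidesteps the ambiguity by indexing a single, known network (this assumes the cluster network is literally named "minikube", the default profile name; it is an illustration, not the actual fix):

~ printf '192.168.49.2,172.20.0.2,\n' | awk -F',' '{print NF}'
3
~ docker container inspect -f '{{with index .NetworkSettings.Networks "minikube"}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' minikube
192.168.49.2,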

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. co/docker-driver Issues related to kubernetes in container priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jan 1, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 5, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 5, 2021
@sharifelgamal sharifelgamal added priority/backlog Higher priority than priority/awaiting-more-evidence. area/networking networking issues help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 5, 2021
@sharifelgamal
Collaborator

It would seem that we don't support multiple networks in our docker driver. We would love some help getting this fixed! I'd be happy to review any PR.
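
As a first diagnostic step, you can list which networks the container is actually attached to. A minimal sketch, assuming the container is named "minikube" and the extra network "my-net", as in the reproduction steps above:

~ docker container inspect -f '{{range $name, $cfg := .NetworkSettings.Networks}}{{$name}} {{end}}' minikube
minikube my-net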

@k8s-ci-robot k8s-ci-robot assigned vishjain and unassigned vishjain May 17, 2021
@vishjain
Contributor

/assign

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 16, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 16, 2021
@medyagh
Member

medyagh commented Sep 22, 2021

I haven't seen this error in a long time. Did we fix it?

@vishjain, are you still working on this?

@medyagh medyagh removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 20, 2022
@klaases
Contributor

klaases commented Jan 26, 2022

Hi @ggjulio – is this issue still occurring? If so, please feel free to reopen the issue by commenting with /reopen. For now, this issue will be closed, since additional information was not provided and some time has passed.

Additional information that may be helpful:

  • Whether the issue occurs with the latest minikube release

  • The exact minikube start command line used

  • Attach the full output of minikube logs; run minikube logs --file=logs.txt to create a log file

Thank you for sharing your experience!

@klaases klaases closed this as completed Jan 26, 2022
@ololobus

I can confirm that on the latest:

minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

the problem is still there. I have a minikube k8s cluster with an additional docker-compose network attached to the minikube container, and I get the same error:

minikube start
😄  minikube v1.25.2 on Debian 11.0
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔄  Restarting existing docker container for "minikube" ...
🤦  StartHost failed, but will try again: container addresses should have 2 values, got 3 values: [192.168.49.2 172.54.32.2 ]
🏃  Updating the running docker "minikube" container ...
😿  Failed to start docker container. Running "minikube delete" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 3 values: [192.168.49.2 172.54.32.2 ]

❌  Exiting due to GUEST_PROVISION: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 3 values: [192.168.49.2 172.54.32.2 ]

Disconnecting the additional network before start (and connecting it back again after start) helps as a workaround, though.
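
For reference, a minimal sketch of that workaround, assuming the extra network is named "my-net" as in the reproduction steps at the top of this issue:

~ docker network disconnect my-net minikube
~ minikube start
~ docker network connect my-net minikube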

/reopen

@k8s-ci-robot
Contributor

@ololobus: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ololobus

ololobus commented Mar 16, 2022

@klaases or @ggjulio, please reopen once you have time.

@ggjulio
Author

ggjulio commented Mar 17, 2022

/reopen

@k8s-ci-robot
Contributor

@ggjulio: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Mar 17, 2022
@klaases
Contributor

klaases commented Mar 30, 2022

As @sharifelgamal mentioned, we would appreciate any PR that goes towards resolving this issue.

@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Apr 13, 2022
@sharifelgamal
Collaborator

I'm freezing this issue as it is a genuine minikube bug. Help wanted!

fredericgermain added a commit to fredericgermain/minikube that referenced this issue May 21, 2024
fredericgermain added a commit to fredericgermain/minikube that referenced this issue Jun 16, 2024
fredericgermain added a commit to fredericgermain/minikube that referenced this issue Jun 16, 2024