What happened?
I noticed that using the playbook scale.yml to scale out the cluster's worker nodes restarts kube-proxy.
The corresponding task: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/kubeadm/tasks/main.yml#L204
Is it necessary to restart kube-proxy in the scenario where only worker nodes are being added?
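For context, the linked task is roughly of the following shape (a paraphrased sketch, not the exact code from the repository; the kubectl expression here just stands in for however the role resolves the binary and kubeconfig). Deleting the pods makes the kube-proxy DaemonSet recreate them, which is the restart observed on every scale run:

# Paraphrased sketch of the linked task (not verbatim from the repo).
- name: Restart all kube-proxy pods to ensure that they load the new configmap
  command: "{{ kubectl }} delete pod -n kube-system -l k8s-app=kube-proxy"
  run_once: true
  delegate_to: "{{ groups['kube_control_plane'] | first }}"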
What did you expect to happen?
When only worker nodes are being scaled out, I don't think restarting kube-proxy is necessary, so I believe this scenario can be optimized. One possible approach is sketched below.
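As a hypothetical sketch (not an agreed design), the restart could be gated on the play actually including control-plane nodes, so that a worker-only scale-out skips it. The condition below is illustrative only; a real fix would have to fit the role's existing facts and variables:

# Hypothetical guard: skip the kube-proxy restart when the play only
# contains worker nodes (e.g. scale.yml with --limit on new workers).
- name: Restart all kube-proxy pods to ensure that they load the new configmap
  command: "{{ kubectl }} delete pod -n kube-system -l k8s-app=kube-proxy"
  run_once: true
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
  when: ansible_play_hosts_all | intersect(groups['kube_control_plane']) | length > 0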
How can we reproduce it (as minimally and precisely as possible)?
This can be reproduced by running the playbook scale.yml to scale out the cluster's worker nodes.
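To confirm the behaviour, record the kube-proxy pod names and ages before the run (for example with kubectl get pods -n kube-system -l k8s-app=kube-proxy) and check afterwards that the pods were recreated even though only worker nodes were added.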
OS
Linux 6.8.0-35-generic x86_64
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.17.6
PRETTY_NAME="Alpine Linux v3.17"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
Version of Ansible
ansible [core 2.12.5]
  config file = /kubespray/ansible.cfg
  configured module search path = ['/kubespray/library']
  ansible python module location = /usr/lib/python3.10/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.10.13 (main, Aug 26 2023, 11:33:35) [GCC 12.2.1 20220924]
  jinja version = 3.1.2
  libyaml = True
Version of Python
Python 3.10.13
Version of Kubespray (commit)
774d824
Network plugin used
calico
Full inventory with variables
Command used to invoke ansible
ansible-playbook -i /conf/host.yml --become-user root -e "@/conf/group_vars.yml" --private-key /auth/ssh-privatekey /kubespray/scale.yml --limit=dev1-w-10-64-80-147 --forks=10
Output of ansible run
~
Anything else we need to know
No response