[BUG] Not all Pods get restarted after Secret Change #701
Comments
More: Per the logs, sometimes a pod does not get restarted, so 17 or 18 out of 20 restart and 1-2 do NOT, but the log says all 20 were restarted. There is no error log indicating that anything failed.
Is the restarting done in a fire-and-forget approach, so that requests are lost if the API server has issues, or is there some ACK/retry involved?
The pods which are not restarted, are they the same ones every time or random?
Very random. We have watched this over the last 3-4 weeks; sometimes it is deployment 17, the next time deployment 3. We also tried the latest version, and the issue is still there.
Facing the same issue.
Any more information about what values are being used to install Reloader?
+1 Facing the same issue. |
We are using the Reloader Helm chart.
Chart.yaml
values.yaml
Describe the bug
We have about 20 Deployments in our cluster.
All of them carry the annotation reloader.stakater.com/auto: true on the Deployment.
These are all AKHQ Deployments with Kafka Secrets.
Every 5 days the Secrets get changed (all at the same time).
Sometimes:
Only 17 or 18 of them get restarted by Reloader.
level=info msg="Changes detected in 'root-ca-cert-truststore' of type 'SECRET' in namespace 'test1', Updated 'akhq' of type 'Deployment' in namespace 'test1'"
The log looks like this, without any errors.
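For reference, a minimal sketch of the setup described above, assuming a Deployment named akhq in namespace test1 as in the log line (the names and the elided spec are illustrative, not the reporter's actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: akhq            # illustrative, taken from the log line above
  namespace: test1
  annotations:
    # Reloader watches all ConfigMaps/Secrets referenced by this Deployment
    # and triggers a rolling restart when any of them change.
    reloader.stakater.com/auto: "true"   # annotation values must be strings
spec:
  # ... rest of the Deployment spec (containers mounting the Kafka Secrets) ...
```

With `auto: "true"`, Reloader tracks every Secret the pod template references, including `root-ca-cert-truststore` from the log.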
To Reproduce
How can I debug this deeper?
Expected behavior
All pods get restarted.
Environment