
backend agnostic reconciliation test #328

Open · jayunit100 opened this issue Aug 19, 2022 · 7 comments
Labels: good first issue · help wanted · lifecycle/stale

Comments

@jayunit100 (Contributor) commented Aug 19, 2022

We'd like something like https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/kube_proxy.go that we can use to run A/B comparisons between backends, in service of #325.

@astoycos is interested in this. A simple way to do it would be:

make a service
for node in nodes:
    add 100 pods to the service

and then, say 1000 times, continuously:

  • modify the pod count (scale up or scale down)
  • spawn 1000 goroutines that poll the service endpoints (SEPs) in parallel until every endpoint is observed

(A rough client-go sketch of this loop follows.)
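A minimal sketch of that loop using client-go. It assumes a hypothetical Deployment and Service both named "torture" in the "default" namespace (the names, namespace, replica range, and timeout are illustrative, not part of the issue); each iteration scales the Deployment to a random size, then polls the Service's EndpointSlices until the expected number of ready endpoints appears, timing how long reconciliation takes.

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	for i := 0; i < 1000; i++ {
		want := int32(rand.Intn(100) + 1)

		// Scale the backing Deployment up or down.
		scale, err := client.AppsV1().Deployments("default").
			GetScale(ctx, "torture", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = want
		if _, err := client.AppsV1().Deployments("default").
			UpdateScale(ctx, "torture", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}

		// Poll the EndpointSlices until every expected endpoint is ready.
		start := time.Now()
		deadline := start.Add(2 * time.Minute)
		for {
			slices, err := client.DiscoveryV1().EndpointSlices("default").
				List(ctx, metav1.ListOptions{
					LabelSelector: discoveryv1.LabelServiceName + "=torture",
				})
			if err != nil {
				panic(err)
			}
			ready := 0
			for _, s := range slices.Items {
				for _, ep := range s.Endpoints {
					if ep.Conditions.Ready != nil && *ep.Conditions.Ready {
						ready++
					}
				}
			}
			if ready == int(want) {
				break
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("timed out: %d/%d endpoints ready", ready, want))
			}
			time.Sleep(100 * time.Millisecond)
		}
		fmt.Printf("iteration %d: reconciled to %d endpoints in %v\n",
			i, want, time.Since(start))
	}
}
```

Running the same loop against two backends on the same cluster gives the A/B comparison the issue asks for; the 1000 parallel pollers would just be goroutines around the List call.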

@jayunit100 (Contributor, Author):

/assign @astoycos

@jayunit100 (Contributor, Author):

/unassign @astoycos

@jayunit100 (Contributor, Author):

/good-first-issue

(For an experienced programmer who knows how to use client-go.)

@k8s-ci-robot (Contributor):

@jayunit100:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have a zero-to-low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/good-first-issue

(For an experienced programmer who knows how to use client-go.)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the good first issue and help wanted labels Aug 19, 2022
@mcluseau (Contributor) commented Aug 19, 2022

How far is that from https://github.com/kubernetes-sigs/kpng/blob/master/server/cmd/kpng-backend-torture/main.go ?

Other refs:

  • backend_torture docs, improvements, metrics integration #150
  • dist/kpng-backend-torture --listen unix:///tmp/kpng.sock --sleep 1s 0:0 10000:10 10001:10 0:0 100000:1 0:0 1:100000 — a 1 s delay between diffs sent to the client; start with 0 services with 0 endpoints, then 10,000 services with 10 endpoints each, then 10,001 services with 10 endpoints, back to 0:0, then 100,000 services with 1 endpoint each, back to 0:0, then 1 service with 100,000 endpoints. (Connect any backend with e.g. kpng local --api {same as --listen of torture} to-XXX ...) A sketch of the N:M format follows this list.
  • Tip: run your backend in an isolated network namespace; see https://github.com/kubernetes-sigs/kpng/blob/master/netns-test
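Not the tool's actual implementation, just a minimal sketch of how the N:M arguments described above read: each pair is one target state of N services with M endpoints each, and the torture tool diffs from one state to the next.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// state is one target the torture tool walks to: N services × M endpoints each.
type state struct {
	services  int
	endpoints int
}

// parseStates turns "N:M" arguments into the sequence of target states.
func parseStates(args []string) ([]state, error) {
	states := make([]state, 0, len(args))
	for _, arg := range args {
		n, m, ok := strings.Cut(arg, ":")
		if !ok {
			return nil, fmt.Errorf("bad state %q, want N:M", arg)
		}
		svc, err := strconv.Atoi(n)
		if err != nil {
			return nil, err
		}
		eps, err := strconv.Atoi(m)
		if err != nil {
			return nil, err
		}
		states = append(states, state{services: svc, endpoints: eps})
	}
	return states, nil
}

func main() {
	// The exact sequence from the command above.
	states, err := parseStates([]string{
		"0:0", "10000:10", "10001:10", "0:0", "100000:1", "0:0", "1:100000",
	})
	if err != nil {
		panic(err)
	}
	for i, s := range states {
		fmt.Printf("step %d: %d services × %d endpoints each\n",
			i, s.services, s.endpoints)
	}
}
```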

@asim-reza:

/assign

@k8s-triage-robot:

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Feb 8, 2023