
GEP-1762 needs clarification on long Gateway names > 63 characters #2592

Closed
sunjayBhatia opened this issue Nov 17, 2023 · 17 comments · May be fixed by #3070
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sunjayBhatia
Member

sunjayBhatia commented Nov 17, 2023

What happened:

Prior to GEP-1762, Contour's Gateway provisioner had been adding a label to generated resources to denote which Gateway "owns" them: projectcontour.io/owning-gateway-name, set to the Gateway's name.

While reviewing #2582 with Contour and also implementing the Gateway infrastructure labels feature, we realized that Gateway names longer than 63 characters cause resource creation to fail when the name is used as this label's value, since Kubernetes label values are limited to 63 characters.

While this can be solved for Contour's implementation-specific labels with a hash or some other workaround, the problem is not exclusive to Contour: the upstream GEP-1762 also has requirements that may cause many implementations to run into this. Including a conformance test that exercises this case may also be problematic for some implementations.
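For context, Kubernetes object names may be up to 253 characters, but label values are capped at 63 characters with a restricted character set. A minimal sketch of that check (the function name is illustrative; the rules mirror the API server's label-value validation):

```python
import re

# Label values must be empty or: at most 63 characters, begin and end
# with an alphanumeric character, and contain only alphanumerics,
# '-', '_', or '.' in between.
LABEL_VALUE_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9])?$")

def is_valid_label_value(value: str) -> bool:
    """Return True if `value` is a legal Kubernetes label value."""
    if value == "":
        return True
    return len(value) <= 63 and LABEL_VALUE_RE.match(value) is not None

# A 64-character Gateway name is a perfectly legal object name,
# but an illegal label value:
print(is_valid_label_value("my-gateway"))  # True
print(is_valid_label_value("a" * 64))      # False
```

This is exactly the mismatch described above: any name in the 64-253 character range is accepted for the Gateway itself but rejected when copied verbatim into a label value.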

@skriss pointed out as well that GEP-1762 makes this same type of label, including the "owning" Gateway as the value, required for implementations that generate deployment resources dynamically: https://gateway-api.sigs.k8s.io/geps/gep-1762/#automated-deployments

What you expected to happen:

GEP-1762 should be amended to describe what to do with the gateway.networking.k8s.io/gateway-name label when dealing with a Gateway with a long name.

How to reproduce it (as minimally and precisely as possible):

Run the conformance test HTTPRouteInvalidParentRefSectionNameNotMatchingPort (on main) with Contour, or create a Gateway with a name longer than 63 characters and have Contour reconcile it.

Anything else we need to know?:

Hashing a long Gateway name and setting that as the value of the label may be an answer, however that means we will lose the immediate human readability of this feature.

Another solution would be to limit Gateway names to < 63 characters and longer names would not be Accepted by implementations.

@arkodg
Contributor

arkodg commented Nov 17, 2023

vote for 2. (limit gateway names to < 63)

@mlavacca
Member

mlavacca commented Nov 17, 2023

Hashing a long Gateway name and setting that as the value of the label may be an answer, however that means we will lose the immediate human readability of this feature.

I agree this is suboptimal as we will lose the immediate human readability of this feature.

Another solution would be to limit Gateway names to < 63 characters and longer names would not be Accepted by implementations.

Limiting the name's length is quite tricky: what behavior should we expect?

  • run a webhook to reject Gateways with too-long names (a burden for implementations), or
  • mark the Gateway as not Accepted because of its too-long name (bad UX: the user has to delete the Gateway and recreate it with a new name)

We could use a label selector on Gateways with length < 63. The problem here is that we would need to enforce uniqueness of this field across all Gateways in every namespace, and I'm not sure there is any kubebuilder- or CEL-level validation that can enforce this.
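For illustration, a per-object length check is expressible today as a CEL validation rule on the CRD schema; this is a hypothetical sketch, not part of Gateway API, and it shows why the idea above stalls: CEL evaluates one object at a time, so it can cap the length but cannot enforce cross-namespace uniqueness.

```yaml
# Hypothetical CRD validation rule (sketch only). metadata.name is one of
# the few metadata fields CEL rules at the schema root may reference.
x-kubernetes-validations:
  - rule: "self.metadata.name.size() <= 63"
    message: "Gateway name must be at most 63 characters to fit in label values"
```

Even with this in place, two Gateways in different namespaces could still share a name, so a label selector on name alone would not uniquely identify an owner.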

@robscott
Member

Unfortunately I think it's too late to add any kind of limit here - Gateway is a GA part of the API. Even if we could, almost all Kubernetes resources follow the same default name validation, and I don't think it would make sense to change it here.

I think many/most Kubernetes tools will use some kind of combination of truncation and hashing when dealing with long names in places they don't fit. So although it adds additional work, it's probably good that we have conformance tests that cover longer Gateway names like this.
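A hedged sketch of that truncate-and-hash pattern (the helper name and the 8-character digest length are illustrative choices, not from GEP-1762 or any implementation):

```python
import hashlib

def label_safe_value(name: str, max_len: int = 63) -> str:
    """Fit an arbitrary object name into a Kubernetes label value.

    Names that already fit are passed through unchanged, preserving
    human readability; longer names are truncated and suffixed with a
    short hash of the full name so distinct names stay distinct.
    """
    if len(name) <= max_len:
        return name
    digest = hashlib.sha256(name.encode()).hexdigest()[:8]
    # Reserve 9 characters for the separator plus the 8-char digest.
    return f"{name[:max_len - 9]}-{digest}"

print(label_safe_value("my-gateway"))       # my-gateway
print(len(label_safe_value("x" * 100)))     # 63
```

The trade-off is the one noted earlier in the thread: short names remain readable, but for long names the label value no longer equals the Gateway name, so consumers must apply the same transformation to look resources up.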

@sunjayBhatia
Member Author

Yep, that makes a lot of sense. Unless anyone disagrees, the action item here is to update the GEP (and perhaps anywhere else we talk about labels) to prescribe a way to deal with long resource names, specifically Gateway names, being used as label values.

@arkodg
Contributor

arkodg commented Nov 17, 2023

@robscott can we consider adding text in the API to discourage names longer than 63 characters?
Implementations are already using various techniques to shorten names when creating generated resources (Deployments, Services, etc.), but they rely on labels like gateway.envoyproxy.io/owning-gateway-namespace to map a generated resource back to its Gateway. Since the label value limit is 63 characters, it is no longer possible to consistently use labels to find the resources mapped to a Gateway.

@tao12345666333
Member

Even without any hard restrictions in the API, we should add documentation mentioning this possibility.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Apr 16, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@robscott
Member

/reopen

@howardjohn can you work on this?

@k8s-ci-robot
Contributor

@robscott: Reopened this issue.

In response to this:

/reopen

@howardjohn can you work on this?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Apr 16, 2024
@sunjayBhatia
Member Author

If no one has picked this up, I can take it, since it looks like we got some consensus above on documentation/recommendation updates.

@robscott
Member

robscott commented May 9, 2024

Thanks @sunjayBhatia!

/assign @sunjayBhatia

@sunjayBhatia
Member Author

PR: #3070

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Jun 8, 2024