
Support for the "kubectl top" command across multiple clusters #5058

Open
XiShanYongYe-Chang opened this issue Jun 18, 2024 · 4 comments

@XiShanYongYe-Chang
Member

We have added the fleet-apiserver component, which provides users with APIs compatible with the single-cluster Kubernetes experience. Related proposal: PR #4317.

For most kubectl commands, we can obtain the same experience as that of a single Kubernetes cluster.

Currently, the kubectl top command behaves as follows:

[screenshot: kubectl top output]

As we can see, the number of resources returned is doubled. We want output in the format of the lower part, but those entries have no associated resource (CPU/memory) usage data.

This is because requests for PodMetrics and NodeMetrics resources are forwarded to the karmada-metrics-adapter component for processing, and karmada-metrics-adapter does not handle them the same way fleet-apiserver does (it does not add the .clusterspace.{cluster-name} suffix to resource names).
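
To make the naming difference concrete, below is a minimal Go sketch of the suffixing scheme described above. toClusterSpaceName is a hypothetical helper and the pod/cluster names are made up for illustration; this is not actual Karmada code.

```go
package main

import "fmt"

// toClusterSpaceName applies the naming convention fleet-apiserver uses for
// member-cluster resources: the original name plus a
// ".clusterspace.{cluster-name}" suffix. karmada-metrics-adapter returns the
// raw names instead, which is why `kubectl top` currently shows both forms.
func toClusterSpaceName(name, cluster string) string {
	return fmt.Sprintf("%s.clusterspace.%s", name, cluster)
}

func main() {
	// Illustrative names only.
	fmt.Println(toClusterSpaceName("nginx-5c9f7b8d6-abcde", "member1"))
	// Output: nginx-5c9f7b8d6-abcde.clusterspace.member1
}
```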

@XiShanYongYe-Chang
Member Author

Solution 1: Modify the responses of the karmada-metrics-adapter component to add the .clusterspace.{cluster-name} suffix to pod and node names, like this:

[screenshot: kubectl top output with suffixed names]

Modification:

  1. Change the API return value of the metrics.k8s.io group in the karmada-metrics-adapter component.
  2. Adapt the logic of the federatedhpa-controller to handle the suffix on pod names.

Impact on Users:
The information returned for resources in the metrics.k8s.io group of the karmada-metrics-adapter component changes, so users will notice this behavior change.

Note: The return values of custom metrics in the karmada-metrics-adapter component do not change.
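
As a rough sketch of what Solution 1 could look like, assuming the adapter rewrites the names in its metrics.k8s.io PodMetricsList responses and the federatedhpa-controller strips the suffix before resolving pods (function names here are illustrative, not the actual Karmada implementation):

```go
package main

import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	metricsv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
)

// appendClusterSpaceSuffix sketches the adapter-side change: rewrite each pod
// name in a metrics.k8s.io response so it carries the
// ".clusterspace.{cluster-name}" suffix, matching fleet-apiserver's naming.
func appendClusterSpaceSuffix(list *metricsv1beta1.PodMetricsList, cluster string) {
	for i := range list.Items {
		list.Items[i].Name = fmt.Sprintf("%s.clusterspace.%s", list.Items[i].Name, cluster)
	}
}

// trimClusterSpaceSuffix sketches the matching federatedhpa-controller change:
// recover the original pod name before looking the pod up.
func trimClusterSpaceSuffix(name string) string {
	if i := strings.Index(name, ".clusterspace."); i >= 0 {
		return name[:i]
	}
	return name
}

func main() {
	list := &metricsv1beta1.PodMetricsList{
		Items: []metricsv1beta1.PodMetrics{
			{ObjectMeta: metav1.ObjectMeta{Name: "nginx-5c9f7b8d6-abcde"}},
		},
	}
	appendClusterSpaceSuffix(list, "member1")
	fmt.Println(list.Items[0].Name)                         // nginx-5c9f7b8d6-abcde.clusterspace.member1
	fmt.Println(trimClusterSpaceSuffix(list.Items[0].Name)) // nginx-5c9f7b8d6-abcde
}
```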

@XiShanYongYe-Chang
Member Author

Solution 2: Replace metrics.k8s.io with metrics.karmada.io in the karmada-metrics-adapter component; its return values remain unchanged. Add a new fleet-metrics-adapter component that works with fleet-apiserver to handle requests for the metrics.k8s.io group, adding the .clusterspace.{cluster-name} suffix to resource names in the returned values.

[image: Solution 2 overview]

Modification:

  1. Change the API group served by the karmada-metrics-adapter component from metrics.k8s.io to metrics.karmada.io (the return values stay the same).
  2. Adapt the logic of the federatedhpa-controller to handle the suffix on pod names.
  3. Add a new fleet-metrics-adapter component.

Impact on Users:
To remain compatible with their previous workflows, users need to switch to the new metrics.karmada.io group. (Only the group name changes; the return values remain unchanged.)
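
For reference, a minimal sketch of the two group/version pairs a client would target under Solution 2 (the v1beta1 version is an assumption, matching the standard resource metrics API; not the final implementation):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

var (
	// Existing karmada-metrics-adapter behavior (raw member-cluster names),
	// moved to a Karmada-owned group.
	karmadaMetricsGV = schema.GroupVersion{Group: "metrics.karmada.io", Version: "v1beta1"}

	// Single-cluster-compatible behavior with ".clusterspace.{cluster-name}"
	// suffixes, served by the new fleet-metrics-adapter.
	kubernetesMetricsGV = schema.GroupVersion{Group: "metrics.k8s.io", Version: "v1beta1"}
)

func main() {
	fmt.Println(karmadaMetricsGV.String())    // metrics.karmada.io/v1beta1
	fmt.Println(kubernetesMetricsGV.String()) // metrics.k8s.io/v1beta1
}
```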

@XiShanYongYe-Chang
Member Author

Solution 3: Use the top command provided by karmadactl to let users query node and pod information in member clusters.

Note: Currently, the karmadactl top command can be used to query pod information but cannot be used to query node information.

@XiShanYongYe-Chang
Member Author

I shared this topic at today's community meeting.

https://docs.google.com/document/d/1y6YLVC-v7cmVAdbjedoyR5WL0-q45DBRXTvz5_I7bkA/edit#heading=h.h535bf1fgq9k

The feedback was that Solutions 1 and 2 directly affect user-visible behavior and are not recommended; Solution 3 is recommended.
