
Follow the same standards as the kubernetes provider for environment variables #225

Open
anhdle14 opened this issue Mar 1, 2023 · 12 comments

Comments


anhdle14 commented Mar 1, 2023

Currently only one configuration can be set, which is quite limiting when running this provider in CI and in development.

Would suggest 2 things:

  • In-cluster config should just be detected automatically, based on the below (see the sketch after this list)

In-cluster Config
The provider uses the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables to detect when it is running inside a cluster, so in this case you do not need to specify any attributes in the provider block if you want to connect to the local kubernetes cluster.

If you want to connect to a different cluster than the one terraform is running inside, configure the provider as above.
ref: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#in-cluster-config

  • Support multiple args instead of the error: only one of kubeconfig_incluster, kubeconfig_path, kubeconfig_raw can be specified, but kubeconfig_incluster, kubeconfig_path were specified.
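
For illustration, a minimal sketch of the requested fallback behavior (the provider name kustomization is an assumption here, and the empty block reflects the requested behavior, not the current one):

provider "kustomization" {
  # Requested behavior, not current behavior: with no attributes set, the
  # provider would detect KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT
  # in the environment and fall back to in-cluster auth, mirroring the
  # hashicorp/kubernetes provider.
}
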
@ForbiddenEra

Would also be great if we could get exec as well like the kubernetes provider!


tkellen commented Apr 28, 2024

Would you accept a PR that implements this?


pst commented Apr 30, 2024

I would accept a PR that makes kubeconfig_incluster settable via an env var, for users that want to keep the HCL agnostic and control via the environment which auth is being used.

A PR that isn't fully backwards compatible for existing users is not an option. I am also not keen on adding additional attributes to the provider if the ones already there can get the job done.

You could already achieve what you want simply by always supplying a path to a kubeconfig and having that kubeconfig set the desired auth.
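
For example, a minimal sketch of that approach (the provider name kustomization and the variable are assumptions for illustration; in CI the value could be supplied via TF_VAR_kubeconfig_path):

variable "kubeconfig_path" {
  type    = string
  default = "~/.kube/config"
}

provider "kustomization" {
  # Always point at a kubeconfig file; the file itself determines the auth
  # method (token, client certificates, exec plugin, etc.).
  kubeconfig_path = pathexpand(var.kubeconfig_path)
}
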


tkellen commented Apr 30, 2024

The primary use case I am suggesting supporting is for CI environments where a kubeconfig is not present because credentials are generated with an exec call. Backwards compatibility is assumed, always.


pst commented May 1, 2024

Exec call could mean many things. Can you show an example?


tkellen commented May 1, 2024

From the hashicorp/kubernetes provider docs:

provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    command     = "aws"
  }
}

Essentially everything the kubernetes provider supports with respect to connecting to clusters, I would be glad to add support for in this provider, in whatever way would be backwards compatible with existing functionality. The kubernetes provider is used widely across the Terraform ecosystem and as such reflects the needs of the community. I'd like this provider to support those needs (as mine are among them).


pst commented May 1, 2024

What you're trying to do, @tkellen, is possible by defining the kubeconfig as an HCL map and then passing it into kubeconfig_raw using Terraform's yamlencode function. There should be examples of how to do this in the issues; use the search.
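
For illustration, a sketch of that approach applied to the exec example above (the provider name kustomization and the exact kubeconfig layout are assumptions, not taken from this thread):

provider "kustomization" {
  # Build the kubeconfig as an HCL map and serialize it with yamlencode;
  # the exec block mirrors the hashicorp/kubernetes example above.
  kubeconfig_raw = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = "default"
      cluster = {
        server                       = var.cluster_endpoint
        "certificate-authority-data" = var.cluster_ca_cert
      }
    }]
    users = [{
      name = "default"
      user = {
        exec = {
          apiVersion = "client.authentication.k8s.io/v1beta1"
          command    = "aws"
          args       = ["eks", "get-token", "--cluster-name", var.cluster_name]
        }
      }
    }]
    contexts = [{
      name    = "default"
      context = { cluster = "default", user = "default" }
    }]
    "current-context" = "default"
  })
}
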

This issue is about something else. I am not planning to add individual kubeconfig attributes to the provider spec because I have zero interest in playing catch up supporting all of them.


tkellen commented May 1, 2024

Roger that. I'll fork the provider. The ergonomics of building a kubeconfig are not something I am interested in maintaining.

Here is another format your provider does not support:

data "aws_eks_cluster" "this_env" {
  name = local.name
}

data "aws_eks_cluster_auth" "this_env" {
  name = local.name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this_env.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this_env.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this_env.token
  }
}


tkellen commented May 1, 2024

To be clear, I appreciate the work you and the other contributors have done here, but it seems very strange that you don't support configuring your provider the way the kubernetes/helm/kubectl providers do.


tkellen commented May 1, 2024

If I extract the functionality from the kubernetes provider into a library and make it possible for this provider to utilize that functionality, would you consider adding this support? That would allow you to kick requests from users "upstream" to me. I would also seek to get the helm, kubectl and kubernetes providers to share the functionality, though obviously I can't guarantee it would be adopted this way.


pst commented May 4, 2024

@tkellen I doubt that maintaining a fork is less work than simply doing something like this and getting the host, CA and token path from the env vars inside the pod instead of via the AWS provider data sources like in the example.
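
As a sketch of that in-cluster variant (the paths are the standard Kubernetes service account mounts; the provider name, variable, and exact layout are assumptions; the server address would be passed in from the pod environment, e.g. TF_VAR_cluster_server="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"):

variable "cluster_server" {
  type = string
}

provider "kustomization" {
  kubeconfig_raw = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = "in-cluster"
      cluster = {
        server                  = var.cluster_server
        "certificate-authority" = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
      }
    }]
    users = [{
      name = "in-cluster"
      user = {
        # Read the service account token from the pod at runtime.
        tokenFile = "/var/run/secrets/kubernetes.io/serviceaccount/token"
      }
    }]
    contexts = [{
      name    = "in-cluster"
      context = { cluster = "in-cluster", user = "in-cluster" }
    }]
    "current-context" = "in-cluster"
  })
}
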

If you search the issues, you will find plenty of examples and plenty of explanation why I am not interested in duplicating things that can be set in a kubeconfig in the provider schema. Since this issue is about something else entirely, though, I kindly ask you to stop derailing the conversation now.


tkellen commented May 4, 2024

🙄 roger that, will fork.
