
Add ADR about failed multus integration #9434

Open: wants to merge 1 commit into master from the multusADR branch

Conversation

@manuelbuil (Contributor) commented Feb 8, 2024

Proposed Changes

I worked for a while on the integration of k3s and Multus, but it turned out to be quite complicated. This ADR explains what was tried and why I finally gave up.

Types of Changes

New feature

Verification

Testing

Linked Issues

User-Facing Change


Further Comments

@manuelbuil requested a review from a team as a code owner on February 8, 2024 13:55
@brandond (Contributor) left a comment

LGTM. I usually just write it up in the final format the ADR will be accepted in, i.e. marked as Approved, with the design suggestion section written as if it were the Decision... that way people can approve it and you don't need to update it again and get another round of approvals just for minor rewording.


The suggestion is to include a new `--multus` boolean flag. When that flag is activated, we deploy a Kubernetes Job that will run on that node and will:
* Run the [cni-plugins image](https://github.com/rancher/image-build-cni-plugins) which will deploy in the node all the widely used CNI binaries (e.g. macvlan), the multus binary and the whereabouts binary
* Install the required multus CRD `network-attachment-definitions.k8s.cni.cncf.io` and the whereabouts CRDs `ippools.whereabouts.cni.cncf.io` & `overlappingrangeipreservations.whereabouts.cni.cncf.io`
@brandond (Contributor) commented Feb 10, 2024

I would probably do this the way we handle disabled components in RKE2: ship these as a packaged manifest, and then just add them to the --disable list if --multus isn't set.

* Create the appropriate Multus CNI config file and place it in the right path
* Start the whereabouts controller

These tasks are more or less what the upstream Multus and whereabouts daemonsets do, but, at least for Multus, it seems overkill to have a daemonset running forever just to deploy the binaries and create the correct multus CNI config. A Job that runs the `cni-plugins` image once the node comes up seems like a better solution.
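For illustration, a minimal sketch of what such a per-node install Job could look like. The image reference, paths, and node name are assumptions for the sake of the example, not taken from the PR:

```yaml
# Hypothetical sketch of the proposed per-node install Job (image, paths, node name assumed)
apiVersion: batch/v1
kind: Job
metadata:
  name: multus-cni-install-node1
  namespace: kube-system
spec:
  template:
    spec:
      nodeName: node1                # each node needs its own explicitly pinned Job
      restartPolicy: OnFailure
      containers:
        - name: cni-plugins
          image: rancher/hardened-cni-plugins:latest   # assumed image reference
          volumeMounts:
            - name: cni-bin
              mountPath: /host/opt/cni/bin             # target for the copied CNI/multus binaries
      volumes:
        - name: cni-bin
          hostPath:
            path: /opt/cni/bin       # on k3s this would be the versioned CNI bin dir instead
```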
Contributor

I'm not sure a Job is the best way to do this, given that the process for assigning job pods to specific nodes is a bit clumsy. We would need to reinvent much of the logic around creating and managing job-node assignment, and this logic already exists in the daemonset controller. I think we should either drop the required files directly from the k3s supervisor process, or use a daemonset to ensure that the install pod gets run on every node.
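For comparison, a rough sketch of the daemonset-based install approach mentioned here, where the scheduler handles the per-node fan-out and the install container simply idles after copying the binaries (again, image and paths are assumptions):

```yaml
# Hypothetical sketch of a DaemonSet install pod (image and paths assumed)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: multus-cni-install
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: multus-cni-install
  template:
    metadata:
      labels:
        app: multus-cni-install
    spec:
      containers:
        - name: cni-plugins
          image: rancher/hardened-cni-plugins:latest   # assumed image reference
          # copy the binaries onto the host, then idle so the pod stays Running
          command: ["sh", "-c", "cp /opt/cni/bin/* /host/opt/cni/bin/ && sleep infinity"]
          volumeMounts:
            - name: cni-bin
              mountPath: /host/opt/cni/bin
      volumes:
        - name: cni-bin
          hostPath:
            path: /opt/cni/bin
```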

@manuelbuil (Contributor, Author)

If it is too clumsy, then perhaps it is best for k3s to do that job itself. But then I don't want to increase the size of k3s by adding too many binaries that most people won't use... I'm starting to wonder if the idea from the user who suggested adding "macvlan" to the CNI multi-exec might be the best one :D

@manuelbuil (Contributor, Author)

macvlan + multus + whereabouts + ... I'll have fun over the next few days.

@manuelbuil (Contributor, Author)

Adding multus+whereabouts to the multi-exec is not worth it:

* The Multus binary increases the size of k3s significantly. Given that only a handful of users will benefit from it, I don't think it makes sense.
* The whereabouts binary has very old dependencies, and the project actually looks somewhat unmaintained. It would bring in a lot of CVEs, and we don't want that.

Contributor

Open question: do we want to add the multus images to the core k3s airgap image list, or should we have an optional image tarball, like we do for rke2?

@manuelbuil (Contributor, Author) commented Feb 16, 2024

Good point; the latter probably makes more sense. What would be the disadvantages?

@brandond (Contributor) commented Feb 19, 2024

The disadvantage of adding an optional tarball is that we have to update build scripts, QA processes, and documentation to handle optional tarballs, and we have never had more than one for k3s. How big are the additional images?

@manuelbuil (Contributor, Author) commented Feb 23, 2024

Yeah. To minimize that pain, I'd keep the same names for the current tarball (k3s-airgap-images-${ARCH}.tar) and the images file (k3s-images.txt).
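As a sketch, the extra entries appended to k3s-images.txt could look something like the lines below. The image names and tags are purely hypothetical placeholders, since the PR does not list them:

```
docker.io/rancher/hardened-multus-cni:v4.0.2       # hypothetical
docker.io/rancher/hardened-cni-plugins:v1.4.0      # hypothetical
docker.io/rancher/hardened-whereabouts:v0.6.3      # hypothetical
```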

@manuelbuil force-pushed the multusADR branch 5 times, most recently from 8455315 to 5a04cb4 on February 15, 2024 17:07
@manuelbuil changed the title from "Multus adr" to "Integrate k3s and multus" on Feb 15, 2024
codecov bot commented Feb 15, 2024

Codecov Report

Attention: Patch coverage is 33.33333%, with 14 lines in your changes missing coverage. Please review.

Project coverage is 43.77%. Comparing base (06b6444) to head (59ef7c7).

❗ Current head 59ef7c7 differs from the pull request's most recent head 916b122. Consider uploading reports for commit 916b122 to get more accurate results.

Files                                 Patch %   Lines
pkg/deploy/zz_generated_bindata.go      0.00%   12 Missing ⚠️
pkg/server/server.go                   60.00%   1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #9434      +/-   ##
==========================================
- Coverage   48.66%   43.77%   -4.89%     
==========================================
  Files         158      151       -7     
  Lines       14017    13580     -437     
==========================================
- Hits         6821     5945     -876     
- Misses       5906     6416     +510     
+ Partials     1290     1219      -71     
Flag        Coverage Δ
e2etests    39.29% <33.33%> (-7.21%) ⬇️
inttests    39.38% <33.33%> (?)
unittests   ?


@manuelbuil force-pushed the multusADR branch 4 times, most recently from 5073222 to a2f3a96 on February 23, 2024 07:02
@manuelbuil force-pushed the multusADR branch 5 times, most recently from 47541a7 to 1efc154 on March 6, 2024 14:46
dereknola previously approved these changes Mar 8, 2024
@brandond (Contributor) left a comment

The k3s-multus chart has the same amd64 nodeSelector issue as the rke2 chart. We should fix that before we add the chart to k3s, since we do fully support arm64.

https://github.com/k3s-io/k3s-charts/blob/main/charts/multus/4.0.201%2Bupv4.0.2-build2024020801/values.yaml#L81
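For context, a sketch of the kind of values change being suggested here; the exact keys in the k3s-multus chart may differ, so treat this as illustrative only:

```yaml
# current chart values (sketch): pods are pinned to amd64 nodes only
nodeSelector:
  kubernetes.io/arch: amd64
---
# suggested direction (sketch): drop the arch pin so arm64 nodes get scheduled too
nodeSelector:
  kubernetes.io/os: linux
```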

@manuelbuil (Contributor, Author)

This PR is postponed until the April release.

@@ -279,6 +287,7 @@ func stageFiles(ctx context.Context, sc *Context, controlConfig *config.Control)
"%{SYSTEM_DEFAULT_REGISTRY}%": registryTemplate(controlConfig.SystemDefaultRegistry),
"%{SYSTEM_DEFAULT_REGISTRY_RAW}%": controlConfig.SystemDefaultRegistry,
"%{PREFERRED_ADDRESS_TYPES}%": addrTypesPrioTemplate(controlConfig.FlannelExternalIP),
"%{DATA_DIR}%": CNIBinDir,
Contributor

Let's not call this DATA_DIR - this could be confusing, as normally that would refer to the value of the --data-dir flag.

Suggested change
"%{DATA_DIR}%": CNIBinDir,
"%{CNI_BIN_DIR}%": CNIBinDir,

Contributor

There are other problems with injecting the server's CNI bin dir path into the chart like this though:

  1. The CNI bin dir is version dependent. The value will change whenever the server is upgraded, and will only work if all nodes are on the exact same build of k3s:
     brandond@seago:~$ grep bin_dir /var/lib/rancher/k3s/agent/etc/containerd/config.toml
     bin_dir = "/var/lib/rancher/k3s/data/0bd989dbe82165818ee98da4cc70b043adabccc23827c9bbd24b1460d4f3f2e1/bin"
  2. The CNI bin dir is prefixed by --data-dir, which may be set to different values on different hosts.

You're going to need to find some way to set this on a per-node basis, instead of injecting it via the chart. You could MAYBE get away with hardcoding it as /var/lib/rancher/k3s/data/current, but that only fixes problem 1, not problem 2.
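A sketch of the "MAYBE" workaround mentioned above, hardcoding the path behind the data/current symlink instead of templating the server's own CNI bin dir. As noted, this would only address point 1 and would still break for nodes with a non-default --data-dir:

```yaml
# Sketch only: hardcode the symlinked path rather than injecting the server's CNI bin dir
config:
  cni_conf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: /var/lib/rancher/k3s/data/current/bin   # assumes the default --data-dir on every node
```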

@manuelbuil (Contributor, Author)

Thanks Brad, those are great points I had not thought about.

@@ -157,6 +157,7 @@ type CriticalControlArgs struct {
FlannelIPv6Masq bool `cli:"flannel-ipv6-masq"`
FlannelExternalIP bool `cli:"flannel-external-ip"`
EgressSelectorMode string `cli:"egress-selector-mode"`
Multus bool `cli:"multus"`
Contributor

looks like this needs tidying

@@ -397,6 +398,11 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
serverConfig.ControlConfig.Disables["ccm"] = true
}

if !serverConfig.ControlConfig.Multus {
Contributor

Do we require that the bundled multus be used alongside the embedded flannel? Should we raise an error if multus is enabled, but flannel backend is set to none?
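For example, a server configuration like the following combines the new flag with a disabled embedded flannel, which is presumably the case such a validation would reject (sketch only):

```yaml
# /etc/rancher/k3s/config.yaml (sketch): multus enabled while the embedded flannel is disabled
flannel-backend: none
multus: true
```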

config:
  cni_conf:
    confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
    binDir: %{DATA_DIR}%
Contributor

Suggested change
    binDir: %{DATA_DIR}%
    binDir: %{CNI_BIN_DIR}%

@manuelbuil changed the title from "Integrate k3s and multus" to "Add ADR about failed multus integration" on Apr 15, 2024

## Alternatives

* Deploy multus and whereabouts like traefik, reusing the multus chart in rke2-charts. However, that will deploy a daemonset for both multus and whereabouts, which will consume more resources.
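For reference, deploying it "like traefik" would mean shipping a HelmChart manifest that the embedded Helm controller applies. A rough sketch, with the chart name, repo URL, and values as assumptions rather than anything taken from the PR:

```yaml
# Sketch of a packaged HelmChart manifest for multus (chart name, repo, and values assumed)
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: multus
  namespace: kube-system
spec:
  chart: rke2-multus
  repo: https://rke2-charts.rancher.io   # assumed chart repository
  targetNamespace: kube-system
  valuesContent: |-
    config:
      cni_conf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
```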
Contributor

Considering Brad's comment above about using a daemonset, this looks like it would be the simplest way to deploy multus+whereabouts?

@manuelbuil (Contributor, Author) commented Apr 16, 2024

I think you are reading an old version of the ADR 🤔. The latest version explains the reasons why this option is complicated. In the end, it was the option I tried.
