
refactor: separate tsconfck caches per config in a weakmap #17317

Open · wants to merge 2 commits into base: v6/environment-api

Conversation

dominikg (Contributor)

Description

Alternative to #17304.

The WeakMap keyed on the resolved config ensures that multiple Vite dev server instances don't share a single global tsconfck cache.
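A minimal sketch of the idea (illustrative only, not the exact diff; the helper name is made up, and the cache type follows tsconfck's exported `TSConfckCache`):

```ts
import type { ResolvedConfig } from 'vite'
import { TSConfckCache, type TSConfckParseResult } from 'tsconfck'

// One tsconfck cache per resolved config instead of a module-level singleton.
// A destroyed server releases its config, and the WeakMap entry goes with it.
const tsconfckCaches = new WeakMap<
  ResolvedConfig,
  TSConfckCache<TSConfckParseResult>
>()

function getTSConfckCache(config: ResolvedConfig) {
  let cache = tsconfckCaches.get(config)
  if (!cache) {
    cache = new TSConfckCache()
    tsconfckCaches.set(config, cache)
  }
  return cache
}
```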


bluwy (Member) commented May 27, 2024

Why can't multiple dev servers share the same cache? Shouldn't it be independent of the values of the config?

dominikg (Contributor, Author)

> Why can't multiple dev servers share the same cache? Shouldn't it be independent of the values of the config?

@patak-dev mentioned that some projects use multiple server instances simultaneously with different configs. They might watch entirely different subtrees of a monorepo, and having a cache for each one limits refreshes to that subtree.

bluwy (Member) commented May 27, 2024

I see. Though there is still the tradeoff that if the subtrees overlap, we would duplicate the cache in memory. Personally, since editing tsconfig.json isn't common and full reloads shouldn't be too bad, I think it's okay to restart entirely for now. Or, if we want to fix this, we could keep a separate cache of watched tsconfig file paths per server only (see the sketch below)?
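A rough sketch of that alternative, with hypothetical names (the shared parse cache would stay global; only the watched-paths bookkeeping is per server):

```ts
import type { ViteDevServer } from 'vite'

// Remember, per server, which tsconfig files that server watches, so a
// change to one file only has to restart the servers that watch it.
const watchedTSConfigs = new WeakMap<ViteDevServer, Set<string>>()

function watchTSConfig(server: ViteDevServer, file: string) {
  let files = watchedTSConfigs.get(server)
  if (!files) {
    files = new Set()
    watchedTSConfigs.set(server, files)
  }
  files.add(file)
  server.watcher.add(file)
}

function serverWatchesTSConfig(server: ViteDevServer, file: string) {
  return watchedTSConfigs.get(server)?.has(file) ?? false
}
```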

dominikg (Contributor, Author)

It's not only about configs outside of the root: even if a new tsconfig is added inside, you have to restart that server. It's kind of niche because most setups only involve one dev server, so they won't see much of a difference, but it can help with multiples. Reading and caching twice where subtrees overlap is simply the nature of that overlap.

Given that we try to avoid global caches (even more so with the new environments splitting things further), I'm inclined to follow this one. The alternative PR still has that global cache.

bluwy (Member) commented May 27, 2024

Do you mean that the current server top-level variable causes only one server to restart? I agree that it would be good to handle multiple servers there, but it looks like the current v6 branch already fixed that?

> It's not only about configs outside of the root: even if a new tsconfig is added inside, you have to restart that server.

I still don't quite understand why we need to duplicate the cache per server though. If a tsconfig is added inside a root, the server already restarts today? If it's outside the root (e.g. another subpackage), in the current v6 branch the server will not be watching that other root and tsconfig.json, so it should rightfully not restart?

dominikg (Contributor, Author)

> Do you mean that the current server top-level variable causes only one server to restart? I agree that it would be good to handle multiple servers there, but it looks like the current v6 branch already fixed that?
>
> > It's not only about configs outside of the root: even if a new tsconfig is added inside, you have to restart that server.
>
> I still don't quite understand why we need to duplicate the cache per server though. If a tsconfig is added inside a root, the server already restarts today? If it's outside the root (e.g. another subpackage), in the current v6 branch the server will not be watching that other root and tsconfig.json, so it should rightfully not restart?

But the tsconfck cache would still get flushed if it is just one global one...

bluwy (Member) commented May 27, 2024

I think it's OK since editing tsconfig.json is not common, and the memory taken by the cache is something people will likely hit more often?

If we can solve granular invalidation (which is understandably tricky), we can also keep the current single cache today.

patak-dev (Member)

We currently have a separate FsUtils cache per ResolvedConfig and also a package cache per config, no? This is why I thought the tsconfck cache is similar. Is the tsconfck cache a lot bigger than the other ones? To be fair, I tried to push for the FsUtils to share the state internally, but it didn't progress much because of the added complexity.

But I'm OK keeping a global one too. Restarting all servers is an acceptable tradeoff for sharing a single one. @dominikg I think #17304 won't do that though: it currently only restarts the server that is watching the json file and clears the global cache, so the other servers can end up with outdated state. For this to work, it should maybe be similar to the current code in Vite 5, but with a WeakSet keeping all the servers instead of a single Server global variable. If that is the case, you may also be able to keep back compat and push the change to Vite 5.3, because you could still use the servers in the weak set when the function is called as a free function (without a server).
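A sketch of that shape (names illustrative; note a WeakSet itself can't be iterated, so restarting every tracked server needs a plain Set with explicit cleanup, or a collection of WeakRefs):

```ts
import type { ViteDevServer } from 'vite'

// Track every live server instead of a single global `server` variable.
const servers = new Set<ViteDevServer>()

export function trackServer(server: ViteDevServer) {
  servers.add(server)
  // Drop the reference once the server shuts down (a no-op in middleware
  // mode, where httpServer is null).
  server.httpServer?.once('close', () => servers.delete(server))
}

// On a tsconfig.json change: flush the shared cache, restart all servers.
export async function onTSConfigChange(clearGlobalCache: () => void) {
  clearGlobalCache()
  await Promise.all([...servers].map((server) => server.restart()))
}
```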

dominikg (Contributor, Author) commented Jun 4, 2024

Granular invalidation is too complex, and given how rare it is to edit tsconfig while dev is running, I'd still avoid it.

If we keep one global tsconfck cache, the question becomes how we deal with ensureWatchedFile if there are multiple servers, and also how to reload all environments of all servers if just one of them detected a change that flushed the global tsconfck cache.

Having one tsconfck cache per server (and its watcher) solves this, as each server is in control of its own fate.

Keeping one global cache could work if we declared a breaking change and no longer called ensureWatchedFile for discovered tsconfig files outside of already-watched areas.

patak-dev (Member)

We have a free exported function transformWithEsbuild() that currently uses the global server to ensure new tsconfig.json files are watched when they're out of root. We should change this. If you call this function, then destroy the server and start a new one, the global tsconfck cache can end up with an entry that is no longer watched.

Actually, this is a problem in general. If we have a single tsconfck cache, every time a server is destroyed, all the cached tsconfig.json that are being watched should be removed from the cache. And because a single change will trigger full reloads across the board, this means that every time we destroy a server, we should reload all the others?

I think that properly sharing this cache is quite hard, as it's tied to the assumption of an existing watcher lifecycle. What this PR is doing with a cache per ResolvedConfig works (same as what we are doing with FsUtils) because we have a 1-1 ResolvedConfig <-> Server relationship (we create the config as part of createServer, whose first line is a resolveConfig call, and a config can't be reused to create two servers). We can also trust this because all our internal plugins' weakmap caches work in the same way. This isn't great though: we are cheating to get the watcher when we don't have the Server at hand (ResolvedConfig <-> Server <-> Watcher). In Vite 6, internal plugin caching changes because the weakmap caches are in general per Environment instead of per ResolvedConfig, and the environment instance has access to the watcher.

We still have the same per-server (per-watcher, actually) caches in Vite 6. FsUtils, PackageData, etc. all work against the server's ResolvedConfig. Because we have environment.config to get the resolved config, we can get access to any of these caches too.
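For example (a sketch assuming the v6 environment API exposes `DevEnvironment` with an `environment.config` getter, as described above; the cache itself is illustrative):

```ts
import type { DevEnvironment, ResolvedConfig } from 'vite'

const perConfigCache = new WeakMap<ResolvedConfig, Map<string, unknown>>()

// Inside an environment-aware plugin hook, the environment points back at
// the server's ResolvedConfig, so every per-config cache stays reachable.
function cacheFor(environment: DevEnvironment) {
  let cache = perConfigCache.get(environment.config)
  if (!cache) {
    cache = new Map()
    perConfigCache.set(environment.config, cache)
  }
  return cache
}
```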

Probably the right thing to do here is to have a new Caches abstraction (associated with a server's watcher) that we can pass around (kind of like the package data cache).
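One hypothetical shape for such an abstraction (illustrative only, not a Vite API):

```ts
// Bundle the per-server caches behind one object whose lifetime matches the
// server's watcher, so nothing cached can outlive the watcher keeping it
// fresh.
interface ServerCaches {
  tsconfck: Map<string, unknown>
  packageData: Map<string, unknown>
  dispose(): void
}

function createServerCaches(): ServerCaches {
  const tsconfck = new Map<string, unknown>()
  const packageData = new Map<string, unknown>()
  return {
    tsconfck,
    packageData,
    // Call from server.close(), right alongside watcher.close().
    dispose() {
      tsconfck.clear()
      packageData.clear()
    },
  }
}
```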

I don't think we need to do a refactoring here in Vite 6 given how much has changed already, and I don't really know how deep it would go. Associating caches to ResolvedConfig and assuming each server has a single ResolvedConfig is working well so far, so I'd say we keep doing the same for now. It would be good to review every cache in Vite later on (Vite 7?) and see if we can clean things up.

Given the difficulties of keeping a global cache, I think we have two clean options:

  • Do a breaking change in Vite 6 and define that changes in tsconfig.json are not watched at all by a Vite process. You'll need to kill it and start again.
  • Merge this PR (or a similar approach) moving the tsconfck cache to be per-watcher (by keying with a ResolvedConfig, as we do with FsUtils). Then we can ensure that the cache for a server will properly reload the server when needed.

Or we keep trying to have a global cache that only works in some cases, and we document which things are supported. My vote goes to one of the two clean options above.
