unexpected memory usage of memory tracker #54005

Open
XuHuaiyu opened this issue Jun 13, 2024 · 3 comments · May be fixed by #54095

@XuHuaiyu
Contributor

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

data.txt
query.txt
schema.txt

# mysql -h 10.71.205.177 -uroot -P 4000 --local-infile=1 --comments
source schema.txt
source data.txt
source query.txt

After executing this SQL, the heap profile shows that the memory usage attributable to the trackers themselves is significantly high. This behavior is unexpected for two reasons:

  1. The trackers' own memory should not account for such a large share of the heap.
  2. This memory is itself untracked, so it is not included in the total memory usage of the SQL statement. That can keep the statement from being canceled in time, creating an out-of-memory risk (see the sketch below the screenshot).
[Screenshot: heap profile, 2024-06-13 17:38:43]
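For context on point 2, here is a minimal standalone sketch (the Tracker type and its methods are hypothetical simplifications, not TiDB's real pkg/util/memory API) of how a hierarchical memory tracker only accounts for bytes explicitly reported to it, so the trackers' own allocations stay invisible to the statement-level quota:

package main

import "fmt"

// Tracker is a simplified stand-in for a hierarchical memory tracker.
// It only sums the bytes that callers explicitly report via Consume;
// the heap used by the Tracker structs themselves is never counted.
type Tracker struct {
	label    string
	consumed int64
	parent   *Tracker
	children []*Tracker
}

// AttachTo links t under parent so that t's consumption rolls up into
// the parent's total.
func (t *Tracker) AttachTo(parent *Tracker) {
	t.parent = parent
	parent.children = append(parent.children, t)
}

// Consume propagates a byte delta up to the root, which is where a
// statement-level quota check would observe the aggregate usage.
func (t *Tracker) Consume(bytes int64) {
	for cur := t; cur != nil; cur = cur.parent {
		cur.consumed += bytes
	}
}

func main() {
	stmt := &Tracker{label: "statement"}
	exec := &Tracker{label: "executor"}
	exec.AttachTo(stmt)
	exec.Consume(1 << 20) // 1 MiB of operator data is reported...

	// ...but the Tracker structs, labels, and children slices allocated
	// above are real heap usage that never appears in stmt.consumed, so
	// a quota check against stmt cannot see it.
	fmt.Printf("tracked total: %d bytes\n", stmt.consumed)
}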

2. What did you expect to see? (Required)

3. What did you see instead (Required)

4. What is your TiDB version? (Required)

nightly
8d28b23

@XuHuaiyu
Contributor Author

diff --git a/pkg/executor/distsql.go b/pkg/executor/distsql.go
index ce1eaf7a2b..6bd3a93a7c 100644
--- a/pkg/executor/distsql.go
+++ b/pkg/executor/distsql.go
@@ -833,6 +833,7 @@ func (e *IndexLookUpExecutor) Close() error {
        e.tblWorkerWg.Wait()
        e.finished = nil
        e.workerStarted = false
+       e.memTracker.Detach()
        e.memTracker = nil
        e.resultCurr = nil
        return nil

After the above change, the unexpected memory consumption of IndexLookUp is eliminated.
Before:
[Screenshot: heap profile before the change, 2024-06-17 12:03:12]
After:
[Screenshot: heap profile after the change, 2024-06-17 12:03:25]
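A plausible reading of why the Detach call matters, sketched by extending the mock Tracker from the earlier comment (again hypothetical, not TiDB's actual implementation): e.memTracker = nil only drops the executor's own reference, but the parent tracker's children slice still points at the tracker, keeping it and everything attached beneath it reachable for the lifetime of the statement. Detach removes that edge:

// Detach unlinks t from its parent's children slice. Without this,
// setting the executor's field to nil leaves t reachable from the
// long-lived statement-level tracker, so neither t nor anything
// attached under it can be garbage collected.
func (t *Tracker) Detach() {
	if t.parent == nil {
		return
	}
	siblings := t.parent.children
	for i, c := range siblings {
		if c == t {
			t.parent.children = append(siblings[:i], siblings[i+1:]...)
			break
		}
	}
	t.parent = nil
}

Since a single statement can open and close IndexLookUp many times (for example, under Apply), each missing Detach leaves one more dead tracker in the parent's children slice, which matches the growth visible in the profile.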

@XuHuaiyu
Contributor Author

Even after resetting some system variables, the heap usage is still unexpectedly high.

tidb:4000 [cmmcdb]> select @@tidb_enable_tmp_storage_on_oom, @@tidb_hash_join_concurrency, @@tidb_mem_quota_apply_cache;
+----------------------------------+------------------------------+------------------------------+
| @@tidb_enable_tmp_storage_on_oom | @@tidb_hash_join_concurrency | @@tidb_mem_quota_apply_cache |
+----------------------------------+------------------------------+------------------------------+
|                                0 |                            1 |                            0 |
+----------------------------------+------------------------------+------------------------------+
[Screenshot: heap profile, 2024-06-17 16:08:08]

@XuHuaiyu
Contributor Author

During the investigation of this issue, five contributing problems were found (a sketch of the common pattern follows the list):

  1. IndexLookUp did not detach its memTracker.
  2. During the fetchAllInners process in NestedLoopApply, a new List is created and put into the cache in every round; the memTrackers of these Lists occupy a large amount of memory.
  3. The diskTracker of HashJoin was not detached.
  4. The memTracker and diskTracker of RowContainer were not detached, and the memTracker of RowContainer.records is attached to RowContainer.memTracker. As a result, the memTracker of each new List created by RowContainer cannot be garbage collected.
  5. Apply repeatedly opens and closes HashJoin, so the ActionSpill objects registered in sc.MemTracker accumulate. Since each ActionSpill references a RowContainer, the old RowContainers cannot be garbage collected, and their memTrackers keep accumulating.
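To make the pattern behind these items concrete, here is a sketch reusing the same mock Tracker from the first comment (hypothetical names; the real operators are TiDB's Apply, HashJoin, and RowContainer): when Apply re-opens its inner side once per outer row, each round attaches a fresh child tracker, and without a matching Detach the statement-level tracker pins every round's tracker and the List it accounts for.

func main() {
	stmt := &Tracker{label: "statement"}

	// Apply re-opens and closes the inner executor once per outer row.
	for round := 0; round < 3; round++ {
		rc := &Tracker{label: fmt.Sprintf("RowContainer#%d", round)}
		rc.AttachTo(stmt)
		rc.Consume(4 << 20) // per-round List data

		// The matching rc.Detach() is missing here: rc stays reachable
		// from stmt, so the List it tracks can never be collected even
		// after the inner executor is closed.
	}

	// All three per-round trackers are still attached.
	fmt.Printf("dead trackers still attached: %d\n", len(stmt.children))
}

Problems 1, 3, and 4 are instances of this missing-Detach pattern; problem 5 is the same retention issue, except the old RowContainers are pinned through ActionSpill references rather than through the tracker tree.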
