
Large memory regression #8589


Closed
huonw opened this issue Aug 18, 2013 · 4 comments

Comments

@huonw
Member

huonw commented Aug 18, 2013

It appears that #8235 caused a ~500MB memory regression.

The plot is very noisy because there were a few non-bors pushes to master and the graph is purely time-based, so commits are shown when they were authored, not merged; but efd6eaf (the last "low" commit) is the parent of 2a7be1c (the merge of #8235), so I'm fairly confident the regression was caused by that PR, quite possibly the visitor changes.

cc @pcwalton (and @pnkfelix, I guess).

@nejucomo

It looks like the memory profiling on Is Rust Slim Yet? profiles running rustc on librustc. This does not distinguish whether the memory leak is in the rustc application code, the libraries it depends on (such as std), the runtime support, or the generated code.

Can anyone suggest a way to distinguish these cases? For example, if there is similar memory profiling for servo, another fairly complex application, and it shows a correlated regression, that would point at std, extra, runtime support, or generated code rather than rustc's own application code. If servo shows no correlated memory regression, that suggests the leak is in rustc itself.

That's not very precise, but if the data is already there it could suggest where to look. How can I learn about memory profiling tools that work well on Rust code?

Anyway, I'm going to read over the diff between the parents of 2a7be1c, which are efd6eaf and 9457ebe, to see if I notice anything suspicious by code inspection.

@alexcrichton
Member

I believe I've found the regression. One of the lint passes is holding onto the lint Context, and the lint Context is holding onto the pass, resulting in a cycle. As a result, the tcx held by the Context is never freed.

@cmr graciously created a memory graph where no lint passes are run. I'm working on a fix.
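
For illustration, the cycle described above has roughly this shape (a minimal sketch using modern Rc/RefCell in place of the 2013-era @-boxes; `Context`, `Pass`, and `TypeCtxt` are simplified stand-ins for the real lint Context, lint pass, and tcx, not the actual librustc types):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct TypeCtxt {
    // ...large amount of compiler state...
}

struct Pass {
    // The pass holds a strong pointer back to the Context...
    cx: Option<Rc<RefCell<Context>>>,
}

struct Context {
    tcx: Rc<TypeCtxt>,
    // ...and the Context holds strong pointers to its passes.
    passes: Vec<Rc<RefCell<Pass>>>,
}

fn main() {
    let tcx = Rc::new(TypeCtxt {});
    let cx = Rc::new(RefCell::new(Context { tcx, passes: Vec::new() }));
    let pass = Rc::new(RefCell::new(Pass { cx: Some(cx.clone()) }));
    cx.borrow_mut().passes.push(pass);

    // When `cx` goes out of scope, the strong counts never reach zero:
    // Context -> Pass -> Context. With plain reference counting (Rc today,
    // @-boxes then), everything reachable from the cycle, including the
    // large tcx, is never freed.
}
```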

@emberian
Member

emberian commented Oct 2, 2013

@alexcrichton so it's not ideal, but could the lint passes hold a &'static reference to the context? It will always outlive them, and it'd be a gig of memory savings.

@alexcrichton
Member

Taking a look at lint, it has a large number of other problems, so I'd rather just do this right this time around. I should have a pull request up by tonight.

bors added a commit that referenced this issue Oct 3, 2013
This purges about 500 lines of visitor cruft from lint passes. All lints are
handled in a much more sane way at this point. The other huge bonus of this
commit is that there are no more @-boxes in the lint passes, fixing the 500MB
memory regression seen when the lint passes were refactored.

Closes #8589
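
The fix referenced above restructures the lint passes so that they never store the Context; it is only borrowed for the duration of each callback. A minimal sketch of that cycle-free shape (names such as `LintPass::check_item` and `run_lints` are assumed for illustration, not the actual librustc API of the time):

```rust
// The Context borrows the tcx, and each pass only receives `&Context` while
// it runs, so no pass ever owns a pointer back to the Context and no
// reference cycle can form.
struct TypeCtxt;

struct Context<'a> {
    tcx: &'a TypeCtxt,
}

trait LintPass {
    fn check_item(&mut self, cx: &Context<'_>, item_name: &str);
}

struct UnusedVarsPass;

impl LintPass for UnusedVarsPass {
    fn check_item(&mut self, _cx: &Context<'_>, item_name: &str) {
        // Inspect the item via the borrowed context; nothing is stored.
        println!("checking {}", item_name);
    }
}

fn run_lints(tcx: &TypeCtxt, passes: &mut [Box<dyn LintPass>], items: &[&str]) {
    let cx = Context { tcx };
    for item in items {
        for pass in passes.iter_mut() {
            pass.check_item(&cx, item);
        }
    }
    // `cx` is dropped here; the tcx is freed whenever its owner drops it.
}

fn main() {
    let tcx = TypeCtxt;
    let mut passes: Vec<Box<dyn LintPass>> = vec![Box::new(UnusedVarsPass)];
    run_lints(&tcx, &mut passes, &["foo", "bar"]);
}
```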