reverse mode eats up file descriptors in a hurry #184
Comments
Well, I may have spoken too soon about the workaround thing. I was doing a test rsync and gocryptfs came to a screeching halt about 1.4 GB in. A bunch of these are in the kernel's log:
Not sure where that limit is coming from.
Hi! Which gocryptfs version are you on? Looks like a file descriptor leak to me; rsync (and gocryptfs) should only hold a few files open at a time.
I can reproduce this with latest master.
Fixed by 48bd59f. There have been lots of changes to harden reverse mode against symlink attacks in the past few days, and one of them introduced an fd leak. Sorry about that!
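For readers curious what that kind of leak looks like, here is a hypothetical Go sketch (not the actual gocryptfs code or the contents of 48bd59f) of a symlink-safe lookup that opens a directory fd for `Openat` and has to release it on every path:

```go
// Hypothetical illustration of the leak pattern: a directory fd is opened
// so the file can be resolved without following symlinks, but forgetting
// to close it on some code path leaks one fd per lookup.
package main

import (
	"fmt"
	"syscall"
)

// openFileSafely opens "name" relative to "dir", refusing to follow
// symlinks at either level. The deferred Close is the kind of cleanup
// a leak fix typically adds.
func openFileSafely(dir string, name string) (int, error) {
	dirfd, err := syscall.Open(dir, syscall.O_RDONLY|syscall.O_DIRECTORY|syscall.O_NOFOLLOW, 0)
	if err != nil {
		return -1, err
	}
	defer syscall.Close(dirfd) // without this, every lookup leaks one fd

	fd, err := syscall.Openat(dirfd, name, syscall.O_RDONLY|syscall.O_NOFOLLOW, 0)
	if err != nil {
		return -1, err
	}
	return fd, nil
}

func main() {
	fd, err := openFileSafely("/etc", "hostname")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer syscall.Close(fd)
	fmt.Println("opened fd", fd)
}
```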
Wow, that was fast! I'll see if I can update and test later today.
I updated my repo, rebuilt it, and it's working great now. Open FDs steady at 16. Thanks!
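As an aside, one way to watch the fd count of a running mount is to list `/proc/<pid>/fd`. Here is a small, illustrative Go helper (the name and layout are my own, not part of gocryptfs):

```go
// fdcount: print the number of open file descriptors of a process,
// by counting the entries under /proc/<pid>/fd (Linux only).
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: fdcount <pid>")
		os.Exit(1)
	}
	entries, err := os.ReadDir("/proc/" + os.Args[1] + "/fd")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir:", err)
		os.Exit(1)
	}
	fmt.Printf("open fds: %d\n", len(entries))
}
```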
Howdy! First I want to say thanks for all of the effort put into this project. This is what makes open source work.
Next, a little background. I have a Raspberry Pi 3 running Raspbian Stretch (9) set up as a local backup server for all of my important data. I have been looking for a good way to push these backups offsite and after a lot of research settled on running gocryptfs in reverse mode and then rsyncing the data up to a cheap VPS.
While I was testing this out, however, I found that the rsync didn't get very far before gocryptfs failed. A look at the logs indicated that the max open file limit was reached. I found issue #82, which describes essentially the same problem and resulted in commit 2d43288, which tries to raise the max open files limit to 4096 automatically.
The reason I'm opening this issue is that I believe this is still too small for a lot of workloads. My data set, for example, is 221 GB in size and growing daily. This is 135548 inodes. If I run gocryptfs as root, the highest I can raise the limit is 1048576, so I'm less than an order of magnitude from hitting that too. But it is an acceptable workaround (for me) for now.
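For reference, here is a minimal Go sketch of what raising RLIMIT_NOFILE from inside a process looks like; the 4096 target comes from the issue text above, and the rest is illustrative rather than the actual code from commit 2d43288:

```go
// Minimal sketch: query the current RLIMIT_NOFILE and raise the soft
// limit toward a target, capped at the hard limit (raising the hard
// limit itself requires privileges).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("current soft=%d hard=%d\n", lim.Cur, lim.Max)

	const want = 4096 // target taken from the issue discussion
	if lim.Cur < want {
		lim.Cur = want
		if lim.Cur > lim.Max {
			lim.Cur = lim.Max // cannot exceed the hard limit unprivileged
		}
		if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
			fmt.Println("setrlimit:", err)
			return
		}
	}
	fmt.Printf("new soft limit: %d\n", lim.Cur)
}
```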
One obvious question to ask is: Does gocryptfs really have to hold open every single file and directory that is "browsed" until the filesystem is unmounted? I presume there is a good reason; I just can't think of one given my naive understanding of how gocryptfs works. Another implication of this design is that holding lots of files open (potentially for a long time) isn't great from the perspective of other software accessing the plaintext data at the same time. For example, if a bunch of plaintext files are deleted, they're not really gone until the gocryptfs mount is torn down.
Even if nothing can be done about this in the short term, it would probably be a good thing to mention in the documentation, so that others are aware of the need to raise the max open files limit on the gocryptfs process for largish sets of files.
I also want to mention that I am amazed how well gocryptfs works on a Raspberry Pi 3. Memory usage is low and it's decently fast even while pushing all four of those little ARM cores to their limit.
Thanks!