Feature request: zfs disk i/o monitoring tool on Ubuntu/Debian #6763
Comments
Does filetop do what you want? It is not ZFS-specific, but should work fine. For files, pretty much any tool that observes the VFS layer should work.
@richardelling Hmm, I can't seem to locate a package for Xenial. It seems a bit overkill to compile all this crap just to test it out :O) Maybe I'll try... Would this not suffer from the same problem I have with iotop: lots of noise, and essentially it would just list "faceless" ZFS processes which I have a tough time tracking down to a specific datapool / container?
You shouldn't see any ZFS processes doing file I/O at the VFS layer. Historically, VFS-layer metrics are kept at that generic layer. Today there are no per-dataset (filesystem) metrics in most ZFS implementations, because the measurements are intrusive and other methods work well for sampling. For example, a ZFS implementation can easily have billions of files, and a billion counters would take a lot of RAM -- unlikely to be a suitable trade-off. I suggest you look for tools at the VFS layer, especially those that allow filters.
To bring this up again: a dataset-level equivalent of zpool iostat -- zfs iostat [-r|-d depth] [interval] (plus the usual flags to make parsing easier) -- would certainly be very helpful for tracking down what is going on in the system. Should tracking the metrics by default prove too expensive, collection could be toggled by a dataset-level property.
Looks like some stats per dataset are available now via #7705 |
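To illustrate what consuming those per-dataset stats could look like, here is a minimal sketch that parses the objset kstat files that #7705 exposes. The /proc path layout, the header line, and field names such as dataset_name, nread, and nwritten are assumptions based on that change; verify them against your own system before relying on this.

```python
# Hedged sketch: read OpenZFS per-dataset kstats (assumed layout from #7705).
# Paths and field names below are assumptions, not a documented API.
from pathlib import Path

def parse_objset_kstat(text):
    """Parse one objset-* kstat file into a dict of counters plus the dataset name."""
    stats = {}
    for line in text.splitlines()[1:]:          # first line is a kstat header
        parts = line.split()
        if len(parts) != 3 or parts[0] == "name":
            continue                            # skip the column-header row
        name, _ktype, value = parts
        stats[name] = value if name == "dataset_name" else int(value)
    return stats

def all_dataset_stats(pool, root="/proc/spl/kstat/zfs"):
    """Collect stats for every dataset in a pool (hypothetical helper)."""
    return [parse_objset_kstat(p.read_text())
            for p in Path(root, pool).glob("objset-*")]
```

Sampling these counters twice and diffing would give per-dataset throughput without per-file accounting, sidestepping the RAM concern raised earlier in the thread.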
Bounty was mentioned in this thread, so if interested in crowd-funding zfs please see: #13397 |
This is a feature request for a simple disk I/O monitoring tool for ZFS on Ubuntu/Debian.
Problem:
Currently there is no easy way to identify which datapools are using the most disk I/O. This can be problematic in, e.g., LXD containerized environments (which is my use case), where third parties may be running misbehaving software in containers.
As a sysadmin, it is currently a tremendous pain to track down which LXD container is (ab)using disk I/O.
Wanted:
A simple tool which gives a clear picture of what datapools are doing at the moment; a simpler version of iotop would suffice. Something like:
Here I'm guessing a tool can be constructed which shows read/write in MB/s and polls for data every x seconds, hence my fanciful -d 5 "iotop-like" example.
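The polling idea above can be sketched as follows: sample cumulative per-dataset byte counters, then report the per-interval delta as MB/s. Here get_counters is a hypothetical stand-in for whatever source supplies cumulative (read bytes, written bytes) per dataset; nothing below is an existing tool.

```python
# Hedged sketch of an iotop-style polling loop; get_counters is hypothetical.
import time

def rates(prev, curr, interval):
    """Convert two snapshots of cumulative byte counters into (read, write) MB/s."""
    out = {}
    for ds, (nread, nwritten) in curr.items():
        # Datasets that appeared mid-interval start with a zero delta.
        pread, pwritten = prev.get(ds, (nread, nwritten))
        out[ds] = ((nread - pread) / interval / 1e6,
                   (nwritten - pwritten) / interval / 1e6)
    return out

def monitor(get_counters, interval=5):
    """Poll every `interval` seconds and print one MB/s row per dataset."""
    prev = get_counters()
    while True:
        time.sleep(interval)
        curr = get_counters()
        for ds, (r, w) in sorted(rates(prev, curr, interval).items()):
            print(f"{ds:40s} {r:8.2f} rMB/s {w:8.2f} wMB/s")
        prev = curr
```

Because the counters are cumulative, the tool carries no per-file state, only one snapshot per dataset between polls.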
Bounty:
I am willing to sponsor the development of this tool, if it is at all possible to build, for contribution back to the community. If you are interested in building something like this, please post your intended methodology. I would like at least a single consenting voice in the community to agree with your chosen solution before committing. Feel free to contact me privately with regards to compensation.