resources: disk bound tests need to run in special way #97
I want to collect statistics right from test-run, or in some other way that works in CI, so that we can see the actual behaviour in CI.
Added a disk bound ('busy') status collecting routine for the 'vardir' path. It parses /proc/diskstats for the number of milliseconds the given device spent doing I/Os and converts it into a 'busy' percentage, the same way the iostat tool computes its '%util' field. See the Linux kernel documentation [1] for more information. We use Field 10, which sits at the 12th position in the file:

  Field 9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests
    are given to appropriate struct request_queue and decremented
    as they finish.
  Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.

Added a new test-run option, --collect-statistics, which enables this routine. Added a call in the worker listener that collects the disk bound values after each test Lua command is sent and writes them to the main log.

Closes tarantool/tarantool-qa#97

[1]: https://www.kernel.org/doc/Documentation/iostats.txt
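As a rough illustration of the mechanics described above, here is a minimal Python sketch of deriving an iostat-like '%util' value from Field 10 of /proc/diskstats. The helper names are made up and this is not the actual test-run code; it assumes the device backing the given path appears in /proc/diskstats (which is not the case for e.g. tmpfs).

```python
# Minimal sketch, not the test-run implementation.
import os
import time


def disk_io_ticks_ms(path):
    """Return kernel Field 10 (ms spent doing I/Os) for the device backing `path`."""
    st = os.stat(path)
    major, minor = os.major(st.st_dev), os.minor(st.st_dev)
    with open('/proc/diskstats') as f:
        for line in f:
            cols = line.split()
            if int(cols[0]) == major and int(cols[1]) == minor:
                # cols[12] (0-based) is kernel Field 10: it keeps growing
                # while Field 9 (I/Os currently in progress) is nonzero.
                return int(cols[12])
    raise RuntimeError('device %d:%d not found in /proc/diskstats' % (major, minor))


def busy_percent(path, interval=1.0):
    """Sample Field 10 over `interval` seconds and return a '%util'-like value."""
    start_ticks = disk_io_ticks_ms(path)
    start_time = time.time()
    time.sleep(interval)
    delta_ms = disk_io_ticks_ms(path) - start_ticks
    elapsed_ms = (time.time() - start_time) * 1000.0
    return 100.0 * delta_ms / elapsed_ms


if __name__ == '__main__':
    # Hypothetical vardir location, for illustration only.
    print('vardir disk busy: %.1f%%' % busy_percent('/tmp'))
```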
Added a disk bound ('busy') status collecting routine for the 'vardir' path. It parses /proc/diskstats for the number of milliseconds the given device spent doing I/Os and converts it into a 'busy' percentage, the same way iostat computes its '%util' field. See the Linux kernel documentation [1] for more information. We use Field 10 of the 'diskstats' file, which in fact sits at the 12th position:

  Field 9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests
    are given to appropriate struct request_queue and decremented
    as they finish.
  Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.

Decided to collect the 'vardir' disk bound of each test run in a standalone file and, after testing completes, print to stdout up to the 10 tests that used the disk the most. To compute this value per test we need to know what it was when the test started and when it finished. The 'StatisticsWatcher' listener is used for this; it provides the following routines:

  process_result()
    Uses the 'WorkerCurrentTask' queue message to save the disk bound
    value at the start of a test.
    Uses the 'WorkerTaskResult' queue message to save the final disk
    bound value of a finished test.
  print_statistics()
    Prints the statistics to stdout after testing: the disk bounds of
    the failed tasks and of up to 10 tasks that used the disk the most.

The current patch uses a standalone 'statistics' directory in 'vardir' to store the 'durations.log' file with the disk bound of each tested task in the format:

  <test task name> <disk bound>

Needed for tarantool/tarantool-qa#97

[1]: https://www.kernel.org/doc/Documentation/iostats.txt
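For illustration, a small sketch of the 'top 10' report described above, assuming the '<test task name> <disk bound>' line format of 'durations.log'. The function name and file path are hypothetical, not the actual test-run API.

```python
# Sketch only: read a durations.log-style file and print the 10 most
# disk-bound tasks, largest value first.
def print_top_disk_bound(log_path, top=10):
    entries = []
    with open(log_path) as f:
        for line in f:
            if not line.strip():
                continue
            name, value = line.rsplit(None, 1)   # '<test task name> <disk bound>'
            entries.append((name, float(value)))
    for name, value in sorted(entries, key=lambda e: e[1], reverse=True)[:top]:
        print('%s %.1f' % (name, value))


# Hypothetical path: the 'statistics' directory inside vardir.
print_top_disk_bound('vardir/statistics/durations.log')
```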
Added a disk utilization collecting routine for the 'vardir' path. It parses /proc/diskstats for the number of milliseconds the given device spent doing I/Os and converts it into a 'busy' percentage, the same way iostat computes its '%util' field. See the Linux kernel documentation [1] for more information. We use Field 10 of the 'diskstats' file, which in fact sits at the 12th position:

  Field 9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests
    are given to appropriate struct request_queue and decremented
    as they finish.
  Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.

Decided to collect the 'vardir' disk utilization of each test run in a standalone file and, after testing completes, print to stdout up to the 10 biggest values. To compute the 'vardir' disk utilization of a test we need the disk bound value and the time at the test start; at the test finish we take the finish time and the current disk bound value and compute the utilization. All of these checks and calculations are implemented in the new routine get_disk_bound_stat_busy() in the 'lib/utils.py' file.

To collect the disk bounds and durations on the test start/finish events, the 'StatisticsWatcher' listener is used; it provides the following routines:

  process_result()
    On the appearance of a 'WorkerCurrentTask' queue message, saves the
    disk bound value at the start of the test.
    On the appearance of a 'WorkerTaskResult' queue message, saves the
    final disk bound value of the finished test.
  print_statistics()
    Prints the statistics to stdout after testing: the disk utilization
    of the failed tests and of up to 10 tests that used the disk the
    most.

We use a standalone 'statistics' directory in the 'vardir' path to store the 'durations.log' file with the disk bound of each tested task in the format:

  <test task name> <disk bound>

Needed for tarantool/tarantool-qa#97

[1]: https://www.kernel.org/doc/Documentation/iostats.txt
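A hedged sketch of the idea behind a get_disk_bound_stat_busy()-style calculation described above (the real signature and behaviour in 'lib/utils.py' may differ): sample Field 10 and the wall clock at test start and finish, then turn the deltas into a per-test utilization percentage.

```python
# Illustrative only; names are not the actual lib/utils.py API.
def disk_utilization_percent(start_io_ticks_ms, start_time,
                             end_io_ticks_ms, end_time):
    """Per-test disk utilization from two Field 10 samples and timestamps."""
    elapsed_ms = (end_time - start_time) * 1000.0
    if elapsed_ms <= 0:
        return 0.0
    return 100.0 * (end_io_ticks_ms - start_io_ticks_ms) / elapsed_ms


# Example: the start/finish Field 10 samples could come from a helper like
# the disk_io_ticks_ms() sketched earlier; the numbers here are made up.
print('%.1f%%' % disk_utilization_percent(1500, 0.0, 1900, 2.0))  # -> 20.0%
```

A listener would take the first sample when it sees the 'task started' message and the second one when it sees the task result, which matches the WorkerCurrentTask/WorkerTaskResult flow described in the message.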
Found that the root cause of the issues seen with the vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After that, any subsequent test on the same Tarantool worker process could fail when its testing involved snapshot calls, like tarantool/tarantool-qa#126:

  error: Snapshot is already in progress

or restarting the Tarantool worker process could fail on stopping it, like tarantool/test-run#261:

  E> failed to process vylog record: delete_slice{slice_id=115, }
  E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered

Decided to remove all vinyl tests from the 'fragile' list except the 'gh.test.lua' test, which should be improved first to be able to run together with the other tests.

Part of tarantool/tarantool-qa#97
Found that the root cause of the issues seen with the vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After that, any subsequent test on the same Tarantool worker process could fail when its testing involved snapshot calls, like tarantool/tarantool-qa#126:

  error: Snapshot is already in progress

or restarting the Tarantool worker process could fail on stopping it, like tarantool/test-run#261 and #5141:

  E> failed to process vylog record: delete_slice{slice_id=115, }
  E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered

Decided to remove all vinyl tests from the 'fragile' list except the 'gh.test.lua' test, which should be improved first to be able to run together with the other tests, and the 'gh-5141-invalid-vylog-file.test.lua' test, which checks this very issue and can be removed once the fix is done.

Part of tarantool/tarantool-qa#97

Closes #4346
Closes #4979
Closes #4984
Closes #4993
Closes #5089
Closes #5187
Found that the root cause of the issues seen with the vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After that, any subsequent test on the same Tarantool worker process could fail when its testing involved snapshot calls, like tarantool/tarantool-qa#126:

  error: Snapshot is already in progress

or restarting the Tarantool worker process could fail on stopping it, like #261 and tarantool/tarantool#5141:

  E> failed to process vylog record: delete_slice{slice_id=115, }
  E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered

To avoid such situations, now and in the future every test must run after the Tarantool worker's default server process has been restarted. This patch moves the server restart call from the 'test failed' check into the common part of the test run loop.

Part of tarantool/tarantool-qa#97
Needed for tarantool/tarantool#5089
Closes tarantool/tarantool-qa#126
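Below is a minimal, hypothetical sketch of the restructuring described above: the restart moves out of the failure branch into the common part of the loop. The class and function names are made up; the real test-run loop is more involved.

```python
# Illustrative stand-ins, not the real test-run code.
class FakeServer:
    def restart(self):
        print('restart default server')


class FakeTask:
    def __init__(self, name, failed):
        self.name, self.failed = name, failed

    def run(self, server):
        print('run %s' % self.name)
        return not self.failed


def run_loop(tasks, server):
    for task in tasks:
        # Previously a restart happened only when the test failed; doing it
        # unconditionally keeps one broken test (such as vinyl/gh.test.lua)
        # from leaving the worker inconsistent for the tests that follow.
        server.restart()
        task.run(server)


run_loop([FakeTask('vinyl/gh.test.lua', failed=True),
          FakeTask('vinyl/ddl.test.lua', failed=False)], FakeServer())
```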
Found that the root cause of the issues seen with vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After it, any subsequent test on the same Tarantool worker process could fail while testing snapshot calls, as in tarantool/tarantool-qa#126:
error: Snapshot is already in progress
Alternatively, restarting the Tarantool worker process could fail on stopping it, as in tarantool/test-run#261 and #5141:
E> failed to process vylog record: delete_slice{slice_id=115, }
E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered
Decided to remove all vinyl tests from the 'fragile' list except 'gh.test.lua', which should be improved first so that it can run together with the other tests, and 'gh-5141-invalid-vylog-file.test.lua', which checks this issue and can be removed once the fix is done.
Part of tarantool/tarantool-qa#97
Closes #4168 Closes #4309 Closes #4346 Closes #4572 Closes #4979 Closes #4984 Closes #4985 Closes #4993 Closes #5141 Closes #5197 Closes #5336 Closes #5338 Closes #5356 Closes #5377 Closes #5378 Closes #5383 Closes #5408 Closes #5539 Closes #5584 Closes #5586
Found that the root cause of the issues seen with vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After it, any subsequent test on the same Tarantool worker process could fail while testing snapshot calls, as in tarantool/tarantool-qa#126:
error: Snapshot is already in progress
Alternatively, restarting the Tarantool worker process could fail on stopping it, as in tarantool/test-run#261 and #5141:
E> failed to process vylog record: delete_slice{slice_id=115, }
E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered
Decided to remove all vinyl tests from the 'fragile' list except 'gh.test.lua', which should be improved first so that it can run together with the other tests, and 'gh-5141-invalid-vylog-file.test.lua', which checks this issue and can be removed once the fix is done.
Part of tarantool/tarantool-qa#97
Closes #4346 Closes #4572 Closes #4979 Closes #4984 Closes #5336 Closes #5356 Closes #5377 Closes #5378 Closes #5383 Closes #5408 Closes #5584 Closes #5586
While working on this issue, and after creating a pull request with a solution, I found that the current statistic is good and useful for HDD-based hosts. For SSDs, however, it does not show anything meaningful: it only indicates that some of the SSD's parallel read/write channels were in use, not how many, while an SSD may have 16 or even 64 of them, so using only a few does not saturate or overload the device. In GitHub Actions and in MCS we only use hosts with SSDs, which are really fast, so such a patch would not help us at all. Decided to stop this work until it is needed. |
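For reference, a minimal sketch of how such a '%util'-style figure can be derived from /proc/diskstats, in the spirit of what iostat reports. The device name 'sda' and the one-second sampling interval are arbitrary assumptions, and the sketch is Linux-only; the field layout follows Documentation/iostats.txt (field 10 = milliseconds spent doing I/Os).

```python
# Minimal sketch: derive a busy percentage ('%util'-like) for one block
# device from two samples of /proc/diskstats field 10.

import time

def io_ms(device):
    """Return field 10 (ms spent doing I/Os) for the given block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:           # major, minor, name, then fields 1..
                return int(fields[3:][9])     # field 10, counted after the name
    raise ValueError("device not found: %s" % device)

def busy_percent(device, interval=1.0):
    """Sample twice and express I/O busy time as a share of wall-clock time."""
    before = io_ms(device)
    time.sleep(interval)
    after = io_ms(device)
    return 100.0 * (after - before) / (interval * 1000.0)

if __name__ == "__main__":
    print("busy %.1f%%" % busy_percent("sda"))
```

On an SSD this value reaches 100% as soon as at least one of the device's parallel channels is busy, which is exactly why it says little about how loaded such a device really is.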
Found that the root cause of the issues seen with vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After it, any subsequent test on the same Tarantool worker process could fail while testing snapshot calls, as in tarantool/tarantool-qa#126:
error: Snapshot is already in progress
Alternatively, restarting the Tarantool worker process could fail on stopping it, as in tarantool/test-run#261 and #5141:
E> failed to process vylog record: delete_slice{slice_id=115, }
E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered
To avoid such situations, now and in the future, every test must run only after the worker's default Tarantool server process has been restarted. This patch moves the server restart call from the 'test failed' check into the common part of the test run loop.
Part of tarantool/tarantool-qa#97
Fixes #260
Fixes #261
Needed for tarantool/tarantool#5089
Closes tarantool/tarantool-qa#126
Found that the root cause of the issues seen with vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After it, any subsequent test on the same Tarantool worker process could fail while testing snapshot calls, as in tarantool/tarantool-qa#126:
error: Snapshot is already in progress
Alternatively, restarting the Tarantool worker process could fail on stopping it, as in tarantool/test-run#261 and #5141:
E> failed to process vylog record: delete_slice{slice_id=115, }
E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered
Decided to remove all vinyl tests from the 'fragile' list except 'gh.test.lua', which should be improved first so that it can run together with the other tests, and 'gh-5141-invalid-vylog-file.test.lua', which checks this issue and can be removed once the fix is done.
The following issues were moved to the tarantool/tarantool-qa repository:
#4346 -> tarantool/tarantool-qa#11
#5408 -> tarantool/tarantool-qa#73
#5584 -> tarantool/tarantool-qa#21
#5586 -> tarantool/tarantool-qa#19
Part of tarantool/tarantool-qa#97
Closes tarantool/tarantool-qa#11 Closes #4572 Closes #4979 Closes #4984 Closes #5336 Closes #5356 Closes #5377 Closes #5378 Closes #5383 Closes tarantool/tarantool-qa#73 Closes tarantool/tarantool-qa#21 Closes tarantool/tarantool-qa#19
Found that the root cause of the issues seen with vinyl tests was a side effect of the incorrect test 'vinyl/gh.test.lua', which left the Tarantool worker process in an inconsistent state. After it, any subsequent test on the same Tarantool worker process could fail while testing snapshot calls, as in tarantool/tarantool-qa#126:
error: Snapshot is already in progress
Alternatively, restarting the Tarantool worker process could fail on stopping it, as in tarantool/test-run#261 and #5141:
E> failed to process vylog record: delete_slice{slice_id=115, }
E> ER_INVALID_VYLOG_FILE: Invalid VYLOG file: Slice 115 deleted but not registered
Decided to remove all vinyl tests from the 'fragile' list except 'gh.test.lua', which should be improved first so that it can run together with the other tests, and 'gh-5141-invalid-vylog-file.test.lua', which checks this issue and can be removed once the fix is done.
The following issues were moved to the tarantool/tarantool-qa repository:
#4346 -> tarantool/tarantool-qa#11
#5408 -> tarantool/tarantool-qa#73
#5584 -> tarantool/tarantool-qa#21
#5586 -> tarantool/tarantool-qa#19
Part of tarantool/tarantool-qa#97
Closes tarantool/tarantool-qa#11 Closes #4572 Closes #4979 Closes #4984 Closes #5336 Closes #5356 Closes #5377 Closes #5378 Closes #5383 Closes tarantool/tarantool-qa#73 Closes tarantool/tarantool-qa#21 Closes tarantool/tarantool-qa#19
(cherry picked from commit f0f53a3)
While investigating issue #24 it was found that the timeout was not enough to finish the test on a host where this test produced very heavy disk-bound load. That led to a discussion of how highly disk-bound tests should be run; this issue was created to provide a way to run such tests. Most disk activity happens in tests with the vinyl configuration or in the 'vinyl/' suite. Deeper investigation of this is needed.
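One possible first step, sketched here only as an assumption and not as actual test-run behaviour, would be to measure how much of a test's wall-clock time the 'vardir' device spends doing I/O (field 10 of /proc/diskstats again) and flag the test as disk bound above some threshold. The device name 'sda', the 50% threshold, and the test-run invocation below are illustrative.

```python
# Hypothetical sketch: run a test command and classify it as disk bound if
# the device's I/O busy time exceeds a share of the test's wall-clock time.

import subprocess
import time

def io_ms(device):
    """Return field 10 (ms spent doing I/Os) for the given block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3:][9])
    raise ValueError("device not found: %s" % device)

def run_and_classify(cmd, device="sda", threshold=50.0):
    start_ms, start_time = io_ms(device), time.time()
    subprocess.run(cmd, check=False)
    busy = io_ms(device) - start_ms
    wall = (time.time() - start_time) * 1000.0
    share = 100.0 * busy / wall if wall else 0.0
    return share >= threshold, share

if __name__ == "__main__":
    disk_bound, share = run_and_classify(
        ["./test-run.py", "vinyl/gh.test.lua"])   # example invocation
    print("disk bound: %s (%.1f%% busy)" % (disk_bound, share))
```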