test: flaky replication/election_qsync_stress.test.lua test #166
Added for tests with issues: box/gh-5135-invalid-upsert.test.lua gh-5376 box/huge_field_map_long.test.lua gh-5375 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951
It also catches an assertion from time to time:
A minimal 100% reproducer. Follow the steps in the specified order on the specified instances.

```lua
-- Instance 1
-- Step 1
fiber = require('fiber')
box.cfg{
    listen = 3313,
    replication = {'localhost:3313', 'localhost:3314'},
    replication_synchro_quorum = 2,
    replication_synchro_timeout = 1000000,
    read_only = false,
}
box.schema.user.grant('guest', 'super')
s1 = box.schema.create_space('test1', {is_sync = true})
_ = s1:create_index('pk')

-- Step 3
for i = 1, 10 do s1:replace{i} end

-- Step 5
box.cfg{
    replication_synchro_timeout = 0.001,
}
box.ctl.clear_synchro_queue()
```

```lua
-- Instance 2
-- Step 2
box.cfg{
    listen = 3314,
    replication = {'localhost:3313', 'localhost:3314'},
    replication_synchro_quorum = 3,
    replication_synchro_timeout = 1000000,
    read_only = true,
}

-- Step 4
box.cfg{read_only = false}
box.space.test1:replace{11}
```
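For reference, a sketch (my own, not part of the original reproducer) of how the failure surfaces on instance 2: with replication_synchro_quorum = 3 on a two-node cluster the synchronous replace cannot gather a quorum, so wrapping it in pcall lets one observe the rollback instead of an uncaught error.

```lua
-- Hypothetical observation wrapper around step 4's synchronous write.
local log = require('log')
local ok, err = pcall(function()
    return box.space.test1:replace{11}
end)
if not ok then
    -- Once the old leader clears the synchro queue with a tiny timeout,
    -- the pending transaction is rolled back and the error surfaces here.
    log.error('synchronous replace failed: %s', err)
end
```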
Added for tests with issues: box/gh-5135-invalid-upsert.test.lua gh-5376 box/huge_field_map_long.test.lua gh-5375 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/gh-5135-invalid-upsert.test.lua gh-5376 box/huge_field_map_long.test.lua gh-5375 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/autoboostrap.test.lua gh-4933 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/autoboostrap.test.lua gh-4933 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: box/access.test.lua gh-5411 box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/autoboostrap.test.lua gh-4933 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Added for tests with issues: app/socket.test.lua gh-4978 box/access.test.lua gh-5411 box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/autoboostrap.test.lua gh-4933 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5287-boot-anon.test.lua gh-5412 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Instead of producing all possible 2^10 diffs when select{} fails, log the select{} output when the space contains fewer than 10 elements. Requested by @avtikhon for easier flaky-test handling. Related to #5395
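A minimal sketch of what such conditional logging could look like (the function name and helpers here are hypothetical, not the actual patch):

```lua
-- Hypothetical sketch: dump the space contents instead of producing a diff,
-- but only when the result is small enough to be readable in the log.
local log = require('log')
local json = require('json')

local function check_contents(space, expected)
    local rows = space:select{}
    if #rows < 10 and #rows ~= expected then
        log.error('unexpected contents of %s: %s',
                  space.name, json.encode(rows))
    end
    return #rows == expected
end
```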
Added for tests with issues: app/socket.test.lua gh-4978 box/access.test.lua gh-5411 box/access_misc.test.lua gh-5401 box/gh-5135-invalid-upsert.test.lua gh-5376 box/hash_64bit_replace.test.lua test gh-5410 box/hash_replace.test.lua gh-5400 box/huge_field_map_long.test.lua gh-5375 box/net.box_huge_data_gh-983.test.lua gh-5402 replication/anon.test.lua gh-5381 replication/autoboostrap.test.lua gh-4933 replication/box_set_replication_stress.test.lua gh-4992 replication/election_basic.test.lua gh-5368 replication/election_qsync.test.lua test gh-5395 replication/gh-3247-misc-iproto-sequence-value-not-replicated.test.lua gh-5380 replication/gh-3711-misc-no-restart-on-same-configuration.test.lua gh-5407 replication/gh-5287-boot-anon.test.lua gh-5412 replication/gh-5298-qsync-recovery-snap.test.lua.test.lua gh-5379 replication/show_error_on_disconnect.test.lua gh-5371 replication/status.test.lua gh-5409 swim/swim.test.lua gh-5403 unit/swim.test gh-5399 vinyl/gc.test.lua gh-5383 vinyl/gh-4864-stmt-alloc-fail-compact.test.lua test gh-5408 vinyl/gh-4957-too-many-upserts.test.lua gh-5378 vinyl/gh.test.lua gh-5141 vinyl/quota.test.lua gh-5377 vinyl/snapshot.test.lua gh-4984 vinyl/stat.test.lua gh-4951 vinyl/upsert.test.lua gh-5398
Found that the replication/election_qsync_stress.test.lua test may fail on restarting instances. It occurs on heavily loaded hosts when the local call to stop an instance with SIGTERM fails to stop it. Decided to use SIGKILL in the local stop call options to make sure the instance is really stopped. Also found that, with the loop running inline, new hangs occurred on server start:

```diff
--- replication/election_qsync_stress.result	Thu Nov 12 16:23:16 2020
+++ var/128_replication/election_qsync_stress.result	Thu Nov 12 16:31:22 2020
@@ -323,7 +323,7 @@
  | ...
 test_run:wait_cond(function() return c.space.test ~= nil and c.space.test:get{i} ~= nil end)
  | ---
- | - true
+ | - false
  | ...
@@ -380,7 +380,7 @@
  | ...
 test_run:wait_cond(function() return c.space.test ~= nil and c.space.test:get{i} ~= nil end)
  | ---
- | - true
+ | - false
  | ...
@@ -494,687 +494,3 @@
  | ---
  | ...
 test_run:cmd('start server '..old_leader..' with wait=True, wait_load=True, args="2 0.4"')
- | ---
- | - true
- | …
```

but the test had already failed earlier, on getting 'c.space.test:get{i}'. To avoid the hang and make the test run more correctly, log.error messages and return calls were added, and the test was changed to use a function for each loop iteration, so that return values can be checked and the loop can be broken right after a failure. Also found that the test hangs on recreation of the replica: replica creation had the wait_load flag enabled, which makes test-run wait for the replica creation mark in the replica log file in the seek_wait subroutine. To fix it, replication_sync_timeout was set to 5 seconds. Needed for #5395
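The per-iteration function pattern described above could look roughly like this (test_run and the connection c come from the test's harness; the function body is illustrative, not the actual patch):

```lua
-- Hypothetical per-iteration function: return false on failure so the
-- caller can log and stop immediately instead of hanging later.
local log = require('log')

local function one_iteration(i)
    local ok = test_run:wait_cond(function()
        return c.space.test ~= nil and c.space.test:get{i} ~= nil
    end)
    if not ok then
        log.error('iteration %d: tuple {%d} never appeared on the replica', i, i)
        return false
    end
    return true
end

for i = 1, 10 do
    if not one_iteration(i) then break end
end
```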
Cases 1 and 2 are not present in … Cases 4 and 5 were failing because the test didn't wait for the old leader to send all data to the other nodes. As a result, they could have less data than needed, and the space didn't have enough tuples at the end of the test. It seems to be fixed by tarantool/tarantool@bf0fbf3. Cases 3 and 4 (the latter had 2 issues) seem to be caused by old code relying on … I have run both … As a summary: I think @sergepetrenko fixed this ticket quite some time ago in the commits above. Sergey, can you try it on your machine too? @sergos, you could also try (Sergey P. and I are both on Macs, I think, so we could use a person with Linux)? |
The most recent error on master, not yet listed in this ticket, is here: https://github.com/tarantool/tarantool/runs/5590105468?check_suite_focus=true. I have no idea what might be causing it. It fails on bootstrap, apparently. |
After running …:
Which apparently has one way to appear:
or the |
Got another failure:
|
Apparently, we have infinite elections: replica2 (id == 2) tried to deliver its vote to replica3 (id == 1), and after two sequential votes for replica3 it starts voting for itself. Note that replica2 keeps receiving votes from replica3, but not vice versa.
|
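The infinite elections described above can be sketched with a toy model. The asymmetric link is an assumption drawn from the log observation (replica2 receives votes from replica3, but not vice versa); this is purely an illustration, not tarantool's Raft implementation. With everything from replica2 to replica3 lost, neither candidate can collect the two votes needed for a quorum of the three-node cluster (the old leader is down), so elections restart forever:

```python
QUORUM = 2  # majority of the 3-node cluster; the old leader is down

def delivered(src, dst):
    # Assumed asymmetric partition: traffic from replica2 (id 2) to
    # replica3 (id 1) is lost, while the reverse direction works.
    return not (src == 2 and dst == 1)

def votes_collected(candidate, voter):
    votes = 1  # a candidate always votes for itself
    # The vote request must reach the voter AND the granted vote
    # must make it back to the candidate.
    if delivered(candidate, voter) and delivered(voter, candidate):
        votes += 1
    return votes

for term in (1, 2, 3):
    for candidate, voter in ((1, 2), (2, 1)):
        v = votes_collected(candidate, voter)
        print(f"term {term}: candidate {candidate} got {v} vote(s),"
              f" elected: {v >= QUORUM}")
# Neither candidate ever reaches 2 votes, so every term ends without a
# leader and a new election round starts - a livelock.
```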
I've run … The ER_READONLY error was fixed recently in the scope of tarantool/tarantool#6966. It seems that the case with infinite elections is caused by … What causes the …? |
I've got the error, but not the infinite elections. The reason surprises me: it's in the relay, not the applier:
unfortunately, the xlog file is missing, even after I removed the test from the fragile list.
I will try to reproduce it without Docker. |
Wow, that's something already. Will take a look. |
Reproduced on plain el7.
|
Found the issue together with @sergos: |
When the applier ack writer was moved to the applier thread, it was overlooked that it would start sharing replicaset.vclock between two threads. This could lead to the following replication errors on master:

```
relay//102/main:reader V> Got a corrupted row:
relay//102/main:reader V> 00000000: 81 00 00 81 26 81 01 09 02 01
```

Such a row has an incorrectly encoded vclock: `81 01 09 02 01`. When the writer fiber encoded the vclock length (`81`), there was only one vclock component: {1: 9}, but by the moment of iterating over the components, another WAL write had been reported to the TX thread, which bumped the second vclock component: {1: 9, 2: 1}. Let's fix the race by delivering a copy of the current replicaset vclock to the applier thread. Closes tarantool#7089 Part-of tarantool/tarantool-qa#166 NO_DOC=internal fix NO_CHANGELOG=internal fix NO_TEST=hard to test
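The race in the commit message can be modeled outside tarantool. A minimal sketch in Python (purely illustrative; tarantool encodes vclocks in C): the msgpack fixmap header is built from the map length first, then a concurrent bump adds a component before the entries are serialized, producing exactly the byte pattern from the log:

```python
# Toy model of the vclock encoding race (not tarantool's actual code).
vclock = {1: 9}                 # one component when the header is built

header = 0x80 | len(vclock)     # msgpack fixmap header: 0x81 = "1 entry"
vclock[2] = 1                   # concurrent WAL write bumps component 2

body = bytearray()
for component_id, lsn in vclock.items():
    # small non-negative ints encode as positive fixints (one byte each)
    body += bytes([component_id, lsn])

row = bytes([header]) + bytes(body)
print(' '.join(f'{b:02x}' for b in row))  # 81 01 09 02 01, as in the log
```

The header promises one entry but two follow, so the relay's decoder treats the trailing bytes as garbage and reports "Got a corrupted row". The fix delivers a private copy of the vclock to the applier thread, so the header and the entries are always encoded from the same snapshot.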
Seems like we've fixed all the known issues with this test. No failures on my machine after 1000 runs. |
Tarantool version:
Tarantool 2.6.0-143-g0dc72812fb
Target: Linux-x86_64-Debug
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLE_BACKTRACE=ON
Compiler: /usr/bin/cc /usr/bin/c++
C_FLAGS: -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -fprofile-arcs -ftest-coverage -std=c11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type -Werror
CXX_FLAGS: -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -fprofile-arcs -ftest-coverage -std=c++11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-invalid-offsetof -Wno-gnu-alignof-expression -Wno-cast-function-type -Werror
OS version:
Bug description:
Found 2 issues:
artifacts.zip
results file checksum: cc93d7c69c6368217634718bdf3de16c
artifacts.zip
results file checksum: 3fb2e6cef4c8fa1d0edd8654fd2d8ef6
artifacts.zip
results file checksum: 634bda94accdcdef7b1db3e14f28f445
https://gitlab.com/tarantool/tarantool/-/jobs/795376806#L5359
artifacts.zip
results file checksum: 36bcdae426c18a60fd13025c09f197d0
https://gitlab.com/tarantool/tarantool/-/jobs/795916093#L5738
artifacts.zip
results file checksum: 209c865525154a91435c63850f15eca0
Steps to reproduce:
Optional (but very desirable):