
Iterator api #7

Closed
rtsisyk wants to merge 27 commits into master from iterator-api

Conversation

@rtsisyk
Contributor

@rtsisyk rtsisyk commented Nov 8, 2012

New iteration API (with strategies).
Please take a look at the DocBook documentation (the box.index.iter method).

@ademakov
Contributor

ademakov commented Nov 8, 2012

It seems odd to call a direct parameter of the performed action a "strategy". The iteration order is specified directly. A strategy is rather an algorithm living at a different level of abstraction, selected by indirect criteria.

@rtsisyk
Contributor Author

rtsisyk commented Nov 8, 2012

Various different algorithms are planned there as well, not just a way to specify the iteration order.
Please take a look at http://www.postgresql.org/docs/9.2/static/xindex.html

@ademakov
Contributor

ademakov commented Nov 8, 2012

In my opinion, Postgres chose a rather unfortunate name for this. It is still a traversal order.

nshy added a commit to nshy/tarantool that referenced this pull request Feb 16, 2024
We create a snapshot on the SIGUSR1 signal in a newly spawned system
fiber. It can interfere with Tarantool shutdown. In particular, an
assertion fails on shutdown while such a snapshot is in progress,
because the cord is still making the snapshot. Let's shut down this
fiber too.

```
  #5  0x00007e7ec9a54d26 in __assert_fail (
      assertion=0x63ad06748400 "pm_atomic_load(&cord_count) == 0",
      file=0x63ad067478b8 "./src/lib/core/fiber.c", line=2290,
      function=0x63ad06748968 <__PRETTY_FUNCTION__.6> "fiber_free") at assert.c:101
  #6  0x000063ad061a6a91 in fiber_free ()
      at /home/shiny/dev/tarantool/src/lib/core/fiber.c:2290
  #7  0x000063ad05edc216 in tarantool_free ()
      at /home/shiny/dev/tarantool/src/main.cc:632
  #8  0x000063ad05edd144 in main (argc=1, argv=0x63ad079ca3b0)
```

Part of tarantool#8423

NO_CHANGELOG=internal
NO_DOC=internal
nshy added a commit to nshy/tarantool that referenced this pull request Feb 16, 2024
We create a snapshot on the SIGUSR1 signal in a newly spawned system
fiber. It can interfere with Tarantool shutdown. In particular, an
assertion fails on shutdown while such a snapshot is in progress,
because the cord is still making the snapshot. Let's just trigger the
snapshot in the gc subsystem, in its own worker fiber.

```
  #5  0x00007e7ec9a54d26 in __assert_fail (
      assertion=0x63ad06748400 "pm_atomic_load(&cord_count) == 0",
      file=0x63ad067478b8 "./src/lib/core/fiber.c", line=2290,
      function=0x63ad06748968 <__PRETTY_FUNCTION__.6> "fiber_free") at assert.c:101
  #6  0x000063ad061a6a91 in fiber_free ()
      at /home/shiny/dev/tarantool/src/lib/core/fiber.c:2290
  #7  0x000063ad05edc216 in tarantool_free ()
      at /home/shiny/dev/tarantool/src/main.cc:632
  #8  0x000063ad05edd144 in main (argc=1, argv=0x63ad079ca3b0)
```

Part of tarantool#8423

NO_CHANGELOG=internal
NO_DOC=internal
locker pushed a commit that referenced this pull request Feb 19, 2024
We create a snapshot on the SIGUSR1 signal in a newly spawned system
fiber. It can interfere with Tarantool shutdown. In particular, an
assertion fails on shutdown while such a snapshot is in progress,
because the cord is still making the snapshot. Let's just trigger the
snapshot in the gc subsystem, in its own worker fiber.

```
  #5  0x00007e7ec9a54d26 in __assert_fail (
      assertion=0x63ad06748400 "pm_atomic_load(&cord_count) == 0",
      file=0x63ad067478b8 "./src/lib/core/fiber.c", line=2290,
      function=0x63ad06748968 <__PRETTY_FUNCTION__.6> "fiber_free") at assert.c:101
  #6  0x000063ad061a6a91 in fiber_free ()
      at /home/shiny/dev/tarantool/src/lib/core/fiber.c:2290
  #7  0x000063ad05edc216 in tarantool_free ()
      at /home/shiny/dev/tarantool/src/main.cc:632
  #8  0x000063ad05edd144 in main (argc=1, argv=0x63ad079ca3b0)
```

Part of #8423

NO_CHANGELOG=internal
NO_DOC=internal
ligurio added a commit to ligurio/nanodata that referenced this pull request May 21, 2024
[001] #4  0x65481f151c11 in luaT_httpc_io_cleanup+33
[001] #5  0x65481f19ee63 in lj_BC_FUNCC+70
[001] #6  0x65481f1aa5d5 in gc_call_finalizer+133
[001] #7  0x65481f1ab1e3 in gc_onestep+211
[001] #8  0x65481f1aba68 in lj_gc_fullgc+120
[001] #9  0x65481f1a5fb5 in lua_gc+149
[001] #10 0x65481f1b57cf in lj_cf_collectgarbage+127
[001] #11 0x65481f19ee63 in lj_BC_FUNCC+70
[001] #12 0x65481f1a5c15 in lua_pcall+117
[001] #13 0x65481f14559f in luaT_call+15
[001] #14 0x65481f13c7e1 in lua_main+97
[001] #15 0x65481f13d000 in run_script_f+2032

NO_CHANGELOG=internal
NO_DOC=internal
NO_TEST=internal
locker added a commit to locker/tarantool that referenced this pull request Jun 10, 2024
`key_part::offset_slot_cache` and `key_part::format_epoch` are used for
speeding up tuple field lookup in `tuple_field_raw_by_part()`. These
structure members are accessed and updated without any locks, assuming
this code is executed exclusively in the tx thread. However, this isn't
necessarily true because we also perform tuple field lookups in vinyl
read threads. Apparently, this can result in unexpected races and bugs,
for example:

```
  #1  0x590be9f7eb6d in crash_collect+256
  #2  0x590be9f7f5a9 in crash_signal_cb+100
  #3  0x72b111642520 in __sigaction+80
  #4  0x590bea385e3c in load_u32+35
  #5  0x590bea231eba in field_map_get_offset+46
  #6  0x590bea23242a in tuple_field_raw_by_path+417
  #7  0x590bea23282b in tuple_field_raw_by_part+203
  #8  0x590bea23288c in tuple_field_by_part+91
  #9  0x590bea24cd2d in unsigned long tuple_hint<(field_type)5, false, false>(tuple*, key_def*)+103
  #10 0x590be9d4fba3 in tuple_hint+40
  #11 0x590be9d50acf in vy_stmt_hint+178
  #12 0x590be9d53531 in vy_page_stmt+168
  #13 0x590be9d535ea in vy_page_find_key+142
  #14 0x590be9d545e6 in vy_page_read_cb+210
  #15 0x590be9f94ef0 in cbus_call_perform+44
  #16 0x590be9f94eae in cmsg_deliver+52
  #17 0x590be9f9583e in cbus_process+100
  #18 0x590be9f958a5 in cbus_loop+28
  #19 0x590be9d512da in vy_run_reader_f+381
  #20 0x590be9cb4147 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*)+34
  #21 0x590be9f8b697 in fiber_loop+219
  #22 0x590bea374bb6 in coro_init+120
```

Fix this by skipping this optimization for threads other than tx.

No test is added because reproducing this race is tricky. Ideally, bugs
like this one should be caught by fuzzing tests or thread sanitizers.

Closes tarantool#10123

NO_DOC=bug fix
NO_TEST=tested manually with fuzzer
locker added a commit that referenced this pull request Jun 13, 2024
`key_part::offset_slot_cache` and `key_part::format_epoch` are used for
speeding up tuple field lookup in `tuple_field_raw_by_part()`. These
structure members are accessed and updated without any locks, assuming
this code is executed exclusively in the tx thread. However, this isn't
necessarily true because we also perform tuple field lookups in vinyl
read threads. Apparently, this can result in unexpected races and bugs,
for example:

```
  #1  0x590be9f7eb6d in crash_collect+256
  #2  0x590be9f7f5a9 in crash_signal_cb+100
  #3  0x72b111642520 in __sigaction+80
  #4  0x590bea385e3c in load_u32+35
  #5  0x590bea231eba in field_map_get_offset+46
  #6  0x590bea23242a in tuple_field_raw_by_path+417
  #7  0x590bea23282b in tuple_field_raw_by_part+203
  #8  0x590bea23288c in tuple_field_by_part+91
  #9  0x590bea24cd2d in unsigned long tuple_hint<(field_type)5, false, false>(tuple*, key_def*)+103
  #10 0x590be9d4fba3 in tuple_hint+40
  #11 0x590be9d50acf in vy_stmt_hint+178
  #12 0x590be9d53531 in vy_page_stmt+168
  #13 0x590be9d535ea in vy_page_find_key+142
  #14 0x590be9d545e6 in vy_page_read_cb+210
  #15 0x590be9f94ef0 in cbus_call_perform+44
  #16 0x590be9f94eae in cmsg_deliver+52
  #17 0x590be9f9583e in cbus_process+100
  #18 0x590be9f958a5 in cbus_loop+28
  #19 0x590be9d512da in vy_run_reader_f+381
  #20 0x590be9cb4147 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*)+34
  #21 0x590be9f8b697 in fiber_loop+219
  #22 0x590bea374bb6 in coro_init+120
```

Fix this by skipping this optimization for threads other than tx.

No test is added because reproducing this race is tricky. Ideally, bugs
like this one should be caught by fuzzing tests or thread sanitizers.

Closes #10123

NO_DOC=bug fix
NO_TEST=tested manually with fuzzer

(cherry picked from commit 19d1f1c)
nshy added a commit to nshy/tarantool that referenced this pull request Jul 5, 2024
The issue is that we increment `page_count` only on page write. If we
fail for some reason before that, the page info `min_key` is leaked.

LSAN report for 'vinyl/recovery_quota.test.lua':

```
2024-07-05 13:30:34.605 [478603] main/103/on_shutdown vy_scheduler.c:1668 E> 512/0: failed to compact range (-inf..inf)

=================================================================
==478603==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x5e4ebafcae09 in malloc (/home/shiny/dev/tarantool/build-asan-debug/src/tarantool+0x1244e09) (BuildId: 20c5933d67a3831c4f43f6860379d58d35b81974)
    #1 0x5e4ebb3f9b69 in vy_key_dup /home/shiny/dev/tarantool/src/box/vy_stmt.c:308:14
    #2 0x5e4ebb49b615 in vy_page_info_create /home/shiny/dev/tarantool/src/box/vy_run.c:257:23
    #3 0x5e4ebb48f59f in vy_run_writer_start_page /home/shiny/dev/tarantool/src/box/vy_run.c:2196:6
    #4 0x5e4ebb48c6b6 in vy_run_writer_append_stmt /home/shiny/dev/tarantool/src/box/vy_run.c:2287:6
    #5 0x5e4ebb72877f in vy_task_write_run /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1132:8
    #6 0x5e4ebb73305e in vy_task_compaction_execute /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1485:9
    #7 0x5e4ebb73e152 in vy_task_f /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1795:6
    #8 0x5e4ebb01e0b1 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1331:10
    #9 0x5e4ebc389ee0 in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1182:18
    #10 0x5e4ebd3e9595 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
```

NO_TEST=covered by existing tests (ASAN build)
NO_DOC=bugfix
ligurio added a commit to ligurio/nanodata that referenced this pull request Aug 19, 2024
Crash the program after printing the first error report (WARNING: USE AT YOUR OWN RISK!). The flag takes effect only if the code was compiled with the -fsanitize-recover=address compile option.

```
 [061] replication/gh-5430-cluster-mvcc.test.lua                     [ pass ]
[050]
[050]
[050] [Instance "box" returns with non-zero exit code: 1]
[050]
[050] [test-run server "box"] Last 15 lines of the log file /tmp/t/050_box/box.log:
[050]     #9 0x55d6c6868851  (<unknown module>)
[050]
[050] Direct leak of 342 byte(s) in 5 object(s) allocated from:
[050]     #0 0x55d69b184cae in malloc (/__w/tarantool/tarantool/src/tarantool+0x1268cae) (BuildId: 4f3fed4334a726219fb69119e67d451f0cb1ccfa)
[050]     #1 0x55d69d50040c in small_asan_alloc /__w/tarantool/tarantool/src/lib/small/small/util.c:94:24
[050]     #2 0x55d69d4fcb3c in smalloc /__w/tarantool/tarantool/src/lib/small/small/small_asan.c:57:5
[050]     #3 0x55d69ce3782f in runtime_tuple_new /__w/tarantool/tarantool/src/box/tuple.c:138:27
[050]     #4 0x55d69ce33fac in tuple_new /__w/tarantool/tarantool/src/box/tuple.h:801:9
[050]     #5 0x55d69ce34844 in box_tuple_new /__w/tarantool/tarantool/src/box/tuple.c:845:22
[050]     #6 0x55d69b523021 in session_settings_index_get /__w/tarantool/tarantool/src/box/session_settings.c:261:12
[050]     #7 0x55d69b284077 in index_get(index*, char const*, unsigned int, tuple**) /__w/tarantool/tarantool/src/box/index.h:909:9
[050]     #8 0x55d69b282794 in box_index_get /__w/tarantool/tarantool/src/box/index.cc:390:11
[050]     #9 0x55d6c685ea09  (<unknown module>)
[050]
[050] SUMMARY: AddressSanitizer: 5627 byte(s) leaked in 83 allocation(s).
[055] box-luatest/gh_8530_alter_space_snapshot_test.>               [ pass ]
```

1. https://github.com/tarantool/tarantool/actions/runs/10454868034/job/28948757147?pr=10431
2. https://github.com/google/sanitizers/wiki/AddressSanitizerFlags

NO_CHANGELOG=internal
NO_DOC=internal
NO_TEST=internal
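The flag description quoted above matches ASan's `halt_on_error` runtime option from the linked AddressSanitizerFlags wiki page; that identification is an assumption, since the commit message does not name the flag. Usage would look roughly like this (the binary and script names are placeholders):

```shell
# Runtime flag: stop after the first sanitizer report instead of
# continuing (assumed to be halt_on_error, per the linked wiki page).
ASAN_OPTIONS="halt_on_error=1" ./tarantool instance.lua

# It only changes behavior if the binary was built to *recover* from
# ASan errors in the first place, e.g.:
#   cc -fsanitize=address -fsanitize-recover=address -o tarantool ...
```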
ligurio added a commit to ligurio/nanodata that referenced this pull request Aug 20, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Aug 28, 2024
The issue is that we increment `page_count` only on page write. If we fail
for some reason before that, the page info `min_key` is leaked.

LSAN report for 'vinyl/recovery_quota.test.lua':

```
2024-07-05 13:30:34.605 [478603] main/103/on_shutdown vy_scheduler.c:1668 E> 512/0: failed to compact range (-inf..inf)

=================================================================
==478603==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x5e4ebafcae09 in malloc (/home/shiny/dev/tarantool/build-asan-debug/src/tarantool+0x1244e09) (BuildId: 20c5933d67a3831c4f43f6860379d58d35b81974)
    #1 0x5e4ebb3f9b69 in vy_key_dup /home/shiny/dev/tarantool/src/box/vy_stmt.c:308:14
    #2 0x5e4ebb49b615 in vy_page_info_create /home/shiny/dev/tarantool/src/box/vy_run.c:257:23
    #3 0x5e4ebb48f59f in vy_run_writer_start_page /home/shiny/dev/tarantool/src/box/vy_run.c:2196:6
    #4 0x5e4ebb48c6b6 in vy_run_writer_append_stmt /home/shiny/dev/tarantool/src/box/vy_run.c:2287:6
    #5 0x5e4ebb72877f in vy_task_write_run /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1132:8
    #6 0x5e4ebb73305e in vy_task_compaction_execute /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1485:9
    #7 0x5e4ebb73e152 in vy_task_f /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1795:6
    #8 0x5e4ebb01e0b1 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1331:10
    #9 0x5e4ebc389ee0 in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1182:18
    #10 0x5e4ebd3e9595 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
```

NO_TEST=covered by existing tests (ASAN build)
NO_DOC=bugfix
nshy added a commit to nshy/tarantool that referenced this pull request Aug 28, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Aug 29, 2024
The issue is that we increment `page_count` only on page write. If we fail
for some reason before that, the page info `min_key` is leaked.

LSAN report for 'vinyl/recovery_quota.test.lua':

```
2024-07-05 13:30:34.605 [478603] main/103/on_shutdown vy_scheduler.c:1668 E> 512/0: failed to compact range (-inf..inf)

=================================================================
==478603==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x5e4ebafcae09 in malloc (/home/shiny/dev/tarantool/build-asan-debug/src/tarantool+0x1244e09) (BuildId: 20c5933d67a3831c4f43f6860379d58d35b81974)
    #1 0x5e4ebb3f9b69 in vy_key_dup /home/shiny/dev/tarantool/src/box/vy_stmt.c:308:14
    #2 0x5e4ebb49b615 in vy_page_info_create /home/shiny/dev/tarantool/src/box/vy_run.c:257:23
    #3 0x5e4ebb48f59f in vy_run_writer_start_page /home/shiny/dev/tarantool/src/box/vy_run.c:2196:6
    #4 0x5e4ebb48c6b6 in vy_run_writer_append_stmt /home/shiny/dev/tarantool/src/box/vy_run.c:2287:6
    #5 0x5e4ebb72877f in vy_task_write_run /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1132:8
    #6 0x5e4ebb73305e in vy_task_compaction_execute /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1485:9
    #7 0x5e4ebb73e152 in vy_task_f /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1795:6
    #8 0x5e4ebb01e0b1 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1331:10
    #9 0x5e4ebc389ee0 in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1182:18
    #10 0x5e4ebd3e9595 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
```

Closes tarantool#10489
Part-of tarantool#10211

NO_TEST=covered by existing tests
NO_DOC=bugfix
nshy added a commit to nshy/tarantool that referenced this pull request Aug 29, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Aug 30, 2024
ligurio added a commit to ligurio/nanodata that referenced this pull request Aug 30, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Aug 30, 2024