PUT /_snapshot/my_repo
{
  "type": "fs",
  "settings": {
    "location": "./my_repo_test",
    "compress": true
  }
}
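The repository registration can be sanity-checked before the snapshot policy is created (optional; this uses the standard verify API and is not part of the original repro):

```
POST /_snapshot/my_repo/_verify
```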
PUT _slm/policy/my_snapshot_policy
{
  "name": "<snapshot-{now}>",
  "schedule": "0 * * * * ?",
  "repository": "my_repo",
  "config": {
    "indices": [
      "my_snapshot_index"
    ]
  },
  "retention": {
    "expire_after": "10m",
    "min_count": 1,
    "max_count": 3
  }
}
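To see the policy take a snapshot without waiting for the one-minute schedule, it can be triggered and inspected by hand (standard SLM APIs, added here for convenience; not part of the original repro):

```
POST _slm/policy/my_snapshot_policy/_execute

GET _slm/policy/my_snapshot_policy
```

The GET response's last_success field shows the most recent snapshot the policy produced.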
# index that my_snapshot_policy backs up (creation assumed; the scheduled snapshots fail if it is missing)
PUT /my_snapshot_index

PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d",
            "max_docs": 3
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "0d",
        "actions": {
          "wait_for_snapshot": {
            "policy": "my_snapshot_policy"
          },
          "delete": {}
        }
      }
    }
  }
}

# attach the ILM policy at creation time and mark the write index so rollover can run
PUT /my_test_index-1
{
  "settings": {
    "index.lifecycle.name": "my_policy",
    "index.lifecycle.rollover_alias": "my_alias"
  },
  "aliases": {
    "my_alias": {
      "is_write_index": true
    }
  }
}
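While the repro runs, the managed index's progress through the phases can be watched; the explain output shows which step the index is in, including the wait-for-snapshot step of the delete phase:

```
GET my_test_index-1/_ilm/explain
```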
Hello team,
While adding a field for the "wait for snapshot policy" option to the Delete phase in the Index Lifecycle Management UI, I noticed that the wait_for_snapshot action does not in fact ensure that a snapshot of the index exists before the index is deleted. This can lead to irreversible loss of the documents in the managed index.
How to recreate this behaviour (console commands above):
- my_snapshot_index: the index to be backed up
- my_snapshot_policy: snapshots created every minute and deleted after 10 min
- my_policy: rollover after 3 docs, delete after my_snapshot_policy has created a snapshot
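To drive the rollover, index at least 3 documents through the alias, then, once the delete phase has run, list the repository contents. The snapshots contain only my_snapshot_index, yet my_test_index-1 has been deleted anyway (the document body below is an arbitrary example):

```
POST my_alias/_doc
{
  "field": "value"
}

GET _snapshot/my_repo/_all
```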