Bug #73540

rgw/multisite: object deletion is not properly synced on versioning-suspended bucket

Added by Jane Zhu 5 months ago. Updated 5 months ago.

Status: Pending Backport
Priority: Normal
Assignee:
Target version: -
% Done: 0%
Source:
Backport: squid,tentacle
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Tags (freeform): backport_processed
Fixed In: v20.3.0-3821-gba63fd529e
Released In:
Upkeep Timestamp: 2025-10-30T13:51:15+00:00

Description

This is symmetric sync on a versioning-suspended bucket; no bucket replication policy is involved.
On the primary zone, everything works as expected: when an object is deleted, the real object is deleted and replaced by a delete marker with an empty instance id. However, on the secondary zone, once replication finishes, the real object is still there, and a delete marker with a non-empty instance id is created on top of it.
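With versioning suspended, an unversioned DELETE should remove the "null" version and leave a single delete marker whose instance id is empty; the secondary zone instead behaves as if versioning were still enabled. A minimal toy model in Python (not RGW code; the entry fields and the placeholder instance id are illustrative) sketches the two behaviors:

```python
def delete_object(index, key, versioning_suspended=True):
    """Apply an unversioned S3 DELETE to a toy bucket index (list of entries)."""
    index = list(index)  # work on a copy
    if versioning_suspended:
        # Suspended semantics: drop the existing null version...
        index = [e for e in index if not (e["name"] == key and e["instance"] == "")]
        # ...and record a delete marker with an EMPTY instance id.
        index.append({"name": key, "instance": "", "delete_marker": True})
    else:
        # Enabled semantics: the object survives and a NEW versioned
        # delete marker is stacked on top of it.
        index.append({"name": key, "instance": "random-instance-id", "delete_marker": True})
    return index

index = [{"name": "file_4k", "instance": "", "delete_marker": False}]

# What the primary zone correctly does (suspended semantics):
primary = delete_object(index, "file_4k")

# What the secondary zone effectively does (enabled semantics):
secondary = delete_object(index, "file_4k", versioning_suspended=False)

print(primary)    # one delete marker with an empty instance id
print(secondary)  # object still present, plus a versioned delete marker
```

The bucket listings below match these two branches: the primary zone shows the suspended behavior, the secondary zone the enabled one.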

The steps to reproduce the issue with mstart:
On Ceph:

$ MON=1 OSD=1 MDS=0 MGR=0 RGW_PER_ZONE=1 ../ceph-src/src/test/rgw/test-rgw-multisite.sh 2
$ bin/radosgw-admin -n client.rgw.8101 -c /build/run/c1/ceph.conf user create --uid=jzhu4 --display-name=jzhu4 --access-key=12345 --secret=12345

On the client side:

$ aws --profile vstart --region="" --endpoint-url http://localhost:8101 s3 mb s3://bucket1

$ aws --profile vstart --endpoint-url http://localhost:8101 s3api put-bucket-versioning --bucket bucket1 --versioning-configuration Status=Enabled
$ aws --profile vstart --endpoint-url http://localhost:8101 s3api put-bucket-versioning --bucket bucket1 --versioning-configuration Status=Suspended

$ aws --profile vstart --endpoint-url http://localhost:8101 s3api put-object --bucket=bucket1 --key=file_4k --body=../test_files/file_4k
$ aws --profile vstart --endpoint-url http://localhost:8101 s3api delete-object --bucket=bucket1 --key=file_4k

On Ceph, after waiting for replication to catch up:

$ bin/radosgw-admin -n client.rgw.8101 -c /build/run/c1/ceph.conf bucket list --bucket=bucket1
[
    {
        "name": "file_4k",
        "instance": "",
        "ver": {
            "pool": 7,
            "epoch": 1
        },
        "locator": "",
        "exists": true,
        "meta": {
            "category": 0,
            "size": 0,
            "mtime": "2025-10-14T03:21:40.903221Z",
            "etag": "",
            "storage_class": "STANDARD",
            "owner": "jzhu4",
            "owner_display_name": "jzhu4",
            "content_type": "",
            "accounted_size": 0,
            "user_data": "",
            "appendable": false
        },
        "tag": "delete-marker",
        "flags": 7,
        "pending_map": [],
        "versioned_epoch": 1760412100903221541
    }
]

$ bin/radosgw-admin -n client.rgw.8201 -c /build/run/c2/ceph.conf bucket list --bucket=bucket1
[
    {
        "name": "file_4k",
        "instance": "0l3Ad79qVRgfFLlbH7x4w0ZH4goAbOf",
        "ver": {
            "pool": -1,
            "epoch": 0
        },
        "locator": "",
        "exists": false,
        "meta": {
            "category": 0,
            "size": 0,
            "mtime": "2025-10-14T03:21:40.903221Z",
            "etag": "",
            "storage_class": "STANDARD",
            "owner": "jzhu4",
            "owner_display_name": "jzhu4",
            "content_type": "",
            "accounted_size": 0,
            "user_data": "",
            "appendable": false
        },
        "tag": "delete-marker",
        "flags": 7,
        "pending_map": [],
        "versioned_epoch": 1760412100903221541
    },
    {
        "name": "file_4k",
        "instance": "",
        "ver": {
            "pool": 6,
            "epoch": 1
        },
        "locator": "",
        "exists": true,
        "meta": {
            "category": 1,
            "size": 4096,
            "mtime": "2025-10-14T03:20:27.367894Z",
            "etag": "620f0b67a91f7f74151bc5be745b7110",
            "storage_class": "STANDARD",
            "owner": "jzhu4",
            "owner_display_name": "jzhu4",
            "content_type": "application/octet-stream",
            "accounted_size": 4096,
            "user_data": "",
            "appendable": false
        },
        "tag": "_lUcShWIeR6joieGiBsvIBgOOnhMHzBx",
        "flags": 1,
        "pending_map": [],
        "versioned_epoch": 1
    }
]
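A quick way to spot the bug signature in the listings above is to check that, after an unversioned delete, no live object entry survives and every delete marker has an empty instance id. This is a sketch assuming the JSON shape shown above (marker entries carry `"tag": "delete-marker"`); the helper name and the abbreviated listings are illustrative:

```python
import json

def suspended_delete_synced_ok(listing_json):
    """Given a `radosgw-admin bucket list` JSON dump, return True when the
    state matches correct versioning-suspended delete semantics: no live
    object entry remains and every delete marker has an empty instance id."""
    entries = json.loads(listing_json)
    markers = [e for e in entries if e.get("tag") == "delete-marker"]
    live = [e for e in entries if e.get("tag") != "delete-marker" and e.get("exists")]
    return not live and all(e["instance"] == "" for e in markers)

# Abbreviated versions of the listings above (only the relevant fields kept).
primary_listing = '[{"name": "file_4k", "instance": "", "exists": true, "tag": "delete-marker"}]'
secondary_listing = ('[{"name": "file_4k", "instance": "0l3Ad79qVRgfFLlbH7x4w0ZH4goAbOf",'
                     ' "exists": false, "tag": "delete-marker"},'
                     ' {"name": "file_4k", "instance": "", "exists": true,'
                     ' "tag": "_lUcShWIeR6joieGiBsvIBgOOnhMHzBx"}]')

print(suspended_delete_synced_ok(primary_listing))    # True
print(suspended_delete_synced_ok(secondary_listing))  # False
```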


Related issues 2 (2 open, 0 closed)

Copied to rgw - Backport #73684: tentacle: rgw/multisite: object deletion is not properly synced on versioning-suspended bucket (In Progress, Jane Zhu)
Copied to rgw - Backport #73685: squid: rgw/multisite: object deletion is not properly synced on versioning-suspended bucket (In Progress, Jane Zhu)
#1 Updated by Jane Zhu 5 months ago

  • Status changed from In Progress to Fix Under Review
  • Backport set to squid,tentacle
  • Pull request ID set to 65948
#2 Updated by Casey Bodley 5 months ago

  • Status changed from Fix Under Review to Pending Backport
#3 Updated by Upkeep Bot 5 months ago

  • Merge Commit set to ba63fd529e2d062faba7bceb862e92a0ceca4e67
  • Fixed In set to v20.3.0-3821-gba63fd529e
  • Upkeep Timestamp set to 2025-10-30T13:51:15+00:00
#4 Updated by Upkeep Bot 5 months ago

  • Copied to Backport #73684: tentacle: rgw/multisite: object deletion is not properly synced on versioning-suspended bucket added
#5 Updated by Upkeep Bot 5 months ago

  • Copied to Backport #73685: squid: rgw/multisite: object deletion is not properly synced on versioning-suspended bucket added
#6 Updated by Upkeep Bot 5 months ago

  • Tags (freeform) set to backport_processed