From: WeiGuo Ren <rwg1335252904@gmail.com>
To: Ceph Development <ceph-devel@vger.kernel.org>
Subject: Re: In the ceph multisite master-zone, read, write, delete objects, and the master-zone has data remaining
Date: Tue, 16 Mar 2021 10:04:05 +0800	[thread overview]
Message-ID: <CAPy+zYW17u=5mnyx33jODXdMyEQ2dnHWRUHtVW_xmu9+zmSnVA@mail.gmail.com> (raw)
In-Reply-To: <CAPy+zYVsiBspbi28VauMszHRn=a1bqLD06+bTxvvAhXN==5ixQ@mail.gmail.com>

Do we need to solve this problem?

WeiGuo Ren <rwg1335252904@gmail.com> wrote on Wed, Mar 10, 2021 at 4:34 PM:
>
> In my test environment the Ceph version is v14.2.5 and there are two
> rgws, one per zone: rgwA (master zone) and rgwB (slave zone). Cosbench
> reads, writes, and deletes objects through rgwA. The end result is that
> rgwA has leftover data, but rgwB has none.
>
> Looking at the logs afterwards, I found that the following happened:
> 1. When rgwA deletes the object, the rgwA instance has not yet started
> data sync (or the sync is slow), so it has not yet synchronized the
> object from the slave zone.
> 2. When rgwA does start data sync, rgwB has not yet deleted the object.
> In step 2, rgwA fetches the object back from the slave zone and then
> enters the incremental sync state to replay the bilog, but the bilog
> entry for the delete is filtered out because its sync trace already
> contains the master zone.
>
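> To see what gets filtered, the bucket index log can be dumped on each
> zone and the delete entry inspected; the bucket name below is only a
> placeholder for the bucket Cosbench used:
>
>   radosgw-admin bilog list --bucket=<bucket-name>
>
> Each entry records the op type and, as far as I can tell from the code,
> a zones trace; an entry whose trace already contains the master zone is
> skipped when the master replays the slave's bilog.
>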
> Below is a similar reproduction (on both the master branch and ceph
> 14.2.5); a command-level sketch follows the steps. rgwA and rgwB are
> two zones of the same zonegroup, and both are running with
> rgw_run_sync_thread=true.
> t1: on rgwA, set rgw_run_sync_thread=false and restart it for the
> change to take effect. Use s3cmd to create a bucket via rgwA and upload
> object1 via rgwA. Use s3cmd to check whether object1 has been
> synchronized to rgwB, or check that radosgw-admin bucket sync status
> reports caught up. Once it has synced, proceed to the next step.
> t2: on rgwB, set rgw_run_sync_thread=false and restart it for the
> change to take effect. Delete object1 via rgwA.
> t3: on rgwA, set rgw_run_sync_thread=true and restart it for the
> change to take effect. Wait until radosgw-admin bucket sync status
> reports caught up.
> t4: on rgwB, set rgw_run_sync_thread=true and restart it for the
> change to take effect. Wait until radosgw-admin bucket sync status
> reports caught up.
> The result: rgwA still has object1, while rgwB does not.
> This problem is also mentioned in https://tracker.ceph.com/issues/47555
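>
> For reference, the timeline above corresponds roughly to the commands
> below. The daemon IDs (client.rgw.rgwA / client.rgw.rgwB), bucket and
> object names are placeholders rather than the exact ones used here, and
> rgw_run_sync_thread can equally be set in ceph.conf before restarting:
>
>   # t1: disable sync on rgwA, write through rgwA, wait for rgwB to catch up
>   ceph config set client.rgw.rgwA rgw_run_sync_thread false   # then restart rgwA
>   s3cmd mb s3://test-bucket                                   # s3cmd pointed at rgwA's endpoint
>   s3cmd put ./object1 s3://test-bucket/object1
>   radosgw-admin bucket sync status --bucket=test-bucket       # on the slave zone, until caught up
>
>   # t2: disable sync on rgwB, then delete through rgwA
>   ceph config set client.rgw.rgwB rgw_run_sync_thread false   # then restart rgwB
>   s3cmd del s3://test-bucket/object1
>
>   # t3: re-enable sync on rgwA and wait until it reports caught up
>   ceph config set client.rgw.rgwA rgw_run_sync_thread true    # then restart rgwA
>   radosgw-admin bucket sync status --bucket=test-bucket
>
>   # t4: re-enable sync on rgwB and wait until it reports caught up
>   ceph config set client.rgw.rgwB rgw_run_sync_thread true    # then restart rgwB
>   radosgw-admin bucket sync status --bucket=test-bucket
>
>   # result: object1 is present again on rgwA but absent on rgwB
>   s3cmd ls s3://test-bucket                                   # against each zone's endpoint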
>
> Could someone help me? Or, if the bucket on the rgwA instance is not
> yet in the incremental sync state, could we prohibit rgwA from deleting
> object1?
