From: Joseph Qi via Ocfs2-devel <ocfs2-devel@oss.oracle.com>
To: Heming Zhao <heming.zhao@suse.com>, ocfs2-devel@oss.oracle.com
Subject: Re: [Ocfs2-devel] [PATCH 0/1] test case for patch 1/1
Date: Sat, 18 Jun 2022 10:35:22 +0800	[thread overview]
Message-ID: <3144cb10-4359-bb1e-1875-f9803fc9f4a9@linux.alibaba.com> (raw)
In-Reply-To: <20220608104808.18130-1-heming.zhao@suse.com>



On 6/8/22 6:48 PM, Heming Zhao wrote:
> === test cases ====
> 
> <1> remount on local node for cluster env
> 
> mount -t ocfs2 /dev/vdb /mnt
> mount -t ocfs2 /dev/vdb /mnt              <=== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
> 

This is mounting the volume multiple times, not remounting.
I don't see how it relates to your changes.

> <2> remount on local node for nocluster env
> 
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
> mount -t ocfs2 /dev/vdb /mnt              <=== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
> 
> <3> remount on another node for cluster env
> 
> node2:
> mount -t ocfs2 /dev/vdb /mnt
> 
> node1:
> mount -t ocfs2 /dev/vdb /mnt  <== success
> umount
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
> 
> <4> remount on another node for nocluster env
> 
> node2:
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
> 
> node1:
> mount -t ocfs2 /dev/vdb /mnt              <== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success, see below comment
> 
Why allow two nodes to mount successfully in nocluster mode?
Since no cluster lock is enabled, this will corrupt data.

> <5> simulate after crash status for cluster env
> 
> (below all steps did on node1. node2 is unmount status)
> mount -t ocfs2 /dev/vdb /mnt
> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.cluster.mnted
> umount /mnt
> dd if=/root/slotmap.cluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== failure
> mount -t ocfs2 /dev/vdb /mnt && umount /mnt <== clean slot 0
> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== success
> 
> <6> simulate after crash status for nocluster env
> 
> (below all steps did on node1. node2 is unmount status)
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.nocluster.mnted
> umount /mnt
> dd if=/root/slotmap.nocluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
> mount -t ocfs2 /dev/vdb /mnt   <== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt <== clean slot 0
> mount -t ocfs2 /dev/vdb /mnt   <== success
> 
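The save/clobber/restore dance above can be exercised on a scratch file instead of a real device. A minimal sketch (the offset here is arbitrary, unlike 76058624, which is that particular volume's slot map location; a scratch file stands in for /dev/vdb):

```shell
#!/bin/sh
# Sketch: back up and restore an 8-byte region with dd, mirroring the
# slot-map steps in test cases <5> and <6>. Scratch file, arbitrary offset.
set -e
IMG=$(mktemp)
BACKUP=$(mktemp)
OFFSET=64

# Build a 128-byte image and stamp an 8-byte marker at OFFSET.
dd if=/dev/zero of="$IMG" bs=1 count=128 2>/dev/null
printf 'SLOTMAP!' | dd of="$IMG" bs=1 seek="$OFFSET" conv=notrunc 2>/dev/null

# Save the 8 bytes while "mounted" ...
dd if="$IMG" bs=1 count=8 skip="$OFFSET" of="$BACKUP" 2>/dev/null

# ... let the "clean unmount" zero them ...
dd if=/dev/zero of="$IMG" bs=1 seek="$OFFSET" count=8 conv=notrunc 2>/dev/null

# ... then write the saved copy back to simulate the post-crash state.
dd if="$BACKUP" of="$IMG" seek="$OFFSET" bs=1 count=8 conv=notrunc 2>/dev/null

RESTORED=$(dd if="$IMG" bs=1 count=8 skip="$OFFSET" 2>/dev/null)
echo "restored: $RESTORED"
rm -f "$IMG" "$BACKUP"
```

Note that conv=notrunc is needed on the restore so dd does not truncate the image at the end of the written region; the test cases rely on the same behavior when writing back to the block device.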
Is 'bs=1 count=8 skip=76058624' here to back up the slot map?

Thanks,
Joseph

> 
> -----
> For test case <4>, the kernel-side work is done, but there is still
> userspace work to do.
> In my view, mount.ocfs2 needs to add a double confirmation for this scenario.
> 
> current style:
> ```
> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
> Warning: to mount a clustered volume without the cluster stack.
> Please make sure you only mount the file system from one node.
> Otherwise, the file system may be damaged.
> Proceed (y/N): y
> ```
> 
> I plan to change as:
> ```
> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
> Warning: to mount a clustered volume without the cluster stack.
> Please make sure you only mount the file system from one node.
> Otherwise, the file system may be damaged.
> Proceed (y/N): y
> Warning: detected that the volume is already mounted in nocluster mode.
> Did you mount this volume on another node?
> Please confirm you want to mount this volume on this node.
> Proceed (y/N): y
> ```
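A minimal sketch of such a double confirmation, assuming a hypothetical volume_mounted_nocluster helper (not the actual mount.ocfs2 code; the real detection would read the on-disk slot map):

```shell
# Sketch of the proposed double confirmation, not actual mount.ocfs2 code.
# volume_mounted_nocluster is a hypothetical stand-in for reading the
# on-disk slot map and detecting a live nocluster mount elsewhere.
volume_mounted_nocluster() {
    return 0    # assume "yes" for this sketch
}

confirm() {
    printf '%s Proceed (y/N): ' "$1"
    read -r ans
    [ "$ans" = "y" ] || [ "$ans" = "Y" ]
}

nocluster_mount_checks() {
    confirm "Warning: mounting a clustered volume without the cluster stack." \
        || return 1
    if volume_mounted_nocluster; then
        confirm "Warning: volume appears already mounted in nocluster mode." \
            || return 1
    fi
    return 0
}
```

For example, `printf 'y\ny\n' | nocluster_mount_checks` succeeds, while answering "n" to the second prompt aborts the mount.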
> 
> Heming Zhao (1):
>   ocfs2: fix ocfs2_find_slot repeats alloc same slot issue
> 
>  fs/ocfs2/dlmglue.c  |  3 ++
>  fs/ocfs2/ocfs2_fs.h |  3 ++
>  fs/ocfs2/slot_map.c | 70 ++++++++++++++++++++++++++++++++++++---------
>  3 files changed, 62 insertions(+), 14 deletions(-)
> 

_______________________________________________
Ocfs2-devel mailing list
Ocfs2-devel@oss.oracle.com
https://oss.oracle.com/mailman/listinfo/ocfs2-devel

Thread overview: 17+ messages
2022-06-08 10:48 [Ocfs2-devel] [PATCH 0/1] test case for patch 1/1 Heming Zhao via Ocfs2-devel
2022-06-08 10:48 ` [Ocfs2-devel] [PATCH 1/1] ocfs2: fix ocfs2_find_slot repeats alloc same slot issue Heming Zhao via Ocfs2-devel
2022-06-12 14:16   ` Joseph Qi via Ocfs2-devel
2022-06-13  7:59     ` heming.zhao--- via Ocfs2-devel
2022-06-13  8:21       ` Joseph Qi via Ocfs2-devel
2022-06-13  8:48         ` heming.zhao--- via Ocfs2-devel
2022-06-13 15:43           ` Junxiao Bi via Ocfs2-devel
2022-06-14  2:59             ` heming.zhao--- via Ocfs2-devel
2022-06-14  3:27               ` Joseph Qi via Ocfs2-devel
2022-06-14 12:13                 ` heming.zhao--- via Ocfs2-devel
2022-06-15  2:03                   ` Joseph Qi via Ocfs2-devel
2022-06-15  4:37                     ` heming.zhao--- via Ocfs2-devel
2022-06-14  6:22               ` Junxiao Bi via Ocfs2-devel
2022-06-18  2:35 ` Joseph Qi via Ocfs2-devel [this message]
2022-06-18 10:18   ` [Ocfs2-devel] [PATCH 0/1] test case for patch 1/1 heming.zhao--- via Ocfs2-devel
2022-06-25 12:47     ` Joseph Qi via Ocfs2-devel
2022-06-25 15:02       ` heming.zhao--- via Ocfs2-devel
