From: Joseph Qi via Ocfs2-devel <ocfs2-devel@oss.oracle.com>
To: "heming.zhao@suse.com" <heming.zhao@suse.com>,
	ocfs2-devel@oss.oracle.com
Subject: Re: [Ocfs2-devel] [PATCH 0/1] test case for patch 1/1
Date: Sat, 25 Jun 2022 20:47:49 +0800
Message-ID: <070965a8-1b67-b190-310c-f65005621142@linux.alibaba.com>
In-Reply-To: <f0d9f738-a9ed-0b78-f989-a5b228d0233e@suse.com>



On 6/18/22 6:18 PM, heming.zhao@suse.com wrote:
> On 6/18/22 10:35, Joseph Qi wrote:
>>
>>
>> On 6/8/22 6:48 PM, Heming Zhao wrote:
>>> === test cases ====
>>>
>>> <1> remount on local node for cluster env
>>>
>>> mount -t ocfs2 /dev/vdb /mnt
>>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>>
>>
>> This is mounting multiple times, not remounting.
>> I don't see how this relates to your changes.
> 
> Yes, it's not related to my patch.
> I included this test only to observe the remount result.
> 
>>
>>> <2> remount on local node for nocluster env
>>>
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>>
>>> <3> remount on another node for cluster env
>>>
>>> node2:
>>> mount -t ocfs2 /dev/vdb /mnt
>>>
>>> node1:
>>> mount -t ocfs2 /dev/vdb /mnt  <== success
>>> umount
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
>>>
>>> <4> remount on another node for nocluster env
>>>
>>> node2:
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>>
>>> node1:
>>> mount -t ocfs2 /dev/vdb /mnt              <== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success, see below comment
>>>
>> Why allow two nodes to mount with nocluster successfully?
>> Since no cluster lock is enabled, it will corrupt data.
> 
> I didn't know about the ext4 MMP feature at that time. Since ext4 allows mounting on
> different machines (which can corrupt data), ocfs2 also allows this case to happen.
> But following ext4 MMP, we had better add some similar code to block
> multiple mounts.
>>
>>> <5> simulate after crash status for cluster env
>>>
>>> (all steps below were done on node1; node2 stays unmounted)
>>> mount -t ocfs2 /dev/vdb /mnt
>>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.cluster.mnted
>>> umount /mnt
>>> dd if=/root/slotmap.cluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== failure
>>> mount -t ocfs2 /dev/vdb /mnt && umount /mnt <== clean slot 0
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== success
>>>
>>> <6> simulate after crash status for nocluster env
>>>
>>> (all steps below were done on node1; node2 stays unmounted)
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.nocluster.mnted
>>> umount /mnt
>>> dd if=/root/slotmap.nocluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>>> mount -t ocfs2 /dev/vdb /mnt   <== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt <== clean slot 0
>>> mount -t ocfs2 /dev/vdb /mnt   <== success
>>>
>> 'bs=1 count=8 skip=76058624', is this for slotmap backup?
> 
> Sorry, I forgot to explain the meaning of this magic number. Your guess is right.
> 
> How to calculate it:
> ```
> My test disk is a 500M raw file, attached to kvm-qemu in shared mode.
> (My env) block size: 1K, cluster size: 4K, '//slot_map' inode number: 0xD.
> debugfs: stat //slot_map
>         Inode: 13   Mode: 0644   Generation: 4183895025 (0xf9612bf1)
>         FS Generation: 4183895025 (0xf9612bf1)
>         CRC32: 00000000   ECC: 0000
>         Type: Regular   Attr: 0x0   Flags: Valid System
>         Dynamic Features: (0x0)
>         User: 0 (root)   Group: 0 (root)   Size: 4096
>         Links: 1   Clusters: 1
>         ctime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
>         atime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
>         mtime: 0x62286e4a 0x0 -- Wed Mar  9 17:07:22.0 2022
>         dtime: 0x0 -- Thu Jan  1 08:00:00 1970
>         Refcount Block: 0
>         Last Extblk: 0   Orphan Slot: 0
>         Sub Alloc Slot: Global   Sub Alloc Bit: 5
>         Tree Depth: 0   Count: 51   Next Free Rec: 1
>         ## Offset        Clusters       Block#          Flags
>         0  0             1              74276           0x0
> 
> 74276 * 1024 => 76058624 (0x4889000)
> ```
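> 
> In other words, the byte offset is just the extent's Block# times the fs
> block size. A tiny shell check with the values from my env above:
> ```
> BLOCK=74276   # Block# of //slot_map's first extent, from debugfs above
> BSIZE=1024    # fs block size in my env
> echo $((BLOCK * BSIZE))   # prints 76058624, the dd skip/seek value
> ```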
> 
> Finally, do you think I could send a v2 patch which includes part of the ext4 MMP
> feature? I plan to copy ext4_multi_mount_protect(), but won't include the code that
> spawns the kmmpd kthread.
> BTW, to be honest, I don't fully get the idea of kmmpd. kmmpd periodically
> updates/checks the MMP area, while ext4_multi_mount_protect() already blocks new
> mount attempts. So in my view, the kmmpd update/check actions only matter when a
> user or something else directly modifies the on-disk MMP area. If my guess is
> right, kmmpd is not necessary.
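> 
> To make that concrete, here is a rough userspace sketch of the mount-time
> MMP-style check I have in mind (the offset and the 8-byte sequence field
> are hypothetical, not a real ocfs2 on-disk field):
> ```
> DEV=/dev/vdb
> MMP_OFF=76058624    # hypothetical offset of an 8-byte sequence field
> read_seq() { dd if=$DEV bs=1 count=8 skip=$MMP_OFF 2>/dev/null | od -An -tx1; }
> 
> seq1=$(read_seq)
> sleep 5             # wait one check interval
> seq2=$(read_seq)
> if [ "$seq1" != "$seq2" ]; then
>     echo "MMP sequence changed: another node has a live mount" >&2
>     exit 1
> fi
> # safe to mount; a kmmpd-like writer would then keep bumping the
> # sequence so other nodes' mount-time checks can see this mount is alive
> ```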
> 

Sorry for the late reply.
Since this feature is currently incomplete and o2cb is the default stack, I'd
like to take Junxiao's suggestion that we revert this feature first to quickly
fix the regression.
We can take it up again in the future once the feature is mature.

Thanks,
Joseph

