From: Joseph Qi via Ocfs2-devel <ocfs2-devel@oss.oracle.com>
To: Heming Zhao <heming.zhao@suse.com>, ocfs2-devel@oss.oracle.com
Subject: Re: [Ocfs2-devel] [PATCH 1/2] ocfs2: fix jbd2 assertion in defragment path
Date: Fri, 10 Jun 2022 15:46:07 +0800
Message-ID: <729b67e8-51f0-87e1-e157-bcfea6f4b503@linux.alibaba.com>
In-Reply-To: <20220604000828.2b3arxq6txtq22d4@c73>



On 6/4/22 8:08 AM, Heming Zhao wrote:
> On Thu, Jun 02, 2022 at 05:34:18PM +0800, Joseph Qi wrote:
>> Hi,
>>
>> Sorry for the late response since I was busy with other things...
>>
>> On 5/21/22 6:14 PM, Heming Zhao wrote:
>>> Running defragfs triggers a jbd2 ASSERT.
>>>
>>> code path:
>>>
>>> ocfs2_ioctl_move_extents
>>>  ocfs2_move_extents
>>>   __ocfs2_move_extents_range
>>>    ocfs2_defrag_extent
>>>     __ocfs2_move_extent
>>>      + ocfs2_journal_access_di
>>>      + ocfs2_split_extent  //[1]
>>>      + ocfs2_journal_dirty //crash
>>>
>>> [1]: ocfs2_split_extent calls ocfs2_extend_trans, which commits the
>>> dirty buffers and then restarts the transaction. The restarted
>>> transaction triggers the jbd2 ASSERT.
>>>
>>> crash stacks:
>>>
>>> PID: 11297  TASK: ffff974a676dcd00  CPU: 67  COMMAND: "defragfs.ocfs2"
>>>  #0 [ffffb25d8dad3900] machine_kexec at ffffffff8386fe01
>>>  #1 [ffffb25d8dad3958] __crash_kexec at ffffffff8395959d
>>>  #2 [ffffb25d8dad3a20] crash_kexec at ffffffff8395a45d
>>>  #3 [ffffb25d8dad3a38] oops_end at ffffffff83836d3f
>>>  #4 [ffffb25d8dad3a58] do_trap at ffffffff83833205
>>>  #5 [ffffb25d8dad3aa0] do_invalid_op at ffffffff83833aa6
>>>  #6 [ffffb25d8dad3ac0] invalid_op at ffffffff84200d18
>>>     [exception RIP: jbd2_journal_dirty_metadata+0x2ba]
>>>     RIP: ffffffffc09ca54a  RSP: ffffb25d8dad3b70  RFLAGS: 00010207
>>>     RAX: 0000000000000000  RBX: ffff9706eedc5248  RCX: 0000000000000000
>>>     RDX: 0000000000000001  RSI: ffff97337029ea28  RDI: ffff9706eedc5250
>>>     RBP: ffff9703c3520200   R8: 000000000f46b0b2   R9: 0000000000000000
>>>     R10: 0000000000000001  R11: 00000001000000fe  R12: ffff97337029ea28
>>>     R13: 0000000000000000  R14: ffff9703de59bf60  R15: ffff9706eedc5250
>>>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
>>>  #7 [ffffb25d8dad3ba8] ocfs2_journal_dirty at ffffffffc137fb95 [ocfs2]
>>>  #8 [ffffb25d8dad3be8] __ocfs2_move_extent at ffffffffc139a950 [ocfs2]
>>>  #9 [ffffb25d8dad3c80] ocfs2_defrag_extent at ffffffffc139b2d2 [ocfs2]
>>>
>>> The fix is simple: ocfs2_split_extent has three internal paths, and
>>> all of them already do their own journal access/dirty. We only need
>>> to remove the journal access/dirty pair from __ocfs2_move_extent.
>>>
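
For reference, the hazard described in the quoted commit message boils down
to roughly the following pattern (a sketch only; the helpers are the ones
declared in fs/ocfs2/journal.h, while the handle/bh/argument names here are
illustrative):

	/* join di_bh to the currently running transaction */
	ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh,
				      OCFS2_JOURNAL_ACCESS_WRITE);

	/*
	 * ocfs2_split_extent() may call ocfs2_extend_trans(), which can
	 * restart the handle via jbd2: the running transaction is
	 * committed and a new one is started underneath us.
	 */
	ocfs2_split_extent(handle, et, path, index, &rec, meta_ac, dealloc);

	/*
	 * di_bh is no longer attached to the new running transaction, so
	 * this hits the J_ASSERT in jbd2_journal_dirty_metadata().
	 */
	ocfs2_journal_dirty(handle, di_bh);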
>>
>> I am not sure what you mean by "all three paths have journal access/
>> dirty pair".
> 
> I am a newbie to ocfs2 & jbd2, so I can't be sure this [1/2] patch is correct.
> Below is my analysis.
> 
> All three paths (below 1.[123]):
> 
> __ocfs2_move_extent
>  + ocfs2_journal_access_di
>  |
>  + ocfs2_split_extent           //[1]
>  |  + ocfs2_replace_extent_rec  //[1.1]
>  |  + ocfs2_split_and_insert    //[1.2]
>  |  + ocfs2_try_to_merge_extent //[1.3]
>  |
>  + + ocfs2_journal_dirty       //crash
> 
> Each of the three paths has its own journal access/dirty pair, so the
> access/dirty pair in the caller __ocfs2_move_extent is unnecessary.
> 

ocfs2_journal_access_xxx() notifies jbd2 that a specific operation will be
done on a bh. As I said in my last mail, the two pairs are for different bhs.
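
In other words, the expected pattern for each bh is roughly the following
(sketch only, variable names illustrative):

	/* declare to jbd2, up front, that this bh will be modified as
	 * part of the running transaction */
	ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh,
				      OCFS2_JOURNAL_ACCESS_WRITE);
	if (ret)
		goto out;

	/* ... modify the struct ocfs2_dinode behind di_bh ... */

	/* tell jbd2 the modification is done so it gets journalled */
	ocfs2_journal_dirty(handle, di_bh);

As noted above, the pair in __ocfs2_move_extent() is on the dinode bh, while
the pairs inside the ocfs2_split_extent() paths are on other buffers.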

>>
>> It seems we can't do it your way, since journal access uses a different
>> ocfs2_trigger with a different offset for each block type, e.g. a dinode
>> is different from an extent block.
> 
> There are 3 ocfs2_split_extent callers:
>  fs/ocfs2/alloc.c <<ocfs2_change_extent_flag>>
>  fs/ocfs2/move_extents.c <<__ocfs2_move_extent>>
>  fs/ocfs2/refcounttree.c <<ocfs2_clear_ext_refcount>>
> 
> We can see that, except for the defrag flow (caller __ocfs2_move_extent),
> the other two do not wrap ocfs2_split_extent in a jbd2 access/dirty pair.
> 

Not exactly, you have to take a look at the whole flow.

Thanks,
Joseph


Thread overview: 10+ messages
2022-05-21 10:14 [Ocfs2-devel] [PATCH 1/2] ocfs2: fix jbd2 assertion in defragment path Heming Zhao via Ocfs2-devel
2022-05-21 10:14 ` [Ocfs2-devel] [PATCH 2/2] ocfs2: fix for local alloc window restore unconditionally Heming Zhao via Ocfs2-devel
2022-06-02 10:02   ` Joseph Qi via Ocfs2-devel
2022-06-12  2:57   ` Joseph Qi via Ocfs2-devel
2022-06-12  7:45     ` heming.zhao--- via Ocfs2-devel
2022-06-12 12:38       ` Joseph Qi via Ocfs2-devel
2022-06-13  1:48         ` heming.zhao--- via Ocfs2-devel
2022-06-02  9:34 ` [Ocfs2-devel] [PATCH 1/2] ocfs2: fix jbd2 assertion in defragment path Joseph Qi via Ocfs2-devel
2022-06-04  0:08   ` Heming Zhao via Ocfs2-devel
2022-06-10  7:46     ` Joseph Qi via Ocfs2-devel [this message]
