ocfs2-devel.oss.oracle.com archive mirror
From: Joseph Qi <joseph.qi@linux.alibaba.com>
To: ocfs2-devel@oss.oracle.com
Subject: [Ocfs2-devel] [PATCH] ocfs2: initialize ip_next_orphan
Date: Mon, 2 Nov 2020 09:40:01 +0800
Message-ID: <d0045479-2766-b1e5-d664-5a6ba95f29f9@linux.alibaba.com>
In-Reply-To: <04a41689-835b-5a6f-a2bd-f5c8df7a8b32@oracle.com>

On 2020/10/30 23:32, Wengang Wang wrote:
> Thanks for the review, Joseph.
> Please see inline:
> On 10/29/20 10:55 PM, Joseph Qi wrote:
>> On 2020/10/30 05:04, Wengang Wang wrote:
>>> Though the problem was found on an older 4.1.12 kernel, I think upstream
>>> has the same issue.
>>> On one node in the cluster, there is the following call trace:
>>> # cat /proc/21473/stack
>>> [<ffffffffc09a2f06>] __ocfs2_cluster_lock.isra.36+0x336/0x9e0 [ocfs2]
>>> [<ffffffffc09a4481>] ocfs2_inode_lock_full_nested+0x121/0x520 [ocfs2]
>>> [<ffffffffc09b2ce2>] ocfs2_evict_inode+0x152/0x820 [ocfs2]
>>> [<ffffffff8122b36e>] evict+0xae/0x1a0
>>> [<ffffffff8122bd26>] iput+0x1c6/0x230
>>> [<ffffffffc09b60ed>] ocfs2_orphan_filldir+0x5d/0x100 [ocfs2]
>>> [<ffffffffc0992ae0>] ocfs2_dir_foreach_blk+0x490/0x4f0 [ocfs2]
>>> [<ffffffffc099a1e9>] ocfs2_dir_foreach+0x29/0x30 [ocfs2]
>>> [<ffffffffc09b7716>] ocfs2_recover_orphans+0x1b6/0x9a0 [ocfs2]
>>> [<ffffffffc09b9b4e>] ocfs2_complete_recovery+0x1de/0x5c0 [ocfs2]
>>> [<ffffffff810a1399>] process_one_work+0x169/0x4a0
>>> [<ffffffff810a1bcb>] worker_thread+0x5b/0x560
>>> [<ffffffff810a7a2b>] kthread+0xcb/0xf0
>>> [<ffffffff816f5d21>] ret_from_fork+0x61/0x90
>>> [<ffffffffffffffff>] 0xffffffffffffffff
>>> The above stack is not reasonable; the final iput shouldn't happen in
>>> the ocfs2_orphan_filldir() function. Looking at the code,
>>> 2067         /* Skip inodes which are already added to recover list, since dio may
>>> 2068          * happen concurrently with unlink/rename */
>>> 2069         if (OCFS2_I(iter)->ip_next_orphan) {
>>> 2070                 iput(iter);
>>> 2071                 return 0;
>>> 2072         }
>>> 2073
>>> The logic assumes the inode is already in the recover list on seeing a
>>> non-NULL ip_next_orphan, so it skips this inode after dropping the
>>> reference that was taken in ocfs2_iget().
>>> However, if the inode were really in the recover list, it would hold
>>> another reference, and the iput() at line 2070 would not be the final
>>> iput (dropping the last reference). So I don't think the inode is
>>> really in the recover list (no vmcore to confirm).
>>> Note that ocfs2_queue_orphans(), though not shown in the call trace, is
>>> holding the cluster lock on the orphan directory while looking up
>>> unlinked inodes. Evicting the on-disk inode could involve a lot of IO,
>>> which may take a long time to finish. That means this node could hold
>>> the cluster lock for a very long time, which can leave lock requests
>>> (from other nodes) on the orphan directory hanging for a long time.
>>> Looking more closely at ip_next_orphan, I found it is not initialized
>>> when allocating a new ocfs2_inode_info structure.
>> I don't see how these are related.
> If not initialized, ip_next_orphan could be any value. When it's an arbitrary value rather than zero (NULL), the problem would appear (at lines 2069 and 2070).
> But what I am curious about is why this problem wasn't hit much earlier. I hope I can find the answer here.
>> And AFAIK, ip_next_orphan will be initialized during ocfs2_queue_orphans().
> I am not seeing it initialized in ocfs2_queue_orphans() in the v5.10-rc1 source. Can you provide more details on where it is initialized?

I thought it was initialized by ocfs2_queue_orphans() ->
ocfs2_orphan_filldir(). But taking a closer look at the code, that
assignment happens after the check you pasted above, so you are right.
I also have the same question now: why haven't we encountered this
before, since recovery is a very common case for us?
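For context, a paraphrased sketch of the ordering in ocfs2_orphan_filldir()
(fs/ocfs2/journal.c, trimmed and not verbatim): the early-return check on
ip_next_orphan sits before the point where the field is first assigned, so
the function itself never initializes the field for an inode that takes the
early return.

	/* Skip inodes which are already added to recover list, since dio may
	 * happen concurrently with unlink/rename */
	if (OCFS2_I(iter)->ip_next_orphan) {
		iput(iter);             /* the iput() seen in the stack above */
		return 0;
	}

	/* ... */

	/* Only here, after the check, is ip_next_orphan assigned and the
	 * inode linked onto the recovery list. */
	OCFS2_I(iter)->ip_next_orphan = p->head;
	p->head = iter;

	return 0;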

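If the field really is left uninitialized at allocation time, one minimal
fix in the direction the patch subject suggests would be to clear it when
the ocfs2_inode_info slab object is set up. A sketch only, assuming
ocfs2_inode_init_once() in fs/ocfs2/super.c is the place (the posted patch
itself is not quoted in this message):

	/* Illustrative sketch, not the patch as posted: clear ip_next_orphan
	 * together with the other fields in the ocfs2_inode_info slab
	 * constructor so it can never start out as slab garbage. */
	static void ocfs2_inode_init_once(void *data)
	{
		struct ocfs2_inode_info *oi = data;

		oi->ip_flags = 0;
		oi->ip_open_count = 0;
		oi->ip_next_orphan = NULL;	/* proposed addition */

		/* ... existing initialization of the remaining fields ... */

		inode_init_once(&oi->vfs_inode);
	}

Note that a slab constructor runs only when an object is first created, not
on every reuse, so this relies on ip_next_orphan always being reset to NULL
before an inode goes back to the cache (as ocfs2_recover_orphans() does once
it walks the list); clearing it per-iget would be the more defensive
alternative.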


Thread overview: 9+ messages
2020-10-29 21:04 [Ocfs2-devel] [PATCH] ocfs2: initialize ip_next_orphan Wengang Wang
2020-10-30  5:55 ` Joseph Qi
2020-10-30 15:32   ` Wengang Wang
2020-11-02  1:40     ` Joseph Qi [this message]
2020-11-02 16:40       ` Wengang Wang
2020-11-03 21:53         ` Wengang Wang
2020-11-06 16:47         ` Wengang Wang
2020-11-09  1:58           ` Joseph Qi
2020-11-09 16:51             ` Wengang Wang
