From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wengang Wang
Date: Mon, 9 Nov 2020 09:17:46 -0800
Subject: [Ocfs2-devel] [PATCH V2] ocfs2: initialize ip_next_orphan
Message-ID: <20201109171746.27884-1-wen.gang.wang@oracle.com>
List-Id:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: ocfs2-devel@oss.oracle.com

Though the problem was found on an older 4.1.12 kernel, I think upstream
has the same issue.

On one node in the cluster, there is the following call trace:

# cat /proc/21473/stack
[] __ocfs2_cluster_lock.isra.36+0x336/0x9e0 [ocfs2]
[] ocfs2_inode_lock_full_nested+0x121/0x520 [ocfs2]
[] ocfs2_evict_inode+0x152/0x820 [ocfs2]
[] evict+0xae/0x1a0
[] iput+0x1c6/0x230
[] ocfs2_orphan_filldir+0x5d/0x100 [ocfs2]
[] ocfs2_dir_foreach_blk+0x490/0x4f0 [ocfs2]
[] ocfs2_dir_foreach+0x29/0x30 [ocfs2]
[] ocfs2_recover_orphans+0x1b6/0x9a0 [ocfs2]
[] ocfs2_complete_recovery+0x1de/0x5c0 [ocfs2]
[] process_one_work+0x169/0x4a0
[] worker_thread+0x5b/0x560
[] kthread+0xcb/0xf0
[] ret_from_fork+0x61/0x90
[] 0xffffffffffffffff

The above stack is not reasonable; the final iput() should not happen
inside ocfs2_orphan_filldir(). Looking at the code,

2067         /* Skip inodes which are already added to recover list, since dio may
2068          * happen concurrently with unlink/rename */
2069         if (OCFS2_I(iter)->ip_next_orphan) {
2070                 iput(iter);
2071                 return 0;
2072         }
2073

The logic takes a non-NULL ip_next_orphan to mean the inode is already
in the recover list, so it skips this inode after dropping the
reference that was taken in ocfs2_iget(). However, if the inode really
were in the recover list, it would hold another reference, and the
iput() at line 2070 would not be the final iput() (the one dropping the
last reference). So I don't think the inode is really in the recover
list (no vmcore to confirm).

Note that ocfs2_queue_orphans(), though it does not show up in the call
trace, holds the cluster lock on the orphan directory while looking up
unlinked inodes. Evicting the on-disk inode can involve a lot of I/O,
which may take a long time to finish. That means this node could hold
the cluster lock for a very long time, which can cause lock requests
(from other nodes) against the orphan directory to hang for a long
time.

Looking further at ip_next_orphan, I found it is not initialized when a
new ocfs2_inode_info structure is allocated.

Fix: initialize ip_next_orphan to NULL.

Signed-off-by: Wengang Wang
---
v1 -> v2: move the initialization of ip_next_orphan earlier.
---
 fs/ocfs2/super.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 1d91dd1e8711..2febc76e9de7 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -1713,6 +1713,7 @@ static void ocfs2_inode_init_once(void *data)
 
 	oi->ip_blkno = 0ULL;
 	oi->ip_clusters = 0;
+	oi->ip_next_orphan = NULL;
 
 	ocfs2_resv_init_once(&oi->ip_la_data_resv);
 
-- 
2.21.0
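
To make the failure mode concrete outside the kernel, here is a minimal
userspace sketch (my own illustration, not ocfs2 code; fake_inode,
queue_orphan and the block numbers are made up). The orphan scan uses
the next-pointer field itself as the "already queued" test, so a stale
non-NULL value makes a freshly looked-up inode appear queued and the
reference gets dropped, just like the spurious iput() in
ocfs2_orphan_filldir():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_inode {
	unsigned long long blkno;
	struct fake_inode *next_orphan;	/* plays the role of ip_next_orphan */
};

static struct fake_inode *recover_head;	/* head of the recovery list */

/*
 * Mirrors the check quoted at lines 2067-2072: a non-NULL next pointer
 * is read as "already queued" and the inode is skipped (at that point
 * ocfs2 drops a reference with iput()).
 */
static void queue_orphan(struct fake_inode *in)
{
	if (in->next_orphan) {
		printf("inode %llu skipped: looks already queued\n",
		       in->blkno);
		return;
	}
	in->next_orphan = recover_head;
	recover_head = in;
	printf("inode %llu queued for recovery\n", in->blkno);
}

int main(void)
{
	struct fake_inode *in = malloc(sizeof(*in));

	/*
	 * malloc() contents are indeterminate, like a slab object whose
	 * constructor never touched the field; force stale non-zero
	 * bytes so the demo is deterministic.
	 */
	memset(in, 0xa5, sizeof(*in));
	in->blkno = 100;

	queue_orphan(in);		/* garbage pointer => wrongly skipped */

	in->next_orphan = NULL;		/* what the fix guarantees */
	queue_orphan(in);		/* now correctly queued */

	free(in);
	return 0;
}

The first queue_orphan() call is wrongly skipped because the field
still holds garbage; after the field starts out NULL, the inode is
queued as intended.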
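
One note on why initializing the field in ocfs2_inode_init_once() is
enough: init_once is the slab constructor, which runs once when an
object is first set up in a slab page, not on every allocation, so a
field it initializes must be back at its constructor value by the time
the object is freed. If I read the recovery loop correctly,
ocfs2_recover_orphans() clears ip_next_orphan again while walking the
list, so the invariant holds. A rough sketch of that contract (the
kmem_cache_create() call below is paraphrased, with the flags
abbreviated, not copied from fs/ocfs2/super.c):

/*
 * The ctor passed to kmem_cache_create() runs once per object when a
 * slab page is populated, NOT on every kmem_cache_alloc(), so every
 * field it initializes must be restored before the object is freed.
 */
ocfs2_inode_cachep = kmem_cache_create("ocfs2_inode_cache",
				       sizeof(struct ocfs2_inode_info),
				       0, SLAB_HWCACHE_ALIGN,
				       ocfs2_inode_init_once);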