From mboxrd@z Thu Jan 1 00:00:00 1970
From: Likai <li.kai4@h3c.com>
To: Changwei Ge, mark@fasheh.com, jlbec@evilplan.org, joseph.qi@linux.alibaba.com
Cc: ocfs2-devel@oss.oracle.com, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] ocfs2: call journal flush to mark journal as empty after
 journal recovery when mount
Date: Thu, 12 Dec 2019 02:27:55 +0000
Message-ID: <4f5b0bc5e63c4a7b963b54b488589785@h3c.com>
References: <20191211100338.510-1-li.kai4@h3c.com>

Hi Changwei,

Thanks for your comments. I will modify the log info inline.

Thanks.

On 2019/12/11 20:48, Changwei Ge wrote:
> Hi Kai,
>
> Now the problem is much clearer to me.
>
> Just a few comments inline.
>
> On 12/11/19 6:03 PM, Kai Li wrote:
>> If the journal is dirty at mount time, it will be replayed, but the
>> log tail in the jbd2 superblock cannot be updated to mark the new
>> start, because journal->j_flags has already been set to JBD2_ABORT
>> in journal_init_common. When a new transaction is committed, it will
>> be recorded starting at block 1 (journal->j_tail is set to 1 in
>> journal_reset).
>>
>> If an emergency restart happens again before the journal superblock
>> is updated, the newly recorded transactions will unfortunately not be
>> replayed on the next mount.
> So this issue happens before the first commit of the log (run by the
> jbd2 kernel thread), right?
> This causes jbd2 to lose the chance to update the *on-disk* journal
> superblock, while the dynamic status of the journal still remains in
> memory.
>
> Yes, I agree. We should update the on-disk journal superblock after
> journal recovery, since that procedure doesn't involve the in-memory
> journal status, which has just been reset to the very beginning of the
> journal region (tail and head pointing to the first block).
>
> Moreover, during journal recovery performed by peer nodes, ocfs2
> already does what this patch does:
>
> ocfs2_recover_node() -> ocfs2_replay_journal()
>
> Very good job catching such an exception. :-)
>
>
>> This exception happens when the LUN is used by only one node. If it
>> is used by multiple nodes, another node will replay its journal and
>> its journal superblock will be updated after recovery.
>>
>> To fix this problem, use jbd2_journal_flush to mark the journal as
>> empty, as ocfs2_replay_journal already does.
>>
>> The following jbd2 journal can be generated by touching a new file
>> after the journal is replayed; seq 15 is the first valid commit, but
>> the first seq recorded in the journal superblock is 13.
>>
>> logdump:
>> Block 0: Journal Superblock
>> 	Seq: 0   Type: 4 (JBD2_SUPERBLOCK_V2)
>> 	Blocksize: 4096   Total Blocks: 32768   First Block: 1
>> 	First Commit ID: 13   Start Log Blknum: 1
>> 	Error: 0
>> 	Feature Compat: 0
>> 	Feature Incompat: 2 block64
>> 	Feature RO compat: 0
>> 	Journal UUID: 4ED3822C54294467A4F8E87D2BA4BC36
>> 	FS Share Cnt: 1   Dynamic Superblk Blknum: 0
>> 	Per Txn Block Limit   Journal: 0   Data: 0
>>
>> Block 1: Journal Commit Block
>> 	Seq: 14   Type: 2 (JBD2_COMMIT_BLOCK)
>>
>> Block 2: Journal Descriptor
>> 	Seq: 15   Type: 1 (JBD2_DESCRIPTOR_BLOCK)
>> 	No. Blocknum   Flags
>> 	 0. 587        none
>> 	UUID: 00000000000000000000000000000000
>> 	 1. 8257792    JBD2_FLAG_SAME_UUID
>> 	 2. 619        JBD2_FLAG_SAME_UUID
>> 	 3. 24772864   JBD2_FLAG_SAME_UUID
>> 	 4. 8257802    JBD2_FLAG_SAME_UUID
>> 	 5. 513        JBD2_FLAG_SAME_UUID JBD2_FLAG_LAST_TAG
>> ...
>> Block 7: Inode
>> 	Inode: 8257802   Mode: 0640   Generation: 57157641 (0x3682809)
>> 	FS Generation: 2839773110 (0xa9437fb6)
>> 	CRC32: 00000000   ECC: 0000
>> 	Type: Regular   Attr: 0x0   Flags: Valid
>> 	Dynamic Features: (0x1) InlineData
>> 	User: 0 (root)   Group: 0 (root)   Size: 7
>> 	Links: 1   Clusters: 0
>> 	ctime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
>> 	atime: 0x5de5d870 0x113181a1 -- Tue Dec  3 11:37:20.288457121 2019
>> 	mtime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
>> 	dtime: 0x0 -- Thu Jan  1 08:00:00 1970
>> ...
>> Block 9: Journal Commit Block
>> 	Seq: 15   Type: 2 (JBD2_COMMIT_BLOCK)
>>
>> The following is the journal recovery log when the above jbd2 journal
>> is recovered on the next mount.
>>
>> syslog:
>> [ 2265.648622] ocfs2: File system on device (252,1) was not unmounted cleanly, recovering it.
>> [ 2265.649695] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 0
>> [ 2265.650407] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 1
>> [ 2265.650409] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 2
>> [ 2265.650410] fs/jbd2/recovery.c:(jbd2_journal_recover, 278): JBD2: recovery, exit status 0, recovered transactions 13 to 13
>>
>> Because the first commit seq 13 recorded in the journal superblock is
>> not consistent with the value recorded in block 1 (seq 14), journal
>> recovery will be terminated before seq 15 even though it is an
>> unbroken commit; inode 8257802 is a new file and it will be lost.
>>
>> Signed-off-by: Kai Li <li.kai4@h3c.com>
>> ---
>>   fs/ocfs2/journal.c | 8 ++++++++
>>   1 file changed, 8 insertions(+)
>>
>> diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
>> index 1afe57f425a0..b8b9d26fa731 100644
>> --- a/fs/ocfs2/journal.c
>> +++ b/fs/ocfs2/journal.c
>> @@ -1066,6 +1066,14 @@ int ocfs2_journal_load(struct ocfs2_journal *journal, int local, int replayed)
>>
>>  	ocfs2_clear_journal_error(osb->sb, journal->j_journal, osb->slot_num);
>>
>> +	if (replayed) {
>> +		/* wipe the journal */

mlog(ML_NOTICE, "journal recovery complete");

The reason for keeping a notice here is that when the journal is dirty
we have already printed "File system on device (*) was not unmounted
cleanly, recovering it" earlier, so this line tells the user that the
recovery is done.

>> +		jbd2_journal_lock_updates(journal->j_journal);
>> +		status = jbd2_journal_flush(journal->j_journal);
>> +		jbd2_journal_unlock_updates(journal->j_journal);

I will add the following here, because if the flush fails it may cause
files to be lost in the future:

	if (status < 0)
		mlog_errno(status);

>> +		mlog(ML_NOTICE, "journal recovery complete, status=%d", status);
> Is the above mlog line necessary?
> Can we just log a warning only when jbd2_journal_flush() fails and let
> the mount continue? IMO, the mlog line can't help much.
>
> Otherwise, this patch looks good to me.
>
>
> Thanks,
> Changwei
>
>
>> +	}
>> +
>>  	status = ocfs2_journal_toggle_dirty(osb, 1, replayed);
>>  	if (status < 0) {
>>  		mlog_errno(status);
>>
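
To make it concrete, below is a rough sketch of how I plan to rework the
hunk for v2, based on your suggestion to drop the unconditional notice
and only log when the flush fails. This is just a sketch, not yet
compiled or tested, so the final version may differ slightly:

	if (replayed) {
		/*
		 * Wipe the journal, as ocfs2_replay_journal() does when it
		 * recovers another node's slot, so that the on-disk journal
		 * superblock is marked empty.  Otherwise a crash before the
		 * first commit after this mount could leave the superblock
		 * pointing at a stale sequence and valid transactions would
		 * be skipped by the next recovery.
		 */
		jbd2_journal_lock_updates(journal->j_journal);
		status = jbd2_journal_flush(journal->j_journal);
		jbd2_journal_unlock_updates(journal->j_journal);
		if (status < 0)
			mlog_errno(status);
	}

If the flush fails we only log the error and let the mount continue,
which, as far as I can tell, matches what ocfs2_replay_journal() already
does in that situation.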