From: Zhihao Cheng <chengzhihao1@huawei.com>
To: <richard@nod.at>, <miquel.raynal@bootlin.com>, <vigneshr@ti.com>,
	<mcoquelin.stm32@gmail.com>, <kirill.shutemov@linux.intel.com>,
	<s.hauer@pengutronix.de>
Cc: <linux-mtd@lists.infradead.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH v6 15/15] ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty
Date: Mon, 27 Dec 2021 11:22:46 +0800	[thread overview]
Message-ID: <20211227032246.2886878-16-chengzhihao1@huawei.com> (raw)
In-Reply-To: <20211227032246.2886878-1-chengzhihao1@huawei.com>

There are at least 6 PEBs reserved on a UBI device:
1. EBA_RESERVED_PEBS[1]
2. WL_RESERVED_PEBS[1]
3. UBI_LAYOUT_VOLUME_EBS[2]
4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS +
WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1])
free PEBs left. Since f9c34bb529975fe ("ubi: Fix producing anchor PEBs")
and 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear
level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1] -
FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB left to fill pool and wl_pool; after
filling pool, wl_pool is always empty. So UBI can get stuck in an infinite
loop:

	ubi_thread	   system_wq
wear_leveling_worker <--------------------------------------------------
  get_peb_for_wl							|
    // fm_wl_pool, used = size = 0					|
    schedule_work(&ubi->fm_work)					|
									|
		    update_fastmap_work_fn				|
		      ubi_update_fastmap				|
			ubi_refill_pools				|
			// ubi->free_count - ubi->beb_rsvd_pebs < 5	|
			// wl_pool is not filled with any PEBs		|
			schedule_erase(old_fm_anchor)			|
			ubi_ensure_anchor_pebs				|
			  __schedule_ubi_work(wear_leveling_worker)	|
									|
__erase_worker								|
  ensure_wear_leveling							|
    __schedule_ubi_work(wear_leveling_worker) --------------------------

which causes high CPU usage of ubi_bgt:
top - 12:10:42 up 5 min,  2 users,  load average: 1.76, 0.68, 0.27
Tasks: 123 total,   3 running,  54 sleeping,   0 stopped,   0 zombie

  PID USER PR   NI VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1589 root 20   0   0      0      0 R  45.0  0.0   0:38.86 ubi_bgt0d
  319 root 20   0   0      0      0 I  15.2  0.0   0:15.29 kworker/0:3-eve
  371 root 20   0   0      0      0 I  14.9  0.0   0:12.85 kworker/3:3-eve
   20 root 20   0   0      0      0 I  11.3  0.0   0:05.33 kworker/1:0-eve
  202 root 20   0   0      0      0 I  11.3  0.0   0:04.93 kworker/2:3-eve
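
For illustration only, a minimal standalone sketch (not driver code) of
the wl_pool refill gate; the numbers are assumptions taken from the worst
case described above, with beb_rsvd_pebs assumed to be 0:

    #include <stdio.h>

    int main(void)
    {
            /* Worst case from above: 3 free PEBs minus the fm_anchor and
             * fm_next_anchor PEBs; bad-block reserve assumed empty. */
            int free_count = 1;
            int beb_rsvd_pebs = 0;

            /* Old gate in ubi_refill_pools(): the wl_pool refill is skipped. */
            if (free_count - beb_rsvd_pebs < 5)
                    printf("old gate: wl_pool stays empty\n");

            /* New gate: keep refilling while free PEBs exceed the bad-block
             * reserve (with the extra anchor reservation added by this
             * patch, free_count would be 3 at this point). */
            if (free_count > beb_rsvd_pebs)
                    printf("new gate: wl_pool can take a free PEB\n");

            return 0;
    }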

fm_next_anchor always takes one free PEB to prepare for the next fastmap
update, and fm_anchor and fm_next_anchor are always allocated before pool
and wl_pool are filled, so UBI can treat these two PEBs as reserved PEBs.

Fix it by:
  1) Reserving 2 additional PEBs for fm_anchor and fm_next_anchor.
  2) Only stopping the wl_pool refill when the free count drops to
     beb_rsvd_pebs or below.
Then, after calling ubi_refill_pools() with all erase work done, there are
at least 2 (EBA_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS -
MIN_FASTMAP_TAKEN_PEBS[1]) PEBs in pool and 1 (WL_RESERVED_PEBS) PEB in
wl_pool (see the worst-case accounting sketch below).
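
For illustration only, a rough worst-case accounting after this patch,
using the constants above and assuming the current fastmap occupies
MIN_FASTMAP_TAKEN_PEBS[1] PEB:

  reserved total    = EBA(1) + WL(1) + LAYOUT(2) + MIN_FASTMAP(2)
                      + FASTMAP_ANCHOR(1) + FASTMAP_NEXT_ANCHOR(1) = 8
  in use            = LAYOUT(2) + MIN_FASTMAP_TAKEN(1)
                      + FASTMAP_ANCHOR(1) + FASTMAP_NEXT_ANCHOR(1) = 5
  free after refill = 8 - 5 = 3  ->  2 fill pool, 1 fills wl_pool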

This modification may cause a compatibility problem with old UBI images.
If the UBI volumes take the maximum number of PEBs for a given UBI device,
there are no spare PEBs left to satisfy the 2 newly reserved PEBs; PEBs
from the bad-block reserve are taken first, and if that is still not
enough, ENOSPC is returned from UBI initialization.

A reproducer can be found in [Link].

Fixes: f9c34bb529975fe ("ubi: Fix producing anchor PEBs")
Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
---
 drivers/mtd/ubi/fastmap-wl.c |  5 +++--
 drivers/mtd/ubi/wl.h         | 11 +++++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
index 28f55f9cf715..18f23e76fee9 100644
--- a/drivers/mtd/ubi/fastmap-wl.c
+++ b/drivers/mtd/ubi/fastmap-wl.c
@@ -134,7 +134,8 @@ void ubi_refill_pools(struct ubi_device *ubi)
 	for (;;) {
 		enough = 0;
 		if (pool->size < pool->max_size) {
-			if (!ubi->free.rb_node)
+			if (!ubi->free.rb_node ||
+			    (ubi->free_count <= ubi->beb_rsvd_pebs))
 				break;
 
 			e = wl_get_wle(ubi);
@@ -148,7 +149,7 @@ void ubi_refill_pools(struct ubi_device *ubi)
 
 		if (wl_pool->size < wl_pool->max_size) {
 			if (!ubi->free.rb_node ||
-			   (ubi->free_count - ubi->beb_rsvd_pebs < 5))
+			    (ubi->free_count <= ubi->beb_rsvd_pebs))
 				break;
 
 			e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h
index c93a53293786..e6c8f68d5ffb 100644
--- a/drivers/mtd/ubi/wl.h
+++ b/drivers/mtd/ubi/wl.h
@@ -2,14 +2,21 @@
 #ifndef UBI_WL_H
 #define UBI_WL_H
 #ifdef CONFIG_MTD_UBI_FASTMAP
+
+/* Number of physical eraseblocks for fm anchor */
+#define FASTMAP_ANCHOR_PEBS 1
+/* Number of physical eraseblocks for next fm anchor */
+#define FASTMAP_NEXT_ANCHOR_PEBS 1
+
 static void update_fastmap_work_fn(struct work_struct *wrk);
 static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root);
 static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi);
 static void ubi_fastmap_close(struct ubi_device *ubi);
 static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count)
 {
-	/* Reserve enough LEBs to store two fastmaps. */
-	*count += (ubi->fm_size / ubi->leb_size) * 2;
+	/* Reserve enough LEBs to store two fastmaps and anchor fm pebs */
+	*count += (ubi->fm_size / ubi->leb_size) * 2 +
+		  FASTMAP_ANCHOR_PEBS + FASTMAP_NEXT_ANCHOR_PEBS;
 	INIT_WORK(&ubi->fm_work, update_fastmap_work_fn);
 }
 static struct ubi_wl_entry *may_reserve_for_fm(struct ubi_device *ubi,
-- 
2.31.1

