* [PATCH v2 0/4] Cleanup patches of vmalloc
@ 2022-06-07 10:59 Baoquan He
  2022-06-07 10:59 ` [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type() Baoquan He
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Baoquan He @ 2022-06-07 10:59 UTC (permalink / raw)
  To: akpm, urezki; +Cc: linux-mm, linux-kernel, hch, Baoquan He

A few cleanup patches spotted while reading through the vmalloc code.

v1->v2:
- In patch 1, Uladzislau suggested a better way to improve the code
  logic, and the redundant NOTHING_FIT check naturally goes away.

- Drop the old patch 5 from v1. Instead, convert the arch code to use
  the generic ioremap code, then rename the remaining
  ioremap_page_range(). This will be done in later patches, as
  suggested by Christoph.

Baoquan He (4):
  mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type()
  mm/vmalloc: remove the redundant boundary check
  mm/vmalloc: fix typo in local variable name
  mm/vmalloc: Add code comment for find_vmap_area_exceed_addr()

 mm/vmalloc.c | 46 +++++++++++++++++-----------------------------
 1 file changed, 17 insertions(+), 29 deletions(-)

-- 
2.34.1



* [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type()
  2022-06-07 10:59 [PATCH v2 0/4] Cleanup patches of vmalloc Baoquan He
@ 2022-06-07 10:59 ` Baoquan He
  2022-06-07 12:53   ` Uladzislau Rezki
  2022-06-07 10:59 ` [PATCH v2 2/4] mm/vmalloc: remove the redundant boundary check Baoquan He
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 6+ messages in thread
From: Baoquan He @ 2022-06-07 10:59 UTC (permalink / raw)
  To: akpm, urezki; +Cc: linux-mm, linux-kernel, hch, Baoquan He

In adjust_va_to_fit_type(), the else branch handles the NOTHING_FIT
case of the passed-in fit type. However, every call site already calls
classify_va_fit_type() and bails out on NOTHING_FIT before invoking
adjust_va_to_fit_type(), so the in-function check is redundant.

In fact, the two functions are tightly coupled: classify_va_fit_type()
does the preparation work for adjust_va_to_fit_type(). Moving the
classify_va_fit_type() invocation inside adjust_va_to_fit_type()
simplifies the code logic, and the redundant NOTHING_FIT check goes
away.
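For illustration, the resulting pattern can be sketched stand-alone.
This is a simplified model with hypothetical types and a trivial
"classifier", not the actual mm/vmalloc.c code; it only shows how
folding classification into the adjust step leaves callers with a
single return value to check:

```c
/* Simplified stand-ins for the real vmap fit types (illustrative only). */
enum fit_type { NOTHING_FIT, FL_FIT_TYPE, LE_FIT_TYPE };

/* Hypothetical classifier: a free block of capacity 'cap' either
 * matches a request of exactly 'cap' (full fit), a smaller request
 * (left fit), or nothing at all. */
static enum fit_type classify(unsigned long cap, unsigned long size)
{
	if (size == cap)
		return FL_FIT_TYPE;
	if (size < cap)
		return LE_FIT_TYPE;
	return NOTHING_FIT;
}

/* After the patch: the adjust step classifies internally, so the
 * NOTHING_FIT case is handled in exactly one place and every caller
 * only checks one return value. */
static int adjust(unsigned long *cap, unsigned long size)
{
	enum fit_type type = classify(*cap, size);

	if (type == FL_FIT_TYPE)
		*cap = 0;		/* whole block consumed */
	else if (type == LE_FIT_TYPE)
		*cap -= size;		/* shrink the block from one side */
	else
		return -1;		/* the formerly duplicated NOTHING_FIT check */
	return 0;
}
```

A caller then does `if (adjust(&cap, size)) /* recover */;` with no
separate classify step, which is the shape both __alloc_vmap_area()
and pcpu_get_vm_areas() end up with in the diff below.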

Suggested-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 07db42455dd4..f9d45aa90b7c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1335,10 +1335,10 @@ classify_va_fit_type(struct vmap_area *va,
 
 static __always_inline int
 adjust_va_to_fit_type(struct vmap_area *va,
-	unsigned long nva_start_addr, unsigned long size,
-	enum fit_type type)
+	unsigned long nva_start_addr, unsigned long size)
 {
 	struct vmap_area *lva = NULL;
+	enum fit_type type = classify_va_fit_type(va, nva_start_addr, size);
 
 	if (type == FL_FIT_TYPE) {
 		/*
@@ -1444,7 +1444,6 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
 	bool adjust_search_size = true;
 	unsigned long nva_start_addr;
 	struct vmap_area *va;
-	enum fit_type type;
 	int ret;
 
 	/*
@@ -1472,14 +1471,9 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
 	if (nva_start_addr + size > vend)
 		return vend;
 
-	/* Classify what we have found. */
-	type = classify_va_fit_type(va, nva_start_addr, size);
-	if (WARN_ON_ONCE(type == NOTHING_FIT))
-		return vend;
-
 	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(va, nva_start_addr, size, type);
-	if (ret)
+	ret = adjust_va_to_fit_type(va, nva_start_addr, size);
+	if (WARN_ON_ONCE(ret))
 		return vend;
 
 #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
@@ -3735,7 +3729,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	int area, area2, last_area, term_area;
 	unsigned long base, start, size, end, last_end, orig_start, orig_end;
 	bool purged = false;
-	enum fit_type type;
 
 	/* verify parameters and allocate data structures */
 	BUG_ON(offset_in_page(align) || !is_power_of_2(align));
@@ -3846,15 +3839,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
 
-		type = classify_va_fit_type(va, start, size);
-		if (WARN_ON_ONCE(type == NOTHING_FIT))
+		ret = adjust_va_to_fit_type(va, start, size);
+		if (WARN_ON_ONCE(unlikely(ret)))
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
 
-		ret = adjust_va_to_fit_type(va, start, size, type);
-		if (unlikely(ret))
-			goto recovery;
-
 		/* Allocated area. */
 		va = vas[area];
 		va->va_start = start;
-- 
2.34.1



* [PATCH v2 2/4] mm/vmalloc: remove the redundant boundary check
  2022-06-07 10:59 [PATCH v2 0/4] Cleanup patches of vmalloc Baoquan He
  2022-06-07 10:59 ` [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type() Baoquan He
@ 2022-06-07 10:59 ` Baoquan He
  2022-06-07 10:59 ` [PATCH v2 3/4] mm/vmalloc: fix typo in local variable name Baoquan He
  2022-06-07 10:59 ` [PATCH v2 4/4] mm/vmalloc: Add code comment for find_vmap_area_exceed_addr() Baoquan He
  3 siblings, 0 replies; 6+ messages in thread
From: Baoquan He @ 2022-06-07 10:59 UTC (permalink / raw)
  To: akpm, urezki; +Cc: linux-mm, linux-kernel, hch, Baoquan He

In find_va_links(), when traversing the vmap_area tree, each comparison
that decides whether the passed-in 'va' lies below or above 'tmp_va'
tests both boundaries, which is partly redundant: given that any
vmap_area has ->va_start <= ->va_end, one boundary check per direction
is sufficient.

Simplify the checks accordingly.
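The redundancy can be sketched stand-alone with hypothetical simplified
types (not the real struct vmap_area). Assuming non-empty ranges with
start < end, `a.end <= b.start` already implies `a.start < b.end` via
a.start < a.end <= b.start <= b.end, so the dropped half of each
condition can never change the result:

```c
/* Simplified interval stand-in for struct vmap_area; start < end. */
struct range { unsigned long start, end; };

/* The two-part "a lies left of b" test, as before the patch. */
static int left_of_full(struct range a, struct range b)
{
	return a.start < b.end && a.end <= b.start;
}

/* After the patch: with start < end on both ranges, the single
 * boundary check is equivalent to the two-part test above. */
static int left_of(struct range a, struct range b)
{
	return a.end <= b.start;
}
```

The symmetric argument covers the rb_right branch, and the final else
still catches every overlapping case and triggers the WARN().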

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f9d45aa90b7c..b711bf82fd5d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -874,11 +874,9 @@ find_va_links(struct vmap_area *va,
 		 * Trigger the BUG() if there are sides(left/right)
 		 * or full overlaps.
 		 */
-		if (va->va_start < tmp_va->va_end &&
-				va->va_end <= tmp_va->va_start)
+		if (va->va_end <= tmp_va->va_start)
 			link = &(*link)->rb_left;
-		else if (va->va_end > tmp_va->va_start &&
-				va->va_start >= tmp_va->va_end)
+		else if (va->va_start >= tmp_va->va_end)
 			link = &(*link)->rb_right;
 		else {
 			WARN(1, "vmalloc bug: 0x%lx-0x%lx overlaps with 0x%lx-0x%lx\n",
-- 
2.34.1



* [PATCH v2 3/4] mm/vmalloc: fix typo in local variable name
  2022-06-07 10:59 [PATCH v2 0/4] Cleanup patches of vmalloc Baoquan He
  2022-06-07 10:59 ` [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type() Baoquan He
  2022-06-07 10:59 ` [PATCH v2 2/4] mm/vmalloc: remove the redundant boundary check Baoquan He
@ 2022-06-07 10:59 ` Baoquan He
  2022-06-07 10:59 ` [PATCH v2 4/4] mm/vmalloc: Add code comment for find_vmap_area_exceed_addr() Baoquan He
  3 siblings, 0 replies; 6+ messages in thread
From: Baoquan He @ 2022-06-07 10:59 UTC (permalink / raw)
  To: akpm, urezki; +Cc: linux-mm, linux-kernel, hch, Baoquan He

In __purge_vmap_area_lazy(), rename local_pure_list to local_purge_list.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b711bf82fd5d..b9bf7dfe71ec 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1669,32 +1669,32 @@ static void purge_fragmented_blocks_allcpus(void);
 static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 {
 	unsigned long resched_threshold;
-	struct list_head local_pure_list;
+	struct list_head local_purge_list;
 	struct vmap_area *va, *n_va;
 
 	lockdep_assert_held(&vmap_purge_lock);
 
 	spin_lock(&purge_vmap_area_lock);
 	purge_vmap_area_root = RB_ROOT;
-	list_replace_init(&purge_vmap_area_list, &local_pure_list);
+	list_replace_init(&purge_vmap_area_list, &local_purge_list);
 	spin_unlock(&purge_vmap_area_lock);
 
-	if (unlikely(list_empty(&local_pure_list)))
+	if (unlikely(list_empty(&local_purge_list)))
 		return false;
 
 	start = min(start,
-		list_first_entry(&local_pure_list,
+		list_first_entry(&local_purge_list,
 			struct vmap_area, list)->va_start);
 
 	end = max(end,
-		list_last_entry(&local_pure_list,
+		list_last_entry(&local_purge_list,
 			struct vmap_area, list)->va_end);
 
 	flush_tlb_kernel_range(start, end);
 	resched_threshold = lazy_max_pages() << 1;
 
 	spin_lock(&free_vmap_area_lock);
-	list_for_each_entry_safe(va, n_va, &local_pure_list, list) {
+	list_for_each_entry_safe(va, n_va, &local_purge_list, list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
 		unsigned long orig_start = va->va_start;
 		unsigned long orig_end = va->va_end;
-- 
2.34.1



* [PATCH v2 4/4] mm/vmalloc: Add code comment for find_vmap_area_exceed_addr()
  2022-06-07 10:59 [PATCH v2 0/4] Cleanup patches of vmalloc Baoquan He
                   ` (2 preceding siblings ...)
  2022-06-07 10:59 ` [PATCH v2 3/4] mm/vmalloc: fix typo in local variable name Baoquan He
@ 2022-06-07 10:59 ` Baoquan He
  3 siblings, 0 replies; 6+ messages in thread
From: Baoquan He @ 2022-06-07 10:59 UTC (permalink / raw)
  To: akpm, urezki; +Cc: linux-mm, linux-kernel, hch, Baoquan He

find_vmap_area_exceed_addr() behaves like find_vma(): it finds the
first area ending above the specified address. Add a code comment to
make that easier to understand.

Also fix a grammar mistake and a typo in two existing comments.
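The documented semantics can be sketched stand-alone. This is a
hypothetical linear search over a sorted array, not the real rbtree
walk in mm/vmalloc.c; it only illustrates the "first VA which satisfies
addr < va_end, NULL if none" contract the new comment describes:

```c
#include <stddef.h>

/* Simplified stand-in for struct vmap_area. */
struct area { unsigned long va_start, va_end; };

/* Hypothetical helper: over 'areas' sorted by va_start, return the
 * first area that ends above 'addr', like find_vma() does for VMAs.
 * Note the returned area may lie entirely above addr. */
static struct area *first_area_exceeding(struct area *areas, size_t n,
					 unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (addr < areas[i].va_end)
			return &areas[i];
	return NULL;	/* no area ends above addr */
}
```

So for areas [0,10) and [20,30), an addr of 15 falls in the gap but
still yields the second area, exactly the find_vma()-like behaviour
the comment points out.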

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b9bf7dfe71ec..a4d8b80734fa 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -790,6 +790,7 @@ unsigned long vmalloc_nr_pages(void)
 	return atomic_long_read(&nr_vmalloc_pages);
 }
 
+/* Look up the first VA which satisfies addr < va_end, NULL if none. */
 static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
 {
 	struct vmap_area *va = NULL;
@@ -929,7 +930,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
 		 * Some explanation here. Just perform simple insertion
 		 * to the tree. We do not set va->subtree_max_size to
 		 * its current size before calling rb_insert_augmented().
-		 * It is because of we populate the tree from the bottom
+		 * It is because we populate the tree from the bottom
 		 * to parent levels when the node _is_ in the tree.
 		 *
 		 * Therefore we set subtree_max_size to zero after insertion,
@@ -1655,7 +1656,7 @@ static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
 
 /*
  * Serialize vmap purging.  There is no actual critical section protected
- * by this look, but we want to avoid concurrent calls for performance
+ * by this lock, but we want to avoid concurrent calls for performance
  * reasons and to make the pcpu_get_vm_areas more deterministic.
  */
 static DEFINE_MUTEX(vmap_purge_lock);
-- 
2.34.1



* Re: [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type()
  2022-06-07 10:59 ` [PATCH v2 1/4] mm/vmalloc: invoke classify_va_fit_type() in adjust_va_to_fit_type() Baoquan He
@ 2022-06-07 12:53   ` Uladzislau Rezki
  0 siblings, 0 replies; 6+ messages in thread
From: Uladzislau Rezki @ 2022-06-07 12:53 UTC (permalink / raw)
  To: Baoquan He
  Cc: Andrew Morton, Linux Memory Management List, LKML, Christoph Hellwig

> In function adjust_va_to_fit_type(), it checks all values of passed
> in fit type, including NOTHING_FIT in the else branch. However, the
> check of NOTHING_FIT has been done inside adjust_va_to_fit_type() and
> before it's called in all call sites.
>
> [...]
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

-- 
Uladzislau Rezki

