From: rulinhuang <rulin.huang@intel.com>
To: urezki@gmail.com, bhe@redhat.com
Cc: colin.king@intel.com, hch@infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
lstoakes@gmail.com, rulin.huang@intel.com, tianyou.li@intel.com,
tim.c.chen@intel.com, wangyang.guo@intel.com,
zhiguo.zhou@intel.com
Subject: [PATCH v5] mm/vmalloc: lock contention optimization under multi-threading
Date: Fri, 23 Feb 2024 08:03:19 -0500
Message-ID: <20240223130318.112198-2-rulin.huang@intel.com>
In-Reply-To: <20240207033059.1565623-1-rulin.huang@intel.com>
When allocating a new memory area where the mapping address range is
known, it is observed that the vmap_area lock is acquired twice.
The first acquisition occurs in the alloc_vmap_area() function when
inserting the vm area into the vm mapping red-black tree. The second
acquisition occurs in the setup_vmalloc_vm() function when updating the
properties of the vm, such as its flags and address.
Combine these two operations in alloc_vmap_area() so that the lock only
needs to be acquired once, which improves scalability when the vmap_area
lock is contended.
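As a simplified sketch of the locking flow (not the exact code; the
functions shown are the ones touched by this patch):

  Before, two acquisitions of the per-node busy lock:

    alloc_vmap_area()
        spin_lock(&vn->busy.lock);
        insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
        spin_unlock(&vn->busy.lock);
    setup_vmalloc_vm()
        spin_lock(&vn->busy.lock);
        /* bind vm to va: flags, addr, size, caller */
        spin_unlock(&vn->busy.lock);

  After, the vm is bound to the va before the single insert:

    alloc_vmap_area(..., vm, flags, caller)
        if (vm)
            setup_vmalloc_vm(vm, va, flags, caller);
        spin_lock(&vn->busy.lock);
        insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
        spin_unlock(&vn->busy.lock);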
With the above change, tested on an Intel Ice Lake platform (160 vCPUs,
kernel v6.7), stress-ng/pthread (https://github.com/ColinIanKing/stress-ng),
which stresses thread creation, shows a 6% performance improvement and a
7% reduction in the overall spinlock hotspot.
Reviewed-by: Chen Tim C <tim.c.chen@intel.com>
Reviewed-by: King Colin <colin.king@intel.com>
Signed-off-by: rulinhuang <rulin.huang@intel.com>
---
V1 -> V2: Avoided the partial initialization issue of vm and
separated insert_vmap_area() from alloc_vmap_area()
V2 -> V3: Rebased on 6.8-rc5
V3 -> V4: Rebased on mm-unstable branch
V4 -> V5: Canceled the split of alloc_vmap_area()
and kept insert_vmap_area()
---
mm/vmalloc.c | 48 ++++++++++++++++++++++--------------------------
1 file changed, 22 insertions(+), 26 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 25a8df497255..6baaf08737f8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1841,15 +1841,26 @@ node_alloc(unsigned long size, unsigned long align,
return va;
}
+static inline void setup_vmalloc_vm(struct vm_struct *vm,
+ struct vmap_area *va, unsigned long flags, const void *caller)
+{
+ vm->flags = flags;
+ vm->addr = (void *)va->va_start;
+ vm->size = va->va_end - va->va_start;
+ vm->caller = caller;
+ va->vm = vm;
+}
+
/*
* Allocate a region of KVA of the specified size and alignment, within the
- * vstart and vend.
+ * vstart and vend. If vm is passed in, the two will also be bound.
*/
static struct vmap_area *alloc_vmap_area(unsigned long size,
unsigned long align,
unsigned long vstart, unsigned long vend,
int node, gfp_t gfp_mask,
- unsigned long va_flags)
+ unsigned long va_flags, struct vm_struct *vm,
+ unsigned long flags, const void *caller)
{
struct vmap_node *vn;
struct vmap_area *va;
@@ -1912,6 +1923,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
va->vm = NULL;
va->flags = (va_flags | vn_id);
+ if (vm)
+ setup_vmalloc_vm(vm, va, flags, caller);
+
vn = addr_to_node(va->va_start);
spin_lock(&vn->busy.lock);
@@ -2486,7 +2500,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
VMALLOC_START, VMALLOC_END,
node, gfp_mask,
- VMAP_RAM|VMAP_BLOCK);
+ VMAP_RAM|VMAP_BLOCK, NULL,
+ 0, NULL);
if (IS_ERR(va)) {
kfree(vb);
return ERR_CAST(va);
@@ -2843,7 +2858,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
struct vmap_area *va;
va = alloc_vmap_area(size, PAGE_SIZE,
VMALLOC_START, VMALLOC_END,
- node, GFP_KERNEL, VMAP_RAM);
+ node, GFP_KERNEL, VMAP_RAM,
+ NULL, 0, NULL);
if (IS_ERR(va))
return NULL;
@@ -2946,26 +2962,6 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
}
-static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
- struct vmap_area *va, unsigned long flags, const void *caller)
-{
- vm->flags = flags;
- vm->addr = (void *)va->va_start;
- vm->size = va->va_end - va->va_start;
- vm->caller = caller;
- va->vm = vm;
-}
-
-static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
- unsigned long flags, const void *caller)
-{
- struct vmap_node *vn = addr_to_node(va->va_start);
-
- spin_lock(&vn->busy.lock);
- setup_vmalloc_vm_locked(vm, va, flags, caller);
- spin_unlock(&vn->busy.lock);
-}
-
static void clear_vm_uninitialized_flag(struct vm_struct *vm)
{
/*
@@ -3002,7 +2998,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
if (!(flags & VM_NO_GUARD))
size += PAGE_SIZE;
- va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
+ va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area, flags, caller);
if (IS_ERR(va)) {
kfree(area);
return NULL;
@@ -4584,7 +4580,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
spin_lock(&vn->busy.lock);
insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
- setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
+ setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
pcpu_get_vm_areas);
spin_unlock(&vn->busy.lock);
}
base-commit: c09a8e005eff6c064e2e9f11549966c36a724fbf
--
2.43.0