* [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
@ 2023-03-27 17:01 Uladzislau Rezki (Sony)
2023-03-27 17:01 ` [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case Uladzislau Rezki (Sony)
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Uladzislau Rezki (Sony) @ 2023-03-27 17:01 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Uladzislau Rezki,
Oleksiy Avramchenko
The global vmap_blocks xarray can become contended under
heavy use of the vm_map_ram()/vm_unmap_ram() APIs. lock_stat
shows that the "vmap_blocks.xa_lock" is the second most
contended lock in the top list:
<snip>
----------------------------------------
class name con-bounces contentions ...
----------------------------------------
vmap_area_lock: 2554079 2554276 ...
--------------
vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
--------------
vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
vmap_blocks.xa_lock: 862689 862698 ...
-------------------
vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
-------------------
vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
...
<snip>
This is the result of running vm_map_ram()/vm_unmap_ram() in
a loop. The test creates 64 threads (on a 64-CPU system) and
each one maps/unmaps 1 page.
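Each test thread essentially runs the loop below; the full test case is
added in patch 2/2 of this series. This is only a simplified sketch, with
names taken from lib/test_vmalloc.c:

	for (i = 0; i < test_loop_count; i++) {
		/* map and immediately unmap a single page */
		v_ptr = vm_map_ram(pages, 1, -1);
		*v_ptr = 'a';
		vm_unmap_ram(v_ptr, 1);
	}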
After this change the "xa_lock" can be considered noise
under the same test conditions:
<snip>
...
&xa->xa_lock#1: 10333 10394 ...
--------------
&xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
&xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
--------------
&xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
&xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
...
<snip>
This patch does not address the vmap_area_lock/free_vmap_area_lock and
purge_vmap_area_lock bottlenecks; that is a separate rework.
v1 -> v2:
- Add more comments (Andrew Morton req.)
- Switch to WARN_ON_ONCE (Lorenzo Stoakes req.)
v2 -> v3:
- Fix a kernel-doc complaint (Matthew Wilcox)
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
mm/vmalloc.c | 85 +++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 64 insertions(+), 21 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..821256ecf81c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1908,9 +1908,22 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
#define VMAP_BLOCK 0x2 /* mark out the vmap_block sub-type*/
#define VMAP_FLAGS_MASK 0x3
+/*
+ * We should probably have a fallback mechanism to allocate virtual memory
+ * out of partially filled vmap blocks. However vmap block sizing should be
+ * fairly reasonable according to the vmalloc size, so it shouldn't be a
+ * big problem.
+ */
struct vmap_block_queue {
spinlock_t lock;
struct list_head free;
+
+ /*
+ * An xarray requires an extra memory dynamically to
+ * be allocated. If it is an issue, we can use rb-tree
+ * instead.
+ */
+ struct xarray vmap_blocks;
};
struct vmap_block {
@@ -1928,24 +1941,46 @@ struct vmap_block {
static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
/*
- * XArray of vmap blocks, indexed by address, to quickly find a vmap block
- * in the free path. Could get rid of this if we change the API to return a
- * "cookie" from alloc, to be passed to free. But no big deal yet.
+ * In order to fast access to any "vmap_block" associated with a
+ * specific address, we store them into a per-cpu xarray. A hash
+ * function is addr_to_vbq() whereas a key is a vb->va->va_start
+ * value.
+ *
+ * Please note, a vmap_block_queue, which is a per-cpu, is not
+ * serialized by a raw_smp_processor_id() current CPU, instead
+ * it is chosen based on a CPU-index it belongs to, i.e. it is
+ * a hash-table.
+ *
+ * An example:
+ *
+ * CPU_1 CPU_2 CPU_0
+ * | | |
+ * V V V
+ * 0 10 20 30 40 50 60
+ * |------|------|------|------|------|------|...<vmap address space>
+ * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
+ *
+ * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
+ * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
+ *
+ * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
+ * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
+ *
+ * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
+ * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
*/
-static DEFINE_XARRAY(vmap_blocks);
+static struct vmap_block_queue *
+addr_to_vbq(unsigned long addr)
+{
+ int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
-/*
- * We should probably have a fallback mechanism to allocate virtual memory
- * out of partially filled vmap blocks. However vmap block sizing should be
- * fairly reasonable according to the vmalloc size, so it shouldn't be a
- * big problem.
- */
+ return &per_cpu(vmap_block_queue, index);
+}
-static unsigned long addr_to_vb_idx(unsigned long addr)
+static unsigned long
+addr_to_vb_va_start(unsigned long addr)
{
- addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
- addr /= VMAP_BLOCK_SIZE;
- return addr;
+ return rounddown(addr, VMAP_BLOCK_SIZE);
}
static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
@@ -1953,7 +1988,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
unsigned long addr;
addr = va_start + (pages_off << PAGE_SHIFT);
- BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
+ WARN_ON_ONCE(addr_to_vb_va_start(addr) != va_start);
return (void *)addr;
}
@@ -1970,7 +2005,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
struct vmap_block_queue *vbq;
struct vmap_block *vb;
struct vmap_area *va;
- unsigned long vb_idx;
int node, err;
void *vaddr;
@@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
bitmap_set(vb->used_map, 0, (1UL << order));
INIT_LIST_HEAD(&vb->free_list);
- vb_idx = addr_to_vb_idx(va->va_start);
- err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
+ vbq = addr_to_vbq(va->va_start);
+ err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
if (err) {
kfree(vb);
free_vmap_area(va);
@@ -2021,9 +2055,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
static void free_vmap_block(struct vmap_block *vb)
{
+ struct vmap_block_queue *vbq;
struct vmap_block *tmp;
- tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
+ vbq = addr_to_vbq(vb->va->va_start);
+ tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start);
BUG_ON(tmp != vb);
spin_lock(&vmap_area_lock);
@@ -2135,6 +2171,7 @@ static void vb_free(unsigned long addr, unsigned long size)
unsigned long offset;
unsigned int order;
struct vmap_block *vb;
+ struct vmap_block_queue *vbq;
BUG_ON(offset_in_page(size));
BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
@@ -2143,7 +2180,10 @@ static void vb_free(unsigned long addr, unsigned long size)
order = get_order(size);
offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
- vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
+
+ vbq = addr_to_vbq(addr);
+ vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
+
spin_lock(&vb->lock);
bitmap_clear(vb->used_map, offset, (1UL << order));
spin_unlock(&vb->lock);
@@ -3486,6 +3526,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
{
char *start;
struct vmap_block *vb;
+ struct vmap_block_queue *vbq;
unsigned long offset;
unsigned int rs, re, n;
@@ -3503,7 +3544,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
* Area is split into regions and tracked with vmap_block, read out
* each region and zero fill the hole between regions.
*/
- vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+ vbq = addr_to_vbq((unsigned long) addr);
+ vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr));
if (!vb)
goto finished;
@@ -4272,6 +4314,7 @@ void __init vmalloc_init(void)
p = &per_cpu(vfree_deferred, i);
init_llist_head(&p->list);
INIT_WORK(&p->wq, delayed_vfree_work);
+ xa_init(&vbq->vmap_blocks);
}
/* Import existing vmlist entries. */
--
2.30.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case
2023-03-27 17:01 [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Uladzislau Rezki (Sony)
@ 2023-03-27 17:01 ` Uladzislau Rezki (Sony)
2023-03-27 20:28 ` Lorenzo Stoakes
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
2023-03-28 3:25 ` Baoquan He
2 siblings, 1 reply; 14+ messages in thread
From: Uladzislau Rezki (Sony) @ 2023-03-27 17:01 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Uladzislau Rezki,
Oleksiy Avramchenko
Add vm_map_ram()/vm_unmap_ram() test case to our stress test-suite.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
lib/test_vmalloc.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index cd2bdba6d3ed..6633eda4cd4d 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -53,6 +53,7 @@ __param(int, run_test_mask, INT_MAX,
"\t\tid: 128, name: pcpu_alloc_test\n"
"\t\tid: 256, name: kvfree_rcu_1_arg_vmalloc_test\n"
"\t\tid: 512, name: kvfree_rcu_2_arg_vmalloc_test\n"
+ "\t\tid: 1024, name: vm_map_ram_test\n"
/* Add a new test case description here. */
);
@@ -358,6 +359,45 @@ kvfree_rcu_2_arg_vmalloc_test(void)
return 0;
}
+static int
+vm_map_ram_test(void)
+{
+ unsigned int map_nr_pages;
+ unsigned char *v_ptr;
+ unsigned char *p_ptr;
+ struct page **pages;
+ struct page *page;
+ int i;
+
+ map_nr_pages = nr_pages > 0 ? nr_pages:1;
+ pages = kmalloc(map_nr_pages * sizeof(*page), GFP_KERNEL);
+ if (!pages)
+ return -1;
+
+ for (i = 0; i < map_nr_pages; i++) {
+ page = alloc_pages(GFP_KERNEL, 1);
+ if (!page)
+ return -1;
+
+ pages[i] = page;
+ }
+
+ /* Run the test loop. */
+ for (i = 0; i < test_loop_count; i++) {
+ v_ptr = vm_map_ram(pages, map_nr_pages, -1);
+ *v_ptr = 'a';
+ vm_unmap_ram(v_ptr, map_nr_pages);
+ }
+
+ for (i = 0; i < map_nr_pages; i++) {
+ p_ptr = page_address(pages[i]);
+ free_pages((unsigned long)p_ptr, 1);
+ }
+
+ kfree(pages);
+ return 0;
+}
+
struct test_case_desc {
const char *test_name;
int (*test_func)(void);
@@ -374,6 +414,7 @@ static struct test_case_desc test_case_array[] = {
{ "pcpu_alloc_test", pcpu_alloc_test },
{ "kvfree_rcu_1_arg_vmalloc_test", kvfree_rcu_1_arg_vmalloc_test },
{ "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test },
+ { "vm_map_ram_test", vm_map_ram_test },
/* Add a new test case here. */
};
--
2.30.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-27 17:01 [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Uladzislau Rezki (Sony)
2023-03-27 17:01 ` [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case Uladzislau Rezki (Sony)
@ 2023-03-27 20:09 ` Lorenzo Stoakes
2023-03-28 12:51 ` Uladzislau Rezki
` (2 more replies)
2023-03-28 3:25 ` Baoquan He
2 siblings, 3 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2023-03-27 20:09 UTC (permalink / raw)
To: Uladzislau Rezki (Sony)
Cc: Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
On Mon, Mar 27, 2023 at 07:01:25PM +0200, Uladzislau Rezki (Sony) wrote:
> A global vmap_blocks-xarray array can be contented under
> heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> second in a top-list when it comes to contentions:
>
> <snip>
> ----------------------------------------
> class name con-bounces contentions ...
> ----------------------------------------
> vmap_area_lock: 2554079 2554276 ...
> --------------
> vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
> --------------
> vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
>
> vmap_blocks.xa_lock: 862689 862698 ...
> -------------------
> vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
> -------------------
> vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
> vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> ...
> <snip>
>
> that is a result of running vm_map_ram()/vm_unmap_ram() in
> a loop. The test creates 64(on 64 CPUs system) threads and
> each one maps/unmaps 1 page.
>
> After this change the "xa_lock" can be considered as a noise
> in the same test condition:
>
> <snip>
> ...
> &xa->xa_lock#1: 10333 10394 ...
> --------------
> &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
> &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> --------------
> &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
> ...
> <snip>
>
> This patch does not fix vmap_area_lock/free_vmap_area_lock and
> purge_vmap_area_lock bottle-necks, it is rather a separate rework.
>
> v1 - v2:
> - Add more comments(Andrew Morton req.)
> - Switch to WARN_ON_ONCE(Lorenzo Stoakes req.)
>
> v2 -> v3:
> - Fix a kernel-doc complain(Matthew Wilcox)
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
> mm/vmalloc.c | 85 +++++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 64 insertions(+), 21 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 978194dc2bb8..821256ecf81c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1908,9 +1908,22 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> #define VMAP_BLOCK 0x2 /* mark out the vmap_block sub-type*/
> #define VMAP_FLAGS_MASK 0x3
>
> +/*
> + * We should probably have a fallback mechanism to allocate virtual memory
> + * out of partially filled vmap blocks. However vmap block sizing should be
> + * fairly reasonable according to the vmalloc size, so it shouldn't be a
> + * big problem.
> + */
> struct vmap_block_queue {
> spinlock_t lock;
> struct list_head free;
> +
> + /*
> + * An xarray requires an extra memory dynamically to
> + * be allocated. If it is an issue, we can use rb-tree
> + * instead.
> + */
> + struct xarray vmap_blocks;
> };
>
> struct vmap_block {
> @@ -1928,24 +1941,46 @@ struct vmap_block {
> static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
>
> /*
> - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> - * in the free path. Could get rid of this if we change the API to return a
> - * "cookie" from alloc, to be passed to free. But no big deal yet.
> + * In order to fast access to any "vmap_block" associated with a
> + * specific address, we store them into a per-cpu xarray. A hash
> + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> + * value.
> + *
> + * Please note, a vmap_block_queue, which is a per-cpu, is not
> + * serialized by a raw_smp_processor_id() current CPU, instead
> + * it is chosen based on a CPU-index it belongs to, i.e. it is
> + * a hash-table.
> + *
> + * An example:
> + *
> + * CPU_1 CPU_2 CPU_0
> + * | | |
> + * V V V
> + * 0 10 20 30 40 50 60
> + * |------|------|------|------|------|------|...<vmap address space>
> + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> + *
> + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> + *
> + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> + *
> + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> */
OK so if I understand this correctly, you're overloading the per-CPU
vmap_block_queue array to use it as a simple hash based on the address and
relying on the xa_lock() in xa_insert() to serialise in case of contention?
I like the general heft of your comment but I feel this could be spelled
out a little more clearly, something like:-
In order to have fast access to any vmap_block object associated with a
specific address, we use a hash.
Rather than waste space on defining a new hash table we take advantage
of the fact we already have a static per-cpu array vmap_block_queue.
This is already used for per-CPU access to the block queue, however we
overload this to _also_ act as a vmap_block hash. The hash function is
addr_to_vbq() which hashes on vb->va->va_start.
This then uses per_cpu() to lookup the _index_ rather than the
_cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
indexed on the same key as the hash (vb->va->va_start).
xarray read accesses are protected by the RCU lock and inserts are protected
by a spin lock so there is no risk of a race here.
An example:
...
Feel free to cut this down as needed :) but I do feel it's important to
_explicitly_ point out that we're overloading this as it's quite confusing
at face value.
> -static DEFINE_XARRAY(vmap_blocks);
> +static struct vmap_block_queue *
> +addr_to_vbq(unsigned long addr)
> +{
> + int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
>
> -/*
> - * We should probably have a fallback mechanism to allocate virtual memory
> - * out of partially filled vmap blocks. However vmap block sizing should be
> - * fairly reasonable according to the vmalloc size, so it shouldn't be a
> - * big problem.
> - */
> + return &per_cpu(vmap_block_queue, index);
> +}
>
> -static unsigned long addr_to_vb_idx(unsigned long addr)
> +static unsigned long
> +addr_to_vb_va_start(unsigned long addr)
> {
> - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> - addr /= VMAP_BLOCK_SIZE;
> - return addr;
> + return rounddown(addr, VMAP_BLOCK_SIZE);
> }
>
> static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> @@ -1953,7 +1988,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> unsigned long addr;
>
> addr = va_start + (pages_off << PAGE_SHIFT);
> - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
> + WARN_ON_ONCE(addr_to_vb_va_start(addr) != va_start);
> return (void *)addr;
> }
>
> @@ -1970,7 +2005,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> struct vmap_block_queue *vbq;
> struct vmap_block *vb;
> struct vmap_area *va;
> - unsigned long vb_idx;
> int node, err;
> void *vaddr;
>
> @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> bitmap_set(vb->used_map, 0, (1UL << order));
> INIT_LIST_HEAD(&vb->free_list);
>
> - vb_idx = addr_to_vb_idx(va->va_start);
> - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> + vbq = addr_to_vbq(va->va_start);
> + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
I might be being pedantic here, but shortly after this code you reassign vbq:-
vbq = addr_to_vbq(va->va_start);
err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
if (err) {
kfree(vb);
free_vmap_area(va);
return ERR_PTR(err);
}
vbq = raw_cpu_ptr(&vmap_block_queue);
Which is confusing at a glance, as you're using it once as a hash lookup
and again for its 'true purpose'.
I wonder whether it would be better overall, since you always follow a vbq
lookup explicitly with an operation on vmap_blocks, to just add a helper
that returned a pointer to the xarray? e.g. (untested code here :):-
static struct xarray *get_vblock_array(unsigned long addr)
{
struct vmap_block_queue *vbq;
int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
vbq = &per_cpu(vmap_block_queue, index);
return &vbq->vmap_blocks;
}
And replace addr_to_vbq() with this. That'd also make the mechanism of this
hash lookup super explicit.
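For illustration only (untested, and assuming the helper name above), the
call sites would then read something like:

	/* vb_free(): */
	vb = xa_load(get_vblock_array(addr), addr_to_vb_va_start(addr));

	/* free_vmap_block(): */
	tmp = xa_erase(get_vblock_array(vb->va->va_start), vb->va->va_start);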
> if (err) {
> kfree(vb);
> free_vmap_area(va);
> @@ -2021,9 +2055,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>
> static void free_vmap_block(struct vmap_block *vb)
> {
> + struct vmap_block_queue *vbq;
> struct vmap_block *tmp;
>
> - tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
> + vbq = addr_to_vbq(vb->va->va_start);
> + tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start);
> BUG_ON(tmp != vb);
>
> spin_lock(&vmap_area_lock);
> @@ -2135,6 +2171,7 @@ static void vb_free(unsigned long addr, unsigned long size)
> unsigned long offset;
> unsigned int order;
> struct vmap_block *vb;
> + struct vmap_block_queue *vbq;
>
> BUG_ON(offset_in_page(size));
> BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
> @@ -2143,7 +2180,10 @@ static void vb_free(unsigned long addr, unsigned long size)
>
> order = get_order(size);
> offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
> - vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
> +
> + vbq = addr_to_vbq(addr);
> + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
> +
> spin_lock(&vb->lock);
> bitmap_clear(vb->used_map, offset, (1UL << order));
> spin_unlock(&vb->lock);
> @@ -3486,6 +3526,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> {
> char *start;
> struct vmap_block *vb;
> + struct vmap_block_queue *vbq;
> unsigned long offset;
> unsigned int rs, re, n;
>
> @@ -3503,7 +3544,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> * Area is split into regions and tracked with vmap_block, read out
> * each region and zero fill the hole between regions.
> */
> - vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
> + vbq = addr_to_vbq((unsigned long) addr);
> + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr));
> if (!vb)
> goto finished;
>
> @@ -4272,6 +4314,7 @@ void __init vmalloc_init(void)
> p = &per_cpu(vfree_deferred, i);
> init_llist_head(&p->list);
> INIT_WORK(&p->wq, delayed_vfree_work);
> + xa_init(&vbq->vmap_blocks);
> }
>
> /* Import existing vmlist entries. */
> --
> 2.30.2
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case
2023-03-27 17:01 ` [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case Uladzislau Rezki (Sony)
@ 2023-03-27 20:28 ` Lorenzo Stoakes
2023-03-28 12:29 ` Uladzislau Rezki
0 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2023-03-27 20:28 UTC (permalink / raw)
To: Uladzislau Rezki (Sony)
Cc: Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
On Mon, Mar 27, 2023 at 07:01:26PM +0200, Uladzislau Rezki (Sony) wrote:
> Add vm_map_ram()/vm_unmap_ram() test case to our stress test-suite.
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
> lib/test_vmalloc.c | 41 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 41 insertions(+)
>
> diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
> index cd2bdba6d3ed..6633eda4cd4d 100644
> --- a/lib/test_vmalloc.c
> +++ b/lib/test_vmalloc.c
> @@ -53,6 +53,7 @@ __param(int, run_test_mask, INT_MAX,
> "\t\tid: 128, name: pcpu_alloc_test\n"
> "\t\tid: 256, name: kvfree_rcu_1_arg_vmalloc_test\n"
> "\t\tid: 512, name: kvfree_rcu_2_arg_vmalloc_test\n"
> + "\t\tid: 1024, name: vm_map_ram_test\n"
> /* Add a new test case description here. */
> );
>
> @@ -358,6 +359,45 @@ kvfree_rcu_2_arg_vmalloc_test(void)
> return 0;
> }
>
> +static int
> +vm_map_ram_test(void)
> +{
> + unsigned int map_nr_pages;
> + unsigned char *v_ptr;
> + unsigned char *p_ptr;
> + struct page **pages;
> + struct page *page;
> + int i;
> +
> + map_nr_pages = nr_pages > 0 ? nr_pages:1;
> + pages = kmalloc(map_nr_pages * sizeof(*page), GFP_KERNEL);
> + if (!pages)
> + return -1;
> +
> + for (i = 0; i < map_nr_pages; i++) {
> + page = alloc_pages(GFP_KERNEL, 1);
Pedantry, but given I literally patched this pedantically the other day,
this could be alloc_page(GFP_KERNEL) :)
> + if (!page)
> + return -1;
We're leaking memory here right? Should jump to cleanup below.
> +
> + pages[i] = page;
> + }
You should be able to replace this with something like:-
unsigned long nr_allocated;
...
nr_allocated = alloc_pages_bulk_array(GFP_KERNEL, map_nr_pages, pages);
if (nr_allocated != map_nr_pages)
goto cleanup;
> +
> + /* Run the test loop. */
> + for (i = 0; i < test_loop_count; i++) {
> + v_ptr = vm_map_ram(pages, map_nr_pages, -1);
NIT: The -1 would be clearer as NUMA_NO_NODE
> + *v_ptr = 'a';
> + vm_unmap_ram(v_ptr, map_nr_pages);
> + }
> +
With reference to the above, you'd add the cleanup label here:-
cleanup:
> + for (i = 0; i < map_nr_pages; i++) {
> + p_ptr = page_address(pages[i]);
> + free_pages((unsigned long)p_ptr, 1);
Nit, can be free_page((unsigned long)p_ptr);
> + }
> +
> + kfree(pages);
> + return 0;
> +}
> +
> struct test_case_desc {
> const char *test_name;
> int (*test_func)(void);
> @@ -374,6 +414,7 @@ static struct test_case_desc test_case_array[] = {
> { "pcpu_alloc_test", pcpu_alloc_test },
> { "kvfree_rcu_1_arg_vmalloc_test", kvfree_rcu_1_arg_vmalloc_test },
> { "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test },
> + { "vm_map_ram_test", vm_map_ram_test },
> /* Add a new test case here. */
> };
>
> --
> 2.30.2
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-27 17:01 [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Uladzislau Rezki (Sony)
2023-03-27 17:01 ` [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case Uladzislau Rezki (Sony)
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
@ 2023-03-28 3:25 ` Baoquan He
2023-03-28 12:34 ` Uladzislau Rezki
2 siblings, 1 reply; 14+ messages in thread
From: Baoquan He @ 2023-03-28 3:25 UTC (permalink / raw)
To: Uladzislau Rezki (Sony)
Cc: Andrew Morton, linux-mm, LKML, Lorenzo Stoakes,
Christoph Hellwig, Matthew Wilcox, Dave Chinner,
Oleksiy Avramchenko
On 03/27/23 at 07:01pm, Uladzislau Rezki (Sony) wrote:
> A global vmap_blocks-xarray array can be contented under
> heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> second in a top-list when it comes to contentions:
>
> <snip>
> ----------------------------------------
> class name con-bounces contentions ...
> ----------------------------------------
> vmap_area_lock: 2554079 2554276 ...
> --------------
> vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
> --------------
> vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
>
> vmap_blocks.xa_lock: 862689 862698 ...
> -------------------
> vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
> -------------------
> vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
> vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> ...
> <snip>
>
> that is a result of running vm_map_ram()/vm_unmap_ram() in
> a loop. The test creates 64(on 64 CPUs system) threads and
> each one maps/unmaps 1 page.
As I understand it, the xarray will take more time in xa_insert() or
xa_erase() because these two can trigger xa_expand() and xa_shrink() if
the index is sparse. xa_load() should be cheap. I am wondering whether,
in your test code, the mapping addresses are close together or far
apart.
1 mm/vmalloc.c <<new_vmap_block>>
err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
2 mm/vmalloc.c <<free_vmap_block>>
tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
3 mm/vmalloc.c <<vb_free>>
vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
4 mm/vmalloc.c <<vmap_ram_vread_iter>>
vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long )addr));
>
> After this change the "xa_lock" can be considered as a noise
> in the same test condition:
>
> <snip>
> ...
> &xa->xa_lock#1: 10333 10394 ...
> --------------
> &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
> &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> --------------
> &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
> ...
> <snip>
>
> This patch does not fix vmap_area_lock/free_vmap_area_lock and
> purge_vmap_area_lock bottle-necks, it is rather a separate rework.
>
> v1 - v2:
> - Add more comments(Andrew Morton req.)
> - Switch to WARN_ON_ONCE(Lorenzo Stoakes req.)
>
> v2 -> v3:
> - Fix a kernel-doc complain(Matthew Wilcox)
>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
> mm/vmalloc.c | 85 +++++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 64 insertions(+), 21 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 978194dc2bb8..821256ecf81c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1908,9 +1908,22 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> #define VMAP_BLOCK 0x2 /* mark out the vmap_block sub-type*/
> #define VMAP_FLAGS_MASK 0x3
>
> +/*
> + * We should probably have a fallback mechanism to allocate virtual memory
> + * out of partially filled vmap blocks. However vmap block sizing should be
> + * fairly reasonable according to the vmalloc size, so it shouldn't be a
> + * big problem.
> + */
> struct vmap_block_queue {
> spinlock_t lock;
> struct list_head free;
> +
> + /*
> + * An xarray requires an extra memory dynamically to
> + * be allocated. If it is an issue, we can use rb-tree
> + * instead.
> + */
> + struct xarray vmap_blocks;
> };
>
> struct vmap_block {
> @@ -1928,24 +1941,46 @@ struct vmap_block {
> static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
>
> /*
> - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> - * in the free path. Could get rid of this if we change the API to return a
> - * "cookie" from alloc, to be passed to free. But no big deal yet.
> + * In order to fast access to any "vmap_block" associated with a
> + * specific address, we store them into a per-cpu xarray. A hash
> + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> + * value.
> + *
> + * Please note, a vmap_block_queue, which is a per-cpu, is not
> + * serialized by a raw_smp_processor_id() current CPU, instead
> + * it is chosen based on a CPU-index it belongs to, i.e. it is
> + * a hash-table.
> + *
> + * An example:
> + *
> + * CPU_1 CPU_2 CPU_0
> + * | | |
> + * V V V
> + * 0 10 20 30 40 50 60
> + * |------|------|------|------|------|------|...<vmap address space>
> + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> + *
> + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> + *
> + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> + *
> + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> */
> -static DEFINE_XARRAY(vmap_blocks);
> +static struct vmap_block_queue *
> +addr_to_vbq(unsigned long addr)
> +{
> + int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
>
> -/*
> - * We should probably have a fallback mechanism to allocate virtual memory
> - * out of partially filled vmap blocks. However vmap block sizing should be
> - * fairly reasonable according to the vmalloc size, so it shouldn't be a
> - * big problem.
> - */
> + return &per_cpu(vmap_block_queue, index);
> +}
>
> -static unsigned long addr_to_vb_idx(unsigned long addr)
> +static unsigned long
> +addr_to_vb_va_start(unsigned long addr)
> {
> - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> - addr /= VMAP_BLOCK_SIZE;
> - return addr;
> + return rounddown(addr, VMAP_BLOCK_SIZE);
> }
>
> static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> @@ -1953,7 +1988,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> unsigned long addr;
>
> addr = va_start + (pages_off << PAGE_SHIFT);
> - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
> + WARN_ON_ONCE(addr_to_vb_va_start(addr) != va_start);
> return (void *)addr;
> }
>
> @@ -1970,7 +2005,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> struct vmap_block_queue *vbq;
> struct vmap_block *vb;
> struct vmap_area *va;
> - unsigned long vb_idx;
> int node, err;
> void *vaddr;
>
> @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> bitmap_set(vb->used_map, 0, (1UL << order));
> INIT_LIST_HEAD(&vb->free_list);
>
> - vb_idx = addr_to_vb_idx(va->va_start);
> - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> + vbq = addr_to_vbq(va->va_start);
> + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
Using va->va_start as the index into the xarray may cost extra memory.
Imagine we get a virtual address at VMALLOC_START; its region is
[VMALLOC_START, VMALLOC_START+4095]. In the xarray its sequential index
is 0, while va->va_start is 0xffffc90000000000UL on x86_64 with 4-level
paging. That means storing the very first page-sized vmalloc area into
the xarray needs about 10 levels of xa_node, just for that one page.
With the old addr_to_vb_idx() its index is 0, so only one level of
height is needed. One xa_node is about 72 bytes, so keying on
va->va_start could take more time and memory. Not sure if my
understanding is correct.
static unsigned long addr_to_vb_idx(unsigned long addr)
{
addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
addr /= VMAP_BLOCK_SIZE;
return addr;
}
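As a rough illustration (assuming the default XA_CHUNK_SHIFT of 6, i.e. 64
slots per xa_node), the tree height needed for a given index is roughly
ceil(bits_in_index / 6):

	/* Not kernel code, just a sketch of the height estimate: */
	static int xa_levels_for(unsigned long index)
	{
		int levels = 1;

		/* each extra level covers 6 more index bits */
		while (index >> 6) {
			index >>= 6;
			levels++;
		}
		return levels;
	}

	/* xa_levels_for(0)                  == 1  (old addr_to_vb_idx() index) */
	/* xa_levels_for(0xffffc90000000000) == 11 (raw va_start on x86_64)     */

which lines up with the ~10 levels estimated above.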
> if (err) {
> kfree(vb);
> free_vmap_area(va);
> @@ -2021,9 +2055,11 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>
> static void free_vmap_block(struct vmap_block *vb)
> {
> + struct vmap_block_queue *vbq;
> struct vmap_block *tmp;
>
> - tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
> + vbq = addr_to_vbq(vb->va->va_start);
> + tmp = xa_erase(&vbq->vmap_blocks, vb->va->va_start);
> BUG_ON(tmp != vb);
>
> spin_lock(&vmap_area_lock);
> @@ -2135,6 +2171,7 @@ static void vb_free(unsigned long addr, unsigned long size)
> unsigned long offset;
> unsigned int order;
> struct vmap_block *vb;
> + struct vmap_block_queue *vbq;
>
> BUG_ON(offset_in_page(size));
> BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
> @@ -2143,7 +2180,10 @@ static void vb_free(unsigned long addr, unsigned long size)
>
> order = get_order(size);
> offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
> - vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
> +
> + vbq = addr_to_vbq(addr);
> + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
> +
> spin_lock(&vb->lock);
> bitmap_clear(vb->used_map, offset, (1UL << order));
> spin_unlock(&vb->lock);
> @@ -3486,6 +3526,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> {
> char *start;
> struct vmap_block *vb;
> + struct vmap_block_queue *vbq;
> unsigned long offset;
> unsigned int rs, re, n;
>
> @@ -3503,7 +3544,8 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> * Area is split into regions and tracked with vmap_block, read out
> * each region and zero fill the hole between regions.
> */
> - vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
> + vbq = addr_to_vbq((unsigned long) addr);
> + vb = xa_load(&vbq->vmap_blocks, addr_to_vb_va_start((unsigned long) addr));
> if (!vb)
> goto finished;
>
> @@ -4272,6 +4314,7 @@ void __init vmalloc_init(void)
> p = &per_cpu(vfree_deferred, i);
> init_llist_head(&p->list);
> INIT_WORK(&p->wq, delayed_vfree_work);
> + xa_init(&vbq->vmap_blocks);
> }
>
> /* Import existing vmlist entries. */
> --
> 2.30.2
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case
2023-03-27 20:28 ` Lorenzo Stoakes
@ 2023-03-28 12:29 ` Uladzislau Rezki
0 siblings, 0 replies; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-28 12:29 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Uladzislau Rezki (Sony),
Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
> On Mon, Mar 27, 2023 at 07:01:26PM +0200, Uladzislau Rezki (Sony) wrote:
> > Add vm_map_ram()/vm_unmap_ram() test case to our stress test-suite.
> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> > lib/test_vmalloc.c | 41 +++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 41 insertions(+)
> >
> > diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
> > index cd2bdba6d3ed..6633eda4cd4d 100644
> > --- a/lib/test_vmalloc.c
> > +++ b/lib/test_vmalloc.c
> > @@ -53,6 +53,7 @@ __param(int, run_test_mask, INT_MAX,
> > "\t\tid: 128, name: pcpu_alloc_test\n"
> > "\t\tid: 256, name: kvfree_rcu_1_arg_vmalloc_test\n"
> > "\t\tid: 512, name: kvfree_rcu_2_arg_vmalloc_test\n"
> > + "\t\tid: 1024, name: vm_map_ram_test\n"
> > /* Add a new test case description here. */
> > );
> >
> > @@ -358,6 +359,45 @@ kvfree_rcu_2_arg_vmalloc_test(void)
> > return 0;
> > }
> >
> > +static int
> > +vm_map_ram_test(void)
> > +{
> > + unsigned int map_nr_pages;
> > + unsigned char *v_ptr;
> > + unsigned char *p_ptr;
> > + struct page **pages;
> > + struct page *page;
> > + int i;
> > +
> > + map_nr_pages = nr_pages > 0 ? nr_pages:1;
> > + pages = kmalloc(map_nr_pages * sizeof(*page), GFP_KERNEL);
> > + if (!pages)
> > + return -1;
> > +
> > + for (i = 0; i < map_nr_pages; i++) {
> > + page = alloc_pages(GFP_KERNEL, 1);
>
> Pedantry, but given I literally patched this pedantically the other day,
> this could be alloc_page(GFP_KERNEL) :)
>
> > + if (!page)
> > + return -1;
>
> We're leaking memory here right? Should jump to cleanup below.
>
> > +
> > + pages[i] = page;
> > + }
>
>
> You should be able to replace this with something like:-
>
> unsigned long nr_allocated;
>
> ...
>
> nr_allocated = alloc_pages_bulk_array(GFP_KERNEL, map_nr_pages, pages);
> if (nr_allocated != map_nr_pages)
> goto cleanup;
>
> > +
> > + /* Run the test loop. */
> > + for (i = 0; i < test_loop_count; i++) {
> > + v_ptr = vm_map_ram(pages, map_nr_pages, -1);
>
> NIT: The -1 would be clearer as NUMA_NO_NODE
>
> > + *v_ptr = 'a';
> > + vm_unmap_ram(v_ptr, map_nr_pages);
> > + }
> > +
>
> Reference to the above you'd add the cleanup label here:-
>
> cleanup:
>
> > + for (i = 0; i < map_nr_pages; i++) {
> > + p_ptr = page_address(pages[i]);
> > + free_pages((unsigned long)p_ptr, 1);
>
> Nit, can be free_page((unsigned long)p_ptr);
>
Thank you. Will fix all the comments, especially switching to the new
alloc_page() API :)
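Roughly what I have in mind for v4, as an untested sketch only (applying
the comments above and assuming alloc_pages_bulk_array() can be used here):

static int
vm_map_ram_test(void)
{
	unsigned long nr_allocated;
	unsigned int map_nr_pages;
	unsigned char *v_ptr;
	struct page **pages;
	int i;

	map_nr_pages = nr_pages > 0 ? nr_pages : 1;
	pages = kcalloc(map_nr_pages, sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		return -1;

	nr_allocated = alloc_pages_bulk_array(GFP_KERNEL, map_nr_pages, pages);
	if (nr_allocated != map_nr_pages)
		goto cleanup;

	/* Run the test loop. */
	for (i = 0; i < test_loop_count; i++) {
		v_ptr = vm_map_ram(pages, map_nr_pages, NUMA_NO_NODE);
		*v_ptr = 'a';
		vm_unmap_ram(v_ptr, map_nr_pages);
	}

cleanup:
	for (i = 0; i < nr_allocated; i++)
		__free_page(pages[i]);

	kfree(pages);
	return nr_allocated == map_nr_pages ? 0 : -1;
}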
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-28 3:25 ` Baoquan He
@ 2023-03-28 12:34 ` Uladzislau Rezki
2023-03-29 4:33 ` Baoquan He
0 siblings, 1 reply; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-28 12:34 UTC (permalink / raw)
To: Baoquan He
Cc: Uladzislau Rezki (Sony),
Andrew Morton, linux-mm, LKML, Lorenzo Stoakes,
Christoph Hellwig, Matthew Wilcox, Dave Chinner,
Oleksiy Avramchenko
On Tue, Mar 28, 2023 at 11:25:54AM +0800, Baoquan He wrote:
> On 03/27/23 at 07:01pm, Uladzislau Rezki (Sony) wrote:
> > A global vmap_blocks-xarray array can be contented under
> > heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> > lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> > second in a top-list when it comes to contentions:
> >
> > <snip>
> > ----------------------------------------
> > class name con-bounces contentions ...
> > ----------------------------------------
> > vmap_area_lock: 2554079 2554276 ...
> > --------------
> > vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> > vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> > vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
> > --------------
> > vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> > vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> > vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
> >
> > vmap_blocks.xa_lock: 862689 862698 ...
> > -------------------
> > vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> > vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
> > -------------------
> > vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
> > vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> > ...
> > <snip>
> >
> > that is a result of running vm_map_ram()/vm_unmap_ram() in
> > a loop. The test creates 64(on 64 CPUs system) threads and
> > each one maps/unmaps 1 page.
>
> With my understanding, the xarray will take more time when calling
> xa_insert() or xa_erase() because these two will cause xa_expand() and
> xa_shrink() if the index is sparse. xa_load() should be low cost to
> finish. Wondering if in your testing code, the mapping address is close
> or too far.
>
> 1 mm/vmalloc.c <<new_vmap_block>>
> err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> 2 mm/vmalloc.c <<free_vmap_block>>
> tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
> 3 mm/vmalloc.c <<vb_free>>
> vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
> 4 mm/vmalloc.c <<vmap_ram_vread_iter>>
> vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long )addr));
>
> >
> > After this change the "xa_lock" can be considered as a noise
> > in the same test condition:
> >
> > <snip>
> > ...
> > &xa->xa_lock#1: 10333 10394 ...
> > --------------
> > &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
> > &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> > --------------
> > &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> > &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
> > ...
> > <snip>
> >
> > This patch does not fix vmap_area_lock/free_vmap_area_lock and
> > purge_vmap_area_lock bottle-necks, it is rather a separate rework.
> >
> > v1 - v2:
> > - Add more comments(Andrew Morton req.)
> > - Switch to WARN_ON_ONCE(Lorenzo Stoakes req.)
> >
> > v2 -> v3:
> > - Fix a kernel-doc complain(Matthew Wilcox)
> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> > mm/vmalloc.c | 85 +++++++++++++++++++++++++++++++++++++++-------------
> > 1 file changed, 64 insertions(+), 21 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 978194dc2bb8..821256ecf81c 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1908,9 +1908,22 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> > #define VMAP_BLOCK 0x2 /* mark out the vmap_block sub-type*/
> > #define VMAP_FLAGS_MASK 0x3
> >
> > +/*
> > + * We should probably have a fallback mechanism to allocate virtual memory
> > + * out of partially filled vmap blocks. However vmap block sizing should be
> > + * fairly reasonable according to the vmalloc size, so it shouldn't be a
> > + * big problem.
> > + */
> > struct vmap_block_queue {
> > spinlock_t lock;
> > struct list_head free;
> > +
> > + /*
> > + * An xarray requires an extra memory dynamically to
> > + * be allocated. If it is an issue, we can use rb-tree
> > + * instead.
> > + */
> > + struct xarray vmap_blocks;
> > };
> >
> > struct vmap_block {
> > @@ -1928,24 +1941,46 @@ struct vmap_block {
> > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
> >
> > /*
> > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > - * in the free path. Could get rid of this if we change the API to return a
> > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > + * In order to fast access to any "vmap_block" associated with a
> > + * specific address, we store them into a per-cpu xarray. A hash
> > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > + * value.
> > + *
> > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > + * serialized by a raw_smp_processor_id() current CPU, instead
> > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > + * a hash-table.
> > + *
> > + * An example:
> > + *
> > + * CPU_1 CPU_2 CPU_0
> > + * | | |
> > + * V V V
> > + * 0 10 20 30 40 50 60
> > + * |------|------|------|------|------|------|...<vmap address space>
> > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > + *
> > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > */
> > -static DEFINE_XARRAY(vmap_blocks);
> > +static struct vmap_block_queue *
> > +addr_to_vbq(unsigned long addr)
> > +{
> > + int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> >
> > -/*
> > - * We should probably have a fallback mechanism to allocate virtual memory
> > - * out of partially filled vmap blocks. However vmap block sizing should be
> > - * fairly reasonable according to the vmalloc size, so it shouldn't be a
> > - * big problem.
> > - */
> > + return &per_cpu(vmap_block_queue, index);
> > +}
> >
> > -static unsigned long addr_to_vb_idx(unsigned long addr)
> > +static unsigned long
> > +addr_to_vb_va_start(unsigned long addr)
> > {
> > - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> > - addr /= VMAP_BLOCK_SIZE;
> > - return addr;
> > + return rounddown(addr, VMAP_BLOCK_SIZE);
> > }
> >
> > static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> > @@ -1953,7 +1988,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> > unsigned long addr;
> >
> > addr = va_start + (pages_off << PAGE_SHIFT);
> > - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
> > + WARN_ON_ONCE(addr_to_vb_va_start(addr) != va_start);
> > return (void *)addr;
> > }
> >
> > @@ -1970,7 +2005,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > struct vmap_block_queue *vbq;
> > struct vmap_block *vb;
> > struct vmap_area *va;
> > - unsigned long vb_idx;
> > int node, err;
> > void *vaddr;
> >
> > @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > bitmap_set(vb->used_map, 0, (1UL << order));
> > INIT_LIST_HEAD(&vb->free_list);
> >
> > - vb_idx = addr_to_vb_idx(va->va_start);
> > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> > + vbq = addr_to_vbq(va->va_start);
> > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
>
> Using va->va_start as index to access xarray may cost extra memory.
> Imagine we got a virtual address at VMALLOC_START, its region is
> [VMALLOC_START, VMALLOC_START+4095]. In the xarray, its sequence order
> is 0. While with va->va_start, it's 0xffffc90000000000UL on x86_64 with
> level4 paging mode. That means for the first page size vmalloc area,
> storing it into xarray need about 10 levels of xa_node, just for the one
> page size. With the old addr_to_vb_idx(), its index is 0. Only one level
> height is needed. One xa_node is about 72bytes, it could take more time
> and memory to access va->va_start. Not sure if my understanding is correct.
>
> static unsigned long addr_to_vb_idx(unsigned long addr)
> {
> addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> addr /= VMAP_BLOCK_SIZE;
> return addr;
> }
>
If the depth of the xarray depends on the index "length", then indeed it will
require more memory. On the other hand, we can keep the old addr_to_vb_idx()
function in order to "cut" the va->va_start index down to a compact value.
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
@ 2023-03-28 12:51 ` Uladzislau Rezki
2023-03-28 16:37 ` Uladzislau Rezki
2023-03-29 15:01 ` Uladzislau Rezki
2 siblings, 0 replies; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-28 12:51 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Uladzislau Rezki (Sony),
Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
On Mon, Mar 27, 2023 at 09:09:32PM +0100, Lorenzo Stoakes wrote:
> On Mon, Mar 27, 2023 at 07:01:25PM +0200, Uladzislau Rezki (Sony) wrote:
> > A global vmap_blocks-xarray array can be contented under
> > heavy usage of the vm_map_ram()/vm_unmap_ram() APIs. The
> > lock_stat shows that a "vmap_blocks.xa_lock" lock is a
> > second in a top-list when it comes to contentions:
> >
> > <snip>
> > ----------------------------------------
> > class name con-bounces contentions ...
> > ----------------------------------------
> > vmap_area_lock: 2554079 2554276 ...
> > --------------
> > vmap_area_lock 1297948 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> > vmap_area_lock 1256330 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> > vmap_area_lock 1 [<00000000c95c05a7>] find_vm_area+0x16/0x70
> > --------------
> > vmap_area_lock 1738590 [<00000000dd41cbaa>] alloc_vmap_area+0x1c7/0x910
> > vmap_area_lock 815688 [<000000009d927bf3>] free_vmap_block+0x4a/0xe0
> > vmap_area_lock 1 [<00000000c1d619d7>] __get_vm_area_node+0xd2/0x170
> >
> > vmap_blocks.xa_lock: 862689 862698 ...
> > -------------------
> > vmap_blocks.xa_lock 378418 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> > vmap_blocks.xa_lock 484280 [<00000000caa2ef03>] xa_erase+0xe/0x30
> > -------------------
> > vmap_blocks.xa_lock 576226 [<00000000caa2ef03>] xa_erase+0xe/0x30
> > vmap_blocks.xa_lock 286472 [<00000000625a5626>] vm_map_ram+0x359/0x4a0
> > ...
> > <snip>
> >
> > that is a result of running vm_map_ram()/vm_unmap_ram() in
> > a loop. The test creates 64(on 64 CPUs system) threads and
> > each one maps/unmaps 1 page.
> >
> > After this change the "xa_lock" can be considered as a noise
> > in the same test condition:
> >
> > <snip>
> > ...
> > &xa->xa_lock#1: 10333 10394 ...
> > --------------
> > &xa->xa_lock#1 5349 [<00000000bbbc9751>] xa_erase+0xe/0x30
> > &xa->xa_lock#1 5045 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> > --------------
> > &xa->xa_lock#1 7326 [<0000000018def45d>] vm_map_ram+0x3a4/0x4f0
> > &xa->xa_lock#1 3068 [<00000000bbbc9751>] xa_erase+0xe/0x30
> > ...
> > <snip>
> >
> > This patch does not fix vmap_area_lock/free_vmap_area_lock and
> > purge_vmap_area_lock bottle-necks, it is rather a separate rework.
> >
> > v1 - v2:
> > - Add more comments(Andrew Morton req.)
> > - Switch to WARN_ON_ONCE(Lorenzo Stoakes req.)
> >
> > v2 -> v3:
> > - Fix a kernel-doc complain(Matthew Wilcox)
> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> > ---
> > mm/vmalloc.c | 85 +++++++++++++++++++++++++++++++++++++++-------------
> > 1 file changed, 64 insertions(+), 21 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 978194dc2bb8..821256ecf81c 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1908,9 +1908,22 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> > #define VMAP_BLOCK 0x2 /* mark out the vmap_block sub-type*/
> > #define VMAP_FLAGS_MASK 0x3
> >
> > +/*
> > + * We should probably have a fallback mechanism to allocate virtual memory
> > + * out of partially filled vmap blocks. However vmap block sizing should be
> > + * fairly reasonable according to the vmalloc size, so it shouldn't be a
> > + * big problem.
> > + */
> > struct vmap_block_queue {
> > spinlock_t lock;
> > struct list_head free;
> > +
> > + /*
> > + * An xarray requires an extra memory dynamically to
> > + * be allocated. If it is an issue, we can use rb-tree
> > + * instead.
> > + */
> > + struct xarray vmap_blocks;
> > };
> >
> > struct vmap_block {
> > @@ -1928,24 +1941,46 @@ struct vmap_block {
> > static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
> >
> > /*
> > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > - * in the free path. Could get rid of this if we change the API to return a
> > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > + * In order to fast access to any "vmap_block" associated with a
> > + * specific address, we store them into a per-cpu xarray. A hash
> > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > + * value.
> > + *
> > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > + * serialized by a raw_smp_processor_id() current CPU, instead
> > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > + * a hash-table.
> > + *
> > + * An example:
> > + *
> > + * CPU_1 CPU_2 CPU_0
> > + * | | |
> > + * V V V
> > + * 0 10 20 30 40 50 60
> > + * |------|------|------|------|------|------|...<vmap address space>
> > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > + *
> > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > */
>
> OK so if I understand this correctly, you're overloading the per-CPU
> vmap_block_queue array to use as a simple hash based on the address and
> relying on the xa_lock() in xa_insert() to serialise in case of contention?
>
> I like the general heft of your comment but I feel this could be spelled
> out a little more clearly, something like:-
>
> In order to have fast access to any vmap_block object associated with a
> specific address, we use a hash.
>
> Rather than waste space on defining a new hash table we take advantage
> of the fact we already have a static per-cpu array vmap_block_queue.
>
> This is already used for per-CPU access to the block queue, however we
> overload this to _also_ act as a vmap_block hash. The hash function is
> addr_to_vbq() which hashes on vb->va->va_start.
>
> This then uses per_cpu() to lookup the _index_ rather than the
> _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
> indexed on the same key as the hash (vb->va->va_start).
>
> xarray read acceses are protected by RCU lock and inserts are protected
> by a spin lock so there is no risk of a race here.
>
> An example:
>
> ...
>
> Feel free to cut this down as needed :) but I do feel it's important to
> _explicitly_ point out that we're overloading this as it's quite confusing
> at face value.
>
> > -static DEFINE_XARRAY(vmap_blocks);
> > +static struct vmap_block_queue *
> > +addr_to_vbq(unsigned long addr)
> > +{
> > + int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> >
> > -/*
> > - * We should probably have a fallback mechanism to allocate virtual memory
> > - * out of partially filled vmap blocks. However vmap block sizing should be
> > - * fairly reasonable according to the vmalloc size, so it shouldn't be a
> > - * big problem.
> > - */
> > + return &per_cpu(vmap_block_queue, index);
> > +}
> >
> > -static unsigned long addr_to_vb_idx(unsigned long addr)
> > +static unsigned long
> > +addr_to_vb_va_start(unsigned long addr)
> > {
> > - addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> > - addr /= VMAP_BLOCK_SIZE;
> > - return addr;
> > + return rounddown(addr, VMAP_BLOCK_SIZE);
> > }
> >
> > static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> > @@ -1953,7 +1988,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
> > unsigned long addr;
> >
> > addr = va_start + (pages_off << PAGE_SHIFT);
> > - BUG_ON(addr_to_vb_idx(addr) != addr_to_vb_idx(va_start));
> > + WARN_ON_ONCE(addr_to_vb_va_start(addr) != va_start);
> > return (void *)addr;
> > }
> >
> > @@ -1970,7 +2005,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > struct vmap_block_queue *vbq;
> > struct vmap_block *vb;
> > struct vmap_area *va;
> > - unsigned long vb_idx;
> > int node, err;
> > void *vaddr;
> >
> > @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > bitmap_set(vb->used_map, 0, (1UL << order));
> > INIT_LIST_HEAD(&vb->free_list);
> >
> > - vb_idx = addr_to_vb_idx(va->va_start);
> > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> > + vbq = addr_to_vbq(va->va_start);
> > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
>
> I might be being pedantic here, but shortly after this code you reassign vbq:-
>
> vbq = addr_to_vbq(va->va_start);
> err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
> if (err) {
> kfree(vb);
> free_vmap_area(va);
> return ERR_PTR(err);
> }
>
> vbq = raw_cpu_ptr(&vmap_block_queue);
>
> Which is confusing at a glance, as you're using it once as a hash lookup
> and again for its 'true purpose'.
>
> I wonder whether it would be better overall, since you always follow a vbq
> lookup explicitly with an operation on vmap_blocks, to just add a helper
> that returned a pointer to the xarray? e.g. (untested code here :):-
>
> static struct xarray *get_vblock_array(unsigned long addr)
> {
> struct vmap_block_queue *vbq;
> int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
>
> vbq = &per_cpu(vmap_block_queue, index);
> > return &vbq->vmap_blocks;
> }
>
> And replace addr_to_vbq() with this. That'd also make the mechanism of this
> hash lookup super explicit.
>
Thank you for the comments. I will go through all of them and fix
things accordingly. At least I can see that I have to update the
documentation in a better way!
Thanks!
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
2023-03-28 12:51 ` Uladzislau Rezki
@ 2023-03-28 16:37 ` Uladzislau Rezki
2023-03-29 15:01 ` Uladzislau Rezki
2 siblings, 0 replies; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-28 16:37 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Uladzislau Rezki (Sony),
Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
> > /*
> > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > - * in the free path. Could get rid of this if we change the API to return a
> > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > + * In order to fast access to any "vmap_block" associated with a
> > + * specific address, we store them into a per-cpu xarray. A hash
> > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > + * value.
> > + *
> > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > + * serialized by a raw_smp_processor_id() current CPU, instead
> > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > + * a hash-table.
> > + *
> > + * An example:
> > + *
> > + * CPU_1 CPU_2 CPU_0
> > + * | | |
> > + * V V V
> > + * 0 10 20 30 40 50 60
> > + * |------|------|------|------|------|------|...<vmap address space>
> > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > + *
> > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > */
>
> OK so if I understand this correctly, you're overloading the per-CPU
> vmap_block_queue array to use as a simple hash based on the address and
> relying on the xa_lock() in xa_insert() to serialise in case of contention?
>
Sorry, I missed your question. You correctly understood what I am doing.
Basically, we can associate any address with an index in the per-cpu array.
Since a CPU pre-allocates a fixed block size, VMAP_BLOCK_SIZE, we can map
any address within such a block to a certain index, or, as I call it, to
the specific CPU zone it belongs to.
If we wanted to fully serialize it, we would have to allocate a new vmap
block in the owning CPU's zone. According to the ASCII picture, for CPU0
those are the 0-10 and 30-40 address ranges. In fact, even though it would
be "fully" serialized, in practice it does not give a visible performance
gain. So this is not needed and it has extra drawbacks.
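To make the lookup path concrete, here is a rough sketch of how the free
side resolves the owning xarray through the hash. It only reuses helpers
that are already in the patch (addr_to_vbq() and addr_to_vb_va_start());
the helper name find_vmap_block() is made up here, and the real vb_free()
hunk is not quoted in this thread, so treat it as illustrative only:

static struct vmap_block *find_vmap_block(unsigned long addr)
{
	/* Hash the address to a zone index; it is not the current CPU. */
	struct vmap_block_queue *vbq = addr_to_vbq(addr);

	/* The xarray key is the block's va_start, i.e. addr rounded down. */
	return xa_load(&vbq->vmap_blocks, addr_to_vb_va_start(addr));
}

The point is that the CPU which does the freeing does not matter; only the
address decides which per-cpu xarray (and which xa_lock) gets touched.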
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-28 12:34 ` Uladzislau Rezki
@ 2023-03-29 4:33 ` Baoquan He
2023-03-29 6:54 ` Uladzislau Rezki
0 siblings, 1 reply; 14+ messages in thread
From: Baoquan He @ 2023-03-29 4:33 UTC (permalink / raw)
To: Uladzislau Rezki
Cc: Andrew Morton, linux-mm, LKML, Lorenzo Stoakes,
Christoph Hellwig, Matthew Wilcox, Dave Chinner,
Oleksiy Avramchenko
On 03/28/23 at 02:34pm, Uladzislau Rezki wrote:
......
> > > @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > > bitmap_set(vb->used_map, 0, (1UL << order));
> > > INIT_LIST_HEAD(&vb->free_list);
> > >
> > > - vb_idx = addr_to_vb_idx(va->va_start);
> > > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> > > + vbq = addr_to_vbq(va->va_start);
> > > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
> >
> > Using va->va_start as index to access xarray may cost extra memory.
> > Imagine we got a virtual address at VMALLOC_START, its region is
> > [VMALLOC_START, VMALLOC_START+4095]. In the xarray, its sequence order
> > is 0. While with va->va_start, it's 0xffffc90000000000UL on x86_64 with
> > level4 paging mode. That means for the first page size vmalloc area,
> > storing it into xarray need about 10 levels of xa_node, just for the one
> > page size. With the old addr_to_vb_idx(), its index is 0. Only one level
> > height is needed. One xa_node is about 72bytes, it could take more time
> > and memory to access va->va_start. Not sure if my understanding is correct.
> >
> > static unsigned long addr_to_vb_idx(unsigned long addr)
> > {
> > addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> > addr /= VMAP_BLOCK_SIZE;
> > return addr;
> > }
> >
> If the size of array depends on index "length", then, indeed it will require
> > more memory. On the other hand, we can keep the old addr_to_vb_idx() function
> in order to "cut" a va->va_start index.
Yeah, the extra 10 levels of xa_node are unnecessary if we keep the old
addr_to_vb_idx(). And the longer path will cost more time to reach the
wanted leaf node. E.g. on x86_64 with 4-level paging mode, the vmalloc area
is 32TB. With the old calculation, the index range is [0, 8M], so at most
4 levels of xa_node height are enough to cover it.
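As a back-of-the-envelope check of the above (assuming the default 64
slots per xa_node, i.e. XA_CHUNK_SHIFT == 6, and ignoring any other xarray
internals), the tree height needed for a given maximum index can be
sketched like this:

/* Illustrative only: levels of 6-bit chunks needed to cover max_index. */
static int xa_levels_needed(unsigned long max_index)
{
	int levels = 0;

	do {
		max_index >>= 6;	/* one xa_node level per 6 index bits */
		levels++;
	} while (max_index);

	return levels;
}

/*
 * Compact index, range [0, 8M]:   xa_levels_needed(8UL << 20)            == 4
 * Raw va_start on x86_64, e.g.:   xa_levels_needed(0xffffc90000000000UL) == 11
 */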
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-29 4:33 ` Baoquan He
@ 2023-03-29 6:54 ` Uladzislau Rezki
0 siblings, 0 replies; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-29 6:54 UTC (permalink / raw)
To: Baoquan He
Cc: Uladzislau Rezki, Andrew Morton, linux-mm, LKML, Lorenzo Stoakes,
Christoph Hellwig, Matthew Wilcox, Dave Chinner,
Oleksiy Avramchenko
On Wed, Mar 29, 2023 at 12:33:05PM +0800, Baoquan He wrote:
> On 03/28/23 at 02:34pm, Uladzislau Rezki wrote:
> ......
> > > > @@ -2003,8 +2037,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> > > > bitmap_set(vb->used_map, 0, (1UL << order));
> > > > INIT_LIST_HEAD(&vb->free_list);
> > > >
> > > > - vb_idx = addr_to_vb_idx(va->va_start);
> > > > - err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> > > > + vbq = addr_to_vbq(va->va_start);
> > > > + err = xa_insert(&vbq->vmap_blocks, va->va_start, vb, gfp_mask);
> > >
> > > Using va->va_start as index to access xarray may cost extra memory.
> > > Imagine we got a virtual address at VMALLOC_START, its region is
> > > [VMALLOC_START, VMALLOC_START+4095]. In the xarray, its sequence order
> > > is 0. While with va->va_start, it's 0xffffc90000000000UL on x86_64 with
> > > level4 paging mode. That means for the first page size vmalloc area,
> > > storing it into xarray need about 10 levels of xa_node, just for the one
> > > page size. With the old addr_to_vb_idx(), its index is 0. Only one level
> > > height is needed. One xa_node is about 72bytes, it could take more time
> > > and memory to access va->va_start. Not sure if my understanding is correct.
> > >
> > > static unsigned long addr_to_vb_idx(unsigned long addr)
> > > {
> > > addr -= VMALLOC_START & ~(VMAP_BLOCK_SIZE-1);
> > > addr /= VMAP_BLOCK_SIZE;
> > > return addr;
> > > }
> > >
> > If the size of array depends on index "length", then, indeed it will require
> > more memory. On the other hand, we can keep the old addr_to_vb_idx() function
> > in order to "cut" a va->va_start index.
>
> Yeah, the extra 10 levels of xa_node are unnecessary if we keep the old
> addr_to_vb_idx(). And the longer path will cost more time to reach the
> wanted leaf node. E.g. on x86_64 with 4-level paging mode, the vmalloc area
> is 32TB. With the old calculation, the index range is [0, 8M], so at most
> 4 levels of xa_node height are enough to cover it.
>
Good! I had not analyzed how the xarray stores its indexes. I will update
the patch to cut the indexes so that we keep the same index range as before.
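Something along these lines, illustrative only (keeping the old
addr_to_vb_idx() value as the xarray key while still hashing into the
per-cpu vmap_blocks xarray; the actual change will be in the respin):

	vbq = addr_to_vbq(va->va_start);
	err = xa_insert(&vbq->vmap_blocks, addr_to_vb_idx(va->va_start),
			vb, gfp_mask);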
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
2023-03-28 12:51 ` Uladzislau Rezki
2023-03-28 16:37 ` Uladzislau Rezki
@ 2023-03-29 15:01 ` Uladzislau Rezki
2023-03-29 16:23 ` Lorenzo Stoakes
2 siblings, 1 reply; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-29 15:01 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Uladzislau Rezki (Sony),
Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
Hello, Lorenzo!
> > /*
> > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > - * in the free path. Could get rid of this if we change the API to return a
> > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > + * In order to fast access to any "vmap_block" associated with a
> > + * specific address, we store them into a per-cpu xarray. A hash
> > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > + * value.
> > + *
> > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > + * serialized by a raw_smp_processor_id() current CPU, instead
> > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > + * a hash-table.
> > + *
> > + * An example:
> > + *
> > + * CPU_1 CPU_2 CPU_0
> > + * | | |
> > + * V V V
> > + * 0 10 20 30 40 50 60
> > + * |------|------|------|------|------|------|...<vmap address space>
> > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > + *
> > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > */
>
> OK so if I understand this correctly, you're overloading the per-CPU
> vmap_block_queue array to use as a simple hash based on the address and
> relying on the xa_lock() in xa_insert() to serialise in case of contention?
>
> I like the general heft of your comment but I feel this could be spelled
> out a little more clearly, something like:-
>
> In order to have fast access to any vmap_block object associated with a
> specific address, we use a hash.
>
> Rather than waste space on defining a new hash table we take advantage
> of the fact we already have a static per-cpu array vmap_block_queue.
>
> This is already used for per-CPU access to the block queue, however we
> overload this to _also_ act as a vmap_block hash. The hash function is
> addr_to_vbq() which hashes on vb->va->va_start.
>
> This then uses per_cpu() to lookup the _index_ rather than the
> _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
> indexed on the same key as the hash (vb->va->va_start).
>
> xarray read accesses are protected by RCU lock and inserts are protected
> by a spin lock so there is no risk of a race here.
>
/*
* In order to fast access to any "vmap_block" associated with a
* specific address, we use a hash.
*
* A per-cpu vmap_block_queue is used in both ways, to serialize
* an access to free block chains among CPUs(alloc path) and it
* also acts as a vmap_block hash(alloc/free paths). It means we
* overload it, since we already have the per-cpu array which is
* used as a hash table.
*
* A hash function is addr_to_vbq() which hashes any address to
* a specific index(in a hash) it belongs to. This then uses a
* per_cpu() macro to access the array with specific index.
*
* An example:
*
* CPU_1 CPU_2 CPU_0
* | | |
* V V V
* 0 10 20 30 40 50 60
* |------|------|------|------|------|------|...<vmap address space>
* CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
*
* - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
* it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
*
* - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
* it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
*
* - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
* it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
*
* This technique allows almost remove a lock-contention in locking
* primitives which protect insert/remove operations.
*/
Are you fine with it?
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-29 15:01 ` Uladzislau Rezki
@ 2023-03-29 16:23 ` Lorenzo Stoakes
2023-03-29 17:50 ` Uladzislau Rezki
0 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2023-03-29 16:23 UTC (permalink / raw)
To: Uladzislau Rezki
Cc: Andrew Morton, linux-mm, LKML, Baoquan He, Christoph Hellwig,
Matthew Wilcox, Dave Chinner, Oleksiy Avramchenko
On Wed, Mar 29, 2023 at 05:01:11PM +0200, Uladzislau Rezki wrote:
> Hello, Lorenzo!
>
> > > /*
> > > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > > - * in the free path. Could get rid of this if we change the API to return a
> > > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > > + * In order to fast access to any "vmap_block" associated with a
> > > + * specific address, we store them into a per-cpu xarray. A hash
> > > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > > + * value.
> > > + *
> > > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > > + * serialized by a raw_smp_processor_id() current CPU, instead
> > > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > > + * a hash-table.
> > > + *
> > > + * An example:
> > > + *
> > > + * CPU_1 CPU_2 CPU_0
> > > + * | | |
> > > + * V V V
> > > + * 0 10 20 30 40 50 60
> > > + * |------|------|------|------|------|------|...<vmap address space>
> > > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > > + *
> > > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > > + *
> > > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > > + *
> > > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > > */
> >
> > OK so if I understand this correctly, you're overloading the per-CPU
> > vmap_block_queue array to use as a simple hash based on the address and
> > relying on the xa_lock() in xa_insert() to serialise in case of contention?
> >
> > I like the general heft of your comment but I feel this could be spelled
> > out a little more clearly, something like:-
> >
> > In order to have fast access to any vmap_block object associated with a
> > specific address, we use a hash.
> >
> > Rather than waste space on defining a new hash table we take advantage
> > of the fact we already have a static per-cpu array vmap_block_queue.
> >
> > This is already used for per-CPU access to the block queue, however we
> > overload this to _also_ act as a vmap_block hash. The hash function is
> > addr_to_vbq() which hashes on vb->va->va_start.
> >
> > This then uses per_cpu() to lookup the _index_ rather than the
> > _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
> > indexed on the same key as the hash (vb->va->va_start).
> >
> > xarray read accesses are protected by RCU lock and inserts are protected
> > by a spin lock so there is no risk of a race here.
> >
> /*
> * In order to fast access to any "vmap_block" associated with a
> * specific address, we use a hash.
> *
> * A per-cpu vmap_block_queue is used in both ways, to serialize
> * an access to free block chains among CPUs(alloc path) and it
> * also acts as a vmap_block hash(alloc/free paths). It means we
> * overload it, since we already have the per-cpu array which is
> * used as a hash table.
Nit - it may be worth highlighting that when used as a hash, the 'cpu' is
not in fact a cpu but rather a hash key.
E.g. just add on the end of this something like:-
When used as a hash table the 'cpu' passed to per_cpu is not actually a CPU
but rather the hash key.
> *
> * A hash function is addr_to_vbq() which hashes any address to
> * a specific index(in a hash) it belongs to. This then uses a
> * per_cpu() macro to access the array with specific index.
May need a tweak if you are happy with my review that we can simply have a
helper that returns the xarray in which case we won't necessarily have this
function :) but depends of course on how the respin looks!
> *
> * An example:
> *
> * CPU_1 CPU_2 CPU_0
> * | | |
> * V V V
> * 0 10 20 30 40 50 60
> * |------|------|------|------|------|------|...<vmap address space>
> * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> *
> * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> *
> * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> *
> * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> *
> * This technique allows almost remove a lock-contention in locking
> * primitives which protect insert/remove operations.
This sentence is a little confusing, perhaps rephrase a little:-
This technique almost always avoids lock contention on insert/remove,
however the xarray spinlock protects against any contention that remains.
> */
> Are you fine with it?
Other than the small nits above (sorry!) it seems fine! Thanks for
updating, much appreciated :)
>
> --
> Uladzislau Rezki
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
2023-03-29 16:23 ` Lorenzo Stoakes
@ 2023-03-29 17:50 ` Uladzislau Rezki
0 siblings, 0 replies; 14+ messages in thread
From: Uladzislau Rezki @ 2023-03-29 17:50 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Uladzislau Rezki, Andrew Morton, linux-mm, LKML, Baoquan He,
Christoph Hellwig, Matthew Wilcox, Dave Chinner,
Oleksiy Avramchenko
On Wed, Mar 29, 2023 at 05:23:04PM +0100, Lorenzo Stoakes wrote:
> On Wed, Mar 29, 2023 at 05:01:11PM +0200, Uladzislau Rezki wrote:
> > Hello, Lorenzo!
> >
> > > > /*
> > > > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > > > - * in the free path. Could get rid of this if we change the API to return a
> > > > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > > > + * In order to fast access to any "vmap_block" associated with a
> > > > + * specific address, we store them into a per-cpu xarray. A hash
> > > > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > > > + * value.
> > > > + *
> > > > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > > > + * serialized by a raw_smp_processor_id() current CPU, instead
> > > > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > > > + * a hash-table.
> > > > + *
> > > > + * An example:
> > > > + *
> > > > + * CPU_1 CPU_2 CPU_0
> > > > + * | | |
> > > > + * V V V
> > > > + * 0 10 20 30 40 50 60
> > > > + * |------|------|------|------|------|------|...<vmap address space>
> > > > + * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > > > + *
> > > > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > > > + * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > > > + *
> > > > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > > > + * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > > > + *
> > > > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > > > + * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > > > */
> > >
> > > OK so if I understand this correctly, you're overloading the per-CPU
> > > vmap_block_queue array to use as a simple hash based on the address and
> > > relying on the xa_lock() in xa_insert() to serialise in case of contention?
> > >
> > > I like the general heft of your comment but I feel this could be spelled
> > > out a little more clearly, something like:-
> > >
> > > In order to have fast access to any vmap_block object associated with a
> > > specific address, we use a hash.
> > >
> > > Rather than waste space on defining a new hash table we take advantage
> > > of the fact we already have a static per-cpu array vmap_block_queue.
> > >
> > > This is already used for per-CPU access to the block queue, however we
> > > overload this to _also_ act as a vmap_block hash. The hash function is
> > > addr_to_vbq() which hashes on vb->va->va_start.
> > >
> > > This then uses per_cpu() to lookup the _index_ rather than the
> > > _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
> > > indexed on the same key as the hash (vb->va->va_start).
> > >
> > > xarray read accesses are protected by RCU lock and inserts are protected
> > > by a spin lock so there is no risk of a race here.
> > >
> > /*
> > * In order to fast access to any "vmap_block" associated with a
> > * specific address, we use a hash.
> > *
> > * A per-cpu vmap_block_queue is used in both ways, to serialize
> > * an access to free block chains among CPUs(alloc path) and it
> > * also acts as a vmap_block hash(alloc/free paths). It means we
> > * overload it, since we already have the per-cpu array which is
> > * used as a hash table.
>
> Nit - it may be worth highlighting that when used as a hash, the 'cpu' is
> not in fact a cpu but rather a hash key.
>
> E.g. just add on the end of this something like:-
>
> When used as a hash table the 'cpu' passed to per_cpu is not actually a CPU
> but rather the hash key.
>
> > *
> > * A hash function is addr_to_vbq() which hashes any address to
> > * a specific index(in a hash) it belongs to. This then uses a
> > * per_cpu() macro to access the array with specific index.
>
> May need a tweak if you are happy with my review that we can simply have a
> helper that returns the xarray in which case we won't necessarily have this
> function :) but depends of course on how the respin looks!
>
> > *
> > * An example:
> > *
> > * CPU_1 CPU_2 CPU_0
> > * | | |
> > * V V V
> > * 0 10 20 30 40 50 60
> > * |------|------|------|------|------|------|...<vmap address space>
> > * CPU0 CPU1 CPU2 CPU0 CPU1 CPU2
> > *
> > * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > * it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > *
> > * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > * it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > *
> > * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > * it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > *
> > * This technique allows almost remove a lock-contention in locking
> > * primitives which protect insert/remove operations.
>
> This sentence is a little confusing, perhaps rephrase a little:-
>
> This technique almost always avoids lock contention on insert/remove,
> however the xarray spinlock protects against any contention that remains.
>
> > */
> > Are you fine with it?
>
> Other than the small nits above (sorry!) it seems fine! Thanks for
> updating, much appreciated :)
>
Good. I made the changes and will upload a new vX patch. Everything
that makes it clearer for readers is worth doing :)
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2023-03-29 17:50 UTC | newest]
Thread overview: 14+ messages
2023-03-27 17:01 [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Uladzislau Rezki (Sony)
2023-03-27 17:01 ` [PATCH v3 2/2] lib/test_vmalloc.c: Add vm_map_ram()/vm_unmap_ram() test case Uladzislau Rezki (Sony)
2023-03-27 20:28 ` Lorenzo Stoakes
2023-03-28 12:29 ` Uladzislau Rezki
2023-03-27 20:09 ` [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray Lorenzo Stoakes
2023-03-28 12:51 ` Uladzislau Rezki
2023-03-28 16:37 ` Uladzislau Rezki
2023-03-29 15:01 ` Uladzislau Rezki
2023-03-29 16:23 ` Lorenzo Stoakes
2023-03-29 17:50 ` Uladzislau Rezki
2023-03-28 3:25 ` Baoquan He
2023-03-28 12:34 ` Uladzislau Rezki
2023-03-29 4:33 ` Baoquan He
2023-03-29 6:54 ` Uladzislau Rezki