* [PATCH v4 0/2] vmalloc enhancements
From: Roman Gushchin @ 2019-04-17 19:40
To: Andrew Morton
Cc: linux-mm, linux-kernel, kernel-team, Matthew Wilcox, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin

The patchset removes a redundant operation in __vunmap() and exports the
number of pages used by vmalloc() in /proc/meminfo.

Patch (1) removes some redundancy in __vunmap().
Patch (2) adds a vmalloc counter to /proc/meminfo.

v4->v3:
  - rebased on top of current mm tree
  - dropped alloc_vmap_area() refactoring

v3->v2:
  - switched back to atomic after more accurate perf measurements:
    no visible perf difference
  - added perf stacktraces in commit message of (1)

v2->v1:
  - rebased on top of current mm tree
  - switched from atomic to percpu vmalloc page counter

RFC->v1:
  - removed bogus empty lines (suggested by Matthew Wilcox)
  - made nr_vmalloc_pages static (suggested by Matthew Wilcox)
  - dropped patch 3 from the RFC patchset, will post later with some
    other changes
  - dropped RFC

Roman Gushchin (2):
  mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
  mm: show number of vmalloc pages in /proc/meminfo

 fs/proc/meminfo.c       |  2 +-
 include/linux/vmalloc.h |  2 ++
 mm/vmalloc.c            | 57 ++++++++++++++++++++++++++---------------
 3 files changed, 40 insertions(+), 21 deletions(-)

-- 
2.20.1
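[Editorial note: once patch (2) of this series lands, the counter surfaces as the VmallocUsed field of /proc/meminfo. A minimal userspace sketch of pulling that value out of /proc/meminfo-style text might look like the following — the vmalloc_used_kb() helper is purely illustrative and not part of the patchset:]

```c
#include <stdio.h>
#include <string.h>

/*
 * Parse the "VmallocUsed:" value (in kB) out of a /proc/meminfo-style
 * buffer.  Returns -1 if the field is missing or malformed.
 */
static long vmalloc_used_kb(const char *meminfo)
{
	const char *line = strstr(meminfo, "VmallocUsed:");
	long kb;

	if (!line || sscanf(line, "VmallocUsed: %ld kB", &kb) != 1)
		return -1;
	return kb;
}
```

In practice the buffer would come from reading /proc/meminfo; here a canned string is enough to exercise the parsing.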
* [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Roman Gushchin @ 2019-04-17 19:40
To: Andrew Morton
Cc: linux-mm, linux-kernel, kernel-team, Matthew Wilcox, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin

__vunmap() calls find_vm_area() twice without an obvious reason:
first directly to get the area pointer, second indirectly by calling
remove_vm_area(), which searches for the area again.

To remove this redundancy, let's split remove_vm_area() into
__remove_vm_area(struct vmap_area *), which performs the actual area
removal, and the remove_vm_area(const void *addr) wrapper, which can
be used everywhere it has been used before.

On my test setup, I've got a 5-10% speedup on vfree()'ing 1000000
4-page vmalloc blocks.

Perf report before:
  22.64%  cat  [kernel.vmlinux]  [k] free_pcppages_bulk
  10.30%  cat  [kernel.vmlinux]  [k] __vunmap
   9.80%  cat  [kernel.vmlinux]  [k] find_vmap_area
   8.11%  cat  [kernel.vmlinux]  [k] vunmap_page_range
   4.20%  cat  [kernel.vmlinux]  [k] __slab_free
   3.56%  cat  [kernel.vmlinux]  [k] __list_del_entry_valid
   3.46%  cat  [kernel.vmlinux]  [k] smp_call_function_many
   3.33%  cat  [kernel.vmlinux]  [k] kfree
   3.32%  cat  [kernel.vmlinux]  [k] free_unref_page

Perf report after:
  23.01%  cat  [kernel.kallsyms]  [k] free_pcppages_bulk
   9.46%  cat  [kernel.kallsyms]  [k] __vunmap
   9.15%  cat  [kernel.kallsyms]  [k] vunmap_page_range
   6.17%  cat  [kernel.kallsyms]  [k] __slab_free
   5.61%  cat  [kernel.kallsyms]  [k] kfree
   4.86%  cat  [kernel.kallsyms]  [k] bad_range
   4.67%  cat  [kernel.kallsyms]  [k] free_unref_page_commit
   4.24%  cat  [kernel.kallsyms]  [k] __list_del_entry_valid
   3.68%  cat  [kernel.kallsyms]  [k] free_unref_page
   3.65%  cat  [kernel.kallsyms]  [k] __list_add_valid
   3.19%  cat  [kernel.kallsyms]  [k] __purge_vmap_area_lazy
   3.10%  cat  [kernel.kallsyms]  [k] find_vmap_area
   3.05%  cat  [kernel.kallsyms]  [k] rcu_cblist_dequeue

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Matthew Wilcox <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/vmalloc.c | 47 +++++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 92b784d8088c..8ad8e8464e55 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2068,6 +2068,24 @@ struct vm_struct *find_vm_area(const void *addr)
 	return NULL;
 }
 
+static struct vm_struct *__remove_vm_area(struct vmap_area *va)
+{
+	struct vm_struct *vm = va->vm;
+
+	might_sleep();
+
+	spin_lock(&vmap_area_lock);
+	va->vm = NULL;
+	va->flags &= ~VM_VM_AREA;
+	va->flags |= VM_LAZY_FREE;
+	spin_unlock(&vmap_area_lock);
+
+	kasan_free_shadow(vm);
+	free_unmap_vmap_area(va);
+
+	return vm;
+}
+
 /**
  * remove_vm_area - find and remove a continuous kernel virtual area
  * @addr: base address
@@ -2080,31 +2098,20 @@ struct vm_struct *find_vm_area(const void *addr)
  */
 struct vm_struct *remove_vm_area(const void *addr)
 {
+	struct vm_struct *vm = NULL;
 	struct vmap_area *va;
 
-	might_sleep();
-
 	va = find_vmap_area((unsigned long)addr);
-	if (va && va->flags & VM_VM_AREA) {
-		struct vm_struct *vm = va->vm;
-
-		spin_lock(&vmap_area_lock);
-		va->vm = NULL;
-		va->flags &= ~VM_VM_AREA;
-		va->flags |= VM_LAZY_FREE;
-		spin_unlock(&vmap_area_lock);
-
-		kasan_free_shadow(vm);
-		free_unmap_vmap_area(va);
+	if (va && va->flags & VM_VM_AREA)
+		vm = __remove_vm_area(va);
 
-		return vm;
-	}
-	return NULL;
+	return vm;
 }
 
 static void __vunmap(const void *addr, int deallocate_pages)
 {
 	struct vm_struct *area;
+	struct vmap_area *va;
 
 	if (!addr)
 		return;
@@ -2113,17 +2120,18 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			addr))
 		return;
 
-	area = find_vm_area(addr);
-	if (unlikely(!area)) {
+	va = find_vmap_area((unsigned long)addr);
+	if (unlikely(!va || !(va->flags & VM_VM_AREA))) {
 		WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
 				addr);
 		return;
 	}
 
+	area = va->vm;
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
-	remove_vm_area(addr);
+	__remove_vm_area(va);
 	if (deallocate_pages) {
 		int i;
 
@@ -2138,7 +2146,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	}
 
 	kfree(area);
-	return;
 }
 
 static inline void __vfree_deferred(const void *addr)
-- 
2.20.1
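[Editorial note: the shape of the refactoring above — look the object up once and hand the pointer to an internal helper, instead of re-searching by address — can be sketched in plain userspace C. The types and names below are illustrative stand-ins, not the kernel's:]

```c
#include <stddef.h>

struct area { unsigned long addr; int live; };

static struct area areas[4];	/* toy stand-in for the vmap_area tree */

/* The costly lookup, analogous to find_vmap_area() */
static struct area *find_area(unsigned long addr)
{
	for (size_t i = 0; i < 4; i++)
		if (areas[i].live && areas[i].addr == addr)
			return &areas[i];
	return NULL;
}

/* __remove_area(): takes the already-found pointer, no second search */
static void __remove_area(struct area *a)
{
	a->live = 0;
}

/* remove_area(): address-based wrapper kept for existing callers */
static int remove_area(unsigned long addr)
{
	struct area *a = find_area(addr);

	if (!a)
		return -1;
	__remove_area(a);	/* internal fast paths call this directly */
	return 0;
}
```

A caller that already holds the pointer (as __vunmap() does after its own lookup) calls __remove_area() directly and skips the redundant search, which is where the reported 5-10% comes from.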
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Andrew Morton @ 2019-04-17 21:58
To: Roman Gushchin
Cc: linux-mm, linux-kernel, kernel-team, Matthew Wilcox, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin

On Wed, 17 Apr 2019 12:40:01 -0700 Roman Gushchin <guroan@gmail.com> wrote:

> __vunmap() calls find_vm_area() twice without an obvious reason:
> first directly to get the area pointer, second indirectly by calling
> remove_vm_area(), which is again searching for the area.
>
> ...
>
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2068,6 +2068,24 @@ struct vm_struct *find_vm_area(const void *addr)
>  	return NULL;
>  }
>  
> +static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> +{
> +	struct vm_struct *vm = va->vm;
> +
> +	might_sleep();

Where might __remove_vm_area() sleep?

From a quick scan I'm only seeing vfree(), and that has the
might_sleep_if(!in_interrupt()).

So perhaps we can remove this...

> +	spin_lock(&vmap_area_lock);
> +	va->vm = NULL;
> +	va->flags &= ~VM_VM_AREA;
> +	va->flags |= VM_LAZY_FREE;
> +	spin_unlock(&vmap_area_lock);
> +
> +	kasan_free_shadow(vm);
> +	free_unmap_vmap_area(va);
> +
> +	return vm;
> +}
> +
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Roman Gushchin @ 2019-04-17 23:02
To: Andrew Morton
Cc: Roman Gushchin, linux-mm, linux-kernel, Kernel Team, Matthew Wilcox,
    Johannes Weiner, Vlastimil Babka

On Wed, Apr 17, 2019 at 02:58:27PM -0700, Andrew Morton wrote:
> On Wed, 17 Apr 2019 12:40:01 -0700 Roman Gushchin <guroan@gmail.com> wrote:
>
> > +static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> > +{
> > +	struct vm_struct *vm = va->vm;
> > +
> > +	might_sleep();
>
> Where might __remove_vm_area() sleep?
>
> From a quick scan I'm only seeing vfree(), and that has the
> might_sleep_if(!in_interrupt()).
>
> So perhaps we can remove this...

Agree. Here is the patch. Thank you!

-- 

From 4adf58e4d3ffe45a542156ca0bce3dc9f6679939 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <guro@fb.com>
Date: Wed, 17 Apr 2019 15:55:49 -0700
Subject: [PATCH] mm: remove might_sleep() in __remove_vm_area()

__remove_vm_area() has a redundant might_sleep() call, which isn't
really required, because the only place it can sleep is vfree() and
it already contains might_sleep_if(!in_interrupt()).

Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/vmalloc.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 69a5673c4cd3..4a91acce4b5f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2079,8 +2079,6 @@ static struct vm_struct *__remove_vm_area(struct vmap_area *va)
 {
 	struct vm_struct *vm = va->vm;
 
-	might_sleep();
-
 	spin_lock(&vmap_area_lock);
 	va->vm = NULL;
 	va->flags &= ~VM_VM_AREA;
-- 
2.20.1
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Matthew Wilcox @ 2019-04-18 11:18
To: Andrew Morton
Cc: Roman Gushchin, linux-mm, linux-kernel, kernel-team, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin, Christoph Hellwig, Joel Fernandes

On Wed, Apr 17, 2019 at 02:58:27PM -0700, Andrew Morton wrote:
> On Wed, 17 Apr 2019 12:40:01 -0700 Roman Gushchin <guroan@gmail.com> wrote:
> > +static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> > +{
> > +	struct vm_struct *vm = va->vm;
> > +
> > +	might_sleep();
>
> Where might __remove_vm_area() sleep?
>
> From a quick scan I'm only seeing vfree(), and that has the
> might_sleep_if(!in_interrupt()).
>
> So perhaps we can remove this...

See commit 5803ed292e63 ("mm: mark all calls into the vmalloc subsystem
as potentially sleeping").

It looks like the intent is to unconditionally check might_sleep() at
the entry points to the vmalloc code, rather than only catch them in
the occasional place where it happens to go wrong.
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Andrew Morton @ 2019-04-18 22:24
To: Matthew Wilcox
Cc: Roman Gushchin, linux-mm, linux-kernel, kernel-team, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin, Christoph Hellwig, Joel Fernandes

On Thu, 18 Apr 2019 04:18:34 -0700 Matthew Wilcox <willy@infradead.org> wrote:

> See commit 5803ed292e63 ("mm: mark all calls into the vmalloc subsystem
> as potentially sleeping")
>
> It looks like the intent is to unconditionally check might_sleep() at
> the entry points to the vmalloc code, rather than only catch them in
> the occasional place where it happens to go wrong.

afaict, vfree() will only do a mutex_trylock() in
try_purge_vmap_area_lazy().  So does vfree() actually sleep in any
situation, whether or not local interrupts are enabled?
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Eric Dumazet @ 2019-04-18 23:17
To: Andrew Morton, Matthew Wilcox
Cc: Roman Gushchin, linux-mm, linux-kernel, kernel-team, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin, Christoph Hellwig, Joel Fernandes

On 04/18/2019 03:24 PM, Andrew Morton wrote:
> afaict, vfree() will only do a mutex_trylock() in
> try_purge_vmap_area_lazy(). So does vfree actually sleep in any
> situation? Whether or not local interrupts are enabled?

We would be in big trouble if vfree() could potentially sleep...

Random example: __free_fdtable() called from an RCU callback.
* Re: [PATCH v4 1/2] mm: refactor __vunmap() to avoid duplicated call to find_vm_area()
From: Al Viro @ 2019-04-19 19:08
To: Andrew Morton
Cc: Matthew Wilcox, Roman Gushchin, linux-mm, linux-kernel, kernel-team,
    Johannes Weiner, Vlastimil Babka, Roman Gushchin, Christoph Hellwig,
    Joel Fernandes

On Thu, Apr 18, 2019 at 03:24:31PM -0700, Andrew Morton wrote:
> On Thu, 18 Apr 2019 04:18:34 -0700 Matthew Wilcox <willy@infradead.org> wrote:
> > See commit 5803ed292e63 ("mm: mark all calls into the vmalloc subsystem
> > as potentially sleeping")
> >
> > It looks like the intent is to unconditionally check might_sleep() at
> > the entry points to the vmalloc code, rather than only catch them in
> > the occasional place where it happens to go wrong.
>
> afaict, vfree() will only do a mutex_trylock() in
> try_purge_vmap_area_lazy().  So does vfree actually sleep in any
> situation?  Whether or not local interrupts are enabled?

IIRC, the original problem that used to prohibit vfree() in interrupts
was the use of spinlocks that were taken in a lot of places by plain
spin_lock().  I'm not sure it could actually sleep in anything not too
ancient...
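[Editorial note: the mutex_trylock() point in this sub-thread is about lock acquisition that never blocks. A userspace analogue with pthreads — illustrative only, the kernel primitive differs — shows the pattern of skipping the work rather than sleeping when the lock is contended:]

```c
#include <pthread.h>

static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;
static int purge_runs;

/*
 * Try to purge; if someone else holds the lock, skip the purge instead
 * of sleeping, mirroring how try_purge_vmap_area_lazy() uses
 * mutex_trylock() so vfree() never blocks on this path.
 */
static int try_purge(void)
{
	if (pthread_mutex_trylock(&purge_lock) != 0)
		return 0;	/* contended: caller does not block */
	purge_runs++;		/* ...do the actual purge work here... */
	pthread_mutex_unlock(&purge_lock);
	return 1;
}
```

Because trylock returns immediately with EBUSY instead of sleeping, this path stays safe in contexts where blocking is forbidden, which is the crux of the discussion above.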
* [PATCH v4 2/2] mm: show number of vmalloc pages in /proc/meminfo
From: Roman Gushchin @ 2019-04-17 19:40
To: Andrew Morton
Cc: linux-mm, linux-kernel, kernel-team, Matthew Wilcox, Johannes Weiner,
    Vlastimil Babka, Roman Gushchin

Vmalloc() is getting used more and more these days (kernel stacks, bpf
and the percpu allocator are new top users), and the total % of memory
consumed by vmalloc() can be pretty significant and changes dynamically.

/proc/meminfo is the best place to display this information:
its top goal is to show top consumers of the memory.

Since the VmallocUsed field in /proc/meminfo has not been in use for
quite a long time (it has been defined to 0 by commit a5ad88ce8c7f
("mm: get rid of 'vmalloc_info' from /proc/meminfo")), let's reuse it
for showing the actual physical memory consumption of vmalloc().

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
 fs/proc/meminfo.c       |  2 +-
 include/linux/vmalloc.h |  2 ++
 mm/vmalloc.c            | 10 ++++++++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 568d90e17c17..465ea0153b2a 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -120,7 +120,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "Committed_AS:   ", committed);
 	seq_printf(m, "VmallocTotal:   %8lu kB\n",
 		   (unsigned long)VMALLOC_TOTAL >> 10);
-	show_val_kb(m, "VmallocUsed:    ", 0ul);
+	show_val_kb(m, "VmallocUsed:    ", vmalloc_nr_pages());
 	show_val_kb(m, "VmallocChunk:   ", 0ul);
 	show_val_kb(m, "Percpu:         ", pcpu_nr_pages());
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index ad483378fdd1..316efa31c8b8 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -67,10 +67,12 @@ extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
 extern void __init vmalloc_init(void);
+extern unsigned long vmalloc_nr_pages(void);
 #else
 static inline void vmalloc_init(void)
 {
 }
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
 extern void *vmalloc(unsigned long size);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8ad8e8464e55..69a5673c4cd3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -397,6 +397,13 @@ static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
 
+static atomic_long_t nr_vmalloc_pages;
+
+unsigned long vmalloc_nr_pages(void)
+{
+	return atomic_long_read(&nr_vmalloc_pages);
+}
+
 static struct vmap_area *__find_vmap_area(unsigned long addr)
 {
 	struct rb_node *n = vmap_area_root.rb_node;
@@ -2141,6 +2148,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			BUG_ON(!page);
 			__free_pages(page, 0);
 		}
+		atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
 
 		kvfree(area->pages);
 	}
@@ -2317,12 +2325,14 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		if (unlikely(!page)) {
 			/* Successfully allocated i pages, free them in __vunmap() */
 			area->nr_pages = i;
+			atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 			goto fail;
 		}
 		area->pages[i] = page;
 		if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
 			cond_resched();
 	}
+	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
 	if (map_vm_area(area, prot, pages))
 		goto fail;
-- 
2.20.1
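[Editorial note: the counting scheme in this patch — a single atomic counter bumped at allocation, dropped at free, and read locklessly by the reporting side — maps directly onto C11 atomics in userspace. The demo_* names below are illustrative stand-ins for the kernel helpers:]

```c
#include <stdatomic.h>

static atomic_long nr_vmalloc_pages;	/* analogue of the patch's counter */

/* Bump on allocation, like __vmalloc_area_node() */
static void demo_alloc(long nr_pages)
{
	atomic_fetch_add(&nr_vmalloc_pages, nr_pages);
}

/* Drop on free, like __vunmap() with deallocate_pages set */
static void demo_free(long nr_pages)
{
	atomic_fetch_sub(&nr_vmalloc_pages, nr_pages);
}

/* Lockless read, like vmalloc_nr_pages() feeding /proc/meminfo */
static long demo_nr_pages(void)
{
	return atomic_load(&nr_vmalloc_pages);
}
```

The reader never takes a lock, which is why a plain atomic was judged cheap enough here (the cover letter notes perf showed no visible difference versus a percpu counter).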