* [PATCH v2 0/2] Fix issues with vmalloc flush flag
@ 2019-05-20 23:38 ` Rick Edgecombe
  0 siblings, 0 replies; 18+ messages in thread
From: Rick Edgecombe @ 2019-05-20 23:38 UTC (permalink / raw)
  To: linux-kernel, peterz, sparclinux, linux-mm, netdev, luto
  Cc: dave.hansen, namit, davem, Rick Edgecombe

These two patches address issues with the recently added
VM_FLUSH_RESET_PERMS vmalloc flag. The fix is now split into two patches,
which made sense to me, but I can split it further if desired.

Patch 1 is the most critical and addresses an issue that could cause a
crash on x86.

Patch 2 tries to reduce the work done in the free operation by pushing
it to allocation time, where it would be more expected. This shouldn't
be a big issue most of the time, but I thought it was slightly better.

v2->v3:
 - Split into two patches

v1->v2:
 - Update commit message with more detail
 - Fix flush end range on !CONFIG_ARCH_HAS_SET_DIRECT_MAP case

Rick Edgecombe (2):
  vmalloc: Fix calculation of direct map addr range
  vmalloc: Remove work from vfree path

 mm/vmalloc.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 1/2] vmalloc: Fix calculation of direct map addr range
  2019-05-20 23:38 ` Rick Edgecombe
@ 2019-05-20 23:38   ` Rick Edgecombe
  -1 siblings, 0 replies; 18+ messages in thread
From: Rick Edgecombe @ 2019-05-20 23:38 UTC (permalink / raw)
  To: linux-kernel, peterz, sparclinux, linux-mm, netdev, luto
  Cc: dave.hansen, namit, davem, Rick Edgecombe, Meelis Roos,
	Borislav Petkov, Andy Lutomirski, Ingo Molnar, Rick Edgecombe

From: Rick Edgecombe <redgecombe.lkml@gmail.com>

The calculation of the direct map address range to flush was wrong.
This could cause problems on x86 if an RO direct map alias ever got
loaded into the TLB. This shouldn't normally happen, but if it did, the
permissions could remain RO on the direct map alias, and the page could
later be returned from the page allocator to some other component still
RO and cause a crash.

So fix the address range calculation so that the flush includes the
direct map range.
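
For clarity, the corrected accumulation boils down to the sketch below
(it mirrors the hunk that follows; the surrounding declarations are
assumed from vm_remove_mappings()):

	unsigned long start = ULONG_MAX, end = 0, addr;
	int i;

	for (i = 0; i < area->nr_pages; i++) {
		/* The direct map alias of each page spans one full page. */
		addr = (unsigned long)page_address(area->pages[i]);
		if (addr) {
			start = min(addr, start);
			/* end is exclusive, so include the whole page */
			end = max(addr + PAGE_SIZE, end);
		}
	}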

Fixes: 868b104d7379 ("mm/vmalloc: Add flag for freeing of special permsissions")
Cc: Meelis Roos <mroos@linux.ee>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 mm/vmalloc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c42872ed82ac..836888ae01f6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2159,9 +2159,10 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 	 * the vm_unmap_aliases() flush includes the direct map.
 	 */
 	for (i = 0; i < area->nr_pages; i++) {
-		if (page_address(area->pages[i])) {
+		addr = (unsigned long)page_address(area->pages[i]);
+		if (addr) {
 			start = min(addr, start);
-			end = max(addr, end);
+			end = max(addr + PAGE_SIZE, end);
 		}
 	}
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 2/2] vmalloc: Remove work from vfree path
  2019-05-20 23:38 ` Rick Edgecombe
@ 2019-05-20 23:38   ` Rick Edgecombe
  -1 siblings, 0 replies; 18+ messages in thread
From: Rick Edgecombe @ 2019-05-20 23:38 UTC (permalink / raw)
  To: linux-kernel, peterz, sparclinux, linux-mm, netdev, luto
  Cc: dave.hansen, namit, davem, Rick Edgecombe, Meelis Roos,
	Borislav Petkov, Andy Lutomirski, Ingo Molnar, Rick Edgecombe

From: Rick Edgecombe <redgecombe.lkml@gmail.com>

Calling vm_unmap_aliases() in vm_remove_mappings() can potentially be a
lot of work to do on a free operation. Simply flushing the TLB instead of
the whole vm_unmap_aliases() operation makes the frees faster and pushes
the heavy work to allocation time, where it would be more expected.
In addition to the extra work, vm_unmap_aliases() takes some locks,
including a long hold of vmap_purge_lock, which makes all other
VM_FLUSH_RESET_PERMS vfrees wait while the purge operation happens.

Lastly, page_address() can involve locking and lookups on some
configurations, so skip calling it entirely by exiting early when
!CONFIG_ARCH_HAS_SET_DIRECT_MAP.
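
Condensed, the resulting free path looks roughly like the sketch below
(assembled from the hunks that follow plus the unchanged parts of
vm_remove_mappings(), with abbreviated comments; not a drop-in
replacement):

	static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
	{
		const bool has_set_direct = IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP);
		const bool flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
		unsigned long addr = (unsigned long)area->addr;
		unsigned long start = addr, end = addr + area->size;
		int i;

		/* Arches without set_direct_map_*() reset permissions here,
		 * without leaving a RW+X window. */
		if (flush_reset && !has_set_direct) {
			set_memory_nx(addr, area->nr_pages);
			set_memory_rw(addr, area->nr_pages);
		}

		remove_vm_area(area->addr);
		if (!flush_reset)
			return;

		/* Only the vm mapping needs flushing; no direct map reset. */
		if (!deallocate_pages || !has_set_direct) {
			flush_tlb_kernel_range(start, end);
			return;
		}

		/* Widen the flush range over the direct map aliases. */
		for (i = 0; i < area->nr_pages; i++) {
			addr = (unsigned long)page_address(area->pages[i]);
			if (addr) {
				start = min(addr, start);
				end = max(addr + PAGE_SIZE, end);
			}
		}

		set_area_direct_map(area, set_direct_map_invalid_noflush);
		flush_tlb_kernel_range(start, end);
		set_area_direct_map(area, set_direct_map_default_noflush);
	}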

Cc: Meelis Roos <mroos@linux.ee>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 mm/vmalloc.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 836888ae01f6..8d03427626dc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2122,9 +2122,10 @@ static inline void set_area_direct_map(const struct vm_struct *area,
 /* Handle removing and resetting vm mappings related to the vm_struct. */
 static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 {
+	const bool has_set_direct = IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP);
+	const bool flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
 	unsigned long addr = (unsigned long)area->addr;
-	unsigned long start = ULONG_MAX, end = 0;
-	int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
+	unsigned long start = addr, end = addr + area->size;
 	int i;
 
 	/*
@@ -2133,7 +2134,7 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 	 * This is concerned with resetting the direct map any an vm alias with
 	 * execute permissions, without leaving a RW+X window.
 	 */
-	if (flush_reset && !IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+	if (flush_reset && !has_set_direct) {
 		set_memory_nx(addr, area->nr_pages);
 		set_memory_rw(addr, area->nr_pages);
 	}
@@ -2146,17 +2147,18 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 
 	/*
 	 * If not deallocating pages, just do the flush of the VM area and
-	 * return.
+	 * return. If the arch doesn't have set_direct_map_(), also skip the
+	 * below work.
 	 */
-	if (!deallocate_pages) {
-		vm_unmap_aliases();
+	if (!deallocate_pages || !has_set_direct) {
+		flush_tlb_kernel_range(start, end);
 		return;
 	}
 
 	/*
 	 * If execution gets here, flush the vm mapping and reset the direct
 	 * map. Find the start and end range of the direct mappings to make sure
-	 * the vm_unmap_aliases() flush includes the direct map.
+	 * the flush_tlb_kernel_range() includes the direct map.
 	 */
 	for (i = 0; i < area->nr_pages; i++) {
 		addr = (unsigned long)page_address(area->pages[i]);
@@ -2172,7 +2174,7 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 	 * reset the direct map permissions to the default.
 	 */
 	set_area_direct_map(area, set_direct_map_invalid_noflush);
-	_vm_unmap_aliases(start, end, 1);
+	flush_tlb_kernel_range(start, end);
 	set_area_direct_map(area, set_direct_map_default_noflush);
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] Fix issues with vmalloc flush flag
  2019-05-20 23:38 ` Rick Edgecombe
@ 2019-05-20 23:46   ` Edgecombe, Rick P
  -1 siblings, 0 replies; 18+ messages in thread
From: Edgecombe, Rick P @ 2019-05-20 23:46 UTC (permalink / raw)
  To: linux-kernel, peterz, linux-mm, netdev, sparclinux, luto
  Cc: davem, namit, Hansen, Dave

On Mon, 2019-05-20 at 16:38 -0700, Rick Edgecombe wrote:
> These two patches address issues with the recently added
> VM_FLUSH_RESET_PERMS vmalloc flag. It is now split into two patches,
> which
> made sense to me, but can split it further if desired.
> 
Oops, this was supposed to say PATCH v3. Let me know if I should
resend.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] vmalloc: Remove work from vfree path
  2019-05-20 23:38   ` Rick Edgecombe
@ 2019-05-21 16:17     ` Andy Lutomirski
  -1 siblings, 0 replies; 18+ messages in thread
From: Andy Lutomirski @ 2019-05-21 16:17 UTC (permalink / raw)
  To: Rick Edgecombe
  Cc: LKML, Peter Zijlstra, sparclinux, Linux-MM, Network Development,
	Dave Hansen, Nadav Amit, David S. Miller, Rick Edgecombe,
	Meelis Roos, Borislav Petkov, Andy Lutomirski, Ingo Molnar

On Mon, May 20, 2019 at 4:39 PM Rick Edgecombe
<rick.p.edgecombe@intel.com> wrote:
>
> From: Rick Edgecombe <redgecombe.lkml@gmail.com>
>
> Calling vm_unmap_alias() in vm_remove_mappings() could potentially be a
> lot of work to do on a free operation. Simply flushing the TLB instead of
> the whole vm_unmap_alias() operation makes the frees faster and pushes
> the heavy work to happen on allocation where it would be more expected.
> In addition to the extra work, vm_unmap_alias() takes some locks including
> a long hold of vmap_purge_lock, which will make all other
> VM_FLUSH_RESET_PERMS vfrees wait while the purge operation happens.
>
> Lastly, page_address() can involve locking and lookups on some
> configurations, so skip calling this by exiting out early when
> !CONFIG_ARCH_HAS_SET_DIRECT_MAP.

Hmm.  I would have expected that the major cost of vm_unmap_aliases()
would be the flush, and at least informing the code that the flush
happened seems valuable.  So I would guess that this patch is actually
a loss in throughput.
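
For context, the call being replaced amounts to roughly the following
(a heavily simplified sketch of _vm_unmap_aliases(), not the exact
mm/vmalloc.c code):

	static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
	{
		/* Walk the per-CPU vmap blocks, widening [start, end) over
		 * any dirty ranges and noting whether a flush is needed. */

		mutex_lock(&vmap_purge_lock);	/* serializes every purger */
		/* Free the lazily unmapped areas; the TLB flush over the
		 * accumulated range happens inside the purge. */
		if (!__purge_vmap_area_lazy(start, end) && flush)
			flush_tlb_kernel_range(start, end);
		mutex_unlock(&vmap_purge_lock);
	}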

--Andy

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] vmalloc: Remove work from vfree path
  2019-05-21 16:17     ` Andy Lutomirski
@ 2019-05-21 16:51       ` Edgecombe, Rick P
  -1 siblings, 0 replies; 18+ messages in thread
From: Edgecombe, Rick P @ 2019-05-21 16:51 UTC (permalink / raw)
  To: luto
  Cc: linux-kernel, peterz, linux-mm, mroos, redgecombe.lkml, mingo,
	namit, netdev, Hansen, Dave, bp, davem, sparclinux

On Tue, 2019-05-21 at 09:17 -0700, Andy Lutomirski wrote:
> On Mon, May 20, 2019 at 4:39 PM Rick Edgecombe
> <rick.p.edgecombe@intel.com> wrote:
> > From: Rick Edgecombe <redgecombe.lkml@gmail.com>
> > 
> > Calling vm_unmap_alias() in vm_remove_mappings() could potentially
> > be a
> > lot of work to do on a free operation. Simply flushing the TLB
> > instead of
> > the whole vm_unmap_alias() operation makes the frees faster and
> > pushes
> > the heavy work to happen on allocation where it would be more
> > expected.
> > In addition to the extra work, vm_unmap_alias() takes some locks
> > including
> > a long hold of vmap_purge_lock, which will make all other
> > VM_FLUSH_RESET_PERMS vfrees wait while the purge operation happens.
> > 
> > Lastly, page_address() can involve locking and lookups on some
> > configurations, so skip calling this by exiting out early when
> > !CONFIG_ARCH_HAS_SET_DIRECT_MAP.
> 
> Hmm.  I would have expected that the major cost of vm_unmap_aliases()
> would be the flush, and at least informing the code that the flush
> happened seems valuable.  So would guess that this patch is actually
> a
> loss in throughput.
> 
You are probably right about the flush taking the longest. The original
idea of using it was exactly to improve throughput by saving a flush.
However, with vm_unmap_aliases() the flush will be over a larger range
than before for most arches, since it will likely span from the module
space to vmalloc space. From poking around the sparc TLB flush history, I
guess the lazy purges used to be (still are?) a problem for them
because it would try to flush each page individually for some CPUs. Not
sure about all of the other architectures, but for any implementation
like that, using vm_unmap_aliases() would turn an occasional long
operation into a more frequent one.

On x86, it shouldn't be a problem to use it. We already used to call
this function several times around an exec permission vfree.

I guess it's a tradeoff that depends on how fast large-range TLB flushes
usually are compared to small ones. I am ok dropping it, if it doesn't
seem worth it.
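
As a rough illustration of the concern, an implementation along the
lines below (purely hypothetical, not sparc's actual code) would turn
one large ranged flush into a page-by-page loop:

	void flush_tlb_kernel_range(unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* One flush operation per page: a range spanning from the
		 * module area to vmalloc space gets very expensive. */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			flush_one_kernel_tlb_entry(addr); /* hypothetical primitive */
	}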

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] vmalloc: Remove work from vfree path
  2019-05-21 16:51       ` Edgecombe, Rick P
@ 2019-05-21 17:00         ` Andy Lutomirski
  -1 siblings, 0 replies; 18+ messages in thread
From: Andy Lutomirski @ 2019-05-21 17:00 UTC (permalink / raw)
  To: Edgecombe, Rick P
  Cc: luto, linux-kernel, peterz, linux-mm, mroos, redgecombe.lkml,
	mingo, namit, netdev, Hansen, Dave, bp, davem, sparclinux

On Tue, May 21, 2019 at 9:51 AM Edgecombe, Rick P
<rick.p.edgecombe@intel.com> wrote:
>
> On Tue, 2019-05-21 at 09:17 -0700, Andy Lutomirski wrote:
> > On Mon, May 20, 2019 at 4:39 PM Rick Edgecombe
> > <rick.p.edgecombe@intel.com> wrote:
> > > From: Rick Edgecombe <redgecombe.lkml@gmail.com>
> > >
> > > Calling vm_unmap_alias() in vm_remove_mappings() could potentially
> > > be a
> > > lot of work to do on a free operation. Simply flushing the TLB
> > > instead of
> > > the whole vm_unmap_alias() operation makes the frees faster and
> > > pushes
> > > the heavy work to happen on allocation where it would be more
> > > expected.
> > > In addition to the extra work, vm_unmap_alias() takes some locks
> > > including
> > > a long hold of vmap_purge_lock, which will make all other
> > > VM_FLUSH_RESET_PERMS vfrees wait while the purge operation happens.
> > >
> > > Lastly, page_address() can involve locking and lookups on some
> > > configurations, so skip calling this by exiting out early when
> > > !CONFIG_ARCH_HAS_SET_DIRECT_MAP.
> >
> > Hmm.  I would have expected that the major cost of vm_unmap_aliases()
> > would be the flush, and at least informing the code that the flush
> > happened seems valuable.  So would guess that this patch is actually
> > a
> > loss in throughput.
> >
> You are probably right about the flush taking the longest. The original
> idea of using it was exactly to improve throughput by saving a flush.
> However with vm_unmap_aliases() the flush will be over a larger range
> than before for most arch's since it will likley span from the module
> space to vmalloc. From poking around the sparc tlb flush history, I
> guess the lazy purges used to be (still are?) a problem for them
> because it would try to flush each page individually for some CPUs. Not
> sure about all of the other architectures, but for any implementation
> like that, using vm_unmap_alias() would turn an occasional long
> operation into a more frequent one.
>
> On x86, it shouldn't be a problem to use it. We already used to call
> this function several times around a exec permission vfree.
>
> I guess its a tradeoff that depends on how fast large range TLB flushes
> usually are compared to small ones. I am ok dropping it, if it doesn't
> seem worth it.

On x86, a full flush is probably not much slower than just flushing a
page or two -- the main cost is in the TLB refill.  I don't know about
other architectures.  I would drop this patch unless you have numbers
suggesting that it's a win.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] vmalloc: Remove work from vfree path
  2019-05-21 17:00         ` Andy Lutomirski
@ 2019-05-21 19:47           ` Edgecombe, Rick P
  -1 siblings, 0 replies; 18+ messages in thread
From: Edgecombe, Rick P @ 2019-05-21 19:47 UTC (permalink / raw)
  To: luto
  Cc: linux-kernel, peterz, linux-mm, mroos, redgecombe.lkml, mingo,
	namit, netdev, Hansen, Dave, bp, davem, sparclinux

On Tue, 2019-05-21 at 10:00 -0700, Andy Lutomirski wrote:
> On Tue, May 21, 2019 at 9:51 AM Edgecombe, Rick P
> <rick.p.edgecombe@intel.com> wrote:
> > On Tue, 2019-05-21 at 09:17 -0700, Andy Lutomirski wrote:
> > > On Mon, May 20, 2019 at 4:39 PM Rick Edgecombe
> > > <rick.p.edgecombe@intel.com> wrote:
> > > > From: Rick Edgecombe <redgecombe.lkml@gmail.com>
> > > > 
> > > > Calling vm_unmap_alias() in vm_remove_mappings() could
> > > > potentially
> > > > be a
> > > > lot of work to do on a free operation. Simply flushing the TLB
> > > > instead of
> > > > the whole vm_unmap_alias() operation makes the frees faster and
> > > > pushes
> > > > the heavy work to happen on allocation where it would be more
> > > > expected.
> > > > In addition to the extra work, vm_unmap_alias() takes some
> > > > locks
> > > > including
> > > > a long hold of vmap_purge_lock, which will make all other
> > > > VM_FLUSH_RESET_PERMS vfrees wait while the purge operation
> > > > happens.
> > > > 
> > > > Lastly, page_address() can involve locking and lookups on some
> > > > configurations, so skip calling this by exiting out early when
> > > > !CONFIG_ARCH_HAS_SET_DIRECT_MAP.
> > > 
> > > Hmm.  I would have expected that the major cost of
> > > vm_unmap_aliases()
> > > would be the flush, and at least informing the code that the
> > > flush
> > > happened seems valuable.  So would guess that this patch is
> > > actually
> > > a
> > > loss in throughput.
> > > 
> > You are probably right about the flush taking the longest. The
> > original
> > idea of using it was exactly to improve throughput by saving a
> > flush.
> > However with vm_unmap_aliases() the flush will be over a larger
> > range
> > than before for most arch's since it will likley span from the
> > module
> > space to vmalloc. From poking around the sparc tlb flush history, I
> > guess the lazy purges used to be (still are?) a problem for them
> > because it would try to flush each page individually for some CPUs.
> > Not
> > sure about all of the other architectures, but for any
> > implementation
> > like that, using vm_unmap_alias() would turn an occasional long
> > operation into a more frequent one.
> > 
> > On x86, it shouldn't be a problem to use it. We already used to
> > call
> > this function several times around a exec permission vfree.
> > 
> > I guess its a tradeoff that depends on how fast large range TLB
> > flushes
> > usually are compared to small ones. I am ok dropping it, if it
> > doesn't
> > seem worth it.
> 
> On x86, a full flush is probably not much slower than just flushing a
> page or two -- the main cost is in the TLB refill.  I don't know
> about
> other architectures.  I would drop this patch unless you have numbers
> suggesting that it's a win.

Ok. This patch also inadvertently improved the correctness of the
flush_tlb_kernel_range() calls in a rare situation. I'll work that into
a different patch.

Thanks,

Rick

^ permalink raw reply	[flat|nested] 18+ messages in thread
