* [PATCH] x86/mm/cpa: Flush direct map alias during cpa
@ 2020-04-23  3:13 Rick Edgecombe
  2020-04-23  8:41 ` Peter Zijlstra
From: Rick Edgecombe @ 2020-04-23  3:13 UTC (permalink / raw)
  To: dave.hansen, luto, peterz, tglx, x86, mingo, bp, hpa, linux-kernel
  Cc: Rick Edgecombe

Change cpa_flush() to always call flush_tlb_all(), as it did previously.
As an optimization, cpa_flush() was changed to optionally only flush the
range in "struct cpa_data *data" if it was small enough. However, this
range does not include any direct map aliases changed in
cpa_process_alias(). So small set_memory_() calls that touch that alias
don't get the direct map changes flushed. This situation can happen when
the virtual-address-taking variants are passed an address in vmalloc or
modules space.
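
As a minimal illustration (hypothetical variable names, not from the
patch), the two aliases of one physical page look like:

void *v = vmalloc(PAGE_SIZE);               /* vmalloc mapping */
void *d = page_address(vmalloc_to_page(v)); /* direct map alias */

A set_memory_np() on v changes the PTEs behind both v and d, but only
the range around v was being flushed.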

The issue was verified by creating a sequence like:
/* Allocate something in vmalloc */
void *ptr = vmalloc(num_pages * PAGE_SIZE);
/* Set vmalloc addr and direct map alias as not present */
set_memory_np((unsigned long)ptr, num_pages);
/* Try to read from direct map alias */
printk("%d\n", *(int *)page_address(vmalloc_to_page(ptr)));

This successfully read from the direct mapped page that had just been
set as not present.

There is usually some flushing that happens before the PTE is set in this
operation, which covers up that the alias doesn't actually get flushed
after it's changed. So to reproduce, set_memory_np() was also tweaked to
touch the address right before it clears its PTE. This loads the old value
in the TLB to simulate a state that could happen in real life for a number
of reasons.
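
A sketch of that tweak (hypothetical helper, assuming it is called from
the set_memory_np() path just before the PTEs are cleared):

/* Read each page so its old translation is cached in the TLB. */
static void prime_tlb(unsigned long addr, int numpages)
{
	int i;

	for (i = 0; i < numpages; i++)
		(void)*(volatile char *)(addr + i * PAGE_SIZE);
}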

Note this issue does not extend to cases where the set_memory_() calls
are passed a direct map address, or page array, etc., as the primary
target. In those cases the direct map would be flushed.

Fixes: 935f5839827e ("x86/mm/cpa: Optimize cpa_flush_array() TLB invalidation")
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---

Besides the flushing that happens in most cases before the PTE is changed,
the other thing that covers this up is that stale TLB hits around RO->RW
CPAs would be silently fixed by the spurious fault fixer. However, it
looks like some of the set_memory_uc() calls can operate on vmapped
memory addresses, in which case the flush of the UC attribute on the
direct map would be missed.

So there isn't any confirmed bug, but it seems like these ranges should
be flushed, and there possibly is a real issue around cache attributes.
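
As a hypothetical example of the cache attribute case (not a confirmed
in-tree caller), assuming set_memory_uc() is handed a vmapped address:

void *buf = vmalloc(PAGE_SIZE);

/* Make the vmalloc mapping and its direct map alias uncached. */
set_memory_uc((unsigned long)buf, 1);

Without a flush of the alias, any stale WB translation of the direct
map address could remain in a TLB, so accesses through the alias might
still use the old cache attribute.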

 arch/x86/mm/pat/set_memory.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index c4aedd00c1ba..9b6d2854b842 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -331,15 +331,6 @@ static void cpa_flush_all(unsigned long cache)
 	on_each_cpu(__cpa_flush_all, (void *) cache, 1);
 }
 
-static void __cpa_flush_tlb(void *data)
-{
-	struct cpa_data *cpa = data;
-	unsigned int i;
-
-	for (i = 0; i < cpa->numpages; i++)
-		__flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
-}
-
 static void cpa_flush(struct cpa_data *data, int cache)
 {
 	struct cpa_data *cpa = data;
@@ -352,10 +343,7 @@ static void cpa_flush(struct cpa_data *data, int cache)
 		return;
 	}
 
-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
-		flush_tlb_all();
+	flush_tlb_all();
 
 	if (!cache)
 		return;
-- 
2.20.1



* Re: [PATCH] x86/mm/cpa: Flush direct map alias during cpa
  2020-04-23  3:13 [PATCH] x86/mm/cpa: Flush direct map alias during cpa Rick Edgecombe
@ 2020-04-23  8:41 ` Peter Zijlstra
  2020-04-23 19:02   ` Edgecombe, Rick P
From: Peter Zijlstra @ 2020-04-23  8:41 UTC (permalink / raw)
  To: Rick Edgecombe; +Cc: dave.hansen, luto, tglx, x86, mingo, bp, hpa, linux-kernel

On Wed, Apr 22, 2020 at 08:13:55PM -0700, Rick Edgecombe wrote:
> Change cpa_flush() to always call flush_tlb_all(), as it did previously.
> As an optimization, cpa_flush() was changed to optionally only flush the
> range in "struct cpa_data *data" if it was small enough. However, this
> range does not include any direct map aliases changed in
> cpa_process_alias(). So small set_memory_() calls that touch that alias
> don't get the direct map changes flushed. This situation can happen when
> the virtual-address-taking variants are passed an address in vmalloc or
> modules space.

Wouldn't something like so make more sense?

---
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 59eca6a94ce7..b8c55a2e402d 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -43,7 +43,8 @@ struct cpa_data {
 	unsigned long	pfn;
 	unsigned int	flags;
 	unsigned int	force_split		: 1,
-			force_static_prot	: 1;
+			force_static_prot	: 1,
+			force_flush_all		: 1;
 	struct page	**pages;
 };
 
@@ -355,10 +356,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
 		return;
 	}
 
-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
+	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
 		flush_tlb_all();
+	else
+		on_each_cpu(__cpa_flush_tlb, cpa, 1);
 
 	if (!cache)
 		return;
@@ -1598,6 +1599,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
+
 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
 		if (ret)
 			return ret;
@@ -1618,6 +1621,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
 		/*
 		 * The high mapping range is imprecise, so ignore the
 		 * return value.


* Re: [PATCH] x86/mm/cpa: Flush direct map alias during cpa
  2020-04-23  8:41 ` Peter Zijlstra
@ 2020-04-23 19:02   ` Edgecombe, Rick P
  2020-04-24 10:53     ` Peter Zijlstra
From: Edgecombe, Rick P @ 2020-04-23 19:02 UTC (permalink / raw)
  To: peterz; +Cc: linux-kernel, tglx, dave.hansen, x86, hpa, mingo, luto, bp

On Thu, 2020-04-23 at 10:41 +0200, Peter Zijlstra wrote:
> Wouldn't something like so make more sense?

Yes. Dave had commented on whether a smaller fix would be better for
backports if needed. Since that diff is the whole fix, do you want to
take it from here or should I put it in a patch?


* Re: [PATCH] x86/mm/cpa: Flush direct map alias during cpa
  2020-04-23 19:02   ` Edgecombe, Rick P
@ 2020-04-24 10:53     ` Peter Zijlstra
  2020-05-01 18:22       ` [tip: x86/urgent] " tip-bot2 for Rick Edgecombe
From: Peter Zijlstra @ 2020-04-24 10:53 UTC (permalink / raw)
  To: Edgecombe, Rick P
  Cc: linux-kernel, tglx, dave.hansen, x86, hpa, mingo, luto, bp

On Thu, Apr 23, 2020 at 07:02:26PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2020-04-23 at 10:41 +0200, Peter Zijlstra wrote:
> > Wouldn't something like so make more sense?
> 
> Yes. Dave had commented on whether a smaller fix would be better for
> backports if needed. Since that diff is the whole fix, do you want to
> take it from here or should I put it in a patch?

I've made it look like this. Holler if you need it changed ;-)

---
Subject: x86/mm/cpa: Flush direct map alias during cpa
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
Date: Wed, 22 Apr 2020 20:13:55 -0700

From: Rick Edgecombe <rick.p.edgecombe@intel.com>

As an optimization, cpa_flush() was changed to optionally only flush
the range in @cpa if it was small enough.  However, this range does
not include any direct map aliases changed in cpa_process_alias(). So
small set_memory_() calls that touch that alias don't get the direct
map changes flushed. This situation can happen when the
virtual-address-taking variants are passed an address in vmalloc or
modules space.

In these cases, force a full TLB flush.

Note this issue does not extend to cases where the set_memory_() calls
are passed a direct map address, or page array, etc., as the primary
target. In those cases the direct map would be flushed.

Fixes: 935f5839827e ("x86/mm/cpa: Optimize cpa_flush_array() TLB invalidation")
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/mm/pat/set_memory.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -43,7 +43,8 @@ struct cpa_data {
 	unsigned long	pfn;
 	unsigned int	flags;
 	unsigned int	force_split		: 1,
-			force_static_prot	: 1;
+			force_static_prot	: 1,
+			force_flush_all		: 1;
 	struct page	**pages;
 };
 
@@ -355,10 +356,10 @@ static void cpa_flush(struct cpa_data *d
 		return;
 	}
 
-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
+	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
 		flush_tlb_all();
+	else
+		on_each_cpu(__cpa_flush_tlb, cpa, 1);
 
 	if (!cache)
 		return;
@@ -1598,6 +1599,8 @@ static int cpa_process_alias(struct cpa_
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
+
 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
 		if (ret)
 			return ret;
@@ -1618,6 +1621,7 @@ static int cpa_process_alias(struct cpa_
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
 		/*
 		 * The high mapping range is imprecise, so ignore the
 		 * return value.


* [tip: x86/urgent] x86/mm/cpa: Flush direct map alias during cpa
  2020-04-24 10:53     ` Peter Zijlstra
@ 2020-05-01 18:22       ` tip-bot2 for Rick Edgecombe
From: tip-bot2 for Rick Edgecombe @ 2020-05-01 18:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Rick Edgecombe, Peter Zijlstra (Intel), x86, LKML

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID:     ab5130186d7476dcee0d4e787d19a521ca552ce9
Gitweb:        https://git.kernel.org/tip/ab5130186d7476dcee0d4e787d19a521ca552ce9
Author:        Rick Edgecombe <rick.p.edgecombe@intel.com>
AuthorDate:    Wed, 22 Apr 2020 20:13:55 -07:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 30 Apr 2020 20:14:30 +02:00

x86/mm/cpa: Flush direct map alias during cpa

As an optimization, cpa_flush() was changed to optionally only flush
the range in @cpa if it was small enough.  However, this range does
not include any direct map aliases changed in cpa_process_alias(). So
small set_memory_() calls that touch that alias don't get the direct
map changes flushed. This situation can happen when the
virtual-address-taking variants are passed an address in vmalloc or
modules space.

In these cases, force a full TLB flush.

Note this issue does not extend to cases where the set_memory_() calls
are passed a direct map address, or page array, etc., as the primary
target. In those cases the direct map would be flushed.

Fixes: 935f5839827e ("x86/mm/cpa: Optimize cpa_flush_array() TLB invalidation")
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200424105343.GA20730@hirez.programming.kicks-ass.net
---
 arch/x86/mm/pat/set_memory.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 59eca6a..b8c55a2 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -43,7 +43,8 @@ struct cpa_data {
 	unsigned long	pfn;
 	unsigned int	flags;
 	unsigned int	force_split		: 1,
-			force_static_prot	: 1;
+			force_static_prot	: 1,
+			force_flush_all		: 1;
 	struct page	**pages;
 };
 
@@ -355,10 +356,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
 		return;
 	}
 
-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
+	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
 		flush_tlb_all();
+	else
+		on_each_cpu(__cpa_flush_tlb, cpa, 1);
 
 	if (!cache)
 		return;
@@ -1598,6 +1599,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
+
 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
 		if (ret)
 			return ret;
@@ -1618,6 +1621,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
 		/*
 		 * The high mapping range is imprecise, so ignore the
 		 * return value.

