* [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
@ 2018-12-13 16:44 Sebastian Andrzej Siewior
  2018-12-13 17:59 ` Steven Rostedt
  2019-01-09  0:39 ` [PATCH RT] " Scott Wood
  0 siblings, 2 replies; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 16:44 UTC (permalink / raw)
  To: linux-rt-users; +Cc: linux-kernel, Thomas Gleixner, Steven Rostedt, stable-rt

Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
warning if __flush_tlb_all() is invoked in preemptible context. On !RT
the warning does not trigger because a spin lock is acquired which
disables preemption. On RT the spin lock does not disable preemption and
so the warning is seen.

Disable preemption to avoid the warning in __flush_tlb_all().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/mm/pageattr.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e2d4b25c7aa44..abbe3e93ec266 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -687,6 +687,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	pgprot_t ref_prot;
 
 	spin_lock(&pgd_lock);
+	preempt_disable();
 	/*
 	 * Check for races, another CPU might have split this page
 	 * up for us already:
@@ -694,6 +695,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	tmp = _lookup_address_cpa(cpa, address, &level);
 	if (tmp != kpte) {
 		spin_unlock(&pgd_lock);
+		preempt_enable();
 		return 1;
 	}
 
@@ -727,6 +729,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 
 	default:
 		spin_unlock(&pgd_lock);
+		preempt_enable();
 		return 1;
 	}
 
@@ -764,6 +767,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	 * going on.
 	 */
 	__flush_tlb_all();
+	preempt_enable();
 	spin_unlock(&pgd_lock);
 
 	return 0;
-- 
2.20.0
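
For context: the warning only fires on RT because spin_lock() has
different preemption semantics there. A rough sketch of the difference,
simplified from the actual implementations (the RT variant is an
rt_mutex-based sleeping lock; the function names below are illustrative,
not the real kernel entry points):

	/* !RT: spin_lock() disables preemption as a side effect */
	static inline void spin_lock_nort(raw_spinlock_t *lock)
	{
		preempt_disable();
		do_raw_spin_lock(lock);		/* preemptible() is now false */
	}

	/* RT: spin_lock() is a sleeping lock; preemption stays enabled */
	static inline void spin_lock_rt(spinlock_t *lock)
	{
		rt_mutex_lock(&lock->lock);	/* preemptible() is still true */
	}

	/* The commit referenced above added a check along these lines: */
	static inline void __flush_tlb_all(void)
	{
		VM_WARN_ON_ONCE(preemptible());	/* trips on RT under spin_lock() */
		/* ... flush the local TLB ... */
	}

On !RT the spin_lock(&pgd_lock) in the function above therefore already
satisfies the check; on RT an explicit preempt_disable() is needed.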



* Re: [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
  2018-12-13 16:44 [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock() Sebastian Andrzej Siewior
@ 2018-12-13 17:59 ` Steven Rostedt
  2018-12-13 18:27   ` Sebastian Andrzej Siewior
  2018-12-13 20:37   ` [PATCH RT v2] " Sebastian Andrzej Siewior
  2019-01-09  0:39 ` [PATCH RT] " Scott Wood
  1 sibling, 2 replies; 6+ messages in thread
From: Steven Rostedt @ 2018-12-13 17:59 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-rt-users, linux-kernel, Thomas Gleixner, stable-rt

On Thu, 13 Dec 2018 17:44:31 +0100
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
> warning if __flush_tlb_all() is invoked in preemptible context. On !RT
> the warning does not trigger because a spin lock is acquired which
> disables preemption. On RT the spin lock does not disable preemption and
> so the warning is seen.
> 
> Disable preemption to avoid the warning in __flush_tlb_all().

I'm guessing the reason for the WARN_ON() is that we don't want a task to
be scheduled in where we expected the TLB to have been flushed.


> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  arch/x86/mm/pageattr.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index e2d4b25c7aa44..abbe3e93ec266 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -687,6 +687,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	pgprot_t ref_prot;
>  
>  	spin_lock(&pgd_lock);

We probably should have a comment explaining why we have a
preempt_disable() here.

> +	preempt_disable();
>  	/*
>  	 * Check for races, another CPU might have split this page
>  	 * up for us already:
> @@ -694,6 +695,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	tmp = _lookup_address_cpa(cpa, address, &level);
>  	if (tmp != kpte) {
>  		spin_unlock(&pgd_lock);
> +		preempt_enable();

Shouldn't the preempt_enable() be before the unlock?

>  		return 1;
>  	}
>  
> @@ -727,6 +729,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  
>  	default:
>  		spin_unlock(&pgd_lock);
> +		preempt_enable();

Here too.

-- Steve

>  		return 1;
>  	}
>  
> @@ -764,6 +767,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
>  	 * going on.
>  	 */
>  	__flush_tlb_all();
> +	preempt_enable();
>  	spin_unlock(&pgd_lock);
>  
>  	return 0;



* Re: [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
  2018-12-13 17:59 ` Steven Rostedt
@ 2018-12-13 18:27   ` Sebastian Andrzej Siewior
  2018-12-13 20:37   ` [PATCH RT v2] " Sebastian Andrzej Siewior
  1 sibling, 0 replies; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 18:27 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-rt-users, linux-kernel, Thomas Gleixner, stable-rt

On 2018-12-13 12:59:07 [-0500], Steven Rostedt wrote:
> On Thu, 13 Dec 2018 17:44:31 +0100
> Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> 
> > Disable preemption to avoid the warning in __flush_tlb_all().
> 
> I'm guessing the reason for the warn on is that we don't want a task to
> be scheduled in where we expected the TLB to have been flushed.
during the cr3 read + write, correct.

> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -687,6 +687,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
> >  	pgprot_t ref_prot;
> >  
> >  	spin_lock(&pgd_lock);
> 
> We probably should have a comment explaining why we have a
> preempt_disable() here.
okay.

> > +	preempt_disable();
> >  	/*
> >  	 * Check for races, another CPU might have split this page
> >  	 * up for us already:
> > @@ -694,6 +695,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
> >  	tmp = _lookup_address_cpa(cpa, address, &level);
> >  	if (tmp != kpte) {
> >  		spin_unlock(&pgd_lock);
> > +		preempt_enable();
> 
> Shouldn't the preempt_enable() be before the unlock?

Yeah, I noticed it once I saw the patch on the list. Will flip it before
I apply it.

> >  		return 1;
> >  	}
> >  

Sebastian
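
The "cr3 read + write" refers to how the local TLB flush is implemented:
reading CR3 and writing the same value back invalidates the non-global
TLB entries on the current CPU. A simplified sketch (the real helper in
this era is __native_flush_tlb() in arch/x86/include/asm/tlbflush.h,
which also deals with PCID; the function name below is illustrative):

	static inline void flush_tlb_local_sketch(void)
	{
		unsigned long cr3 = __native_read_cr3();

		/*
		 * If the task were preempted and migrated between the read
		 * and the write, CR3 would be reloaded on a different CPU
		 * and the original CPU's TLB would never be flushed, hence
		 * the preemptible() warning.
		 */
		native_write_cr3(cr3);
	}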


* [PATCH RT v2] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
  2018-12-13 17:59 ` Steven Rostedt
  2018-12-13 18:27   ` Sebastian Andrzej Siewior
@ 2018-12-13 20:37   ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 20:37 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-rt-users, linux-kernel, Thomas Gleixner, stable-rt

Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
warning if __flush_tlb_all() is invoked in preemptible context. On !RT
the warning does not trigger because a spin lock is acquired which
disables preemption. On RT the spin lock does not disable preemption and
so the warning is seen.

Disable preemption to avoid the warning in __flush_tlb_all().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v1…v2:
	- add comment
	- enable preemption before dropping the spinlock

 arch/x86/mm/pageattr.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -688,11 +688,17 @@ static int
 
 	spin_lock(&pgd_lock);
 	/*
+	 * Keep preemption disabled around __flush_tlb_all(), which expects
+	 * not to be preempted during the flush of the local TLB.
+	 */
+	preempt_disable();
+	/*
 	 * Check for races, another CPU might have split this page
 	 * up for us already:
 	 */
 	tmp = _lookup_address_cpa(cpa, address, &level);
 	if (tmp != kpte) {
+		preempt_enable();
 		spin_unlock(&pgd_lock);
 		return 1;
 	}
@@ -726,6 +732,7 @@ static int
 		break;
 
 	default:
+		preempt_enable();
 		spin_unlock(&pgd_lock);
 		return 1;
 	}
@@ -764,6 +771,7 @@ static int
 	 * going on.
 	 */
 	__flush_tlb_all();
+	preempt_enable();
 	spin_unlock(&pgd_lock);
 
 	return 0;
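
Note the ordering v2 establishes: preempt_disable() nests strictly
inside the pgd_lock section, and the two sections unwind in reverse
order of acquisition. Schematically (annotation only, not a patch):

	spin_lock(&pgd_lock);		/* outer: sleeping lock on RT */
	preempt_disable();		/* inner: preemption off */
	...
	__flush_tlb_all();		/* requires preemption off */
	preempt_enable();		/* inner section ends first */
	spin_unlock(&pgd_lock);		/* then the outer lock drops */

This addresses Steven's review comment: v1 dropped pgd_lock before
re-enabling preemption on the early-return paths, breaking the nesting.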


* Re: [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
  2018-12-13 16:44 [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock() Sebastian Andrzej Siewior
  2018-12-13 17:59 ` Steven Rostedt
@ 2019-01-09  0:39 ` Scott Wood
  2019-01-09  8:48   ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 6+ messages in thread
From: Scott Wood @ 2019-01-09  0:39 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, linux-rt-users
  Cc: linux-kernel, Thomas Gleixner, Steven Rostedt, stable-rt

On Thu, 2018-12-13 at 17:44 +0100, Sebastian Andrzej Siewior wrote:
> Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
> warning if __flush_tlb_all() is invoked in preemptible context. On !RT
> the warning does not trigger because a spin lock is acquired which
> disables preemption. On RT the spin lock does not disable preemption and
> so the warning is seen.
> 
> Disable preemption to avoid the warning in __flush_tlb_all().
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

FWIW, that __flush_tlb_all() can probably just be removed.  It was added as
a partial workaround for an erratum, but since it didn't completely solve
the problem, hugepages were disabled on the affected chips in commit
7a0fc404ae66377 ("x86: Disable large pages on CPUs with Atom erratum
AAE44").  I was going to send a (non-RT) patch removing the flush but in
4.20 commit c0a759abf5a68 ("x86/mm/cpa: Move flush_tlb_all()") reworked the
code to make it moot.

-Scott



* Re: [PATCH RT] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
  2019-01-09  0:39 ` [PATCH RT] " Scott Wood
@ 2019-01-09  8:48   ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-01-09  8:48 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-rt-users, linux-kernel, Thomas Gleixner, Steven Rostedt, stable-rt

On 2019-01-08 18:39:15 [-0600], Scott Wood wrote:
> On Thu, 2018-12-13 at 17:44 +0100, Sebastian Andrzej Siewior wrote:
> > Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
> > warning if __flush_tlb_all() is invoked in preemptible context. On !RT
> > the warning does not trigger because a spin lock is acquired which
> > disables preemption. On RT the spin lock does not disable preemption and
> > so the warning is seen.
> > 
> > Disable preemption to avoid the warning in __flush_tlb_all().
> > 
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> FWIW, that __flush_tlb_all() can probably just be removed.  It was added as
> a partial workaround for an erratum, but since it didn't completely solve
> the problem, hugepages were disabled on the affected chips in commit
> 7a0fc404ae66377 ("x86: Disable large pages on CPUs with Atom erratum
> AAE44").  I was going to send a (non-RT) patch removing the flush but in
> 4.20 commit c0a759abf5a68 ("x86/mm/cpa: Move flush_tlb_all()") reworked the
> code to make it moot.

Let me make a note for the next major version.

> -Scott

Sebastian

