* [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area
@ 2018-05-30 10:31 Nicholas Piggin
  2018-05-31 14:22 ` Michael Ellerman
  2018-06-04 14:11 ` Michael Ellerman
  0 siblings, 2 replies; 6+ messages in thread
From: Nicholas Piggin @ 2018-05-30 10:31 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V

The stores to update the SLB shadow area must be made as they appear
in the C code, so that the hypervisor does not see an entry with
mismatched vsid and esid. Use WRITE_ONCE for this.

GCC has been observed to elide the first store to esid in the update,
which means that if the hypervisor interrupts the guest after storing
to vsid, it could see an entry with old esid and new vsid, which may
possibly result in memory corruption.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/slb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 66577cc66dc9..2f4b33b24b3b 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
 	 * updating it.  No write barriers are needed here, provided
 	 * we only update the current CPU's SLB shadow buffer.
 	 */
-	p->save_area[index].esid = 0;
-	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
-	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
+	WRITE_ONCE(p->save_area[index].esid, 0);
+	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
+	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));
 }
 
 static inline void slb_shadow_clear(enum slb_index index)
 {
-	get_slb_shadow()->save_area[index].esid = 0;
+	WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
 }
 
 static inline void create_shadowed_slbe(unsigned long ea, int ssize,
-- 
2.17.0

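[ To illustrate the hazard described in the commit message above, here
  is a hypothetical reconstruction (not disassembly from the actual
  report) of what the compiler is allowed to emit without WRITE_ONCE():
  with the first esid store elided as a dead store, the shadow entry
  passes through a state pairing the old esid with the new vsid. ]

	/* p->save_area[index].esid = 0;   elided by GCC as a dead store */
	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
	/* a hypervisor interrupt here sees the old esid with the new vsid */
	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));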

* Re: [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area
  2018-05-30 10:31 [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area Nicholas Piggin
@ 2018-05-31 14:22 ` Michael Ellerman
  2018-05-31 22:52   ` Nicholas Piggin
  2018-06-04 14:11 ` Michael Ellerman
  1 sibling, 1 reply; 6+ messages in thread
From: Michael Ellerman @ 2018-05-31 14:22 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Aneesh Kumar K . V, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> The stores to update the SLB shadow area must be made as they appear
> in the C code, so that the hypervisor does not see an entry with
> mismatched vsid and esid. Use WRITE_ONCE for this.
>
> GCC has been observed to elide the first store to esid in the update,
> which means that if the hypervisor interrupts the guest after storing
> to vsid, it could see an entry with old esid and new vsid, which may
> possibly result in memory corruption.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/slb.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 66577cc66dc9..2f4b33b24b3b 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
>  	 * updating it.  No write barriers are needed here, provided
>  	 * we only update the current CPU's SLB shadow buffer.
>  	 */
> -	p->save_area[index].esid = 0;
> -	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
> -	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
> +	WRITE_ONCE(p->save_area[index].esid, 0);
> +	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
> +	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));

What's the code-gen for that look like? I suspect it's terrible?

Should we just do it in inline-asm I wonder?

cheers

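[ For context on the code-gen concern: WRITE_ONCE() boils down to a
  store through a volatile-qualified lvalue, which is what stops GCC
  from eliding or reordering the stores, but also limits how it may
  combine them with other operations (e.g. a byteswapping store).
  A simplified sketch of the idea, not the kernel's exact definition: ]

	/* Simplified stand-in for WRITE_ONCE(); the real macro in
	 * include/linux/compiler.h is more elaborate, but for scalar
	 * types it amounts to a single volatile store, emitted in
	 * program order relative to other volatile accesses.
	 */
	#define WRITE_ONCE_SKETCH(x, val) \
		(*(volatile typeof(x) *)&(x) = (val))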

* Re: [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area
  2018-05-31 14:22 ` Michael Ellerman
@ 2018-05-31 22:52   ` Nicholas Piggin
  2018-06-01 11:13     ` Michael Ellerman
  2018-06-01 21:38     ` Segher Boessenkool
  0 siblings, 2 replies; 6+ messages in thread
From: Nicholas Piggin @ 2018-05-31 22:52 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: linuxppc-dev, Aneesh Kumar K . V, Segher Boessenkool

On Fri, 01 Jun 2018 00:22:21 +1000
Michael Ellerman <mpe@ellerman.id.au> wrote:

> Nicholas Piggin <npiggin@gmail.com> writes:
> 
> > The stores to update the SLB shadow area must be made as they appear
> > in the C code, so that the hypervisor does not see an entry with
> > mismatched vsid and esid. Use WRITE_ONCE for this.
> >
> > GCC has been observed to elide the first store to esid in the update,
> > which means that if the hypervisor interrupts the guest after storing
> > to vsid, it could see an entry with old esid and new vsid, which may
> > possibly result in memory corruption.
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> >  arch/powerpc/mm/slb.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> > index 66577cc66dc9..2f4b33b24b3b 100644
> > --- a/arch/powerpc/mm/slb.c
> > +++ b/arch/powerpc/mm/slb.c
> > @@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
> >  	 * updating it.  No write barriers are needed here, provided
> >  	 * we only update the current CPU's SLB shadow buffer.
> >  	 */
> > -	p->save_area[index].esid = 0;
> > -	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
> > -	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
> > +	WRITE_ONCE(p->save_area[index].esid, 0);
> > +	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
> > +	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));  
> 
> What's the code-gen for that look like? I suspect it's terrible?

Yeah it's not great.

> 
> Should we just do it in inline-asm I wonder?

There should be no fundamental correctness reason why we can't store
to a volatile with a byteswap store. The other option would be to add
a compiler barrier() between each store. The reason I didn't is that
in theory we don't need to invalidate all memory contents here, but in
practice the resulting code generation would probably be better.

Thanks,
Nick

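[ A minimal sketch of the barrier() alternative Nick mentions, for
  illustration only (this is not what was merged): barrier() is a full
  compiler barrier, so none of the three stores can be elided or
  reordered across it, at the cost of making the compiler discard
  everything it knows about memory contents. ]

	p->save_area[index].esid = 0;
	barrier();	/* full compiler barrier between each store */
	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
	barrier();
	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));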

* Re: [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area
  2018-05-31 22:52   ` Nicholas Piggin
@ 2018-06-01 11:13     ` Michael Ellerman
  2018-06-01 21:38     ` Segher Boessenkool
  1 sibling, 0 replies; 6+ messages in thread
From: Michael Ellerman @ 2018-06-01 11:13 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: linuxppc-dev, Aneesh Kumar K . V, Segher Boessenkool

Nicholas Piggin <npiggin@gmail.com> writes:
> On Fri, 01 Jun 2018 00:22:21 +1000
> Michael Ellerman <mpe@ellerman.id.au> wrote:
>> Nicholas Piggin <npiggin@gmail.com> writes:
>> > The stores to update the SLB shadow area must be made as they appear
>> > in the C code, so that the hypervisor does not see an entry with
>> > mismatched vsid and esid. Use WRITE_ONCE for this.
>> >
>> > GCC has been observed to elide the first store to esid in the update,
>> > which means that if the hypervisor interrupts the guest after storing
>> > to vsid, it could see an entry with old esid and new vsid, which may
>> > possibly result in memory corruption.
>> >
>> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> > ---
>> >  arch/powerpc/mm/slb.c | 8 ++++----
>> >  1 file changed, 4 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
>> > index 66577cc66dc9..2f4b33b24b3b 100644
>> > --- a/arch/powerpc/mm/slb.c
>> > +++ b/arch/powerpc/mm/slb.c
>> > @@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
>> >  	 * updating it.  No write barriers are needed here, provided
>> >  	 * we only update the current CPU's SLB shadow buffer.
>> >  	 */
>> > -	p->save_area[index].esid = 0;
>> > -	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
>> > -	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
>> > +	WRITE_ONCE(p->save_area[index].esid, 0);
>> > +	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
>> > +	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));  
>> 
>> What's the code-gen for that look like? I suspect it's terrible?
>
> Yeah it's not great.

Actually with GCC 7 the WRITE_ONCE() doesn't make it any worse.

Which is a little suspicious. But it is doing the first store:

	li      r10,0			# r10 = 0
	ld      r29,56(r13)		# r29 = paca->slb_shadow_ptr
	rldicr  r8,r31,4,59		# r8 = index
	rldicr  r9,r9,32,31
	add     r29,r29,r8		# r29 = r29 + index
	oris    r9,r9,65535
	std     r10,16(r29)		# esid = r10 = 0

So I'll just merge this as-is.

cheers


* Re: [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area
  2018-05-31 22:52   ` Nicholas Piggin
  2018-06-01 11:13     ` Michael Ellerman
@ 2018-06-01 21:38     ` Segher Boessenkool
  1 sibling, 0 replies; 6+ messages in thread
From: Segher Boessenkool @ 2018-06-01 21:38 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: Michael Ellerman, linuxppc-dev, Aneesh Kumar K . V

On Fri, Jun 01, 2018 at 08:52:27AM +1000, Nicholas Piggin wrote:
> On Fri, 01 Jun 2018 00:22:21 +1000
> Michael Ellerman <mpe@ellerman.id.au> wrote:
> > Nicholas Piggin <npiggin@gmail.com> writes:
> > > -	p->save_area[index].esid = 0;
> > > -	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
> > > -	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
> > > +	WRITE_ONCE(p->save_area[index].esid, 0);
> > > +	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
> > > +	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));  
> > 
> > What's the code-gen for that look like? I suspect it's terrible?
> 
> Yeah it's not great.
> 
> > 
> > Should we just do it in inline-asm I wonder?

That is my recommendation: that will work for all compiler versions.

> There should be no fundamental correctness reason why we can't store
> to a volatile with a byteswap store.

There are many operations that are *not* correct to merge into a volatile
memory access, and which ones are fine differs for every arch.  GCC
simply disallows combining anything into any volatile memory by default.
This is kind of fine because volatile already means "I want this to go
slow", in common cases ;-)

I'll see what I can do to make the byteswap load/stores work with volatile
(for powerpc).

> The other option would be to add
> a compiler barrier() between each store. The reason I didn't is that
> in theory we don't need to invalidate all memory contents here, but
> in practice the resulting code generation would probably be better.

Something like

	p->save_area[index].esid = 0;
	asm("" : : "m"(p->save_area[index].esid));
	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
	asm("" : : "m"(p->save_area[index].vsid));
	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));

should do the trick (and once again for the second write to esid, if you
want to be sure it is not optimised away).


Segher

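[ Spelling out Segher's parenthetical above: the same empty asm with an
  "m" input, repeated after the final esid store as well.  This is a
  sketch based on his suggestion, not code posted in the thread: ]

	p->save_area[index].esid = 0;
	asm("" : : "m"(p->save_area[index].esid));
	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
	asm("" : : "m"(p->save_area[index].vsid));
	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
	asm("" : : "m"(p->save_area[index].esid));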

* Re: powerpc/64s: Fix compiler store ordering to SLB shadow area
  2018-05-30 10:31 [PATCH] powerpc/64s: Fix compiler store ordering to SLB shadow area Nicholas Piggin
  2018-05-31 14:22 ` Michael Ellerman
@ 2018-06-04 14:11 ` Michael Ellerman
  1 sibling, 0 replies; 6+ messages in thread
From: Michael Ellerman @ 2018-06-04 14:11 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Aneesh Kumar K . V, Nicholas Piggin

On Wed, 2018-05-30 at 10:31:22 UTC, Nicholas Piggin wrote:
> The stores to update the SLB shadow area must be made as they appear
> in the C code, so that the hypervisor does not see an entry with
> mismatched vsid and esid. Use WRITE_ONCE for this.
> 
> GCC has been observed to elide the first store to esid in the update,
> which means that if the hypervisor interrupts the guest after storing
> to vsid, it could see an entry with old esid and new vsid, which may
> possibly result in memory corruption.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/926bc2f100c24d4842b3064b5af44a

cheers

