linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] Fix paravirt fail
@ 2016-12-08 15:42 Peter Zijlstra
  2016-12-08 15:42 ` [PATCH 1/2] x86,paravirt: Fix native_patch() Peter Zijlstra
  2016-12-08 15:42 ` [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL Peter Zijlstra
  0 siblings, 2 replies; 7+ messages in thread
From: Peter Zijlstra @ 2016-12-08 15:42 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Peter Anvin, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
	Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
	virtualization, Pan Xinhui, Paolo Bonzini, kernel test robot,
	Peter Zijlstra (Intel)

Two patches that cure fallout from commit:

  3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")


* [PATCH 1/2] x86,paravirt: Fix native_patch()
  2016-12-08 15:42 [PATCH 0/2] Fix paravirt fail Peter Zijlstra
@ 2016-12-08 15:42 ` Peter Zijlstra
  2016-12-12  6:49   ` [tip:locking/core] x86/paravirt: " tip-bot for Peter Zijlstra
  2016-12-08 15:42 ` [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL Peter Zijlstra
  1 sibling, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2016-12-08 15:42 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Peter Anvin, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
	Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
	virtualization, Pan Xinhui, Paolo Bonzini, kernel test robot,
	Peter Zijlstra (Intel)


While chasing a regression I noticed we potentially patch the wrong
code in native_patch().

If we do not select the native code sequence, we must use the default
patcher instead of falling through into the next switch case.
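
To make the failure mode concrete, here is a minimal user-space sketch
(hypothetical names such as patch_one() and the OP_* constants, not the
kernel code) of the intended control flow; dropping the goto reproduces
the bug, where a non-native unlock falls through into the next case:

	#include <stdbool.h>
	#include <stdio.h>

	enum { OP_QUEUED_SPIN_UNLOCK, OP_VCPU_IS_PREEMPTED };

	static const char *patch_one(int type, bool unlock_is_native)
	{
		switch (type) {
		case OP_QUEUED_SPIN_UNLOCK:
			if (unlock_is_native)
				return "native unlock sequence";
			goto patch_default;	/* the fix: do not fall through */

		case OP_VCPU_IS_PREEMPTED:
			return "native vcpu_is_preempted sequence";

		default:
		patch_default:
			return "default patch";
		}
	}

	int main(void)
	{
		/* With the goto removed, this call would return the
		 * vcpu_is_preempted sequence, i.e. the wrong code, for
		 * the unlock site. */
		printf("%s\n", patch_one(OP_QUEUED_SPIN_UNLOCK, false));
		return 0;
	}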

Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/paravirt_patch_32.c |    4 ++++
 arch/x86/kernel/paravirt_patch_64.c |    4 ++++
 2 files changed, 8 insertions(+)

--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -56,15 +56,19 @@ unsigned native_patch(u8 type, u16 clobb
 				end   = end_pv_lock_ops_queued_spin_unlock;
 				goto patch_site;
 			}
+			goto patch_default;
+
 		case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 			if (pv_is_native_vcpu_is_preempted()) {
 				start = start_pv_lock_ops_vcpu_is_preempted;
 				end   = end_pv_lock_ops_vcpu_is_preempted;
 				goto patch_site;
 			}
+			goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
 
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -68,15 +68,19 @@ unsigned native_patch(u8 type, u16 clobb
 				end   = end_pv_lock_ops_queued_spin_unlock;
 				goto patch_site;
 			}
+			goto patch_default;
+
 		case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 			if (pv_is_native_vcpu_is_preempted()) {
 				start = start_pv_lock_ops_vcpu_is_preempted;
 				end   = end_pv_lock_ops_vcpu_is_preempted;
 				goto patch_site;
 			}
+			goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
 


* [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL
  2016-12-08 15:42 [PATCH 0/2] Fix paravirt fail Peter Zijlstra
  2016-12-08 15:42 ` [PATCH 1/2] x86,paravirt: Fix native_patch() Peter Zijlstra
@ 2016-12-08 15:42 ` Peter Zijlstra
  2016-12-08 16:40   ` Pan Xinhui
  2016-12-12  6:50   ` [tip:locking/core] x86/paravirt: Fix bool return type for PVOP_CALL() tip-bot for Peter Zijlstra
  1 sibling, 2 replies; 7+ messages in thread
From: Peter Zijlstra @ 2016-12-08 15:42 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Peter Anvin, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
	Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
	virtualization, Pan Xinhui, Paolo Bonzini, kernel test robot,
	Peter Zijlstra (Intel)


Commit 3cded4179481 ("x86/paravirt: Optimize native
pv_lock_ops.vcpu_is_preempted()") introduced a paravirt op with a bool
return type [*].

It turns out that the PVOP_CALL*() macros miscompile when rettype is
bool. Code that looked like:

   83 ef 01                sub    $0x1,%edi
   ff 15 32 a0 d8 00       callq  *0xd8a032(%rip)        # ffffffff81e28120 <pv_lock_ops+0x20>
   84 c0                   test   %al,%al

ended up looking like so after PVOP_CALL1() was applied:

   83 ef 01                sub    $0x1,%edi
   48 63 ff                movslq %edi,%rdi
   ff 14 25 20 81 e2 81    callq  *0xffffffff81e28120
   48 85 c0                test   %rax,%rax

Note how it tests the whole of %rax, even though a typical bool return
function only sets %al, like:

  0f 95 c0                setne  %al
  c3                      retq

This is because ____PVOP_CALL() does:

		__ret = (rettype)__eax;

and while regular integer type casts truncate the result, a cast to
bool tests for any !0 value. Fix this by explicitly truncating to
sizeof(rettype) before casting.
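
As a stand-alone illustration (plain user-space C with made-up variable
names, not the kernel macro itself), the difference between the two
casts and the effect of masking first:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		/* pretend the callee set %al to 0 but left stale bits above it */
		unsigned long eax = 0xffffff00UL;

		unsigned char u8_cast = (unsigned char)eax;	/* 0: cast truncates  */
		bool bool_cast        = (bool)eax;		/* 1: any !0 bit wins */
		bool masked_cast      = (bool)(eax & 0xffUL);	/* 0: mask, then cast */

		printf("u8=%d bool=%d masked bool=%d\n",
		       u8_cast, bool_cast, masked_cast);
		return 0;
	}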


[*] The actual bug should've been exposed in commit 446f3dc8cc0a
("locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM
and Xen guests") but that didn't properly implement the paravirt call.

Cc: Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: virtualization@lists.linux-foundation.org
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/paravirt_types.h |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -508,6 +508,18 @@ int paravirt_disable_iospace(void);
 #define PVOP_TEST_NULL(op)	((void)op)
 #endif
 
+#define PVOP_RETMASK(rettype)						\
+	({	unsigned long __mask = ~0UL;				\
+		switch (sizeof(rettype)) {				\
+		case 1: __mask =       0xffUL; break;			\
+		case 2: __mask =     0xffffUL; break;			\
+		case 4: __mask = 0xffffffffUL; break;			\
+		default: break;						\
+		}							\
+		__mask;							\
+	})
+
+
 #define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,		\
 		      pre, post, ...)					\
 	({								\
@@ -535,7 +547,7 @@ int paravirt_disable_iospace(void);
 				       paravirt_clobber(clbr),		\
 				       ##__VA_ARGS__			\
 				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)__eax;				\
+			__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
 		}							\
 		__ret;							\
 	})
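
Purely as an illustration, the behaviour of the new PVOP_RETMASK() can
be checked by lifting it into a stand-alone program; the kernel types
are replaced with <stdint.h>/<stdbool.h> equivalents and the statement
expression needs GCC or Clang:

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define PVOP_RETMASK(rettype)						\
		({	unsigned long __mask = ~0UL;				\
			switch (sizeof(rettype)) {				\
			case 1: __mask =       0xffUL; break;			\
			case 2: __mask =     0xffffUL; break;			\
			case 4: __mask = 0xffffffffUL; break;			\
			default: break;						\
			}							\
			__mask;							\
		})

	int main(void)
	{
		assert(PVOP_RETMASK(bool) == 0xffUL);		/* 1-byte return */
		assert(PVOP_RETMASK(uint16_t) == 0xffffUL);	/* 2-byte return */
		assert(PVOP_RETMASK(uint32_t) == 0xffffffffUL);	/* 4-byte return */
		assert(PVOP_RETMASK(unsigned long) == ~0UL);	/* full register */
		return 0;
	}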


* Re: [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL
  2016-12-08 15:42 ` [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL Peter Zijlstra
@ 2016-12-08 16:40   ` Pan Xinhui
  2016-12-08 16:50     ` Peter Zijlstra
  2016-12-12  6:50   ` [tip:locking/core] x86/paravirt: Fix bool return type for PVOP_CALL() tip-bot for Peter Zijlstra
  1 sibling, 1 reply; 7+ messages in thread
From: Pan Xinhui @ 2016-12-08 16:40 UTC (permalink / raw)
  To: Peter Zijlstra, linux-kernel, x86
  Cc: Peter Anvin, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
	Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
	virtualization, Pan Xinhui, Paolo Bonzini, kernel test robot


hi, Peter
	I think I see the point now.

Could we instead declare __eax with rettype (here bool) rather than
unsigned long? I have not tested this idea.

@@ -461,7 +461,9 @@ int paravirt_disable_iospace(void);
  #define PVOP_VCALL_ARGS                                                        \
         unsigned long __eax = __eax, __edx = __edx, __ecx = __ecx;      \
         register void *__sp asm("esp")
-#define PVOP_CALL_ARGS                 PVOP_VCALL_ARGS
+#define PVOP_CALL_ARGS 					\
+       rettype __eax = __eax, __edx = __edx, __ecx = __ecx;    \
+       register void *__sp asm("esp")


* Re: [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL
  2016-12-08 16:40   ` Pan Xinhui
@ 2016-12-08 16:50     ` Peter Zijlstra
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2016-12-08 16:50 UTC (permalink / raw)
  To: Pan Xinhui
  Cc: linux-kernel, x86, Peter Anvin, Ingo Molnar, Thomas Gleixner,
	Borislav Petkov, Jeremy Fitzhardinge, Chris Wright, Alok Kataria,
	Rusty Russell, virtualization, Pan Xinhui, Paolo Bonzini,
	kernel test robot

On Fri, Dec 09, 2016 at 12:40:35AM +0800, Pan Xinhui wrote:
> 
> hi, Peter
> 	I think I see the point now.
> 
> Could we instead declare __eax with rettype (here bool) rather than
> unsigned long? I have not tested this idea.
> 
> @@ -461,7 +461,9 @@ int paravirt_disable_iospace(void);
>  #define PVOP_VCALL_ARGS                                                        \
>         unsigned long __eax = __eax, __edx = __edx, __ecx = __ecx;      \
>         register void *__sp asm("esp")
> -#define PVOP_CALL_ARGS                 PVOP_VCALL_ARGS
> +#define PVOP_CALL_ARGS 					\
> +       rettype __eax = __eax, __edx = __edx, __ecx = __ecx;    \
> +       register void *__sp asm("esp")

Doesn't work on i386 where eax is also an argument register.
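
For readers unfamiliar with the 32-bit convention: %eax also carries
the first call argument there, so the variable bound to it must stay
unsigned long. A rough user-space sketch, hypothetical and not the
kernel macros, of what giving it the return type would do to an
argument:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long cpu = 3;		/* e.g. vcpu_is_preempted(cpu) */

		unsigned long eax_today = cpu;	/* current macros: argument intact */
		bool eax_proposed = cpu;	/* rettype variable: 3 collapses to 1 */

		printf("arg via unsigned long __eax: %lu\n", eax_today);
		printf("arg via bool __eax:          %d\n", eax_proposed);
		return 0;
	}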


* [tip:locking/core] x86/paravirt: Fix native_patch()
  2016-12-08 15:42 ` [PATCH 1/2] x86,paravirt: Fix native_patch() Peter Zijlstra
@ 2016-12-12  6:49   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Peter Zijlstra @ 2016-12-12  6:49 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, akataria, hpa, peterz, rusty, chrisw, linux-kernel, jeremy,
	torvalds, pbonzini, mingo, bp, xiaolong.ye, xinhui.pan

Commit-ID:  45dbea5f55c05980cbb4c30047c71a820cd3f282
Gitweb:     http://git.kernel.org/tip/45dbea5f55c05980cbb4c30047c71a820cd3f282
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 8 Dec 2016 16:42:14 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 11 Dec 2016 13:09:19 +0100

x86/paravirt: Fix native_patch()

While chasing a regression I noticed we potentially patch the wrong
code in native_patch().

If we do not select the native code sequence, we must use the default
patcher instead of falling through into the next switch case.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel test robot <xiaolong.ye@intel.com>
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/20161208154349.270616999@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/paravirt_patch_32.c | 4 ++++
 arch/x86/kernel/paravirt_patch_64.c | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/kernel/paravirt_patch_32.c b/arch/x86/kernel/paravirt_patch_32.c
index ff03dbd..33cdec2 100644
--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -58,15 +58,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 				end   = end_pv_lock_ops_queued_spin_unlock;
 				goto patch_site;
 			}
+			goto patch_default;
+
 		case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 			if (pv_is_native_vcpu_is_preempted()) {
 				start = start_pv_lock_ops_vcpu_is_preempted;
 				end   = end_pv_lock_ops_vcpu_is_preempted;
 				goto patch_site;
 			}
+			goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
 
diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index e61dd97..b0fceff 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -70,15 +70,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 				end   = end_pv_lock_ops_queued_spin_unlock;
 				goto patch_site;
 			}
+			goto patch_default;
+
 		case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 			if (pv_is_native_vcpu_is_preempted()) {
 				start = start_pv_lock_ops_vcpu_is_preempted;
 				end   = end_pv_lock_ops_vcpu_is_preempted;
 				goto patch_site;
 			}
+			goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
 


* [tip:locking/core] x86/paravirt: Fix bool return type for PVOP_CALL()
  2016-12-08 15:42 ` [PATCH 2/2] x86, paravirt: Fix bool return type for PVOP_CALL Peter Zijlstra
  2016-12-08 16:40   ` Pan Xinhui
@ 2016-12-12  6:50   ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 7+ messages in thread
From: tip-bot for Peter Zijlstra @ 2016-12-12  6:50 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, akataria, xiaolong.ye, tglx, xinhui.pan, bp, linux-kernel,
	peterz, rusty, hpa, pbonzini, jeremy, chrisw, torvalds

Commit-ID:  11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Gitweb:     http://git.kernel.org/tip/11f254dbb3a2e3f0d8552d0dd37f4faa432b6b16
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 8 Dec 2016 16:42:15 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 11 Dec 2016 13:09:20 +0100

x86/paravirt: Fix bool return type for PVOP_CALL()

Commit:

  3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")

introduced a paravirt op with a bool return type [*].

It turns out that the PVOP_CALL*() macros miscompile when rettype is
bool. Code that looked like:

   83 ef 01                sub    $0x1,%edi
   ff 15 32 a0 d8 00       callq  *0xd8a032(%rip)        # ffffffff81e28120 <pv_lock_ops+0x20>
   84 c0                   test   %al,%al

ended up looking like so after PVOP_CALL1() was applied:

   83 ef 01                sub    $0x1,%edi
   48 63 ff                movslq %edi,%rdi
   ff 14 25 20 81 e2 81    callq  *0xffffffff81e28120
   48 85 c0                test   %rax,%rax

Note how it tests the whole of %rax, even though a typical bool return
function only sets %al, like:

  0f 95 c0                setne  %al
  c3                      retq

This is because ____PVOP_CALL() does:

		__ret = (rettype)__eax;

and while regular integer type casts truncate the result, a cast to
bool tests for any !0 value. Fix this by explicitly truncating to
sizeof(rettype) before casting.

[*] The actual bug should've been exposed in commit:
      446f3dc8cc0a ("locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests")
    but that didn't properly implement the paravirt call.

Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/20161208154349.346057680@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/paravirt_types.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 2614bd7..3f2bc0f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -510,6 +510,18 @@ int paravirt_disable_iospace(void);
 #define PVOP_TEST_NULL(op)	((void)op)
 #endif
 
+#define PVOP_RETMASK(rettype)						\
+	({	unsigned long __mask = ~0UL;				\
+		switch (sizeof(rettype)) {				\
+		case 1: __mask =       0xffUL; break;			\
+		case 2: __mask =     0xffffUL; break;			\
+		case 4: __mask = 0xffffffffUL; break;			\
+		default: break;						\
+		}							\
+		__mask;							\
+	})
+
+
 #define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,		\
 		      pre, post, ...)					\
 	({								\
@@ -537,7 +549,7 @@ int paravirt_disable_iospace(void);
 				       paravirt_clobber(clbr),		\
 				       ##__VA_ARGS__			\
 				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)__eax;				\
+			__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
 		}							\
 		__ret;							\
 	})


