From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933857AbcLBAf1 (ORCPT );
	Thu, 1 Dec 2016 19:35:27 -0500
Received: from mail.kernel.org ([198.145.29.136]:45762 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933022AbcLBAfX (ORCPT );
	Thu, 1 Dec 2016 19:35:23 -0500
From: Andy Lutomirski
To: x86@kernel.org
Cc: One Thousand Gnomes, Borislav Petkov,
	"linux-kernel@vger.kernel.org", Brian Gerst, Matthew Whitehead,
	Henrique de Moraes Holschuh, Peter Zijlstra, Andy Lutomirski,
	Andrew Cooper
Subject: [PATCH v2 5/6] x86/xen: Add a Xen-specific sync_core() implementation
Date: Thu, 1 Dec 2016 16:35:01 -0800
Message-Id: <0a21157c2233ba7d0781bbf07866b3f2d7e7c25d.1480638597.git.luto@kernel.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: 
References: 
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Xen PV, CPUID is likely to trap, and Xen hypercalls aren't
guaranteed to serialize.  (Even CPUID isn't *really* guaranteed to
serialize on Xen PV, but, in practice, any trap it generates will
serialize.)

On my laptop, CPUID(eax=1, ecx=0) is ~83ns and IRET-to-self is
~110ns.  But Xen PV will trap CPUID if possible, so IRET-to-self
should end up being a nice speedup.

Cc: Andrew Cooper
Signed-off-by: Andy Lutomirski
---
 arch/x86/xen/enlighten.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bdd855685403..1f765b41eee7 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -311,6 +311,39 @@ static __read_mostly unsigned int cpuid_leaf1_ecx_set_mask;
 static __read_mostly unsigned int cpuid_leaf5_ecx_val;
 static __read_mostly unsigned int cpuid_leaf5_edx_val;
 
+static void xen_sync_core(void)
+{
+	register void *__sp asm(_ASM_SP);
+
+#ifdef CONFIG_X86_32
+	asm volatile (
+		"pushl %%ss\n\t"
+		"pushl %%esp\n\t"
+		"addl $4, (%%esp)\n\t"
+		"pushfl\n\t"
+		"pushl %%cs\n\t"
+		"pushl $1f\n\t"
+		"iret\n\t"
+		"1:"
+		: "+r" (__sp) : : "cc");
+#else
+	unsigned long tmp;
+
+	asm volatile (
+		"movq %%ss, %0\n\t"
+		"pushq %0\n\t"
+		"pushq %%rsp\n\t"
+		"addq $8, (%%rsp)\n\t"
+		"pushfq\n\t"
+		"movq %%cs, %0\n\t"
+		"pushq %0\n\t"
+		"pushq $1f\n\t"
+		"iretq\n\t"
+		"1:"
+		: "=r" (tmp), "+r" (__sp) : : "cc");
+#endif
+}
+
 static void xen_cpuid(unsigned int *ax, unsigned int *bx,
 		      unsigned int *cx, unsigned int *dx)
 {
@@ -1289,6 +1322,8 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 
 	.start_context_switch = paravirt_start_context_switch,
 	.end_context_switch = xen_end_context_switch,
+
+	.sync_core = xen_sync_core,
 };
 
 static void xen_reboot(int reason)
-- 
2.9.3
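
For illustration only, not part of the patch: a minimal userspace sketch of
the same IRET-to-self trick (64-bit only).  The iret_to_self() name and the
userspace framing are hypothetical, and unlike the kernel code above the
return RIP is loaded with a RIP-relative lea rather than "pushq $1f" so the
demo links as a default PIE binary.  IRETQ pops RIP, CS, RFLAGS, RSP and SS
from the five-word frame built by hand and is architecturally serializing,
which is the property sync_core() needs.

/* iret_demo.c: hand-build the SS/RSP/RFLAGS/CS/RIP frame that IRETQ pops
 * and "return" to the next instruction.  Mirrors the 64-bit branch of
 * xen_sync_core() above, minus the kernel-only stack-pointer constraint.
 */
#include <stdio.h>

static void iret_to_self(void)
{
	unsigned long tmp;

	asm volatile (
		"movq %%ss, %0\n\t"
		"pushq %0\n\t"			/* SS */
		"pushq %%rsp\n\t"
		"addq $8, (%%rsp)\n\t"		/* saved RSP = value before the SS push */
		"pushfq\n\t"			/* RFLAGS */
		"movq %%cs, %0\n\t"
		"pushq %0\n\t"			/* CS */
		"leaq 1f(%%rip), %0\n\t"
		"pushq %0\n\t"			/* RIP = label 1 below */
		"iretq\n\t"
		"1:"
		: "=r" (tmp) : : "cc", "memory");
}

int main(void)
{
	iret_to_self();			/* serializes, then falls through */
	puts("back from IRET-to-self");
	return 0;
}

Build with something like "gcc -O2 -mno-red-zone -o iret_demo iret_demo.c";
-mno-red-zone is a precaution because the asm pushes below the live stack
pointer, which the kernel already guarantees for its own build.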