From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758380Ab3BFUJX (ORCPT );
	Wed, 6 Feb 2013 15:09:23 -0500
Received: from mx1.redhat.com ([209.132.183.28]:56326 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1758210Ab3BFUJO (ORCPT );
	Wed, 6 Feb 2013 15:09:14 -0500
Date: Wed, 6 Feb 2013 15:07:48 -0500
From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: aquini@redhat.com, eric.dumazet@gmail.com, lwoodman@redhat.com,
	knoel@redhat.com, chegu_vinod@hp.com,
	raghavendra.kt@linux.vnet.ibm.com, mingo@redhat.com
Subject: [PATCH -v5 5/5] x86,smp: limit spinlock delay on virtual machines
Message-ID: <20130206150748.486d7bf8@cuia.bos.redhat.com>
In-Reply-To: <20130206150311.19cd1e52@cuia.bos.redhat.com>
References: <20130206150311.19cd1e52@cuia.bos.redhat.com>
Organization: Red Hat, Inc
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Modern Intel and AMD CPUs will trap to the host when the guest is
spinning on a spinlock, allowing the host to schedule in something
else.

This effectively means the host is taking care of spinlock backoff
for virtual machines. It also means that doing the spinlock backoff
in the guest anyway can lead to totally unpredictable results,
extremely large backoffs, and performance regressions.

To prevent those problems, we limit the spinlock backoff delay, when
running in a virtual machine, to a small value.
Signed-off-by: Rik van Riel
---
 arch/x86/include/asm/processor.h |  2 ++
 arch/x86/kernel/cpu/hypervisor.c |  2 ++
 arch/x86/kernel/smp.c            | 21 +++++++++++++++++++--
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 888184b..4118fd8 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -997,6 +997,8 @@ extern bool cpu_has_amd_erratum(const int *);
 extern unsigned long arch_align_stack(unsigned long sp);
 extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
 
+extern void init_guest_spinlock_delay(void);
+
 void default_idle(void);
 bool set_pm_idle_to_default(void);
 
diff --git a/arch/x86/kernel/cpu/hypervisor.c b/arch/x86/kernel/cpu/hypervisor.c
index a8f8fa9..4a53724 100644
--- a/arch/x86/kernel/cpu/hypervisor.c
+++ b/arch/x86/kernel/cpu/hypervisor.c
@@ -76,6 +76,8 @@ void __init init_hypervisor_platform(void)
 
 	init_hypervisor(&boot_cpu_data);
 
+	init_guest_spinlock_delay();
+
 	if (x86_hyper->init_platform)
 		x86_hyper->init_platform();
 }
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 64e33ef..fbc5ff3 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -116,8 +116,25 @@ static bool smp_no_nmi_ipi = false;
 #define DELAY_SHIFT			8
 #define DELAY_FIXED_1			(1 << DELAY_SHIFT)