Subject: Re: [PATCH resend] x86,kvm: Add a kernel parameter to disable PV spinlock
From: Juergen Gross
To: Davidlohr Bueso, Peter Zijlstra
Cc: Oscar Salvador, Ingo Molnar, Paolo Bonzini, "H. Peter Anvin", Thomas Gleixner, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, Waiman Long
Date: Tue, 5 Sep 2017 08:28:10 +0200
Message-ID: <0869e8a5-4abd-8f7f-0135-aab3e72e2d01@suse.com>
In-Reply-To: <20170904222157.GD17982@linux-80c1.suse>

On 05/09/17 00:21, Davidlohr Bueso wrote:
> On Mon, 04 Sep 2017, Peter Zijlstra wrote:
>
>> For testing its trivial to hack your kernel and I don't feel this is
>> something an Admin can make reasonable decisions about.
>>
>> So why? In general less knobs is better.
>
> +1.
>
> Also, note how b8fa70b51aa (xen, pvticketlocks: Add xen_nopvspin parameter
> to disable xen pv ticketlocks) has no justification as to why its wanted
> in the first place. The only thing I could find was from 15a3eac0784
> (xen/spinlock: Document the xen_nopvspin parameter):
>
> "Useful for diagnosing issues and comparing benchmarks in over-commit
> CPU scenarios."
Hmm, I think I should clarify the Xen knob, as I was the one who requested it:

At my previous employer we had a configuration where dom0 ran exclusively on a dedicated set of physical cpus. We hit scalability problems during I/O performance tests: with a decent number of dom0 cpus we achieved a throughput of 700 MB/s at only 20% cpu load in dom0, while a higher dom0 cpu count made the throughput drop to about 150 MB/s with cpu load up to 100%. The reason was the additional load from hypervisor interactions on a high-frequency lock.

So in special configurations, at least for Xen, the knob is useful in production environments.


Juergen