Message-ID: <4FFD874B.4090606@linux.vnet.ibm.com>
Date: Wed, 11 Jul 2012 19:31:47 +0530
From: Raghavendra K T
Organization: IBM
To: Avi Kivity
CC: habanero@linux.vnet.ibm.com, "H. Peter Anvin", Thomas Gleixner,
 Marcelo Tosatti, Ingo Molnar, Rik van Riel, S390, Carsten Otte,
 Christian Borntraeger, KVM, chegu vinod, LKML, X86, Gleb Natapov,
 linux390@de.ibm.com, Srivatsa Vaddagiri, Joerg Roedel
Subject: Re: [PATCH RFC 0/2] kvm: Improving directed yield in PLE handler
References: <20120709062012.24030.37154.sendpatchset@codeblue>
 <1341870457.2909.27.camel@oc2024037011.ibm.com>
 <4FFD4091.8040804@redhat.com>
 <4FFD86CE.9040501@linux.vnet.ibm.com>
In-Reply-To: <4FFD86CE.9040501@linux.vnet.ibm.com>

On 07/11/2012 07:29 PM, Raghavendra K T wrote:
> On 07/11/2012 02:30 PM, Avi Kivity wrote:
>> On 07/10/2012 12:47 AM, Andrew Theurer wrote:
>>>
>>> For the cpu threads in the host that are actually active (in this case
>>> 1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
>>> is for a no-IO workload, so it is incredible to see so much cpu
>>> wasted. I feel that the two important areas to tackle are a more
>>> scalable yield_to() and reducing the number of pause exits itself
>>> (hopefully by just tuning ple_window for the latter).
>>
>> One thing we can do is autotune ple_window. If a PLE exit fails to wake
>> anybody (because all vcpus are either running, sleeping, or in PLE
>> exits) then we deduce we are not overcommitted and we can increase the
>> PLE window. There's the question of how to decrease it again, though.
>>
> I see a problem here, if I interpret the situation correctly. What
> happens if we have two guests, one VM with no over-commit and the
> other with high over-commit (except when we have gang scheduling)?

Sorry, I meant one guest with less load and one with high load inside
the guest.

> Rather, we should have something tied to the VM instead of a rigid PLE
> window.
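
Just to make the idea concrete, below is a rough sketch of the kind of
autotuning being discussed, with the window kept per VM rather than as a
single global knob: grow the window when a PLE exit finds nobody worth
yielding to, shrink it when a directed yield actually succeeds. All the
names and constants here are made up for illustration and are not taken
from the existing KVM code.

    /*
     * Toy model of an autotuned, per-VM PLE window.  Illustrative only;
     * none of these identifiers exist in KVM.
     */
    #include <stdbool.h>

    #define PLE_WINDOW_MIN      4096          /* assumed lower bound   */
    #define PLE_WINDOW_MAX      (512 * 1024)  /* assumed upper bound   */
    #define PLE_WINDOW_GROW     2             /* grow factor           */
    #define PLE_WINDOW_SHRINK   2             /* shrink divisor        */

    struct vm {
            unsigned int ple_window;  /* per-VM instead of one global knob */
    };

    /*
     * Called from the PLE exit path; 'yielded' says whether the directed
     * yield found a runnable-but-preempted vcpu to boost.
     */
    static void ple_window_autotune(struct vm *vm, bool yielded)
    {
            if (!yielded) {
                    /*
                     * Nobody to yield to: every vcpu was running, sleeping,
                     * or itself in a PLE exit, so assume the VM is not
                     * overcommitted and let vcpus spin longer before the
                     * next exit.
                     */
                    vm->ple_window *= PLE_WINDOW_GROW;
                    if (vm->ple_window > PLE_WINDOW_MAX)
                            vm->ple_window = PLE_WINDOW_MAX;
            } else {
                    /*
                     * A directed yield worked, i.e. lock-holder preemption
                     * is real: shrink the window so contended vcpus exit
                     * sooner.
                     */
                    vm->ple_window /= PLE_WINDOW_SHRINK;
                    if (vm->ple_window < PLE_WINDOW_MIN)
                            vm->ple_window = PLE_WINDOW_MIN;
            }
    }

The open question from above still stands: how aggressively to decrease
the window again. Tying the shrink to a successful directed yield is just
one possibility.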