From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753643AbdGNJhs (ORCPT ); Fri, 14 Jul 2017 05:37:48 -0400
Received: from mx2.suse.de ([195.135.220.15]:54806 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1751470AbdGNJhq (ORCPT ); Fri, 14 Jul 2017 05:37:46 -0400
Subject: Re: [PATCH 2/2] x86/idle: use dynamic halt poll
To: Yang Zhang , Radim Krčmář 
Cc: Paolo Bonzini , Wanpeng Li , Thomas Gleixner , Ingo Molnar ,
 "H. Peter Anvin" , the arch/x86 maintainers , Jonathan Corbet ,
 tony.luck@intel.com, Borislav Petkov , Peter Zijlstra , mchehab@kernel.org,
 Andrew Morton , krzk@kernel.org, jpoimboe@redhat.com, Andy Lutomirski ,
 Christian Borntraeger , Thomas Garnier , Robert Gerst , Mathias Krause ,
 douly.fnst@cn.fujitsu.com, Nicolai Stange , Frederic Weisbecker ,
 dvlasenk@redhat.com, Daniel Bristot de Oliveira ,
 yamada.masahiro@socionext.com, mika.westerberg@linux.intel.com, Chen Yu ,
 aaron.lu@intel.com, Steven Rostedt , Kyle Huey , Len Brown ,
 Prarit Bhargava , hidehiro.kawai.ez@hitachi.com, fengtiantian@huawei.com,
 pmladek@suse.com, jeyu@redhat.com, Larry.Finger@lwfinger.net,
 zijun_hu@htc.com, luisbg@osg.samsung.com, johannes.berg@intel.com,
 niklas.soderlund+renesas@ragnatech.se, zlpnobody@gmail.com,
 Alexey Dobriyan , fgao@48lvckh6395k16k5.yundunddos.com,
 ebiederm@xmission.com, Subash Abhinov Kasiviswanathan , Arnd Bergmann ,
 Matt Fleming , Mel Gorman , "linux-kernel@vger.kernel.org" ,
 linux-doc@vger.kernel.org, linux-edac@vger.kernel.org, kvm
References: <4444ffc8-9e7b-5bd2-20da-af422fe834cc@redhat.com>
 <2245bef7-b668-9265-f3f8-3b63d71b1033@gmail.com>
 <7d085956-2573-212f-44f4-86104beba9bb@gmail.com>
 <05ec7efc-fb9c-ae24-5770-66fc472545a4@redhat.com>
 <20170627134043.GA1487@potion>
 <2771f905-d1b0-b118-9ae9-db5fb87f877c@redhat.com>
 <20170627142251.GB1487@potion>
 <20170704141322.GC30880@potion>
From: Alexander Graf 
Message-ID: 
Date: Fri, 14 Jul 2017 11:37:20 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0)
 Gecko/20100101 Thunderbird/52.2.1
MIME-Version: 1.0
In-Reply-To: 
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 13.07.17 13:49, Yang Zhang wrote:
> On 2017/7/4 22:13, Radim Krčmář wrote:
>> 2017-07-03 17:28+0800, Yang Zhang:
>>> The background is that we (Alibaba Cloud) get more and more
>>> complaints from our customers, in both KVM and Xen, compared to bare
>>> metal. After investigation, the root cause is known to us: the big
>>> cost of message-passing workloads (David showed it at KVM Forum 2015).
>>>
>>> A typical message workload looks like this:
>>> vcpu 0                           vcpu 1
>>> 1. send ipi                      2.  doing hlt
>>> 3. go into idle                  4.  receive ipi and wake up from hlt
>>> 5. write APIC timer twice        6.  write APIC timer twice to
>>>    to stop sched timer               reprogram sched timer
>>
>> One write is enough to disable/re-enable the APIC timer -- why does
>> Linux use two?
>
> One write removes the timer and the other reprograms it. Normally only
> one write is needed to remove the timer, but in some cases it gets
> reprogrammed as well.
>
>>
>>> 7. doing hlt                     8.  handle task and send ipi to
>>>                                      vcpu 0
>>> 9. same to 4.                    10. same to 3
>>>
>>> One transaction will introduce about 12 vmexits (2 hlt and 10 MSR
>>> writes). The cost of such vmexits degrades performance severely.
>>
>> Yeah, sounds like too much ... I understood that there are
>>
>>   IPI from 1 to 2
>>   4 * APIC timer
>>   IPI from 2 to 1
>>
>> which adds to 6 MSR writes -- what are the other 4?
>
> In the worst case, each timer will touch the APIC timer twice, so it
> adds 4 additional MSR writes. But this is not always true.
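For context on where these MSR writes come from: with the TSC-deadline
APIC timer (the usual clockevent on modern x86 guests), stopping the
tick and reprogramming it are each a single write to
MSR_IA32_TSC_DEADLINE, and each such write is intercepted by the
hypervisor as an MSR-write vmexit. A minimal sketch, assuming kernel
context; the function names here are illustrative, not the actual
kernel entry points:

#include <linux/types.h>
#include <asm/msr.h>	/* wrmsrl(), rdtsc(), MSR_IA32_TSC_DEADLINE */

/* Step 5/6 in the trace above: cancel the pending tick before idling.
 * Writing 0 disarms the TSC-deadline timer; in a guest this wrmsr traps. */
static void sched_timer_stop(void)
{
	wrmsrl(MSR_IA32_TSC_DEADLINE, 0);
}

/* Re-arm the tick by writing the next absolute TSC deadline, which is a
 * second trapping wrmsr.  A stop followed by a re-arm is how one wakeup
 * can end up touching the APIC timer twice. */
static void sched_timer_reprogram(u64 delta_tsc)
{
	wrmsrl(MSR_IA32_TSC_DEADLINE, rdtsc() + delta_tsc);
}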
>>> The Linux kernel already provides idle=poll to mitigate the trend,
>>> but it only eliminates the IPI and hlt vmexits; it does nothing about
>>> starting/stopping the sched timer. A compromise would be to turn off
>>> the NOHZ kernel, but that is not the default config for new
>>> distributions. The same goes for halt-poll in KVM: it only removes
>>> the cost of scheduling in/out on the host and cannot help such
>>> workloads much.
>>>
>>> The purpose of this patch is to improve the current idle=poll
>>> mechanism to
>>
>> Please aim to allow MWAIT instead of idle=poll -- MWAIT doesn't slow
>> down the sibling hyperthread.  MWAIT solves the IPI problem, but
>> doesn't get rid of the timer one.
>
> Yes, I can try it. But MWAIT will not yield the CPU; it only helps the
> sibling hyperthread, as you mentioned.

If you implement proper MWAIT emulation that conditionally gets enabled
or disabled depending on the same halt-poll dynamics that we already
have for in-host HLT handling, it will also yield the CPU.

As for the timer - are you sure the problem is really the overhead of
the timer configuration, and not the latency it takes to actually fire
the guest timer?

One major problem I see is that we configure the host hrtimer to fire
at the point in time when the guest wants to see a timer event. But in
a virtual environment, the point in time when we have to start
switching to the VM really should be a bit *before* the guest wants to
be woken up, as it takes quite some time to switch back into the VM
context.
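A minimal sketch of that last idea, loosely in the spirit of KVM's
APIC-timer advance handling; the struct, field, and function names
below are made up for illustration, and a real implementation would
measure the VM-entry latency per vCPU instead of treating it as a
constant:

#include <linux/hrtimer.h>
#include <linux/ktime.h>

/* Illustrative per-vCPU timer state. */
struct vcpu_timer {
	struct hrtimer hrt;	/* host timer backing the guest's timer */
	u64 entry_latency_ns;	/* measured cost of switching back into the VM */
};

/* Program the host hrtimer slightly *before* the guest's requested
 * deadline, so that by the time the world switch back into the guest
 * has completed, the virtual timer fires roughly when the guest asked
 * for it. */
static void arm_guest_timer(struct vcpu_timer *vt, ktime_t guest_deadline)
{
	ktime_t host_expiry = ktime_sub_ns(guest_deadline, vt->entry_latency_ns);

	hrtimer_start(&vt->hrt, host_expiry, HRTIMER_MODE_ABS);
}

KVM's lapic_timer_advance_ns parameter already applies this kind of
compensation for the TSC-deadline case, so the sched-timer path could
follow the same pattern.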

Alex