From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 2/2] x86/idle: use dynamic halt poll
From: Yang Zhang
To: Alexander Graf, Radim Krčmář
Cc: Paolo Bonzini, Wanpeng Li, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", the arch/x86 maintainers, Jonathan Corbet,
 tony.luck@intel.com, Borislav Petkov, Peter Zijlstra, mchehab@kernel.org,
 Andrew Morton, krzk@kernel.org, jpoimboe@redhat.com, Andy Lutomirski,
 Christian Borntraeger, Thomas Garnier, Robert Gerst, Mathias Krause,
 douly.fnst@cn.fujitsu.com, Nicolai Stange, Frederic Weisbecker,
 dvlasenk@redhat.com, Daniel Bristot de Oliveira,
 yamada.masahiro@socionext.com, mika.westerberg@linux.intel.com, Chen Yu,
 aaron.lu@intel.com, Steven Rostedt, Kyle Huey, Len Brown, Prarit Bhargava,
 hidehiro.kawai.ez@hitachi.com, fengtiantian@huawei.com, pmladek@suse.com,
 jeyu@redhat.com, Larry.Finger@lwfinger.net, zijun_hu@htc.com,
 luisbg@osg.samsung.com, johannes.berg@intel.com,
 niklas.soderlund+renesas@ragnatech.se, zlpnobody@gmail.com,
 Alexey Dobriyan, fgao@48lvckh6395k16k5.yundunddos.com,
 ebiederm@xmission.com, Subash Abhinov Kasiviswanathan, Arnd Bergmann,
 Matt Fleming, Mel Gorman, "linux-kernel@vger.kernel.org",
 linux-doc@vger.kernel.org, linux-edac@vger.kernel.org, kvm
References: <4444ffc8-9e7b-5bd2-20da-af422fe834cc@redhat.com>
 <2245bef7-b668-9265-f3f8-3b63d71b1033@gmail.com>
 <7d085956-2573-212f-44f4-86104beba9bb@gmail.com>
 <05ec7efc-fb9c-ae24-5770-66fc472545a4@redhat.com>
 <20170627134043.GA1487@potion>
 <2771f905-d1b0-b118-9ae9-db5fb87f877c@redhat.com>
 <20170627142251.GB1487@potion>
 <20170704141322.GC30880@potion>
Date: Mon, 17 Jul 2017 17:26:13 +0800
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101
 Thunderbird/45.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017/7/14 17:37, Alexander Graf wrote:
>
> On 13.07.17 13:49, Yang Zhang wrote:
>> On 2017/7/4 22:13, Radim Krčmář wrote:
>>> 2017-07-03 17:28+0800, Yang Zhang:
>>>> The background is that we (Alibaba Cloud) get more and more
>>>> complaints from our customers on both KVM and Xen compared to
>>>> bare-metal. After investigation, the root cause is known to us:
>>>> the big cost of message-passing workloads (David showed it at
>>>> KVM Forum 2015).
>>>>
>>>> A typical message-passing workload looks like below:
>>>> vcpu 0                           vcpu 1
>>>> 1. send ipi                      2.  doing hlt
>>>> 3. go into idle                  4.  receive ipi and wake up from hlt
>>>> 5. write APIC timer twice        6.  write APIC timer twice to
>>>>    to stop sched timer               reprogram sched timer
>>>
>>> One write is enough to disable/re-enable the APIC timer -- why does
>>> Linux use two?
>>
>> One write is to remove the timer and the other is to reprogram it.
>> Normally there is only one write, to remove the timer, but in some
>> cases it will reprogram it.
>>
>>>
>>>> 7. doing hlt                     8.  handle task and send ipi to
>>>>                                      vcpu 0
>>>> 9. same as 4                     10. same as 3
>>>>
>>>> One transaction will introduce about 12 vmexits (2 hlt and 10 msr
>>>> writes). The cost of such vmexits will degrade performance severely.
>>>
>>> Yeah, sounds like too much ... I understood that there are
>>>
>>>   IPI from 1 to 2
>>>   4 * APIC timer
>>>   IPI from 2 to 1
>>>
>>> which adds up to 6 MSR writes -- what are the other 4?
>>
>> In the worst case, each timer operation will touch the APIC timer
>> twice, so it adds an additional 4 MSR writes. But this is not always
>> true.
>>
>>>
>>>> The Linux kernel
>>>> already provides idle=poll to mitigate the trend, but it only
>>>> eliminates the IPI and hlt vmexits. It has nothing to do with
>>>> starting/stopping the sched timer. A compromise would be to turn
>>>> off the NOHZ kernel, but it is not the default config for new
>>>> distributions. The same goes for halt-poll in KVM: it only reduces
>>>> the cost of schedule in/out on the host and cannot help such
>>>> workloads much.
>>>>
>>>> The purpose of this patch is to improve the current idle=poll
>>>> mechanism to
>>>
>>> Please aim to allow MWAIT instead of idle=poll -- MWAIT doesn't slow
>>> down the sibling hyperthread.  MWAIT solves the IPI problem, but
>>> doesn't get rid of the timer one.
>>
>> Yes, I can try it. But MWAIT will not yield the CPU; it only helps
>> the sibling hyperthread, as you mentioned.
>
> If you implement proper MWAIT emulation that conditionally gets en- or
> disabled depending on the same halt poll dynamics that we already have
> for in-host HLT handling, it will also yield the CPU.

It is hard to do. If we do not intercept the MWAIT instruction, there is
no chance to wake up the CPU unless an interrupt arrives or a store hits
the address armed by MONITOR, which is the same as idle=poll.
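To illustrate why: below is a minimal sketch of a MONITOR/MWAIT idle
loop, loosely modeled on the kernel's mwait_idle() (simplified and
illustrative, not the actual guest idle code). If MWAIT executes
natively with no VM exit, the physical CPU sits in guest context until
an interrupt fires or another CPU stores to the monitored cache line,
so the host scheduler never gets a chance to run anything else -- the
same situation as idle=poll from the host's point of view.

#include <linux/sched.h>
#include <asm/mwait.h>

static void mwait_idle_sketch(void)
{
        while (!need_resched()) {
                /* Arm the monitor on this task's flags cache line. */
                __monitor(&current_thread_info()->flags, 0, 0);
                if (need_resched())
                        break;
                /*
                 * Sleep until an interrupt arrives or another CPU
                 * writes to the monitored line.  Without interception
                 * there is no VM exit here, so the host cannot reclaim
                 * this pCPU for other work in the meantime.
                 */
                __mwait(0, 0);
        }
}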
>
> As for the timer - are you sure the problem is really the overhead of
> the timer configuration, not the latency that it takes to actually
> fire the guest timer?

No, the main cost is introduced by vmexits, including the IPIs, the
timer programming, and HLT. David detailed it at KVM Forum; search for
"Message Passing Workloads in KVM" on Google, and the first link gives
the whole analysis of the problem.
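For reference, the direction of this patch is to keep idle=poll's
benefit for short waits but bound and adapt the polling window instead
of spinning forever. A rough sketch of the guest-side idea follows;
the identifiers and tuning constants here are illustrative assumptions,
not the patch's actual names or heuristics.

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/sched.h>
#include <asm/irqflags.h>

static u64 halt_poll_ns = 200000;       /* illustrative default: 200 us */

static void dynamic_poll_idle(void)
{
        u64 start = ktime_get_ns();
        bool woken = false;

        /*
         * Poll for a bounded window so that a wakeup arriving shortly
         * after going idle is handled in guest context, with no HLT
         * vmexit and no host-side scheduling round trip.
         */
        while (ktime_get_ns() - start < halt_poll_ns) {
                if (need_resched()) {
                        woken = true;
                        break;
                }
                cpu_relax();
        }

        if (!woken)
                safe_halt();    /* fall back to HLT -- the vmexit path */

        /*
         * One possible tuning heuristic, analogous to the host-side
         * halt_poll_ns grow/shrink in KVM: narrow the window when
         * polling succeeds quickly, widen it when we had to halt.  A
         * real implementation would also look at how long the halt
         * actually lasted before growing.
         */
        if (woken)
                halt_poll_ns = max_t(u64, halt_poll_ns / 2, 10000);
        else
                halt_poll_ns = min_t(u64, halt_poll_ns * 2, 500000);
}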
>
> One major problem I see is that we configure the host hrtimer to fire
> at the point in time when the guest wants to see a timer event. But in
> a virtual environment, the point in time when we have to start
> switching to the VM really should be a bit *before* the guest wants to
> be woken up, as it takes quite some time to switch back into the VM
> context.
>
>
> Alex

-- 
Yang
Alibaba Cloud Computing