From: "Jan Beulich" <JBeulich@suse.com>
To: "Xuquan (Quan Xu)" <xuquan8@huawei.com>
Cc: "yang.zhang.wz@gmail.com" <yang.zhang.wz@gmail.com>,
	Tianyu Lan <tianyu.lan@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v3] x86/apicv: fix RTC periodic timer and apicv issue
Date: Tue, 20 Dec 2016 02:57:19 -0700
Message-ID: <58590E8F020000780012AD6C@prv-mh.provo.novell.com>
In-Reply-To: <E0A769A898ADB6449596C41F51EF62C6AD7D41@SZXEMI506-MBX.china.huawei.com>

>>> On 20.12.16 at 10:38, <xuquan8@huawei.com> wrote:
> On December 20, 2016 4:32 PM, Jan Beulich wrote:
>>>>> On 20.12.16 at 06:54, <xuquan8@huawei.com> wrote:
>>> On December 20, 2016 1:37 PM, Tian, Kevin wrote:
>>>>> From: Xuquan (Quan Xu) [mailto:xuquan8@huawei.com]
>>>>> Sent: Friday, December 16, 2016 5:40 PM
>>>>>
>>>>> From 89fffdd6b563b2723e24d17231715bb8c9f24f90 Mon Sep 17 00:00:00 2001
>>>>> From: Quan Xu <xuquan8@huawei.com>
>>>>> Date: Fri, 16 Dec 2016 17:24:01 +0800
>>>>> Subject: [PATCH v3] x86/apicv: fix RTC periodic timer and apicv
>>>>> issue
>>>>>
>>>>> When Xen apicv is enabled, wall clock time runs fast on a
>>>>> Windows7-32 guest under high load (with 2 vCPUs; xentrace captures
>>>>> show the count of IPI interrupts between these vCPUs increasing
>>>>> rapidly under high load).
>>>>>
>>>>> If an IPI interrupt (vector 0xe1) and a periodic timer interrupt
>>>>> (vector 0xd1) are both pending (their bits set in vIRR), the IPI
>>>>> interrupt unfortunately has higher priority than the periodic timer
>>>>> interrupt. Xen writes the IPI interrupt's vIRR bit into the guest
>>>>> interrupt status (RVI) as the higher priority, and apicv
>>>>> (Virtual-Interrupt Delivery) delivers the IPI interrupt within VMX
>>>>> non-root operation without a VM-Exit. While in VMX non-root
>>>>> operation, if the periodic timer interrupt's bit is set in vIRR and
>>>>> is the highest, apicv delivers the periodic timer interrupt within
>>>>> VMX non-root operation as well.
>>>>>
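For illustration, a minimal C sketch (not taken from Xen) of how the
highest set bit in a 256-bit vIRR determines the vector loaded into RVI;
the layout as eight 32-bit words mirrors the IRR registers in the
virtual-APIC page:

    #include <stdint.h>

    /* Return the highest pending vector, or -1 if the IRR is empty. */
    static int highest_pending_vector(const uint32_t irr[8])
    {
        for (int word = 7; word >= 0; word--)
            if (irr[word])
                return word * 32 + 31 - __builtin_clz(irr[word]);
        return -1;
    }

With both vector 0xe1 (IPI) and 0xd1 (periodic timer) set, this returns
0xe1, so RVI holds the IPI vector and the timer tick is delivered only
after the IPI has been handled.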
>>>>> But in the current code, when Xen does not itself write the periodic
>>>>> timer interrupt's vIRR bit into the guest interrupt status (RVI), it
>>>>> is not aware of this case and does not decrease the count
>>>>> (pending_intr_nr) of pending periodic timer interrupts, so Xen will
>>>>> deliver the periodic timer interrupt again.
>>>>>
>>>>> Also, since we update the periodic timer interrupt on every
>>>>> VM-entry, there is a chance that an already-injected instance
>>>>> (before the EOI-induced exit happens) will incur another pending IRR
>>>>> setting: if a VM-exit happens between virtual interrupt injection
>>>>> (vIRR->0, vISR->1) and the EOI-induced exit (vISR->0), pt_intr_post
>>>>> hasn't been invoked yet, and the guest receives extra periodic timer
>>>>> interrupts.
>>>>>
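A simplified sketch of the accounting involved, using stylized stand-ins
for Xen's vpt helpers rather than their real signatures:

    /* pending_intr_nr counts timer ticks raised but not yet seen by
     * the guest. */
    struct periodic_time { int pending_intr_nr; int irq_vector; };

    /* VM-entry path: if ticks are pending, (re)raise the vector. */
    static int pt_update_irq(struct periodic_time *pt)
    {
        return pt->pending_intr_nr ? pt->irq_vector : -1;
    }

    /* Runs once Xen knows the vector was really delivered; only here
     * does the pending count drop. */
    static void pt_intr_post(struct periodic_time *pt)
    {
        if (pt->pending_intr_nr)
            pt->pending_intr_nr--;
    }

If a VM-exit occurs after injection (vIRR->0, vISR->1) but before the
EOI-induced exit (vISR->0), pt_update_irq() runs again while the count is
still unchanged, so the same tick is raised in the IRR a second time.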
>>>>> So we set eoi_exit_bitmap for intack.vector when it is higher than
>>>>> the pending periodic timer interrupt. This way we guarantee there is
>>>>> always a chance to post the periodic timer interrupt once it becomes
>>>>> the highest pending one.
>>>>>
>>>>> Signed-off-by: Quan Xu <xuquan8@huawei.com>
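A hedged reconstruction of the approach the commit message describes (the
exact call shape is an assumption based on the text, not the literal
patch):

    /* VM-entry interrupt-assist path: if the vector about to be
     * delivered outranks the pending periodic timer vector, arrange
     * for its EOI to cause a VM-exit.  That exit gives Xen a chance
     * to post the timer interrupt, and run pt_intr_post(), once it
     * becomes the highest pending vector. */
    if ( pt_vector != -1 && intack.vector > pt_vector )
        vmx_set_eoi_exit_bitmap(v, intack.vector);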
>>>>
>>>>I suppose you've verified this new version, but I would still like
>>>>your explicit confirmation - do you still see the time accuracy issue
>>>>on your side? Have you tried guest OS types other than Win7-32?
>>>>
>>>
>>> I have only verified it on a win7-32 guest.
>>> I will continue to verify it on other Windows guests (I think Windows
>>> is enough, right?)
>>
>>No, I don't think Windows alone is sufficient for verification. People run all
>>kinds of OSes as HVM guests, and your change should not negatively impact
>>them. At the very least you want to also try Linux.
> 
> Could I use the 'date' command to test it? I only have a server version
> of Linux, no desktop version...

Well - I'm really not sure how to best test this.

Jan
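
On the testing question: one option (a sketch, not something prescribed in
the thread) is to measure wall-clock drift inside the guest directly, by
comparing CLOCK_REALTIME against CLOCK_MONOTONIC across a loaded interval:

    /* drift.c - report guest wall-clock drift over 10 minutes.
     * Build with: gcc -O2 -o drift drift.c
     * Run the high-IPI workload in parallel; a positive result means
     * the wall clock gained time against the monotonic clock. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static double now(clockid_t id)
    {
        struct timespec t;
        clock_gettime(id, &t);
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    int main(void)
    {
        double real0 = now(CLOCK_REALTIME), mono0 = now(CLOCK_MONOTONIC);
        sleep(600);
        double drift = (now(CLOCK_REALTIME) - real0)
                     - (now(CLOCK_MONOTONIC) - mono0);
        printf("wall-clock drift: %+.3f s\n", drift);
        return 0;
    }

Checking against an external reference (the host clock or NTP) would be
more robust, since a guest timer bug can in principle skew both clocks.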


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

Thread overview: 14+ messages
2016-12-16  9:40 [PATCH v3] x86/apicv: fix RTC periodic timer and apicv issue Xuquan (Quan Xu)
2016-12-20  5:37 ` Tian, Kevin
2016-12-20  5:54   ` Xuquan (Quan Xu)
2016-12-20  8:32     ` Jan Beulich
2016-12-20  9:38       ` Xuquan (Quan Xu)
2016-12-20  9:57         ` Jan Beulich [this message]
2016-12-21  2:32         ` Tian, Kevin
2016-12-20  8:34   ` Jan Beulich
2016-12-20  8:53     ` Tian, Kevin
2016-12-20  8:57       ` Jan Beulich
2016-12-20  9:33       ` Xuquan (Quan Xu)
2016-12-20 13:12   ` Xuquan (Quan Xu)
2016-12-21  2:29     ` Tian, Kevin
2016-12-21  4:59       ` Xuquan (Quan Xu)
