From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: x86/vMSI-X emulation issue
Date: Thu, 24 Mar 2016 09:53:57 +0000 [thread overview]
Message-ID: <d38ffd0da5bf4ddda3d9e7b2f6a254c4@AMSPEX02CL03.citrite.net>
In-Reply-To: <56F3C58C02000078000DFF7B@prv-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 24 March 2016 09:47
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel
> Subject: RE: [Xen-devel] x86/vMSI-X emulation issue
>
> >>> On 24.03.16 at 10:39, <Paul.Durrant@citrix.com> wrote:
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 24 March 2016 09:35
> >> To: Paul Durrant
> >> Cc: Andrew Cooper; xen-devel
> >> Subject: RE: [Xen-devel] x86/vMSI-X emulation issue
> >>
> >> >>> On 24.03.16 at 10:09, <Paul.Durrant@citrix.com> wrote:
> >> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf
> >> >> Of Jan Beulich
> >> >> Sent: 24 March 2016 07:52
> >> >> > 2) Do aforementioned chopping automatically on seeing
> >> >> > X86EMUL_UNHANDLEABLE, on the basis that the .check
> >> >> > handler had indicated that the full range was acceptable. That
> >> >> > would at once cover other similarly undesirable cases like the
> >> >> > vLAPIC code returning this error. However, any stdvga-like
> >> >> > emulated device would clearly not want that to happen, and
> >> >> > would instead prefer the entire batch to get forwarded in one
> >> >> > go (stdvga itself sits on a different path). Otoh, with the
> >> >> > devices we have currently, this would seem to be the least
> >> >> > intrusive solution.
> >> >>
> >> >> Having thought about it more overnight, I think this indeed is
> >> >> the most reasonable route, not just because it's least intrusive:
> >> >> For non-buffered internally handled I/O requests, no good can
> >> >> come from forwarding full batches to qemu, when the respective
> >> >> range checking function has indicated that this is an acceptable
> >> >> request. And in fact neither the vHPET nor the vIO-APIC code
> >> >> generates X86EMUL_UNHANDLEABLE. The vLAPIC code only appears
> >> >> to do so - I'll submit a patch to make this obvious once
> >> >> tested.
> >> >>
> >> >> Otoh stdvga_intercept_pio() uses X86EMUL_UNHANDLEABLE in
> >> >> a manner similar to the vMSI-X code - for internal caching and
> >> >> then forwarding to qemu. Clearly that is also broken for
> >> >> REP OUTS, and hence a similar rep count reduction is going to
> >> >> be needed for the port I/O case.
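
A minimal sketch of what the rep-count chopping described above could
look like in the generic intercept path. This is not the actual Xen
code; the function, the handler/ops layout and the field names are all
illustrative:

  /* If an internal handler whose .check routine accepted the whole
   * range returns X86EMUL_UNHANDLEABLE part-way through a rep batch,
   * report the iterations already completed instead of forwarding
   * the full batch; the emulator then retries with the remainder. */
  static int process_rep_intercept(const struct hvm_io_handler *handler,
                                   ioreq_t *p, uint64_t *reps)
  {
      uint64_t i;
      int rc = X86EMUL_OKAY;

      for ( i = 0; i < *reps; i++ )
      {
          rc = handler->ops->read(handler, p->addr + i * p->size,
                                  p->size, &p->data);
          if ( rc != X86EMUL_OKAY )
              break;
      }

      if ( rc == X86EMUL_UNHANDLEABLE && i != 0 )
      {
          /* Chop the batch: claim success for the completed reps. */
          *reps = i;
          rc = X86EMUL_OKAY;
      }

      return rc;
  }
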
> >> >
> >> > It suggests that such cache-and/or-forward models should probably sit
> >> > somewhere else in the flow, possibly being invoked from
> >> hvm_send_ioreq()
> >> > since there should indeed be a selected ioreq server for these cases.
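
As an illustration of that suggestion, a hedged sketch of letting
cache-and-forward models observe the request at send time rather than
signalling an emulation failure. hvm_send_ioreq() is real, but the
snoop hook, its registration and the wrapper are hypothetical:

  /* Hypothetical wrapper: give internal cache models (the vMSI-X
   * table, stdvga) sight of the request on its way to the selected
   * ioreq server, then forward it unchanged. */
  int hvm_send_ioreq_snooped(struct hvm_ioreq_server *s, ioreq_t *p,
                             bool_t buffered)
  {
      if ( s->snoop )          /* hypothetical per-server callback */
          s->snoop(s, p);      /* e.g. update a cached MSI-X table */

      return hvm_send_ioreq(s, p, buffered);
  }
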
> >>
> >> I don't really think so. As I have gone through and carried out
> >> what I described above, I think I managed to address at least
> >> one more issue with improperly handled rep counts, and hence I
> >> think doing it this way is correct. I'll have to test the thing
> >> before I can send it out for you to take a look.
> >>
> >
> > Ok. I never particularly liked using X86EMUL_UNHANDLEABLE to invoke
> > the forwarding behaviour, though, as it's only legitimate to do so
> > on the first rep.
>
> Well, that's explicitly one of the wrong assumptions that patch
> addresses: it is perfectly fine for an individual handler to return
> this on an iteration other than the first. It's only the generic
> infrastructure which doesn't currently permit this (for no
> apparent reason).
>
Well, I guess the reason was that returning X86EMUL_UNHANDLEABLE part-way through a rep cycle used to be the way to bail out on a page fault, page in the memory and have the cycle restarted. I got rid of that in favour of pre-slicing the reps and making sure the memory was paged in before attempting the I/O. Hence there needed to be some special way of distinguishing an I/O that needed to be forwarded to QEMU from a page fault somewhere in the middle of a rep cycle.
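
For reference, that pre-slicing amounts to something like the sketch
below. The names are illustrative, not the actual ones:

  /* Clamp a rep count up front so the batch cannot cross into an
   * unmapped page; a page fault can then no longer surface part-way
   * through the cycle as X86EMUL_UNHANDLEABLE, leaving that value
   * free to mean "forward to QEMU". */
  static uint64_t clamp_reps_to_page(paddr_t gpa, unsigned int size,
                                     uint64_t reps)
  {
      uint64_t space = (PAGE_SIZE - (gpa & ~PAGE_MASK)) / size;

      /* An access straddling the page boundary is glossed over here;
       * do at least one iteration so forward progress is made. */
      return space ? min(reps, space) : 1;
  }
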
> > I always had the feeling there had to be a nicer way of doing it.
> > Possibly just too intrusive a change at this point though.
>
> I'm of course up for alternatives, if you're willing to work on such.
I'll have a look at your code, but if I have the time I may look to re-factor things once 4.7 is out the door.
Paul
> Yet I think backporting would become even more of a problem when
> going down such an alternative route.
>
> Jan