From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Xen-unstable 4.8: HVM domain_crash called from emulate.c:144 RIP: c000:[<000000000000336a>]
Date: Wed, 15 Jun 2016 15:46:27 +0000
Message-ID: <a5aa55033d1e4e9e863ea478bae07a0e@AMSPEX02CL03.citrite.net>
In-Reply-To: <576193BD02000078000F56EB@prv-mh.provo.novell.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 15 June 2016 16:43
> To: Paul Durrant
> Cc: Sander Eikelenboom; xen-devel@lists.xen.org; Boris Ostrovsky
> Subject: RE: [Xen-devel] Xen-unstable 4.8: HVM domain_crash called from
> emulate.c:144 RIP: c000:[<000000000000336a>]
> 
> >>> On 15.06.16 at 17:29, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 15 June 2016 16:22
> >> To: Paul Durrant; Boris Ostrovsky
> >> Cc: Sander Eikelenboom; xen-devel@lists.xen.org
> >> Subject: Re: [Xen-devel] Xen-unstable 4.8: HVM domain_crash called
> from
> >> emulate.c:144 RIP: c000:[<000000000000336a>]
> >>
> >> >>> On 15.06.16 at 16:56, <boris.ostrovsky@oracle.com> wrote:
> >> > On 06/15/2016 10:39 AM, Jan Beulich wrote:
> >> >>>>> On 15.06.16 at 16:32, <boris.ostrovsky@oracle.com> wrote:
> >> >>> So perhaps we shouldn't latch data for anything over page size.
> >> >> But why? What we latch is the start of the accessed range, so
> >> >> the repeat count shouldn't matter?
> >> >
> >> > Because otherwise we won't emulate the full STOS (or MOVS) --- we
> >> > truncate *reps to fit within a page, don't we?
> >>
> >> That merely causes the instruction to get restarted (with a smaller
> >> rCX).
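
A minimal sketch of that truncate-and-restart behaviour, with invented
names (this is not the actual hvm/emulate.c logic):

    #include <stdint.h>

    #define PAGE_SIZE 4096UL

    /*
     * Illustration only: clamp a REP batch so the access stays within
     * the page containing 'addr'.  REP semantics make the truncation
     * safe on its own: the emulator handles the clamped batch, rCX is
     * reduced by that amount, and the instruction restarts with the
     * smaller remaining count.
     */
    static uint64_t clamp_reps_to_page(uint64_t addr,
                                       uint64_t bytes_per_rep,
                                       uint64_t reps)
    {
        uint64_t left = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
        uint64_t max = left / bytes_per_rep;

        return (max != 0 && reps > max) ? max : reps;
    }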
> >>
> >> > And then we fail the completion check.
> >> >
> >> > And we should latch only when we don't cross a page boundary, not
> >> > just when we are under 4K. Or maybe the fix isn't to skip latching;
> >> > it's to skip using the latched data when a page boundary is being
> >> > crossed.
> >>
> >> Ah, I think that's it: When we hand a batch to qemu which crosses
> >> a page boundary and latch the start address translation, upon
> >> retry (after qemu did its job) we'd wrongly reduce the repeat count
> >> because of finding the start address in the cache. So indeed I think
> >> it should be the latter: Not using an available translation is likely
> >> better than breaking up a large batch we hand to qemu. Paul, what
> >> do you think?
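
A sketch of the suspected sequence, using invented names and a
deliberately simplified single-entry latch:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096UL

    /* Invented single-entry linear-to-physical translation latch. */
    struct latch {
        bool     valid;
        uint64_t lin, phys;
    };

    static uint64_t lookup(const struct latch *l, uint64_t lin,
                           uint64_t bytes_per_rep, uint64_t *reps)
    {
        if ( l->valid && l->lin == lin )
        {
            /*
             * Hit: the latched translation covers only one page, so
             * the repeat count gets clamped to the page boundary
             * again.  If the full page-crossing batch was already
             * handed to qemu, this retry claims fewer reps than qemu
             * completed, and the completion check then fails.
             */
            uint64_t left = PAGE_SIZE - (lin & (PAGE_SIZE - 1));
            uint64_t max = left / bytes_per_rep;

            if ( max != 0 && *reps > max )
                *reps = max;

            return l->phys;
        }

        /* Miss: do the full walk and latch the result (omitted). */
        return 0;
    }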
> >
> > Presumably we can tell the difference because we have the vio ioreq
> > state, which should tell us that we're waiting for I/O completion and
> > so, in this case, you can avoid reducing the repeat count when
> > retrying. You should still be able to use the latched translation
> > though, shouldn't you?
> 
> Would we want to rely on it despite crossing a page boundary?
> Of course what was determined to be contiguous should
> continue to be, so one might even say using the latched
> translation in that case would provide more consistent results
> (as we'd become independent of a guest page table change).

Yes, exactly.
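
Something along these lines, as a sketch of the suggestion (the state
names are stand-ins, not Xen's actual ioreq enumeration):

    #include <stdbool.h>

    /* Invented stand-in for the per-vCPU ioreq state. */
    enum ioreq_state {
        IOREQ_NONE,       /* nothing in flight: first pass */
        IOREQ_INFLIGHT,   /* batch handed to qemu */
        IOREQ_COMPLETED,  /* qemu finished: we are retrying */
    };

    /*
     * On the first pass it is fine to clamp the repeat count; on a
     * retry after qemu completed the I/O it is not, because qemu
     * already handled the full batch.  The latched translation can
     * still be reused either way, which also keeps the result stable
     * if the guest changed its page tables in the meantime.
     */
    static bool may_clamp_reps(enum ioreq_state state)
    {
        return state == IOREQ_NONE;
    }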

> But then again a MOVS has two memory operands ...
> 

True... more of an argument for having two latched addresses though, right?
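
For instance (invented names again), the single latch could become one
slot per memory operand:

    #include <stdbool.h>
    #include <stdint.h>

    struct latch {
        bool     valid;
        uint64_t lin, phys;
    };

    /*
     * Sketch of "two latched addresses": MOVS reads from DS:rSI and
     * writes to ES:rDI, so each memory operand gets its own latch
     * slot instead of sharing a single one.
     */
    struct emul_latches {
        struct latch read;   /* source operand      (MOVS: DS:rSI) */
        struct latch write;  /* destination operand (MOVS: ES:rDI) */
    };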

  Paul

> Jan


