From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Paul Durrant <paul.durrant@citrix.com>
Subject: Re: [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case
Date: Thu, 31 Aug 2023 10:59:31 +0200
Message-ID: <ZPBWcyL-nyHKV9zT@MacBook-MacBook-Pro-de-Roger.local>
In-Reply-To: <5b28f42f-be2d-b826-2bfe-434b0c1742e2@suse.com>

On Thu, Aug 31, 2023 at 09:03:18AM +0200, Jan Beulich wrote:
> On 30.08.2023 20:09, Andrew Cooper wrote:
> > On 30/08/2023 3:30 pm, Roger Pau Monné wrote:
> >> On Wed, Sep 12, 2018 at 03:09:35AM -0600, Jan Beulich wrote:
> >>> The function does two translations in one go for a single guest access.
> >>> Any failure of the first translation step (guest linear -> guest
> >>> physical), resulting in #PF, ought to take precedence over any failure
> >>> of the second step (guest physical -> host physical).
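
To make the ordering Jan describes concrete, here is a minimal,
self-contained C sketch. The helpers gla_to_gpa() and gpa_to_hpa() are
hypothetical stubs standing in for the real first- and second-stage
walkers, not Xen's actual internals; the point is only that every page
of the access completes the first-stage walk before any second-stage
failure can surface.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Stub first-stage walk: linear page 0x5 is not present (#PF). */
static bool gla_to_gpa(uint64_t gla, uint64_t *gpa)
{
    if ((gla >> PAGE_SHIFT) == 0x5)
        return false;               /* would inject #PF into the guest */
    *gpa = gla;                     /* identity map, for the sketch */
    return true;
}

/* Stub second-stage walk: guest frame 0x4 has no EPT mapping. */
static bool gpa_to_hpa(uint64_t gpa, uint64_t *hpa)
{
    if ((gpa >> PAGE_SHIFT) == 0x4)
        return false;               /* would cause an EPT_VIOLATION exit */
    *hpa = gpa;
    return true;
}

/* Walk *all* pages linear->physical first, so a #PF on the second page
 * takes precedence over an EPT violation on the first. */
static void map_linear(uint64_t gla, unsigned int bytes)
{
    uint64_t gpa[2], hpa;   /* an access spans at most two pages here */
    unsigned int i, pages =
        ((gla + bytes - 1) >> PAGE_SHIFT) - (gla >> PAGE_SHIFT) + 1;

    for (i = 0; i < pages; i++)     /* phase 1: gla -> gpa, all pages */
    {
        /* Any address within page i works for the stub walker. */
        if (!gla_to_gpa(gla + i * PAGE_SIZE, &gpa[i]))
        {
            printf("#PF on page %u\n", i);
            return;
        }
    }

    for (i = 0; i < pages; i++)     /* phase 2: gpa -> hpa */
    {
        if (!gpa_to_hpa(gpa[i], &hpa))
        {
            printf("EPT violation on page %u\n", i);
            return;
        }
    }

    printf("mapped\n");
}

int main(void)
{
    /* 8 bytes crossing from guest frame 0x4 (no EPT mapping) into
     * linear page 0x5 (not present): the #PF on page 1 is reported
     * even though page 0's second-stage step would also have failed. */
    map_linear(0x4ffc, 8);
    return 0;
}
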
> > 
> > Erm... No?
> > 
> > There are up to 25 translation steps, assuming a memory operand
> > contained entirely within a cache-line.
> > 
> > They intermix between gla->gpa and gpa->spa in a strict order.
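
(For reference: the 25 presumably assumes 4-level paging on both
stages. The 4 guest page-table loads plus the data access itself make
5 guest-physical accesses, and each of those in turn needs its own
4-level EPT walk plus the access itself:

  (4 + 1) guest-physical accesses x (4 + 1) host accesses each = 25.)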
> 
> But we're talking about an access crossing a page boundary here.
> 
> > There's not a point where the error is ambiguous, nor is there ever a
> > point where a pagewalk continues beyond a faulting condition.
> > 
> > Hardware certainly isn't wasting transistors to hold state just to see
> > whether it could try to progress further in order to hand back a
> > different error...
> > 
> > 
> > When the pipeline needs to split an access, it has to generate multiple
> > adjacent memory accesses, because the unit of memory access is a cache line.
> > 
> > There is a total order of accesses in the memory queue, so any faults
> > from the first byte of the access will be delivered before any fault
> > from the first byte that moves into the next cache line.
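
For instance, an 8-byte access at linear address 0xffc occupies bytes
0xffc-0xfff and 0x1000-0x1003, so it is issued as two adjacent entries
in the queue, and a fault on the first entry is delivered before any
fault on the second.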
> 
> Looks like we're fundamentally disagreeing on what we try to emulate in
> Xen. My view is that the goal ought to be to match, as closely as
> possible, how code would behave on bare metal. IOW no considerations
> of the GPA -> MA translation steps. Of course in a fully virtualized
> environment these necessarily have to occur for the page table accesses
> themselves, before the actual memory access can be carried out. But
> that's different for the leaf access itself. (In fact I'm not even sure
> the architecture guarantees that the two split accesses, or their
> associated page walks, always occur in [address] order.)
> 
> I'd also like to expand on the "we're": considering the two R-b I got
> back at the time, both reviewers apparently agreed with my way of
> looking at things. With Roger's reply that you've responded to here, I'm
> getting the impression that he also shares that view.

Ideally the emulator should attempt to replicate the behavior a guest
gets when running under second-stage translation, so that it's not
possible to differentiate emulating an instruction from executing it in
non-root mode. IOW: take into account not only the ordering of #PF, but
also the ordering of EPT_VIOLATION vmexits.
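
Reusing the hypothetical stub helpers from the sketch further up, the
strictly interleaved per-page ordering described here might look like
the following; with the same 8-byte access at 0x4ffc it reports the
EPT violation on page 0 before page 1's linear translation is ever
walked:

static void map_linear_interleaved(uint64_t gla, unsigned int bytes)
{
    uint64_t gpa, hpa;
    unsigned int i, pages =
        ((gla + bytes - 1) >> PAGE_SHIFT) - (gla >> PAGE_SHIFT) + 1;

    for (i = 0; i < pages; i++)
    {
        if (!gla_to_gpa(gla + i * PAGE_SIZE, &gpa))
        {
            printf("#PF on page %u\n", i);
            return;
        }
        /* Page i's second-stage step completes before page i + 1. */
        if (!gpa_to_hpa(gpa, &hpa))
        {
            printf("EPT violation on page %u\n", i);
            return;
        }
    }

    printf("mapped\n");
}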

> Of course that still doesn't mean we're right and you're wrong, but if
> you think that's the case, it'll take you actually supplying arguments
> supporting your view. And since we're talking of an abstract concept
> here, resorting to how CPUs actually deal with the same situation
> isn't enough. It wouldn't be the first time that they got things
> wrong. Plus it may also require you potentially accepting that
> different views are possible, without either being strictly wrong and
> the other strictly right.

I don't really have an answer here; given the lack of a written-down
specification from vendors, I think we should just go with whatever is
easier for us to handle in the hypervisor.

Also, this is such a corner case that I would think any guest
attempting it is likely hitting a bug or attempting something fishy.

Thanks, Roger.


Thread overview: 21+ messages
2018-09-12  9:09 [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case Jan Beulich
2018-09-12 11:51 ` Paul Durrant
2018-09-12 12:13   ` Jan Beulich
2018-09-13 10:12 ` [PATCH v2] " Jan Beulich
2018-09-13 11:06   ` Paul Durrant
2018-09-13 11:39     ` Jan Beulich
2018-09-13 11:41       ` Paul Durrant
2018-09-20 12:41   ` Andrew Cooper
2018-09-20 13:39     ` Jan Beulich
2018-09-20 14:13       ` Andrew Cooper
2018-09-20 14:51         ` Jan Beulich
2018-09-25 12:41     ` Jan Beulich
2018-09-25 15:30       ` Andrew Cooper
2018-09-26  9:27         ` Jan Beulich
2018-10-08 11:53         ` Jan Beulich
2019-07-31 11:26   ` [Xen-devel] " Alexandru Stefan ISAILA
2023-08-30 14:30 ` [Xen-devel] [PATCH] " Roger Pau Monné
2023-08-30 18:09   ` Andrew Cooper
2023-08-31  7:03     ` Jan Beulich
2023-08-31  8:59       ` Roger Pau Monné [this message]
2023-08-31  7:14   ` [Xen-devel] " Jan Beulich
