From: "Jan Beulich" <JBeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Paul Durrant <paul.durrant@citrix.com>
Subject: Re: [PATCH v2] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case
Date: Thu, 20 Sep 2018 08:51:39 -0600
Message-ID: <5BA3B3FB02000078001EA504@prv1-mh.provo.novell.com>
In-Reply-To: <7fc0cc4b-2b1a-fa86-85ce-c72c39829318@citrix.com>

>>> On 20.09.18 at 16:13, <andrew.cooper3@citrix.com> wrote:
> On 20/09/18 14:39, Jan Beulich wrote:
>>>>> On 20.09.18 at 14:41, <andrew.cooper3@citrix.com> wrote:
>>> On 13/09/18 11:12, Jan Beulich wrote:
>>>> The function does two translations in one go for a single guest access.
>>>> Any failure of the first translation step (guest linear -> guest
>>>> physical), resulting in #PF, ought to take precedence over any failure
>>>> of the second step (guest physical -> host physical).
>>> Why?  What is the basis of this presumption?
>>>
>>> As far as what real hardware does...
>>>
>>> This test sets up a ballooned page and a read-only page.  I.e. a second
>>> stage fault on the first part of a misaligned access, and a first stage
>>> fault on the second part of the access.
>>>
>>> (d1) --- Xen Test Framework ---
>>> (d1) Environment: HVM 64bit (Long mode 4 levels)
>>> (d1) Test splitfault
>>> (d1) About to read
>>> (XEN) *** EPT qual 0000000000000181, gpa 000000000011cffc
>>> (d1) Reading PTR: got 00000000ffffffff
>>> (d1) About to write
>>> (XEN) *** EPT qual 0000000000000182, gpa 000000000011cffc
>>> (d1) ******************************
>>> (d1) PANIC: Unhandled exception at 0008:00000000001047e0
>>> (d1) Vec 14 #PF[-d-sWP] %cr2 000000000011d000
>>> (d1) ******************************
>>>
>>> The second stage fault is recognised first, which is contrary to your
>>> presumption, i.e. the code in its current form appears to be correct.
>> But the guest doesn't know about 2nd stage translation. In the
>> absence of it, the (1st stage / only) fault ought to occur before
>> any bus level actions would be taken.
> 
> You have not answered my question.
> 
> Why?  On what basis do you conclude that the behaviour you describe is
> "correct", especially now given evidence to the contrary?

As to the basis I'm working from: with the intended behavior not
spelled out anywhere, any sensible behavior can be considered
"correct". But let's look at the steps the unpatched code takes:

hvm_translate_get_page() for the tail of the first page produces
HVMTRANS_bad_gfn_to_mfn, so we bail from the loop, returning
NULL. The caller takes this as an indication to write the range in
pieces. Hence a write to the last bytes of the first page occurs (if
it was MMIO instead of a ballooned page) before we raise #PF.
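
To make the control flow concrete, in (much simplified) pseudo-C:
hvm_translate_get_page() and the HVMTRANS_* values are the names
used above, while the surrounding structure and the exact argument
list are illustrative rather than the literal source:

    for ( i = 0; i < nr_frames; i++ )
    {
        res = hvm_translate_get_page(curr, addr, true, pfec,
                                     &pfinfo, &page, NULL, &p2mt);
        switch ( res )
        {
        case HVMTRANS_okay:              /* both stages succeeded */
            break;

        case HVMTRANS_bad_linear_to_gfn: /* 1st stage failed -> #PF */
            x86_emul_pagefault(pfinfo.ec, pfinfo.linear,
                               &hvmemul_ctxt->ctxt);
            return ERR_PTR(~X86EMUL_EXCEPTION);

        case HVMTRANS_bad_gfn_to_mfn:    /* 2nd stage failed: bail at
                                          * once, before the second
                                          * page's 1st stage was even
                                          * attempted */
            return NULL;
        }
    }

That NULL return is what commits the caller to the piecemeal
path, i.e. the partial write gets decided upon without the second
page's linear translation having been looked at at all.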

Now let's look at the patched code's behavior:

hvm_translate_get_page() for the tail of the first page produces
HVMTRANS_bad_gfn_to_mfn again, but we continue the loop.
hvm_translate_get_page() for the start of the second page
produces HVMTRANS_bad_linear_to_gfn, so we raise #PF without
first doing a partial write.
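
Again as a sketch (same caveats as above; the bool flag in
particular is illustrative, not the patch's actual mechanism):

    bool bad_gfn_to_mfn = false;

    for ( i = 0; i < nr_frames; i++ )
    {
        res = hvm_translate_get_page(curr, addr, true, pfec,
                                     &pfinfo, &page, NULL, &p2mt);
        switch ( res )
        {
        case HVMTRANS_okay:
            break;

        case HVMTRANS_bad_linear_to_gfn: /* #PF takes precedence even
                                          * over an earlier page's 2nd
                                          * stage failure */
            x86_emul_pagefault(pfinfo.ec, pfinfo.linear,
                               &hvmemul_ctxt->ctxt);
            return ERR_PTR(~X86EMUL_EXCEPTION);

        case HVMTRANS_bad_gfn_to_mfn:    /* note the failure, but keep
                                          * translating the remaining
                                          * page(s) first */
            bad_gfn_to_mfn = true;
            continue;
        }
    }

    if ( bad_gfn_to_mfn )
        return NULL;                     /* piecemeal fallback, but only
                                          * once no #PF is pending */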

I continue to think that this is the less surprising behavior.
Without it being mandated anywhere that the partial write _has_
to occur, I'd much prefer this changed behavior, no matter how
the specific piece of hardware you ran your test on happens to
behave.

Jan
