From: Paul Durrant <Paul.Durrant@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Igor Druzhinin <igor.druzhinin@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/hvm: finish IOREQ correctly on completion path
Date: Mon, 11 Mar 2019 10:30:42 +0000
Message-ID: <71b6e1dee81b499a90e4b24b1989abf6@AMSPEX02CL02.citrite.net>
In-Reply-To: <1552080650-9168-1-git-send-email-igor.druzhinin@citrix.com>

> -----Original Message-----
> From: Igor Druzhinin [mailto:igor.druzhinin@citrix.com]
> Sent: 08 March 2019 21:31
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <Paul.Durrant@citrix.com>; jbeulich@suse.com; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>;
> Igor Druzhinin <igor.druzhinin@citrix.com>
> Subject: [PATCH] x86/hvm: finish IOREQ correctly on completion path
> 
> Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
> ("x86/HVM: split page straddling emulated accesses in more cases") the
> completion path for IOREQs has been broken: if there is an IOREQ in
> progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
> (e.g. because the P2M type of the source/destination was changed by the
> IOREQ handler), execution never re-enters hvmemul_do_io(), where IOREQs
> are completed. This usually results in a domain crash when the next
> IOREQ enters hvmemul_do_io() and finds the remnants of the previous
> IOREQ in the state machine.
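> 
> (For context: the next IOREQ to enter hvmemul_do_io() finds vio->io_req
> still in STATE_IORESP_READY and fails the re-issue verification there;
> roughly, paraphrased from xen/arch/x86/hvm/emulate.c with some of the
> compared fields elided:
> 
>     switch ( vio->io_req.state )
>     {
>     case STATE_IOREQ_NONE:
>         break;
>     case STATE_IORESP_READY:
>         vio->io_req.state = STATE_IOREQ_NONE;
>         p = vio->io_req;
>         /* Verify the emulation request has been correctly re-issued */
>         if ( p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>              p.addr != addr || p.size != size || p.dir != dir )
>             domain_crash(currd);
>         break;
>     default:
>         return X86EMUL_UNHANDLEABLE;
>     }
> 
> so stale state from an unconsumed IOREQ trips either the mismatch check
> or the default case.)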
> 
> This particular issue was discovered in relation to the p2m_ioreq_server
> type, where an emulator changed the memory type between p2m_ioreq_server
> and p2m_ram_rw while responding to an IOREQ, making hvm_copy_..() behave
> differently on the way back. The same problem also applies to a case
> where, e.g., an emulator balloons memory to/from the guest in response
> to an MMIO read/write.
> 
> Fix this by checking whether IOREQ completion is required before trying
> to finish the memory access immediately through hvm_copy_..(); otherwise,
> re-enter hvmemul_do_io().
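> 
> For reference, the completion check used below is expected to be roughly
> the following (a sketch paraphrased from the ioreq definitions; the
> exact shape lives in the headers):
> 
>     static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
>     {
>         /*
>          * A response must still be consumed if one is ready, except for
>          * PIO writes and pointer-based transfers, where there is no
>          * data to hand back to the emulation path.
>          */
>         return ioreq->state == STATE_IOREQ_READY &&
>                !ioreq->data_is_ptr &&
>                (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
>     }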
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 41aac28..36f8fee 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);

I think this is the right thing to do, but can we change the comment text to something like:

"If there is a pending ioreq then we must be re-issuing an access that was previously handled as MMIO. Thus it is imperative that we handle this access in the same way, to guarantee completion and hence clean up any interim state."

  Paul

> 
>      switch ( rc )
>      {
> @@ -1123,7 +1131,15 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
>                          uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      pagefault_info_t pfinfo;
> -    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +    int rc = HVMTRANS_bad_gfn_to_mfn;
> +
> +    /*
> +     * If the memory access can be handled immediately - do it,
> +     * otherwise re-enter ioreq completion path to properly consume it.
> +     */
> +    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> 
>      switch ( rc )
>      {
> --
> 2.7.4



