From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
	wei.liu2@citrix.com, andrew.cooper3@citrix.com,
	paul.durrant@citrix.com, jbeulich@suse.com, roger.pau@citrix.com
Subject: [PATCH v3 2/2] x86/hvm: finish IOREQs correctly on completion path
Date: Thu, 14 Mar 2019 22:30:49 +0000
Message-ID: <1552602649-14358-2-git-send-email-igor.druzhinin@citrix.com>
In-Reply-To: <1552602649-14358-1-git-send-email-igor.druzhinin@citrix.com>

Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
("x86/HVM: split page straddling emulated accesses in more cases") the
completion path for IOREQs has been broken: if there is an IOREQ in
progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
(e.g. when the P2M type of the source/destination has been changed by the
IOREQ handler), execution will never re-enter hvmemul_do_io(), where
IOREQs are completed. This usually results in a domain crash when the
next IOREQ enters hvmemul_do_io() and finds the remnants of the previous
IOREQ in the state machine.
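
For illustration (not part of the patch), the crash comes from the
re-issue check at the top of hvmemul_do_io(). The sketch below is a
simplified paraphrase of that logic, not verbatim code;
ioreq_matches_current_access() is a hypothetical stand-in for the
open-coded comparison of the cached ioreq fields against the current
access:

    /* Simplified sketch of the state check in hvmemul_do_io(). */
    switch ( vio->io_req.state )
    {
    case STATE_IOREQ_NONE:
        break;                       /* nothing outstanding - proceed */
    case STATE_IORESP_READY:
        vio->io_req.state = STATE_IOREQ_NONE;
        /*
         * A response is ready, so the current access must be a re-issue
         * of the access that started the IOREQ.  If the completion path
         * bailed out early via hvm_copy_..(), the next unrelated access
         * ends up here, fails the check and crashes the domain.
         */
        if ( !ioreq_matches_current_access(&vio->io_req) ) /* hypothetical */
            domain_crash(current->domain);
        break;
    default:
        return X86EMUL_UNHANDLEABLE;
    }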

This particular issue was discovered in relation to the p2m_ioreq_server
type, where an emulator changed the memory type between p2m_ioreq_server
and p2m_ram_rw while responding to an IOREQ, which made hvm_copy_..()
behave differently on the way back.
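
For context, a device model typically changes the type with the
xendevicemodel_set_mem_type() call while it is still servicing the
access it has been handed. A minimal emulator-side sketch follows; the
handler name and surrounding logic are hypothetical, only the
libxendevicemodel call and the HVMMEM_* constants are real:

    #include <xendevicemodel.h>

    /*
     * Hypothetical write handler in a device model that write-protects
     * guest pages via p2m_ioreq_server.
     */
    static void handle_tracked_write(xendevicemodel_handle *dmod,
                                     domid_t domid, uint64_t gfn)
    {
        /* ... emulate the intercepted write ... */

        /*
         * Once the page no longer needs interception, hand it back to
         * ordinary RAM.  From Xen's point of view the P2M type of the
         * GFN changes from p2m_ioreq_server to p2m_ram_rw while the
         * originating IOREQ is still awaiting completion.
         */
        xendevicemodel_set_mem_type(dmod, domid, HVMMEM_ram_rw, gfn, 1);
    }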

Fix it for now by checking whether IOREQ completion is required (which
can be identified by querying the MMIO cache) before trying to finish
the memory access immediately through hvm_copy_..(); otherwise re-enter
hvmemul_do_io(). This change alone addresses the IOREQ completion issue
where the P2M type is modified in the middle of emulation, but it is not
enough for the more general case where the machine state arbitrarily
changes behind our back.
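
In other words, linear_{read,write}() now prefer re-entering the MMIO
path whenever a matching cache entry exists. Condensed to its shape
(the diff below is authoritative), the read side becomes:

    /* Conceptual shape of the fixed linear_read() path. */
    if ( hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false /* don't create */) )
        rc = HVMTRANS_bad_gfn_to_mfn;  /* force the MMIO/IOREQ path */
    else
        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);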

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v3:
* clarified in the commit description that this is still a partial fix
* other minor suggestions
---
 xen/arch/x86/hvm/emulate.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 4879ccb..92a9b82 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -952,7 +952,7 @@ static int hvmemul_phys_mmio_access(
  * cache indexed by linear MMIO address.
  */
 static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
-    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
+    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create)
 {
     unsigned int i;
     struct hvm_mmio_cache *cache;
@@ -966,6 +966,9 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
             return cache;
     }
 
+    if ( !create )
+        return NULL;
+
     i = vio->mmio_cache_count;
     if( i == ARRAY_SIZE(vio->mmio_cache) )
         return NULL;
@@ -1000,7 +1003,7 @@ static int hvmemul_linear_mmio_access(
 {
     struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir);
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir, true);
     unsigned int chunk, buffer_offset = 0;
     paddr_t gpa;
     unsigned long one_rep = 1;
@@ -1089,8 +1092,9 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1104,7 +1108,14 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
         return rc;
     }
 
-    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    /*
+     * If there is an MMIO cache entry for that access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) )
+        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
@@ -1134,8 +1145,9 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1149,7 +1161,14 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
         return rc;
     }
 
-    rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    /*
+     * If there is an MMIO cache entry for that access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_WRITE, false) )
+        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
-- 
2.7.4

