From: Brijesh Singh <brijesh.singh@amd.com>
To: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: <brijesh.singh@amd.com>, <pbonzini@redhat.com>, <joro@8bytes.org>,
	<kvm@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<tglx@linutronix.de>, <mingo@redhat.com>, <hpa@zytor.com>,
	<x86@kernel.org>, <Thomas.Lendacky@amd.com>
Subject: Re: [PATCH] x86: kvm: Avoid guest page table walk when gpa_available is set
Date: Fri, 28 Apr 2017 14:15:39 -0500
Message-ID: <7e27af5e-5978-5b57-f008-58515b218cce@amd.com>
In-Reply-To: <20170426204427.GA7135@potion>

Hi Radim,

>
> This will probably return false negatives when the vcpu->arch.gpa_val
> couldn't be used anyway (possibly after a VM exits), so it is hard to
> draw a conclusion.
>
> I would really like if we had a solution that passed the gpa inside
> function parameters.
>
> (Btw. cr2 can also be a virtual address, so we can call it gpa directly)
>

I've tried the patch below and it seems to work fine. It does not handle the
PIO case; as you rightly pointed out, PIO should trigger #NPF relatively
rarely, and so far in my runs I have not seen a PIO-induced #NPF. If this
sounds like an acceptable approach, I can submit a v2 with these changes and
drop the additional gpa_val field.
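
For context, the patch keeps relying on the vcpu_is_mmio_gpa() helper added
in v1; here is a rough sketch of that helper (reconstructed from v1, so treat
the exact body as an approximation rather than the final code):

static int vcpu_is_mmio_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
			    gpa_t gpa, bool write)
{
	/* Accesses to the APIC access page are always treated as MMIO. */
	if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
		return 1;

	/* Reuse the cached MMIO translation when the GPA still matches. */
	if (vcpu_match_mmio_gpa(vcpu, gpa)) {
		trace_vcpu_match_mmio(gva, gpa, write, true);
		return 1;
	}

	return 0;
}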

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 13c132b..c040e38 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4662,17 +4662,16 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
          */
         if (vcpu->arch.gpa_available &&
             emulator_can_use_gpa(ctxt) &&
-           vcpu_is_mmio_gpa(vcpu, addr, exception->address, write) &&
             (addr & ~PAGE_MASK) == (exception->address & ~PAGE_MASK)) {
                 gpa = exception->address;
-               goto mmio;
+               ret = vcpu_is_mmio_gpa(vcpu, addr, gpa, write);
+       } else {
+               ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
+
+               if (ret < 0)
+                       return X86EMUL_PROPAGATE_FAULT;
         }
  
-       ret = vcpu_mmio_gva_to_gpa(vcpu, addr, &gpa, exception, write);
-
-       if (ret < 0)
-               return X86EMUL_PROPAGATE_FAULT;
-
         /* For APIC access vmexit */
         if (ret)
                 goto mmio;
@@ -7056,11 +7056,11 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
         return r;
  }
  
-static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
+static inline int complete_emulated_io(struct kvm_vcpu *vcpu, unsigned long cr2)
  {
         int r;
         vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-       r = emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
+       r = x86_emulate_instruction(vcpu, cr2, EMULTYPE_NO_DECODE, NULL, 0);
         srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
         if (r != EMULATE_DONE)
                 return 0;
@@ -7071,7 +7071,7 @@ static int complete_emulated_pio(struct kvm_vcpu *vcpu)
  {
         BUG_ON(!vcpu->arch.pio.count);
  
-       return complete_emulated_io(vcpu);
+       return complete_emulated_io(vcpu, 0);
  }
  /*
@@ -7097,6 +7097,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
         struct kvm_run *run = vcpu->run;
         struct kvm_mmio_fragment *frag;
         unsigned len;
+       gpa_t gpa;
  
         BUG_ON(!vcpu->mmio_needed);
  
@@ -7106,6 +7107,12 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
         if (!vcpu->mmio_is_write)
                 memcpy(frag->data, run->mmio.data, len);
  
+       /*
+        * Let's use the GPA from the previous guest page table walk to
+        * avoid yet another walk when completing the MMIO page fault.
+        */
+       gpa = frag->gpa;
+
         if (frag->len <= 8) {
                 /* Switch to the next fragment. */
                 frag++;
@@ -7124,7 +7131,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
                 if (vcpu->mmio_is_write)
                         return 1;
                 vcpu->mmio_read_completed = 1;
-               return complete_emulated_io(vcpu);
+               return complete_emulated_io(vcpu, gpa);
         }
  
         run->exit_reason = KVM_EXIT_MMIO;
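
With this, complete_emulated_mmio() hands the cached frag->gpa back to the
emulator as the cr2 argument instead of 0, so the gpa_available fast path at
the top of emulator_read_write_onepage() can kick in on the completion side
as well, instead of falling back to vcpu_mmio_gva_to_gpa(). Reusing frag->gpa
should be safe because the GPA is already recorded when the fragment is
queued; the existing fragment setup in emulator_read_write_onepage() looks
roughly like this (quoted for reference, not part of the diff):

	/*
	 * Queue the remainder of the MMIO access for userspace completion;
	 * the GPA saved in the fragment is what the completion path reuses.
	 */
	frag = &vcpu->mmio_fragments[vcpu->mmio_nr_fragments++];
	frag->gpa = gpa;
	frag->data = val;
	frag->len = bytes;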
