* [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
@ 2012-03-25 22:05 Michael S. Tsirkin
  2012-03-25 23:25 ` H. Peter Anvin
  2012-03-26  9:21 ` Avi Kivity
  0 siblings, 2 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2012-03-25 22:05 UTC
  To: Joerg Roedel, Avi Kivity, Marcelo Tosatti, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, kvm, linux-kernel

We face a dilemma: I/O-mapped addresses are legacy;
PCI Express bridges, for example, waste 4K of the
64K I/O port space per link, in effect limiting us
to 16 devices using this space.

Memory-mapped addresses are supposed to replace them,
but memory exits are much slower than PIO exits because
of the need for instruction emulation and page walks.

As a solution, this patch adds an MMIO store hypercall
that passes the guest physical address and the data.
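
For illustration, a guest-side store through this hypercall
could look like the sketch below (assuming kvm_hypercall3()
from asm/kvm_para.h, and assuming hc_gpa() on the host side
rebuilds the address from a1 as the low half and a2 as the
high half; the wrapper below is illustrative, not part of
this patch):

	static void kvm_mmio_store_word(u16 val, u64 gpa)
	{
		/* a0 = data, a1/a2 = guest physical address */
		kvm_hypercall3(KVM_HC_MMIO_STORE_WORD, val,
			       (u32)gpa, (u32)(gpa >> 32));
	}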

I have tested that this works but have not benchmarked it yet.

TODOs:
This only implements a 2-byte write, since that is
the minimum required for virtio, but we'll probably
need at least 1-byte reads as well (for the ISR read).
We can support up to 8-byte reads/writes for 64-bit
guests and up to 4 bytes for 32-bit ones - is it
better to limit everyone to 4 bytes for consistency,
or to support the maximum that we can?

Further, a feature bit will need to be exposed to
guests so that they know the hypercall is available.
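
A guest-side check could then be as simple as the sketch
below (KVM_FEATURE_MMIO_STORE is a hypothetical name; no
feature bit is allocated by this patch):

	static bool use_mmio_store_hypercall(void)
	{
		/* paravirt present and the (assumed) bit set? */
		return kvm_para_available() &&
		       kvm_para_has_feature(KVM_FEATURE_MMIO_STORE);
	}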

Need to test performance impact.

Finally, the patch was written against an ancient KVM
version and will need to be rebased.
Posting here for early flames/feedback.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 arch/x86/kvm/svm.c       |    3 +--
 arch/x86/kvm/vmx.c       |    3 +--
 arch/x86/kvm/x86.c       |   14 ++++++++++++++
 include/linux/kvm_para.h |    1 +
 4 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 5fa553b..00460e1 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1833,8 +1833,7 @@ static int vmmcall_interception(struct vcpu_svm *svm)
 {
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	skip_emulated_instruction(&svm->vcpu);
-	kvm_emulate_hypercall(&svm->vcpu);
-	return 1;
+	return kvm_emulate_hypercall(&svm->vcpu);
 }
 
 static unsigned long nested_svm_get_tdp_cr3(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3b4c8d8..0fff33e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4597,8 +4597,7 @@ static int handle_halt(struct kvm_vcpu *vcpu)
 static int handle_vmcall(struct kvm_vcpu *vcpu)
 {
 	skip_emulated_instruction(vcpu);
-	kvm_emulate_hypercall(vcpu);
-	return 1;
+	return kvm_emulate_hypercall(vcpu);
 }
 
 static int handle_invd(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9cbfc06..7bc00ae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	unsigned long nr, a0, a1, a2, a3, ret;
+	gpa_t gpa;
 	int r = 1;
 
 	if (kvm_hv_hypercall_enabled(vcpu->kvm))
@@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_VAPIC_POLL_IRQ:
 		ret = 0;
 		break;
+	case KVM_HC_MMIO_STORE_WORD:
+		gpa = hc_gpa(vcpu, a1, a2);
+		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
+			run->exit_reason = KVM_EXIT_MMIO;
+			run->mmio.phys_addr = gpa;
+			memcpy(run->mmio.data, &a0, 2);
+			run->mmio.len = 2;
+			run->mmio.is_write = 1;
+			r = 0;
+		}
+		goto noret;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
 	}
 out:
 	kvm_register_write(vcpu, VCPU_REGS_RAX, ret);
+noret:
 	++vcpu->stat.hypercalls;
 	return r;
 }
diff --git a/include/linux/kvm_para.h b/include/linux/kvm_para.h
index ff476dd..fa74700 100644
--- a/include/linux/kvm_para.h
+++ b/include/linux/kvm_para.h
@@ -19,6 +19,7 @@
 #define KVM_HC_MMU_OP			2
 #define KVM_HC_FEATURES			3
 #define KVM_HC_PPC_MAP_MAGIC_PAGE	4
+#define KVM_HC_MMIO_STORE_WORD		5
 
 /*
  * hypercalls use architecture specific
-- 
1.7.9.111.gf3fb0


* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-25 22:05 [PATCH RFC dontapply] kvm_para: add mmio word store hypercall Michael S. Tsirkin
@ 2012-03-25 23:25 ` H. Peter Anvin
  2012-03-26  6:31   ` Michael S. Tsirkin
  2012-03-26  9:21 ` Avi Kivity
  1 sibling, 1 reply; 10+ messages in thread
From: H. Peter Anvin @ 2012-03-25 23:25 UTC
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, Avi Kivity, Marcelo Tosatti, Thomas Gleixner,
	Ingo Molnar, kvm, linux-kernel

On 03/25/2012 03:05 PM, Michael S. Tsirkin wrote:
> We face a dilemma: I/O-mapped addresses are legacy;
> PCI Express bridges, for example, waste 4K of the
> 64K I/O port space per link, in effect limiting us
> to 16 devices using this space.

That is *only* if they are physical devices on PCIe links.  For virtual
devices you have no such limitation.  What is the problem again?

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-25 23:25 ` H. Peter Anvin
@ 2012-03-26  6:31   ` Michael S. Tsirkin
  0 siblings, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2012-03-26  6:31 UTC
  To: H. Peter Anvin
  Cc: Joerg Roedel, Avi Kivity, Marcelo Tosatti, Thomas Gleixner,
	Ingo Molnar, kvm, linux-kernel

On Sun, Mar 25, 2012 at 04:25:38PM -0700, H. Peter Anvin wrote:
> On 03/25/2012 03:05 PM, Michael S. Tsirkin wrote:
> > We face a dilemma: I/O-mapped addresses are legacy;
> > PCI Express bridges, for example, waste 4K of the
> > 64K I/O port space per link, in effect limiting us
> > to 16 devices using this space.
> 
> That is *only* if they are physical devices on PCIe links.  For virtual
> devices you have no such limitation.

Why don't we? We emulate PCI Express with exactly the same
limitations for virtual and physical devices.

>  What is the problem again?
> 
> 	-hpa
> 
> -- 
> H. Peter Anvin, Intel Open Source Technology Center
> I work for Intel.  I don't speak on their behalf.


* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-25 22:05 [PATCH RFC dontapply] kvm_para: add mmio word store hypercall Michael S. Tsirkin
  2012-03-25 23:25 ` H. Peter Anvin
@ 2012-03-26  9:21 ` Avi Kivity
  2012-03-26 10:08   ` Michael S. Tsirkin
  1 sibling, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2012-03-26  9:21 UTC
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, Marcelo Tosatti, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, kvm, linux-kernel

On 03/26/2012 12:05 AM, Michael S. Tsirkin wrote:
> We face a dilemma: I/O-mapped addresses are legacy;
> PCI Express bridges, for example, waste 4K of the
> 64K I/O port space per link, in effect limiting us
> to 16 devices using this space.
>
> Memory-mapped addresses are supposed to replace them,
> but memory exits are much slower than PIO exits because
> of the need for instruction emulation and page walks.
>
> As a solution, this patch adds an MMIO store hypercall
> that passes the guest physical address and the data.
>
> I have tested that this works but have not benchmarked it yet.
>
> TODOs:
> This only implements a 2-byte write, since that is
> the minimum required for virtio, but we'll probably
> need at least 1-byte reads as well (for the ISR read).
> We can support up to 8-byte reads/writes for 64-bit
> guests and up to 4 bytes for 32-bit ones - is it
> better to limit everyone to 4 bytes for consistency,
> or to support the maximum that we can?

Let's support the maximum we can.

>  
>  static int handle_invd(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9cbfc06..7bc00ae 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  
>  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	unsigned long nr, a0, a1, a2, a3, ret;
> +	gpa_t gpa;
>  	int r = 1;
>  
>  	if (kvm_hv_hypercall_enabled(vcpu->kvm))
> @@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  	case KVM_HC_VAPIC_POLL_IRQ:
>  		ret = 0;
>  		break;
> +	case KVM_HC_MMIO_STORE_WORD:

HC_MEMORY_WRITE

> +		gpa = hc_gpa(vcpu, a1, a2);
> +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {

What's this && run thing?

> +			run->exit_reason = KVM_EXIT_MMIO;
> +			run->mmio.phys_addr = gpa;
> +			memcpy(run->mmio.data, &a0, 2);
> +			run->mmio.len = 2;
> +			run->mmio.is_write = 1;
> +			r = 0;
> +		}
> +		goto noret;

What if the address is in RAM?

Note the guest can't tell if a piece of memory is direct mapped or
implemented as mmio.


-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26  9:21 ` Avi Kivity
@ 2012-03-26 10:08   ` Michael S. Tsirkin
  2012-03-26 10:16     ` Avi Kivity
  2012-03-26 10:29     ` Gleb Natapov
  0 siblings, 2 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2012-03-26 10:08 UTC
  To: Avi Kivity
  Cc: Joerg Roedel, Marcelo Tosatti, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, kvm, linux-kernel

On Mon, Mar 26, 2012 at 11:21:58AM +0200, Avi Kivity wrote:
> On 03/26/2012 12:05 AM, Michael S. Tsirkin wrote:
> > We face a dilemma: I/O-mapped addresses are legacy;
> > PCI Express bridges, for example, waste 4K of the
> > 64K I/O port space per link, in effect limiting us
> > to 16 devices using this space.
> >
> > Memory-mapped addresses are supposed to replace them,
> > but memory exits are much slower than PIO exits because
> > of the need for instruction emulation and page walks.
> >
> > As a solution, this patch adds an MMIO store hypercall
> > that passes the guest physical address and the data.
> >
> > I have tested that this works but have not benchmarked it yet.
> >
> > TODOs:
> > This only implements a 2-byte write, since that is
> > the minimum required for virtio, but we'll probably
> > need at least 1-byte reads as well (for the ISR read).
> > We can support up to 8-byte reads/writes for 64-bit
> > guests and up to 4 bytes for 32-bit ones - is it
> > better to limit everyone to 4 bytes for consistency,
> > or to support the maximum that we can?
> 
> Let's support the maximum we can.
> 
> >  
> >  static int handle_invd(struct kvm_vcpu *vcpu)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 9cbfc06..7bc00ae 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >  
> >  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >  {
> > +	struct kvm_run *run = vcpu->run;
> >  	unsigned long nr, a0, a1, a2, a3, ret;
> > +	gpa_t gpa;
> >  	int r = 1;
> >  
> >  	if (kvm_hv_hypercall_enabled(vcpu->kvm))
> > @@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >  	case KVM_HC_VAPIC_POLL_IRQ:
> >  		ret = 0;
> >  		break;
> > +	case KVM_HC_MMIO_STORE_WORD:
> 
> HC_MEMORY_WRITE

Do we really want guests to access random memory this way, though?
Even though they can, how about HC_PCI_MEMORY_WRITE to stress the
intended usage?
See also the discussion below.

> > +		gpa = hc_gpa(vcpu, a1, a2);
> > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> 
> What's this && run thing?

I'm not sure - I copied this from another place in the emulation code:
arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)

I assumed there's some way to trigger emulation while the VCPU is not running.
No?

> 
> > +			run->exit_reason = KVM_EXIT_MMIO;
> > +			run->mmio.phys_addr = gpa;
> > +			memcpy(run->mmio.data, &a0, 2);
> > +			run->mmio.len = 2;
> > +			run->mmio.is_write = 1;
> > +                        r = 0;
> > +		}
> > +		goto noret;
> 
> What if the address is in RAM?
> Note the guest can't tell if a piece of memory is direct mapped or
> implemented as mmio.

True but doing hypercalls for memory which can be
mapped directly is bad for performance - it's
the reverse of what we are trying to do here.

The intent is to use this for virtio where we can explicitly let the
guest know whether using a hypercall is safe.
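
If we did want to handle RAM-backed addresses safely, a
host-side fallback (a sketch, not part of this patch) could
try a direct write first - kvm_write_guest() fails for a gpa
without a memslot, so RAM would get a direct store and real
mmio would still take the exit path above:

	case KVM_HC_MMIO_STORE_WORD:
		gpa = hc_gpa(vcpu, a1, a2);
		/* Sketch: store directly if the gpa is RAM. */
		if (!kvm_write_guest(vcpu->kvm, gpa, &a0, 2))
			goto noret;
		/* ...otherwise fall through to the mmio exit... */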

Acceptable?  What do you suggest?

> 
> -- 
> error compiling committee.c: too many arguments to function


* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26 10:08   ` Michael S. Tsirkin
@ 2012-03-26 10:16     ` Avi Kivity
  2012-03-26 11:30       ` Michael S. Tsirkin
  2012-03-26 10:29     ` Gleb Natapov
  1 sibling, 1 reply; 10+ messages in thread
From: Avi Kivity @ 2012-03-26 10:16 UTC
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, Marcelo Tosatti, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, kvm, linux-kernel

On 03/26/2012 12:08 PM, Michael S. Tsirkin wrote:
>
> > > +		gpa = hc_gpa(vcpu, a1, a2);
> > > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> > 
> > What's this && run thing?
>
> I'm not sure - I copied this from another place in the emulation code:
> arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)
>
> I assumed there's some way to trigger emulation while the VCPU is not running.
> No?

Not the way you initialize run above.

>
> > 
> > > +			run->exit_reason = KVM_EXIT_MMIO;
> > > +			run->mmio.phys_addr = gpa;
> > > +			memcpy(run->mmio.data, &a0, 2);
> > > +			run->mmio.len = 2;
> > > +			run->mmio.is_write = 1;
> > > +			r = 0;
> > > +		}
> > > +		goto noret;
> > 
> > What if the address is in RAM?
> > Note the guest can't tell if a piece of memory is direct mapped or
> > implemented as mmio.
>
> True but doing hypercalls for memory which can be
> mapped directly is bad for performance - it's
> the reverse of what we are trying to do here.

It's bad, but the guest can't tell.

Suppose someone implements virtio in hardware and we pass it through to
a guest.  It should continue working, no?

> The intent is to use this for virtio where we can explicitly let the
> guest know whether using a hypercall is safe.
>
> Acceptable?  What do you suggest?

It's iffy.

What's the performance gain from this thing?


-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26 10:08   ` Michael S. Tsirkin
  2012-03-26 10:16     ` Avi Kivity
@ 2012-03-26 10:29     ` Gleb Natapov
  2012-03-26 11:24       ` Michael S. Tsirkin
  1 sibling, 1 reply; 10+ messages in thread
From: Gleb Natapov @ 2012-03-26 10:29 UTC
  To: Michael S. Tsirkin
  Cc: Avi Kivity, Joerg Roedel, Marcelo Tosatti, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, kvm, linux-kernel

On Mon, Mar 26, 2012 at 12:08:29PM +0200, Michael S. Tsirkin wrote:
> On Mon, Mar 26, 2012 at 11:21:58AM +0200, Avi Kivity wrote:
> > On 03/26/2012 12:05 AM, Michael S. Tsirkin wrote:
> > > We face a dilemma: I/O-mapped addresses are legacy;
> > > PCI Express bridges, for example, waste 4K of the
> > > 64K I/O port space per link, in effect limiting us
> > > to 16 devices using this space.
> > >
> > > Memory-mapped addresses are supposed to replace them,
> > > but memory exits are much slower than PIO exits because
> > > of the need for instruction emulation and page walks.
> > >
> > > As a solution, this patch adds an MMIO store hypercall
> > > that passes the guest physical address and the data.
> > >
> > > I have tested that this works but have not benchmarked it yet.
> > >
> > > TODOs:
> > > This only implements a 2-byte write, since that is
> > > the minimum required for virtio, but we'll probably
> > > need at least 1-byte reads as well (for the ISR read).
> > > We can support up to 8-byte reads/writes for 64-bit
> > > guests and up to 4 bytes for 32-bit ones - is it
> > > better to limit everyone to 4 bytes for consistency,
> > > or to support the maximum that we can?
> > 
> > Let's support the maximum we can.
> > 
> > >  
> > >  static int handle_invd(struct kvm_vcpu *vcpu)
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index 9cbfc06..7bc00ae 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> > >  
> > >  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> > >  {
> > > +	struct kvm_run *run = vcpu->run;
> > >  	unsigned long nr, a0, a1, a2, a3, ret;
> > > +	gpa_t gpa;
> > >  	int r = 1;
> > >  
> > >  	if (kvm_hv_hypercall_enabled(vcpu->kvm))
> > > @@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> > >  	case KVM_HC_VAPIC_POLL_IRQ:
> > >  		ret = 0;
> > >  		break;
> > > +	case KVM_HC_MMIO_STORE_WORD:
> > 
> > HC_MEMORY_WRITE
> 
> Do we really want guests to access random memory this way, though?
> Even though they can, how about HC_PCI_MEMORY_WRITE to stress the
> intended usage?
> See also the discussion below.
> 
> > > +		gpa = hc_gpa(vcpu, a1, a2);
> > > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> > 
> > What's this && run thing?
> 
> I'm not sure - I copied this from another place in the emulation code:
> arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)
> 
What git tree is this from? I think that's the one you added.

> I assumed there's some way to trigger emulation while the VCPU is not running.
> No?
> 
> > 
> > > +			run->exit_reason = KVM_EXIT_MMIO;
> > > +			run->mmio.phys_addr = gpa;
> > > +			memcpy(run->mmio.data, &a0, 2);
> > > +			run->mmio.len = 2;
> > > +			run->mmio.is_write = 1;
> > > +			r = 0;
> > > +		}
> > > +		goto noret;
> > 
> > What if the address is in RAM?
> > Note the guest can't tell if a piece of memory is direct mapped or
> > implemented as mmio.
> 
> True but doing hypercalls for memory which can be
> mapped directly is bad for performance - it's
> the reverse of what we are trying to do here.
> 
> The intent is to use this for virtio where we can explicitly let the
> guest know whether using a hypercall is safe.
> 
> Acceptable?  What do you suggest?
> 
> > 
> > -- 
> > error compiling committee.c: too many arguments to function
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

--
			Gleb.


* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26 10:29     ` Gleb Natapov
@ 2012-03-26 11:24       ` Michael S. Tsirkin
  0 siblings, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2012-03-26 11:24 UTC
  To: Gleb Natapov
  Cc: Avi Kivity, Joerg Roedel, Marcelo Tosatti, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, kvm, linux-kernel

On Mon, Mar 26, 2012 at 12:29:21PM +0200, Gleb Natapov wrote:
> > > > +		gpa = hc_gpa(vcpu, a1, a2);
> > > > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> > > 
> > > What's this && run thing?
> > 
> > I'm not sure - I copied this from another place in the emulation code:
> > arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)
> > 
> What git tree is this from? I think that's the one you added.

Ugh. My mistake. So the answer is that, as I said, this is
on an ancient tree that did not have commit
112592da0dc2460c95e8a89d0c5657c6a30286aa.
I'll remove this check - thanks for pointing this out.



* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26 10:16     ` Avi Kivity
@ 2012-03-26 11:30       ` Michael S. Tsirkin
  2012-03-26 12:11         ` Avi Kivity
  0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2012-03-26 11:30 UTC
  To: Avi Kivity
  Cc: Joerg Roedel, Marcelo Tosatti, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, kvm, linux-kernel

On Mon, Mar 26, 2012 at 12:16:14PM +0200, Avi Kivity wrote:
> On 03/26/2012 12:08 PM, Michael S. Tsirkin wrote:
> >
> > > > +		gpa = hc_gpa(vcpu, a1, a2);
> > > > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> > > 
> > > What's this && run thing?
> >
> > I'm not sure - I copied this from another place in the emulation code:
> > arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)
> >
> > I assumed there's some way to trigger emulation while the VCPU is not running.
> > No?
> 
> Not the way you initialize run above.

Thanks for pointing this out, I'll drop the test.

> >
> > > 
> > > > +			run->exit_reason = KVM_EXIT_MMIO;
> > > > +			run->mmio.phys_addr = gpa;
> > > > +			memcpy(run->mmio.data, &a0, 2);
> > > > +			run->mmio.len = 2;
> > > > +			run->mmio.is_write = 1;
> > > > +			r = 0;
> > > > +		}
> > > > +		goto noret;
> > > 
> > > What if the address is in RAM?
> > > Note the guest can't tell if a piece of memory is direct mapped or
> > > implemented as mmio.
> >
> > True but doing hypercalls for memory which can be
> > mapped directly is bad for performance - it's
> > the reverse of what we are trying to do here.
> 
> It's bad, but the guest can't tell.
> 
> Suppose someone implements virtio in hardware and we pass it through to
> a guest.  It should continue working, no?

Why would we want hypercalls then?

As I see it, a virtio device would have a capability
that tells the guest to use hypercalls for access.
An actual PCI device won't expose this capability,
and neither would a device on a host which lacks the
hypercall.
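
Roughly, on the guest side (a sketch - the feature bit and
notify hooks are hypothetical names, nothing here exists yet):

	if (virtio_has_feature(vdev, VIRTIO_F_NOTIFY_HYPERCALL))
		vp_dev->notify = vp_notify_hypercall;	/* kvm hypercall */
	else
		vp_dev->notify = vp_notify_mmio;	/* plain mmio write */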


> > The intent is to use this for virtio where we can explicitly let the
> > guest know whether using a hypercall is safe.
> >
> > Acceptable?  What do you suggest?
> 
> It's iffy.

Question is, do we want a bunch of dead code sitting there
just in case? And what are the chances it'll work correctly
when we need it to?

> What's the performance gain from this thing?

I'll test and post separately.

> 
> -- 
> error compiling committee.c: too many arguments to function


* Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
  2012-03-26 11:30       ` Michael S. Tsirkin
@ 2012-03-26 12:11         ` Avi Kivity
  0 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2012-03-26 12:11 UTC
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, Marcelo Tosatti, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, kvm, linux-kernel

On 03/26/2012 01:30 PM, Michael S. Tsirkin wrote:
> > >
> > > > 
> > > > > +			run->exit_reason = KVM_EXIT_MMIO;
> > > > > +			run->mmio.phys_addr = gpa;
> > > > > +			memcpy(run->mmio.data, &a0, 2);
> > > > > +			run->mmio.len = 2;
> > > > > +			run->mmio.is_write = 1;
> > > > > +			r = 0;
> > > > > +		}
> > > > > +		goto noret;
> > > > 
> > > > What if the address is in RAM?
> > > > Note the guest can't tell if a piece of memory is direct mapped or
> > > > implemented as mmio.
> > >
> > > True but doing hypercalls for memory which can be
> > > mapped directly is bad for performance - it's
> > > the reverse of what we are trying to do here.
> > 
> > It's bad, but the guest can't tell.
> > 
> > Suppose someone implements virtio in hardware and we pass it through to
> > a guest.  It should continue working, no?
>
> Why would we want hypercalls then?
>
> As I see it, a virtio device would have a capability
> that tells the guest to use hypercalls for access.
> An actual PCI device won't expose this capability,
> and neither would a device on a host which lacks the
> hypercall.

Ok, makes sense.

> > > The intent is to use this for virtio where we can explicitly let the
> > > guest know whether using a hypercall is safe.
> > >
> > > Acceptable?  What do you suggest?
> > 
> > It's iffy.
>
> Question is, do we want a bunch of dead code sitting there
> just in case? And what are the chances it'll work correctly
> when we need it to?

If we make it device specific, I guess not.

-- 
error compiling committee.c: too many arguments to function


