* [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
@ 2015-12-13  0:25 Boris Ostrovsky
  2015-12-14 13:58 ` David Vrabel
                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-13  0:25 UTC (permalink / raw)
  To: david.vrabel, konrad.wilk
  Cc: xen-devel, linux-kernel, jbeulich, Boris Ostrovsky, stable

Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
will likely perform the same IPIs as the guest would have.

More importantly, using MMUEXT_INVLPG_MULTI may fail to invalidate the
guest's address on a remote CPU (when, for example, a VCPU from another
guest is running there).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org # 3.14+
---
 arch/x86/xen/mmu.c |    9 ++-------
 1 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 9c479fe..9ed7eed 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2495,14 +2495,9 @@ void __init xen_init_mmu_ops(void)
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;
 
-	/* Optimization - we can use the HVM one but it has no idea which
-	 * VCPUs are descheduled - which means that it will needlessly IPI
-	 * them. Xen knows so let it do the job.
-	 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return;
-	}
+
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.7.1
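
For readers coming to this cold: the override deleted above funnels remote
flushes through a single MMUEXT hypercall instead of guest IPIs. A minimal,
self-contained sketch of that shape follows - the struct, command values,
mask field and hypercall stub are stand-ins invented for illustration, not
the real kernel or Xen interfaces; only the MMUEXT_TLB_FLUSH_MULTI /
MMUEXT_INVLPG_MULTI split comes from the commit message.

/*
 * Minimal model (plain C, not kernel code) of the flush path this patch
 * stops using for PVH guests: one MMUEXT hypercall covering several
 * vCPUs, with an INVLPG-style flush for a single-page range and a full
 * TLB flush otherwise.
 */
#include <stdint.h>
#include <stdio.h>

#define MODEL_TLB_FLUSH_MULTI 1     /* stands in for MMUEXT_TLB_FLUSH_MULTI */
#define MODEL_INVLPG_MULTI    2     /* stands in for MMUEXT_INVLPG_MULTI */
#define MODEL_PAGE_SIZE       4096UL
#define MODEL_FLUSH_ALL       (~0UL)

struct model_mmuext_op {
	unsigned int cmd;
	uint64_t linear_addr;       /* only meaningful for INVLPG_MULTI */
	uint64_t vcpu_mask;         /* vCPUs the guest wants flushed */
};

/* Stand-in for HYPERVISOR_mmuext_op(): just report what would be asked. */
static void model_hypercall(const struct model_mmuext_op *op)
{
	printf("hypercall: cmd=%u addr=%#llx vcpus=%#llx\n", op->cmd,
	       (unsigned long long)op->linear_addr,
	       (unsigned long long)op->vcpu_mask);
}

/* Shape of the per-range decision made by the removed PV override. */
static void flush_others_via_hypercall(uint64_t vcpu_mask,
				       uint64_t start, uint64_t end)
{
	struct model_mmuext_op op = { .vcpu_mask = vcpu_mask };

	if (end != MODEL_FLUSH_ALL && end - start <= MODEL_PAGE_SIZE) {
		op.cmd = MODEL_INVLPG_MULTI;     /* one page: targeted flush */
		op.linear_addr = start;
	} else {
		op.cmd = MODEL_TLB_FLUSH_MULTI;  /* otherwise: flush everything */
	}
	model_hypercall(&op);
}

int main(void)
{
	flush_others_via_hypercall(0x6, 0x400000, 0x400000 + MODEL_PAGE_SIZE);
	flush_others_via_hypercall(0x6, 0, MODEL_FLUSH_ALL);
	return 0;
}

With the early return added by the patch, an auto-translated (PVH) guest
keeps whatever flush_tlb_others HVM guests already use, i.e. it sends its
own IPIs.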


* Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-13  0:25 [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op Boris Ostrovsky
  2015-12-14 13:58 ` David Vrabel
@ 2015-12-14 13:58 ` David Vrabel
  2015-12-14 14:05   ` Boris Ostrovsky
  2015-12-14 14:05   ` [Xen-devel] " Boris Ostrovsky
  2015-12-14 15:27 ` Konrad Rzeszutek Wilk
  2 siblings, 2 replies; 23+ messages in thread
From: David Vrabel @ 2015-12-14 13:58 UTC (permalink / raw)
  To: Boris Ostrovsky, david.vrabel, konrad.wilk
  Cc: xen-devel, linux-kernel, jbeulich

On 13/12/15 00:25, Boris Ostrovsky wrote:
> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
> will likely perform same IPIs as would have the guest.
> 
> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
> guest's address on remote CPU (when, for example, VCPU from another guest
> is running there).
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Cc: stable@vger.kernel.org # 3.14+

Applied to for-linus-4.4, thanks.  But given that PVH is experimental
I've dropped the stable Cc.

David

* Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-14 13:58 ` [Xen-devel] " David Vrabel
  2015-12-14 14:05   ` Boris Ostrovsky
@ 2015-12-14 14:05   ` Boris Ostrovsky
  1 sibling, 0 replies; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-14 14:05 UTC (permalink / raw)
  To: David Vrabel, konrad.wilk; +Cc: xen-devel, linux-kernel, jbeulich

On 12/14/2015 08:58 AM, David Vrabel wrote:
> On 13/12/15 00:25, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform same IPIs as would have the guest.
>>
>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>> guest's address on remote CPU (when, for example, VCPU from another guest
>> is running there).
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>> Cc: stable@vger.kernel.org # 3.14+
> Applied to for-linus-4.4, thanks.  But given that PVH is experimental
> I've dropped the stable Cc.

The reason I want this to go to stable is that I will be removing PVH
guests' access to MMUEXT_TLB_FLUSH_MULTI and MMUEXT_INVLPG_MULTI in the
hypervisor (as part of merging the HVM and PVH hypercall tables), and that
will leave PVH guests essentially unbootable due to a flood of warnings.

-boris

* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-13  0:25 [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op Boris Ostrovsky
  2015-12-14 13:58 ` David Vrabel
  2015-12-14 13:58 ` [Xen-devel] " David Vrabel
@ 2015-12-14 15:27 ` Konrad Rzeszutek Wilk
  2015-12-14 15:35   ` Roger Pau Monné
                     ` (3 more replies)
  2 siblings, 4 replies; 23+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-12-14 15:27 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: 3.14+, linux-kernel, stable, david.vrabel, jbeulich, xen-devel, #

On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
> will likely perform same IPIs as would have the guest.
> 

But if the VCPU is asleep, doing it via the hypervisor will save us waking
up the guest VCPU and sending an IPI - just to do a TLB flush
of that CPU. Which is pointless, as the CPU hadn't been running the
guest in the first place.

>
>More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>guest's address on remote CPU (when, for example, VCPU from another
>guest
>is running there).

Right, so the hypervisor won't even send an IPI there.

But if you do it via the normal guest IPI mechanism (which is opaque
to the hypervisor) you end up scheduling the guest VCPU just to
send a hypervisor callback. And the callback will go to the IPI routine,
which will do a TLB flush. Not necessary.

This is all in the case of oversubscription, of course. In the case where
we are fine on vCPU resources it does not matter.

Perhaps if we had a PV-aware TLB flush it could do this differently?
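
A toy model of the filtering just described, assuming an invented
vCPU-to-pCPU mapping (none of the names or numbers below come from Xen):
the hypervisor can skip descheduled vCPUs outright, something a guest
sending its own IPIs cannot do because it does not know where - or
whether - its vCPUs are running.

/*
 * Toy model: given the set of vCPUs a guest wants flushed, compute which
 * physical CPUs actually need an IPI.  A vCPU that is not running on any
 * pCPU is skipped - its TLB state will be rebuilt when it is scheduled
 * again.  The mapping is made up; this is not hypervisor code.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_VCPUS 8

/* vcpu -> pcpu it is currently running on, or -1 if descheduled. */
static const int vcpu_to_pcpu[NR_VCPUS] = { 0, 2, -1, -1, 5, -1, -1, -1 };

static uint64_t pcpus_to_ipi(uint64_t target_vcpus)
{
	uint64_t pcpu_mask = 0;
	int v;

	for (v = 0; v < NR_VCPUS; v++)
		if ((target_vcpus & (1ULL << v)) && vcpu_to_pcpu[v] >= 0)
			pcpu_mask |= 1ULL << vcpu_to_pcpu[v];
	return pcpu_mask;
}

int main(void)
{
	/* Guest asks for vCPUs 1-4; only vCPU 1 and vCPU 4 are running. */
	printf("pCPUs to IPI: %#llx\n",
	       (unsigned long long)pcpus_to_ipi(0x1e));
	return 0;
}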

> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Cc: stable@vger.kernel.org # 3.14+
> ---
>  arch/x86/xen/mmu.c |    9 ++-------
>  1 files changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 9c479fe..9ed7eed 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2495,14 +2495,9 @@ void __init xen_init_mmu_ops(void)
>  {
>  	x86_init.paging.pagetable_init = xen_pagetable_init;
>  
> -	/* Optimization - we can use the HVM one but it has no idea which
> -	 * VCPUs are descheduled - which means that it will needlessly IPI
> -	 * them. Xen knows so let it do the job.
> -	 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return;
> -	}
> +
>  	pv_mmu_ops = xen_mmu_ops;
>  
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> -- 
> 1.7.1
> 

* Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-14 15:27 ` Konrad Rzeszutek Wilk
  2015-12-14 15:35   ` Roger Pau Monné
@ 2015-12-14 15:35   ` Roger Pau Monné
  2015-12-14 15:58       ` Boris Ostrovsky
  2015-12-14 15:58     ` Boris Ostrovsky
  2015-12-15 14:36   ` Boris Ostrovsky
  2015-12-15 14:36   ` Boris Ostrovsky
  3 siblings, 2 replies; 23+ messages in thread
From: Roger Pau Monné @ 2015-12-14 15:35 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Boris Ostrovsky
  Cc: 3.14+, linux-kernel, stable, david.vrabel, jbeulich, xen-devel, #

On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform same IPIs as would have the guest.
>>
> 
> But if the VCPU is asleep, doing it via the hypervisor will save us waking
> up the guest VCPU, sending an IPI - just to do an TLB flush
> of that CPU. Which is pointless as the CPU hadn't been running the
> guest in the first place.
> 
>>
>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>> guest's address on remote CPU (when, for example, VCPU from another
>> guest
>> is running there).
> 
> Right, so the hypervisor won't even send an IPI there.
> 
> But if you do it via the normal guest IPI mechanism (which are opaque
> to the hypervisor) you and up scheduling the guest VCPU to do
> send an hypervisor callback. And the callback will go the IPI routine
> which will do an TLB flush. Not necessary.
> 
> This is all in case of oversubscription of course. In the case where
> we are fine on vCPU resources it does not matter.
> 
> Perhaps if we have PV aware TLB flush it could do this differently?

Why doesn't HVM/PVH just use the HVMOP_flush_tlbs hypercall?

Roger.


* Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-14 15:35   ` [Xen-devel] " Roger Pau Monné
@ 2015-12-14 15:58       ` Boris Ostrovsky
  2015-12-14 15:58     ` Boris Ostrovsky
  1 sibling, 0 replies; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-14 15:58 UTC (permalink / raw)
  To: Roger Pau Monné, Konrad Rzeszutek Wilk
  Cc: 3.14+, linux-kernel, stable, david.vrabel, jbeulich, xen-devel, #

On 12/14/2015 10:35 AM, Roger Pau Monné wrote:
> El 14/12/15 a les 16.27, Konrad Rzeszutek Wilk ha escrit:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>> will likely perform same IPIs as would have the guest.
>>>
>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>> up the guest VCPU, sending an IPI - just to do an TLB flush
>> of that CPU. Which is pointless as the CPU hadn't been running the
>> guest in the first place.

OK, then I misread the hypervisor code - I didn't realize that
vcpumask_to_pcpumask() takes vcpu_dirty_cpumask into account.
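
A small stand-in for that lookup (not Xen's vcpumask_to_pcpumask(); the
per-vCPU masks are invented): each targeted vCPU contributes the set of
pCPUs that may still hold its TLB state, which can include a pCPU it
recently ran on, not just the one it is running on now.

/*
 * Model of turning a vCPU mask into a pCPU mask via per-vCPU "dirty"
 * masks - the physical CPUs that may still cache a vCPU's translations.
 * Masks and semantics are invented for illustration; this is not Xen code.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_VCPUS 4

/* Per-vCPU: which pCPUs may still cache this vCPU's translations. */
static const uint64_t vcpu_dirty_pcpus[NR_VCPUS] = {
	1ULL << 0,          /* vCPU 0 currently on pCPU 0 */
	1ULL << 3,          /* vCPU 1 last ran on pCPU 3, not flushed yet */
	0,                  /* vCPU 2 descheduled, state already flushed */
	1ULL << 5,          /* vCPU 3 currently on pCPU 5 */
};

static uint64_t vcpumask_to_pcpumask_model(uint64_t target_vcpus)
{
	uint64_t pcpus = 0;
	int v;

	for (v = 0; v < NR_VCPUS; v++)
		if (target_vcpus & (1ULL << v))
			pcpus |= vcpu_dirty_pcpus[v];
	return pcpus;
}

int main(void)
{
	/* A flush naming all four vCPUs only needs to reach three pCPUs. */
	printf("pCPU mask: %#llx\n",
	       (unsigned long long)vcpumask_to_pcpumask_model(0xf));
	return 0;
}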


>>
>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>> guest's address on remote CPU (when, for example, VCPU from another
>>> guest
>>> is running there).
>> Right, so the hypervisor won't even send an IPI there.
>>
>> But if you do it via the normal guest IPI mechanism (which are opaque
>> to the hypervisor) you and up scheduling the guest VCPU to do
>> send an hypervisor callback. And the callback will go the IPI routine
>> which will do an TLB flush. Not necessary.
>>
>> This is all in case of oversubscription of course. In the case where
>> we are fine on vCPU resources it does not matter.
>>
>> Perhaps if we have PV aware TLB flush it could do this differently?
> Why don't HVM/PVH just uses the HVMOP_flush_tlbs hypercall?

It doesn't take any parameters, so it will invalidate TLBs for all VCPUs,
which is more than is being asked for - especially in the case of
MMUEXT_INVLPG_MULTI.

(That's in addition to the fact that it currently doesn't work for PVH 
as it has a test for is_hvm_domain() instead of has_hvm_container_domain()).
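
To make the parenthetical concrete, a sketch of the guest-type
distinction - the enum and predicates below only mirror the idea and are
not Xen's definitions: a PVH domain passes a "has an HVM container" test
but fails an "is (full) HVM" test, so a hypercall guarded by the latter
rejects PVH callers.

/*
 * Sketch of the guest-type distinction Boris points at.  The enum and
 * helper predicates only mirror the idea; they are not Xen's code.
 */
#include <stdbool.h>
#include <stdio.h>

enum guest_type { guest_type_pv, guest_type_pvh, guest_type_hvm };

/* "Full" HVM only. */
static bool is_hvm(enum guest_type t)
{
	return t == guest_type_hvm;
}

/* Anything that runs inside an HVM container, i.e. PVH or HVM. */
static bool has_hvm_container(enum guest_type t)
{
	return t != guest_type_pv;
}

int main(void)
{
	enum guest_type t = guest_type_pvh;

	/* A PVH caller passes the second test but fails the first. */
	printf("PVH: is_hvm=%d has_hvm_container=%d\n",
	       is_hvm(t), has_hvm_container(t));
	return 0;
}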

-boris

* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-14 15:27 ` Konrad Rzeszutek Wilk
  2015-12-14 15:35   ` Roger Pau Monné
  2015-12-14 15:35   ` [Xen-devel] " Roger Pau Monné
@ 2015-12-15 14:36   ` Boris Ostrovsky
  2015-12-15 15:03     ` Jan Beulich
  2015-12-15 15:03     ` Jan Beulich
  2015-12-15 14:36   ` Boris Ostrovsky
  3 siblings, 2 replies; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-15 14:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, jbeulich
  Cc: david.vrabel, xen-devel, linux-kernel, stable, #, 3.14+

On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform same IPIs as would have the guest.
>>
> But if the VCPU is asleep, doing it via the hypervisor will save us waking
> up the guest VCPU, sending an IPI - just to do an TLB flush
> of that CPU. Which is pointless as the CPU hadn't been running the
> guest in the first place.
>
>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>> guest's address on remote CPU (when, for example, VCPU from another
>> guest
>> is running there).
> Right, so the hypervisor won't even send an IPI there.
>
> But if you do it via the normal guest IPI mechanism (which are opaque
> to the hypervisor) you and up scheduling the guest VCPU to do
> send an hypervisor callback. And the callback will go the IPI routine
> which will do an TLB flush. Not necessary.
>
> This is all in case of oversubscription of course. In the case where
> we are fine on vCPU resources it does not matter.


So then should we keep these two operations (MMUEXT_INVLPG_MULTI and 
MMUEXT_TLB_FLUSH_MULTI) available to HVM/PVH guests? If the guest's VCPU 
is not running then its TLBs must have been flushed.

Jan?

-boris


>
> Perhaps if we have PV aware TLB flush it could do this differently?
>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>> Cc: stable@vger.kernel.org # 3.14+
>> ---
>>   arch/x86/xen/mmu.c |    9 ++-------
>>   1 files changed, 2 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index 9c479fe..9ed7eed 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2495,14 +2495,9 @@ void __init xen_init_mmu_ops(void)
>>   {
>>   	x86_init.paging.pagetable_init = xen_pagetable_init;
>>   
>> -	/* Optimization - we can use the HVM one but it has no idea which
>> -	 * VCPUs are descheduled - which means that it will needlessly IPI
>> -	 * them. Xen knows so let it do the job.
>> -	 */
>> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>> -		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
>> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>>   		return;
>> -	}
>> +
>>   	pv_mmu_ops = xen_mmu_ops;
>>   
>>   	memset(dummy_mapping, 0xff, PAGE_SIZE);
>> -- 
>> 1.7.1
>>


* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-15 14:36   ` Boris Ostrovsky
@ 2015-12-15 15:03     ` Jan Beulich
  2015-12-15 15:14       ` Boris Ostrovsky
  2015-12-15 15:14       ` Boris Ostrovsky
  2015-12-15 15:03     ` Jan Beulich
  1 sibling, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-15 15:03 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: #, 3.14+, david.vrabel, xen-devel, linux-kernel, stable

>>> On 15.12.15 at 15:36, <boris.ostrovsky@oracle.com> wrote:
> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>> will likely perform same IPIs as would have the guest.
>>>
>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>> up the guest VCPU, sending an IPI - just to do an TLB flush
>> of that CPU. Which is pointless as the CPU hadn't been running the
>> guest in the first place.
>>
>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>> guest's address on remote CPU (when, for example, VCPU from another
>>> guest
>>> is running there).
>> Right, so the hypervisor won't even send an IPI there.
>>
>> But if you do it via the normal guest IPI mechanism (which are opaque
>> to the hypervisor) you and up scheduling the guest VCPU to do
>> send an hypervisor callback. And the callback will go the IPI routine
>> which will do an TLB flush. Not necessary.
>>
>> This is all in case of oversubscription of course. In the case where
>> we are fine on vCPU resources it does not matter.
> 
> 
> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and 
> MMUEXT_TLB_FLUSH_MULT) available to HVM/PVH guests? If the guest's VCPU 
> is not running then TLBs must have been flushed.

While I followed the discussion, it didn't become clear to me what
use these are for HVM guests, considering the separate address
spaces. As long as they're useless if called, I'd still favor making
them inaccessible.

Jan


* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-15 15:03     ` Jan Beulich
  2015-12-15 15:14       ` Boris Ostrovsky
@ 2015-12-15 15:14       ` Boris Ostrovsky
  2015-12-15 15:24         ` Jan Beulich
  2015-12-15 15:24         ` Jan Beulich
  1 sibling, 2 replies; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-15 15:14 UTC (permalink / raw)
  To: Jan Beulich, Konrad Rzeszutek Wilk
  Cc: #, 3.14+, david.vrabel, xen-devel, linux-kernel, stable

On 12/15/2015 10:03 AM, Jan Beulich wrote:
>>>> On 15.12.15 at 15:36, <boris.ostrovsky@oracle.com> wrote:
>> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>>> will likely perform same IPIs as would have the guest.
>>>>
>>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>>> up the guest VCPU, sending an IPI - just to do an TLB flush
>>> of that CPU. Which is pointless as the CPU hadn't been running the
>>> guest in the first place.
>>>
>>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>>> guest's address on remote CPU (when, for example, VCPU from another
>>>> guest
>>>> is running there).
>>> Right, so the hypervisor won't even send an IPI there.
>>>
>>> But if you do it via the normal guest IPI mechanism (which are opaque
>>> to the hypervisor) you and up scheduling the guest VCPU to do
>>> send an hypervisor callback. And the callback will go the IPI routine
>>> which will do an TLB flush. Not necessary.
>>>
>>> This is all in case of oversubscription of course. In the case where
>>> we are fine on vCPU resources it does not matter.
>>
>> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
>> MMUEXT_TLB_FLUSH_MULT) available to HVM/PVH guests? If the guest's VCPU
>> is not running then TLBs must have been flushed.
> While I followed the discussion, it didn't become clear to me what
> uses these are for HVM guests considering the separate address
> spaces.

To avoid unnecessary IPIs to VCPUs that are not currently scheduled (my 
mistake was that I didn't realize that IPIs to those pCPUs will be 
filtered out by the hypervisor).

> As long as they're useless if called, I'd still favor making
> them inaccessible.


VCPUs that are scheduled will receive the required flush requests.

-boris



* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-15 15:14       ` Boris Ostrovsky
@ 2015-12-15 15:24         ` Jan Beulich
  2015-12-15 15:37           ` Boris Ostrovsky
  2015-12-15 15:37           ` Boris Ostrovsky
  2015-12-15 15:24         ` Jan Beulich
  1 sibling, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-15 15:24 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: #, 3.14+, david.vrabel, xen-devel, linux-kernel, stable

>>> On 15.12.15 at 16:14, <boris.ostrovsky@oracle.com> wrote:
> On 12/15/2015 10:03 AM, Jan Beulich wrote:
>>>>> On 15.12.15 at 15:36, <boris.ostrovsky@oracle.com> wrote:
>>> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>>>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>>>> will likely perform same IPIs as would have the guest.
>>>>>
>>>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>>>> up the guest VCPU, sending an IPI - just to do an TLB flush
>>>> of that CPU. Which is pointless as the CPU hadn't been running the
>>>> guest in the first place.
>>>>
>>>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>>>> guest's address on remote CPU (when, for example, VCPU from another
>>>>> guest
>>>>> is running there).
>>>> Right, so the hypervisor won't even send an IPI there.
>>>>
>>>> But if you do it via the normal guest IPI mechanism (which are opaque
>>>> to the hypervisor) you and up scheduling the guest VCPU to do
>>>> send an hypervisor callback. And the callback will go the IPI routine
>>>> which will do an TLB flush. Not necessary.
>>>>
>>>> This is all in case of oversubscription of course. In the case where
>>>> we are fine on vCPU resources it does not matter.
>>>
>>> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
>>> MMUEXT_TLB_FLUSH_MULT) available to HVM/PVH guests? If the guest's VCPU
>>> is not running then TLBs must have been flushed.
>> While I followed the discussion, it didn't become clear to me what
>> uses these are for HVM guests considering the separate address
>> spaces.
> 
> To avoid unnecessary IPIs to VCPUs that are not currently scheduled (my 
> mistake was that I didn't realize that IPIs to those pCPUs will be 
> filtered out by the hypervisor).
> 
>> As long as they're useless if called, I'd still favor making
>> them inaccessible.
> 
> VCPUs that are scheduled will receive the required flush requests.

I don't follow - an INVLPG done by the hypervisor won't do any
flushing for a HVM guest.

Jan


* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-15 15:24         ` Jan Beulich
  2015-12-15 15:37           ` Boris Ostrovsky
@ 2015-12-15 15:37           ` Boris Ostrovsky
  2015-12-15 16:07             ` Jan Beulich
  1 sibling, 1 reply; 23+ messages in thread
From: Boris Ostrovsky @ 2015-12-15 15:37 UTC (permalink / raw)
  To: Jan Beulich, Konrad Rzeszutek Wilk
  Cc: #, 3.14+, david.vrabel, xen-devel, linux-kernel, stable

On 12/15/2015 10:24 AM, Jan Beulich wrote:
>>>> On 15.12.15 at 16:14, <boris.ostrovsky@oracle.com> wrote:
>> On 12/15/2015 10:03 AM, Jan Beulich wrote:
>>>>>> On 15.12.15 at 15:36, <boris.ostrovsky@oracle.com> wrote:
>>>> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>>>>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>>>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>>>>> will likely perform same IPIs as would have the guest.
>>>>>>
>>>>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>>>>> up the guest VCPU, sending an IPI - just to do an TLB flush
>>>>> of that CPU. Which is pointless as the CPU hadn't been running the
>>>>> guest in the first place.
>>>>>
>>>>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>>>>> guest's address on remote CPU (when, for example, VCPU from another
>>>>>> guest
>>>>>> is running there).
>>>>> Right, so the hypervisor won't even send an IPI there.
>>>>>
>>>>> But if you do it via the normal guest IPI mechanism (which are opaque
>>>>> to the hypervisor) you and up scheduling the guest VCPU to do
>>>>> send an hypervisor callback. And the callback will go the IPI routine
>>>>> which will do an TLB flush. Not necessary.
>>>>>
>>>>> This is all in case of oversubscription of course. In the case where
>>>>> we are fine on vCPU resources it does not matter.
>>>> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
>>>> MMUEXT_TLB_FLUSH_MULT) available to HVM/PVH guests? If the guest's VCPU
>>>> is not running then TLBs must have been flushed.
>>> While I followed the discussion, it didn't become clear to me what
>>> uses these are for HVM guests considering the separate address
>>> spaces.
>> To avoid unnecessary IPIs to VCPUs that are not currently scheduled (my
>> mistake was that I didn't realize that IPIs to those pCPUs will be
>> filtered out by the hypervisor).
>>
>>> As long as they're useless if called, I'd still favor making
>>> them inaccessible.
>> VCPUs that are scheduled will receive the required flush requests.
> I don't follow - an INVLPG done by the hypervisor won't do any
> flushing for a HVM guest.

I thought that this would be done with the VPID of the intended VCPU still 
loaded, and so INVLPG would flush the guest's address?

-boris


* Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
  2015-12-15 15:37           ` Boris Ostrovsky
@ 2015-12-15 16:07             ` Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2015-12-15 16:07 UTC (permalink / raw)
  To: Boris Ostrovsky, Konrad Rzeszutek Wilk
  Cc: #, 3.14+, david.vrabel, xen-devel, linux-kernel, stable

>>> On 15.12.15 at 16:37, <boris.ostrovsky@oracle.com> wrote:
> On 12/15/2015 10:24 AM, Jan Beulich wrote:
>>>>> On 15.12.15 at 16:14, <boris.ostrovsky@oracle.com> wrote:
>>> On 12/15/2015 10:03 AM, Jan Beulich wrote:
>>>>>>> On 15.12.15 at 15:36, <boris.ostrovsky@oracle.com> wrote:
>>>>> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>>>>>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>>>>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>>>>>> will likely perform same IPIs as would have the guest.
>>>>>>>
>>>>>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>>>>>> up the guest VCPU, sending an IPI - just to do an TLB flush
>>>>>> of that CPU. Which is pointless as the CPU hadn't been running the
>>>>>> guest in the first place.
>>>>>>
>>>>>>> More importantly, using MMUEXT_INVLPG_MULTI may not to invalidate the
>>>>>>> guest's address on remote CPU (when, for example, VCPU from another
>>>>>>> guest
>>>>>>> is running there).
>>>>>> Right, so the hypervisor won't even send an IPI there.
>>>>>>
>>>>>> But if you do it via the normal guest IPI mechanism (which are opaque
>>>>>> to the hypervisor) you and up scheduling the guest VCPU to do
>>>>>> send an hypervisor callback. And the callback will go the IPI routine
>>>>>> which will do an TLB flush. Not necessary.
>>>>>>
>>>>>> This is all in case of oversubscription of course. In the case where
>>>>>> we are fine on vCPU resources it does not matter.
>>>>> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
>>>>> MMUEXT_TLB_FLUSH_MULT) available to HVM/PVH guests? If the guest's VCPU
>>>>> is not running then TLBs must have been flushed.
>>>> While I followed the discussion, it didn't become clear to me what
>>>> uses these are for HVM guests considering the separate address
>>>> spaces.
>>> To avoid unnecessary IPIs to VCPUs that are not currently scheduled (my
>>> mistake was that I didn't realize that IPIs to those pCPUs will be
>>> filtered out by the hypervisor).
>>>
>>>> As long as they're useless if called, I'd still favor making
>>>> them inaccessible.
>>> VCPUs that are scheduled will receive the required flush requests.
>> I don't follow - an INVLPG done by the hypervisor won't do any
>> flushing for a HVM guest.
> 
> I thought that this would be done with VPID of intended VCPU still 
> loaded and so INVLPG would flush guest's address?

Again - we're talking about separate address spaces here. INVLPG
can only act on the current one.

Jan
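
A toy model of the point Jan is making, under the simplifying assumption
of a VPID-tagged TLB (the entries, tags and addresses below are invented):
an invlpg-style flush issued from the hypervisor's context only matches
entries tagged with that context, so the guest's mapping is untouched;
naming the guest's tag explicitly (invvpid-style) is what actually
reaches it.

/*
 * Toy model of a VPID-tagged TLB.  Real hardware behaviour is only
 * approximated; nothing here is Xen or kernel code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tlb_entry {
	bool     valid;
	uint16_t vpid;    /* 0 = hypervisor context in this model */
	uint64_t vaddr;
};

static struct tlb_entry tlb[] = {
	{ true, 0, 0xffff800000001000ULL },   /* hypervisor mapping */
	{ true, 7, 0x00007f0000001000ULL },   /* guest mapping, VPID 7 */
};

static uint16_t current_vpid = 0;             /* we are "in the hypervisor" */

/* invlpg-style: only the current context's entries are candidates. */
static void model_invlpg(uint64_t vaddr)
{
	for (unsigned int i = 0; i < sizeof(tlb) / sizeof(tlb[0]); i++)
		if (tlb[i].valid && tlb[i].vpid == current_vpid &&
		    tlb[i].vaddr == vaddr)
			tlb[i].valid = false;
}

/* invvpid-style: the target context is named explicitly. */
static void model_invvpid(uint16_t vpid, uint64_t vaddr)
{
	for (unsigned int i = 0; i < sizeof(tlb) / sizeof(tlb[0]); i++)
		if (tlb[i].valid && tlb[i].vpid == vpid &&
		    tlb[i].vaddr == vaddr)
			tlb[i].valid = false;
}

int main(void)
{
	model_invlpg(0x00007f0000001000ULL);      /* wrong tag: no effect */
	printf("guest entry valid after invlpg:  %d\n", tlb[1].valid);
	model_invvpid(7, 0x00007f0000001000ULL);  /* explicit tag: flushed */
	printf("guest entry valid after invvpid: %d\n", tlb[1].valid);
	return 0;
}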

