From: James Morse <james.morse@arm.com>
To: Julien Grall <julien.grall@arm.com>
Cc: linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	julien.thierry@arm.com, suzuki.poulose@arm.com,
	catalin.marinas@arm.com, will.deacon@arm.com
Subject: Re: [RFC v2 14/14] kvm/arm: Align the VMID allocation with the arm64 ASID one
Date: Wed, 3 Jul 2019 18:36:09 +0100	[thread overview]
Message-ID: <39d47f54-459f-ce07-91c0-0158896a6783@arm.com> (raw)
In-Reply-To: <20190620130608.17230-15-julien.grall@arm.com>

Hi Julien,

On 20/06/2019 14:06, Julien Grall wrote:
> At the moment, the VMID algorithm will send an SGI to all the CPUs to
> force an exit and then broadcast a full TLB flush and I-Cache
> invalidation.
> 
> This patch re-uses the new ASID allocator. The benefits are:
>     - CPUs are not forced to exit at roll-over. Instead, the VMID will be
>     marked reserved and the context will be flushed at the next exit. This
>     will reduce IPI traffic.
>     - Context invalidation is now per-CPU rather than broadcast.

+ Catalin has a model of the asid-allocator.


> With the new algorithm, the code is adapted as follows:
>     - The function __kvm_flush_vm_context() has been renamed to
>     __kvm_flush_cpu_vmid_context() and now only flushes the current CPU's
>     context.
>     - The call to update_vttbr() will be done with preemption disabled,
>     as the new algorithm requires storing information per CPU.
>     - The TLBs associated with EL1 will be flushed when booting a CPU, to
>     deal with stale information. This was previously done on the
>     allocation of the first VMID of a new generation.
> 
> The measurement was made on a Seattle-based SoC (8 CPUs), with the
> number of VMIDs limited to 4 bits. The test ran 40 guests with 2 vCPUs
> concurrently; each guest executed hackbench 5 times before exiting.

> diff --git a/arch/arm64/include/asm/kvm_asid.h b/arch/arm64/include/asm/kvm_asid.h
> new file mode 100644
> index 000000000000..8b586e43c094
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_asid.h
> @@ -0,0 +1,8 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ARM64_KVM_ASID_H__
> +#define __ARM64_KVM_ASID_H__
> +
> +#include <asm/asid.h>
> +
> +#endif /* __ARM64_KVM_ASID_H__ */
> +
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index ff73f5462aca..06821f548c0f 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -62,7 +62,7 @@ extern char __kvm_hyp_init_end[];
>  
>  extern char __kvm_hyp_vector[];
>  
> -extern void __kvm_flush_vm_context(void);
> +extern void __kvm_flush_cpu_vmid_context(void);
>  extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);

As we've got a __kvm_tlb_flush_local_vmid(), would __kvm_tlb_flush_local_all() fit in
better? (This mirrors local_flush_tlb_all() too)


>  extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
>  extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 4bcd9c1291d5..7ef45b7da4eb 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -68,8 +68,8 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
>  void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
>  
>  struct kvm_vmid {
> -	/* The VMID generation used for the virt. memory system */
> -	u64    vmid_gen;
> +	/* The ASID used for the ASID allocator */
> +	atomic64_t asid;

Can we call this 'id', as in mm_context_t? (Calling it 'asid' here is confusing.)

>  	u32    vmid;

Can we filter out the generation bits in kvm_get_vttbr(), the same way the arch
code does in cpu_do_switch_mm()?

I think this saves writing back a cached pre-filtered version every time, or
needing special hooks to know when the value changes (so this variable could be
removed).
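
Something like this sketch, perhaps (kvm_phys_to_vttbr(), GENMASK() and
VTTBR_VMID_SHIFT are the existing arm64 helpers; the 'id' field follows the
rename suggested above, and the CnP handling is elided, so treat the details
as assumptions rather than code from this series):

	static inline u64 kvm_get_vttbr(struct kvm *kvm)
	{
		u64 baddr = kvm->arch.pgd_phys;
		/* Strip the allocator's generation bits on each read,
		 * as cpu_do_switch_mm() does for the ASID. */
		u64 vmid = atomic64_read(&kvm->arch.vmid.id) &
			   GENMASK(kvm_get_vmid_bits() - 1, 0);

		return kvm_phys_to_vttbr(baddr) | (vmid << VTTBR_VMID_SHIFT);
	}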


>  };


> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index bd5c55916d0d..e906278a67cd 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -449,35 +447,17 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)

(git made a mess of the diff here... squashed to just the new code:)


> +static void vmid_update_ctxt(void *ctxt)
>  {
> +	struct kvm_vmid *vmid = ctxt;
> +	u64 asid = atomic64_read(&vmid->asid);

> +	vmid->vmid = asid & ((1ULL << kvm_get_vmid_bits()) - 1);

I don't like having to poke this through the asid-allocator as a kvm-specific hack. Can we
do it in kvm_get_vttbr()?


>  }

> @@ -487,48 +467,11 @@ static bool need_new_vmid_gen(struct kvm_vmid *vmid)

(git made a mess of the diff here... squashed to just the new code:)

>  static void update_vmid(struct kvm_vmid *vmid)
>  {

> +	int cpu = get_cpu();
>  
> +	asid_check_context(&vmid_info, &vmid->asid, cpu, vmid);
>  
> +	put_cpu();

If we're calling update_vmid() in a pre-emptible context, aren't we already doomed?

Could we use smp_processor_id() instead?
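
A sketch of what that would look like, assuming every caller already runs
with preemption disabled:

	static void update_vmid(struct kvm_vmid *vmid)
	{
		asid_check_context(&vmid_info, &vmid->asid,
				   smp_processor_id(), vmid);
	}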


>  }


> @@ -1322,6 +1271,8 @@ static void cpu_init_hyp_mode(void *dummy)
>  
>  	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
>  	__cpu_init_stage2();


> +	kvm_call_hyp(__kvm_flush_cpu_vmid_context);

I think we need to do this for VHE systems too: cpu_hyp_reinit() only calls
cpu_init_hyp_mode() if !is_kernel_in_hyp_mode(), so on VHE this flush would
never run.
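
One option (just a sketch, with the rest of cpu_hyp_reinit()'s body elided)
would be to hoist the call out of cpu_init_hyp_mode() so both paths get it:

	static void cpu_hyp_reinit(void)
	{
		cpu_hyp_reset();

		if (is_kernel_in_hyp_mode())
			kvm_timer_init_vhe();
		else
			cpu_init_hyp_mode(NULL);

		/* Flush stale EL1 TLB entries on VHE and !VHE alike. */
		kvm_call_hyp(__kvm_flush_cpu_vmid_context);
	}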


>  }
>  
>  static void cpu_hyp_reset(void)
> @@ -1429,6 +1380,17 @@ static inline void hyp_cpu_pm_exit(void)
>  
>  static int init_common_resources(void)
>  {
> +	/*
> +	 * Initialize the ASID allocator telling it to allocate a single
> +	 * VMID per VM.
> +	 */
> +	if (asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), 1,
> +				vmid_flush_cpu_ctxt, vmid_update_ctxt))
> +		panic("Failed to initialize VMID allocator\n");

Couldn't we return an error instead? The asid allocator is needed for
user-space, so it's pointless to keep running if it fails there. The same
isn't true here. (And it would make it easier to debug what went wrong!)
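
Something like this fragment, say (kvm_err() is the existing KVM log helper;
'ret' would need declaring at the top of the function):

	ret = asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), 1,
				  vmid_flush_cpu_ctxt, vmid_update_ctxt);
	if (ret) {
		kvm_err("Failed to initialize VMID allocator (%d)\n", ret);
		return ret;
	}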


> +	vmid_info.active = &active_vmids;
> +	vmid_info.reserved = &reserved_vmids;
> +
>  	kvm_set_ipa_limit();


Thanks,

James
