* [PATCH RFC v2 1/3] KVM: Guard mmu_notifier specific code with CONFIG_MMU_NOTIFIER
From: Marc Zyngier @ 2012-02-10 22:22 UTC
  To: c.dall; +Cc: kvm, android-virt

In order to avoid compilation failures when CONFIG_MMU_NOTIFIER is not
enabled (for example when KVM is not compiled in), guard the
mmu_notifier-specific sections with both CONFIG_MMU_NOTIFIER and
KVM_ARCH_WANT_MMU_NOTIFIER, as is done in the rest of the KVM code.
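
For illustration, a minimal sketch of the failure this avoids, assuming
an arch header that defines KVM_ARCH_WANT_MMU_NOTIFIER while the kernel
is built with CONFIG_MMU_NOTIFIER=n:

/* <asm/kvm_host.h> (arch header) */
#define KVM_ARCH_WANT_MMU_NOTIFIER

/* include/linux/kvm_host.h, before this patch */
struct kvm {
	/* ... */
#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
	struct mmu_notifier mmu_notifier;	/* incomplete type when
						 * CONFIG_MMU_NOTIFIER=n,
						 * so the build breaks here */
#endif
};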

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/linux/kvm_host.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 900c763..a596b47 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -287,7 +287,7 @@ struct kvm {
 	struct hlist_head irq_ack_notifier_list;
 #endif
 
-#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 	struct mmu_notifier mmu_notifier;
 	unsigned long mmu_notifier_seq;
 	long mmu_notifier_count;
@@ -695,7 +695,7 @@ struct kvm_stats_debugfs_item {
 extern struct kvm_stats_debugfs_item debugfs_entries[];
 extern struct dentry *kvm_debugfs_dir;
 
-#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+#if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_seq)
 {
 	if (unlikely(vcpu->kvm->mmu_notifier_count))
-- 
1.7.3.4



* [PATCH RFC v2 2/3] ARM: KVM: mark the end of the HYP mode code with __kvm_hyp_code_end
From: Marc Zyngier @ 2012-02-10 22:22 UTC
  To: c.dall; +Cc: kvm, android-virt

Use __kvm_hyp_code_end to mark the end of the main HYP code instead of
__kvm_vcpu_run_end. It's a bit cleaner as we're about to add more code
to that section.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_asm.h |    3 ++-
 arch/arm/kvm/arm.c             |    4 ++--
 arch/arm/kvm/interrupts.S      |    8 +++++---
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 89c318ea..5ee7bd3 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -45,7 +45,8 @@ extern char __kvm_hyp_vector[];
 extern char __kvm_hyp_vector_end[];
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
-extern char __kvm_vcpu_run_end[];
+
+extern char __kvm_hyp_code_end[];
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 14ccc4d..602e087 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -636,9 +636,9 @@ static int init_hyp_mode(void)
 	 * Map the world-switch code
 	 */
 	err = create_hyp_mappings(kvm_hyp_pgd,
-				  __kvm_vcpu_run, __kvm_vcpu_run_end);
+				  __kvm_vcpu_run, __kvm_hyp_code_end);
 	if (err) {
-		kvm_err(err, "Cannot map world-switch code");
+		kvm_err(err, "Cannot map hyp mode code");
 		goto out_free_mappings;
 	}
 
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index fbc26ca..8b7e5e9 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -351,11 +351,13 @@ return_to_ioctl:
 THUMB(	orr	lr, lr, #1)
 	mov	pc, lr
 
-	.ltorg
 
-__kvm_vcpu_run_end:
-	.globl __kvm_vcpu_run_end
+	
+	.ltorg
 
+__kvm_hyp_code_end:
+	.globl	__kvm_hyp_code_end
+	
 
 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 @  Hypervisor exception vector and handlers
-- 
1.7.3.4



* [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Marc Zyngier @ 2012-02-10 22:22 UTC
  To: c.dall; +Cc: kvm, android-virt

Add the necessary infrastructure to handle MMU notifiers on KVM/ARM.
As we don't have shadow page tables, the implementation is actually very
simple. The only supported operation is kvm_unmap_hva(), where we remove
the stage-2 mapping of the IPA corresponding to that HVA. All other
hooks are NOPs.
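
For context, a simplified sketch of how the generic side calls into
this hook (modelled on virt/kvm/kvm_main.c of this era; srcu locking
and other details elided):

static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
					     struct mm_struct *mm,
					     unsigned long address)
{
	struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);
	int need_tlb_flush;

	spin_lock(&kvm->mmu_lock);
	kvm->mmu_notifier_seq++;
	need_tlb_flush = kvm_unmap_hva(kvm, address);	/* the arch hook */
	if (need_tlb_flush)
		kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);
}

Note that the ARM implementation below returns 0 and performs its own
stage-2 flush via __kvm_tlb_flush_vmid() instead.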

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
The aging ops are left unused for the moment, until I actually understand what
they are used for and whether they apply to the ARM architecture.

From v1:
- Fixed the brown paper bag bug of invalidating the hva instead of the ipa

 arch/arm/include/asm/kvm_asm.h  |    2 +
 arch/arm/include/asm/kvm_host.h |   19 ++++++++++++++++
 arch/arm/kvm/Kconfig            |    1 +
 arch/arm/kvm/interrupts.S       |   18 ++++++++++++++++
 arch/arm/kvm/mmu.c              |   44 ++++++++++++++++++++++++++++++++++++--
 5 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 5ee7bd3..18be9bb 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -36,6 +36,7 @@ asm(".equ SMCHYP_HVBAR_W, 0xfffffff0");
 #endif /* __ASSEMBLY__ */
 
 #ifndef __ASSEMBLY__
+struct kvm;
 struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
@@ -46,6 +47,7 @@ extern char __kvm_hyp_vector_end[];
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
+extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern char __kvm_hyp_code_end[];
 #endif
 
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 555a6f1..1c0c68b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -109,4 +109,23 @@ struct kvm_vm_stat {
 struct kvm_vcpu_stat {
 };
 
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+struct kvm;
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
+
+/* We do not have shadow page tables, hence the empty hooks */
+static inline int kvm_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	return 0;
+}
+
+static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	return 0;
+}
+
+static inline void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+{
+}
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index ccabbb3..7ce9173 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM_ARM_HOST
 	depends on KVM
 	depends on MMU
 	depends on CPU_V7 || ARM_VIRT_EXT
+	select	MMU_NOTIFIER
 	---help---
 	  Provides host support for ARM processors.
 
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 8b7e5e9..8822fb3 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -351,7 +351,25 @@ return_to_ioctl:
 THUMB(	orr	lr, lr, #1)
 	mov	pc, lr
 
+ENTRY(__kvm_tlb_flush_vmid)
+	hvc	#0			@ Switch to Hyp mode
+	push	{r2, r3}
 
+	ldrd	r2, r3, [r0, #KVM_VTTBR]
+	mcrr	p15, 6, r2, r3, c2	@ Write VTTBR
+	isb
+	mcr     p15, 0, r0, c8, c7, 0	@ TLBIALL
+	dsb
+	isb
+	mov	r2, #0
+	mov	r3, #0
+	mcrr	p15, 6, r2, r3, c2	@ Back to VMID #0
+	isb
+
+	pop	{r2, r3}
+	hvc	#0			@ Back to SVC
+	mov	pc, lr
+ENDPROC(__kvm_tlb_flush_vmid)
 	
 	.ltorg
 
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index baeb8a1..3f8d83b 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -245,12 +245,12 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 	kvm->arch.pgd = NULL;
 }
 
-static int __user_mem_abort(struct kvm *kvm, phys_addr_t addr, pfn_t pfn)
+static int stage2_set_pte(struct kvm *kvm, phys_addr_t addr, pte_t new_pte)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
-	pte_t *pte, new_pte;
+	pte_t *pte;
 
 	/* Create 2nd stage page table mapping - Level 1 */
 	pgd = kvm->arch.pgd + pgd_index(addr);
@@ -279,12 +279,18 @@ static int __user_mem_abort(struct kvm *kvm, phys_addr_t addr, pfn_t pfn)
 		pte = pte_offset_kernel(pmd, addr);
 
 	/* Create 2nd stage page table mapping - Level 3 */
-	new_pte = pfn_pte(pfn, PAGE_KVM_GUEST);
 	set_pte_ext(pte, new_pte, 0);
 
 	return 0;
 }
 
+static int __user_mem_abort(struct kvm *kvm, phys_addr_t addr, pfn_t pfn)
+{
+	pte_t new_pte = pfn_pte(pfn, PAGE_KVM_GUEST);
+
+	return stage2_set_pte(kvm, addr, new_pte);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  gfn_t gfn, struct kvm_memory_slot *memslot)
 {
@@ -510,3 +516,35 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	return user_mem_abort(vcpu, fault_ipa, gfn, memslot);
 }
+
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
+{
+	static const pte_t null_pte;
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int needs_stage2_flush = 0;
+
+	slots = kvm_memslots(kvm);
+
+	/* we only care about the pages that the guest sees */
+	kvm_for_each_memslot(memslot, slots) {
+		unsigned long start = memslot->userspace_addr;
+		unsigned long end;
+
+		end = start + (memslot->npages << PAGE_SHIFT);
+		if (hva >= start && hva < end) {
+			gpa_t gpa_offset = hva - start;
+			gpa_t gpa = (memslot->base_gfn << PAGE_SHIFT) + gpa_offset;
+
+			if (stage2_set_pte(kvm, gpa, null_pte))
+				continue; /* Something bad happened, try to carry on */
+
+			needs_stage2_flush = 1;
+		}
+	}
+
+	if (needs_stage2_flush)
+		__kvm_tlb_flush_vmid(kvm);
+
+	return 0;
+}
-- 
1.7.3.4



* Re: [PATCH RFC v2 2/3] ARM: KVM: mark the end of the HYP mode code with __kvm_hyp_code_end
From: Christoffer Dall @ 2012-02-10 22:32 UTC
  To: Marc Zyngier; +Cc: kvm, android-virt

On Fri, Feb 10, 2012 at 2:22 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Use __kvm_hyp_code_end to mark the end of the main HYP code instead of
> __kvm_vcpu_run_end. It's a bit cleaner as we're about to add more code
> to that section.

this is good, but should we not rename the beginning of the section as
well (something like __kvm_hyp_code_start)?

Then we can include your new snippet and also the
__kvm_flush_vm_context in that section and perform a single mapping in
the init code - much nicer.
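
A sketch of how that could look (__kvm_hyp_code_start is hypothetical
here, mirroring the end marker introduced by this patch):

/* arch/arm/include/asm/kvm_asm.h */
extern char __kvm_hyp_code_start[];	/* hypothetical start marker */
extern char __kvm_hyp_code_end[];

/* arch/arm/kvm/arm.c, init_hyp_mode() */
err = create_hyp_mappings(kvm_hyp_pgd,
			  __kvm_hyp_code_start, __kvm_hyp_code_end);
if (err) {
	kvm_err(err, "Cannot map hyp mode code");
	goto out_free_mappings;
}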



* Re: [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Christoffer Dall @ 2012-02-10 22:49 UTC
  To: Marc Zyngier; +Cc: kvm, android-virt

On Fri, Feb 10, 2012 at 2:22 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Add the necessary infrastructure to handle MMU notifiers on KVM/ARM.
> As we don't have shadow page tables, the implementation is actually very
> simple. The only supported operation is kvm_unmap_hva(), where we remove
> the HVA from the 2nd stage translation. All other hooks are NOPs.
> [...]
>
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index 8b7e5e9..8822fb3 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -351,7 +351,25 @@ return_to_ioctl:
>  THUMB( orr     lr, lr, #1)
>        mov     pc, lr
>

I would prefer moving this to the top of the file, before all the
macros, so that the connection between the world-switch to the guest
and the return path stays clear (see the v6 staging branch where I
already added a function).

What I want to avoid is this

__switch:
   do some switch stuff
__return:
   do some return stuff

cache_fun_1:
   foo

cache_fun_2:
   foo

cache_fun_3:
   foo

cache_fun_4:
   foo

vector:
   b __return


> [...]
> +int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
> +{
> +       static const pte_t null_pte;
> +       struct kvm_memslots *slots;
> +       struct kvm_memory_slot *memslot;
> +       int needs_stage2_flush = 0;
> +
> +       slots = kvm_memslots(kvm);
> +
> +       /* we only care about the pages that the guest sees */
> +       kvm_for_each_memslot(memslot, slots) {

what a pain that you have to do this.

should we not create an arch-independent hva_to_gfn function, that
might return a bad_gfn if that hva is not in a memslot?

that would simplify this code quite a bit...
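
Something like this, say (a sketch - the name hva_to_gfn and the
bad_gfn sentinel are made up, and an hva aliased by several memslots
would still need the loop):

static gfn_t hva_to_gfn(struct kvm *kvm, unsigned long hva)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	struct kvm_memory_slot *memslot;

	kvm_for_each_memslot(memslot, slots) {
		unsigned long start = memslot->userspace_addr;
		unsigned long end = start + (memslot->npages << PAGE_SHIFT);

		if (hva >= start && hva < end)
			return memslot->base_gfn +
			       ((hva - start) >> PAGE_SHIFT);
	}

	return bad_gfn;	/* assumed sentinel: hva not in any memslot */
}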

> +               unsigned long start = memslot->userspace_addr;
> +               unsigned long end;
> +
> +               end = start + (memslot->npages << PAGE_SHIFT);
> +               if (hva >= start && hva < end) {
> +                       gpa_t gpa_offset = hva - start;
> +                       gpa_t gpa = (memslot->base_gfn << PAGE_SHIFT) + gpa_offset;
> +
> +                       if (stage2_set_pte(kvm, gpa, null_pte))
> +                               continue; /* Something bad happened, try to carry on */

the something bad that happened here was that we're out of memory to
create page tables, which really shouldn't happen: even if QEMU
accessed some of these pages without regard for KVM and these pages
are being evicted, we should certainly not start allocating page
tables just to put a blank pte in there...

And, since we're here because of memory pressure in the first place,
continuing to hammer on the system is probably a bad idea...

consider checking for null_pte in stage2_set_pte; then an error should
not happen here, and if it does, it's more like a BUG();
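
As a sketch of that check (assuming pte_none() is the right test for
the null pte; placed wherever stage2_set_pte would otherwise allocate
a missing table):

	/* Clearing an address that was never mapped is a no-op;
	 * don't allocate page tables under memory pressure just to
	 * write an empty entry into them. */
	if (pmd_none(*pmd) && pte_none(new_pte))
		return 0;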



* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Antonios Motakis @ 2012-02-11 15:00 UTC
  To: Marc Zyngier; +Cc: c.dall, android-virt, kvm

On 02/10/2012 11:22 PM, Marc Zyngier wrote:
> +ENTRY(__kvm_tlb_flush_vmid)
> +	hvc	#0			@ Switch to Hyp mode
> +	push	{r2, r3}
>
> +	ldrd	r2, r3, [r0, #KVM_VTTBR]
> +	mcrr	p15, 6, r2, r3, c2	@ Write VTTBR
> +	isb
> +	mcr     p15, 0, r0, c8, c7, 0	@ TLBIALL
> +	dsb
> +	isb
> +	mov	r2, #0
> +	mov	r3, #0
> +	mcrr	p15, 6, r2, r3, c2	@ Back to VMID #0
> +	isb
> +
> +	pop	{r2, r3}
> +	hvc	#0			@ Back to SVC
> +	mov	pc, lr
> +ENDPROC(__kvm_tlb_flush_vmid)

With the last VMID implementation, you could get the equivalent effect
of a per-VMID flush by just getting a new VMID for the current VM. So
you could do a (kvm->arch.vmid = 0) to force a new VMID when the guest
reruns, and save the overhead of that flush (you would do a complete
flush once every 255 times instead of a small one every single time).

Best regards,
Antonios


* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Christoffer Dall @ 2012-02-11 17:35 UTC
  To: Antonios Motakis; +Cc: Marc Zyngier, android-virt, kvm

On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
<a.motakis@virtualopensystems.com> wrote:
> On 02/10/2012 11:22 PM, Marc Zyngier wrote:
>> [...]
>
> With the last VMID implementation, you could get the equivalent effect of a
> per-VMID flush, by just getting a new VMID for the current VM. So you could
> do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
> save the overhead of that flush (you will do a complete flush every 255
> times instead of a small one every single time).
>

to do this you would need to send an IPI if the guest is currently
executing on another CPU and make it exit the guest, so that the VMID
assignment will run before the guest potentially accesses that TLB
entry that points to the page that was just reclaimed - which I am not
sure will be better than this solution.


* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Antonios Motakis @ 2012-02-11 18:33 UTC
  To: Christoffer Dall; +Cc: Marc Zyngier, android-virt, kvm

On 02/11/2012 06:35 PM, Christoffer Dall wrote:
> On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
> <a.motakis@virtualopensystems.com>  wrote:
>> On 02/10/2012 11:22 PM, Marc Zyngier wrote:
>>> +ENTRY(__kvm_tlb_flush_vmid)
>>> +       hvc     #0                      @ Switch to Hyp mode
>>> +       push    {r2, r3}
>>>
>>> +       ldrd    r2, r3, [r0, #KVM_VTTBR]
>>> +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
>>> +       isb
>>> +       mcr     p15, 0, r0, c8, c7, 0   @ TBLIALL
>>> +       dsb
>>> +       isb
>>> +       mov     r2, #0
>>> +       mov     r3, #0
>>> +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
>>> +       isb
>>> +
>>> +       pop     {r2, r3}
>>> +       hvc     #0                      @ Back to SVC
>>> +       mov     pc, lr
>>> +ENDPROC(__kvm_tlb_flush_vmid)
>>
>> With the last VMID implementation, you could get the equivalent effect of a
>> per-VMID flush, by just getting a new VMID for the current VM. So you could
>> do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
>> save the overhead of that flush (you will do a complete flush every 255
>> times instead of a small one every single time).
>>
> to do this you would need to send an IPI if the guest is currently
> executing on another CPU and make it exit the guest, so that the VMID
> assignment will run before the guest potentially accesses that TLB
> entry that points to the page that was just reclaimed - which I am not
> sure will be better than this solution.
Don't you have to do this anyway? You'd want the flush to be effective 
on all CPUs before proceeding.


* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Christoffer Dall @ 2012-02-12  1:12 UTC
  To: Antonios Motakis; +Cc: Marc Zyngier, android-virt, kvm

On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
<a.motakis@virtualopensystems.com> wrote:
> On 02/11/2012 06:35 PM, Christoffer Dall wrote:
>>
>> On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
>> <a.motakis@virtualopensystems.com>  wrote:
>>>
>>> On 02/10/2012 11:22 PM, Marc Zyngier wrote:
>>>> [...]
>>>
>>> With the last VMID implementation, you could get the equivalent effect of
>>> a
>>> per-VMID flush, by just getting a new VMID for the current VM. So you
>>> could
>>> do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
>>> save the overhead of that flush (you will do a complete flush every 255
>>> times instead of a small one every single time).
>>>
>> to do this you would need to send an IPI if the guest is currently
>> executing on another CPU and make it exit the guest, so that the VMID
>> assignment will run before the guest potentially accesses that TLB
>> entry that points to the page that was just reclaimed - which I am not
>> sure will be better than this solution.
>
> Don't you have to do this anyway? You'd want the flush to be effective on
> all CPUs before proceeding.

hmm yeah, actually you do need this. Unless the -IS version of the
flush instruction covers all relevant cores in this case. Marc, I
don't think that the processor clearing out the page table entry will
necessarily belong to the same inner-shareable domain as the processor
potentially executing the VM, so therefore the -IS flushing version
would not be sufficient and we actually have to go and send an IPI.

So, it sounds to me like:
 1) we have to signal all vcpus using the VMID for which we are
clearing page table entries
 2) make sure that they, either
    2a) flush their TLBs
    2b) get a new VMID

seems like 2b might be slightly faster, but leaves more entries in the
TLB that are then unused - not sure if that's a bad thing considering
the replacement policy. Perhaps 2a is cleaner...

Thoughts anyone?
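
For 2a, the signalling side may already exist in generic code:
kvm_flush_remote_tlbs() IPIs vcpus out of guest mode and raises
KVM_REQ_TLB_FLUSH. A sketch, assuming ARM then services the request on
re-entry:

/* in kvm_unmap_hva(), instead of calling __kvm_tlb_flush_vmid()
 * directly: */
if (needs_stage2_flush)
	kvm_flush_remote_tlbs(kvm);

/* and on the vcpu run loop, before entering the guest: */
if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
	__kvm_tlb_flush_vmid(vcpu->kvm);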


* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Alexander Graf @ 2012-02-12  8:20 UTC
  To: Christoffer Dall; +Cc: Antonios Motakis, kvm, android-virt



On 12.02.2012, at 02:12, Christoffer Dall <c.dall@virtualopensystems.com> wrote:

> [...]
> So, it sounds to me like:
> 1) we have to signal all vcpus using the VMID for which we are
> clearing page table entries
> 2) make sure that they, either
>    2a) flush their TLBs
>    2b) get a new VMID
> 
> seems like 2b might be slightly faster, but leaves more entries in the
> TLB that are then unused - not sure if that's a bad thing considering
> the replacement policy. Perhaps 2a is cleaner...

x86 basically does 2b, but has per-CPU TLB tags.

On PPC, we statically map a guest id to each guest at the moment.


Alex



* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Marc Zyngier @ 2012-02-13 13:13 UTC
  To: Christoffer Dall; +Cc: Antonios Motakis, android-virt, kvm

On 12/02/12 01:12, Christoffer Dall wrote:
> [...]
> hmm yeah, actually you do need this. Unless the -IS version of the
> flush instruction covers all relevant cores in this case. Marc, I
> don't think that the processor clearing out the page table entry will
> necessarily belong to the same inner-shareable domain as the processor
> potentially executing the VM, so therefore the -IS flushing version
> would not be sufficient and we actually have to go and send an IPI.

If we forget about the 11MPCore (which doesn't broadcast the TLB
invalidation in hardware), the TLBIALLIS operation makes sure all cores
belonging to the same inner shareable domain will see the TLB
invalidation at the same time. If they don't, this is a hardware bug.

Now, I do not have an example of a system where two CPUs are not part of
the same IS domain. Even big.LITTLE has all of the potential 8 cores in
an IS domain. If such a system exists one of these days, then it will be
worth considering having a separate method to cope with the case. Until
then, my opinion is to keep it as simple as possible.

	M.
-- 
Jazz is not dead. It just smells funny...



* Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers
From: Christoffer Dall @ 2012-02-13 14:50 UTC
  To: Marc Zyngier; +Cc: Antonios Motakis, android-virt, kvm

On Mon, Feb 13, 2012 at 5:13 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> [...]
> If we forget about the 11MPCore (which doesn't broadcast the TLB
> invalidation in hardware), the TLBIALLIS operation makes sure all cores
> belonging to the same inner shareable domain will see the TLB
> invalidation at the same time. If they don't, this is a hardware bug.
>
> Now, I do not have an example of a system where two CPUs are not part of
> the same IS domain. Even big.LITTLE has all of the potential 8 cores in
> an IS domain. If such a system exists one of these days, then it will be
> worth considering having a separate method to cope with the case. Until
> then, my opinion is to keep it as simple as possible.
>

ok, sounds good to me. Although, perhaps keep this as a comment somewhere...
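
e.g., a suggested wording for such a comment, next to the flush:

/*
 * TLB invalidation by VMID relies on broadcast TLB maintenance
 * (TLBIALLIS) reaching every CPU that may run this VM's vcpus.
 * This assumes all such CPUs sit in one Inner Shareable domain,
 * which holds on current systems (including big.LITTLE) but not
 * on 11MPCore, which doesn't broadcast TLB invalidation in
 * hardware. If that assumption is ever broken, this needs to
 * become an IPI-based flush.
 */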

