kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH] arm64: KVM: Invoke compute_layout() before alternatives are applied
@ 2019-10-16 16:59 Sebastian Andrzej Siewior
  2019-10-17 17:41 ` James Morse
  0 siblings, 1 reply; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-10-16 16:59 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: Catalin Marinas, Marc Zyngier, Thomas Gleixner, Will Deacon

compute_layout() is invoked as part of an alternative fixup under
stop_machine(). This function invokes get_random_long() which acquires a
sleeping lock on -RT which can not be acquired in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before
stop_machines() invokes the fixups. Add a __init prefix to
kvm_compute_layout() because the caller has it, too (and so the code can
be discarded after boot).

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm64/include/asm/alternative.h |  6 ++++++
 arch/arm64/kernel/alternative.c      |  1 +
 arch/arm64/kvm/va_layout.c           | 11 ++++-------
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index b9f8d787eea9f..7532f044d713b 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -35,6 +35,12 @@ void apply_alternatives_module(void *start, size_t length);
 static inline void apply_alternatives_module(void *start, size_t length) { }
 #endif
 
+#ifdef CONFIG_KVM_ARM_HOST
+void kvm_compute_layout(void);
+#else
+static inline void kvm_compute_layout(void) { }
+#endif
+
 #define ALTINSTR_ENTRY(feature,cb)					      \
 	" .word 661b - .\n"				/* label           */ \
 	" .if " __stringify(cb) " == 0\n"				      \
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d1757ef1b1e74..c28652ee06f64 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -238,6 +238,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 void __init apply_alternatives_all(void)
 {
 	/* better not try code patching on a live SMP system */
+	kvm_compute_layout();
 	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
 }
 
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 2cf7d4b606c38..0b9fbdf0aaddd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -22,12 +22,15 @@ static u8 tag_lsb;
 static u64 tag_val;
 static u64 va_mask;
 
-static void compute_layout(void)
+__init void kvm_compute_layout(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 	u64 hyp_va_msb;
 	int kva_msb;
 
+	if (WARN_ON(va_mask))
+		return;
+
 	/* Where is my RAM region? */
 	hyp_va_msb  = idmap_addr & BIT(vabits_actual - 1);
 	hyp_va_msb ^= BIT(vabits_actual - 1);
@@ -110,9 +113,6 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
 
 	BUG_ON(nr_inst != 5);
 
-	if (!has_vhe() && !va_mask)
-		compute_layout();
-
 	for (i = 0; i < nr_inst; i++) {
 		u32 rd, rn, insn, oinsn;
 
@@ -156,9 +156,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 		return;
 	}
 
-	if (!va_mask)
-		compute_layout();
-
 	/*
 	 * Compute HYP VA by using the same computation as kern_hyp_va()
 	 */
-- 
2.23.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-10-16 16:59 [PATCH] arm64: KVM: Invoke compute_layout() before alternatives are applied Sebastian Andrzej Siewior
@ 2019-10-17 17:41 ` James Morse
  2019-11-28 19:58   ` [PATCH v2] " Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: James Morse @ 2019-10-17 17:41 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, kvmarm, linux-arm-kernel
  Cc: Catalin Marinas, Marc Zyngier, Thomas Gleixner, Will Deacon

Hi Sebastian,

On 16/10/2019 17:59, Sebastian Andrzej Siewior wrote:
> compute_layout() is invoked as part of an alternative fixup under
> stop_machine(). This function invokes get_random_long() which acquires a
> sleeping lock on -RT which can not be acquired in this context.

> Rename compute_layout() to kvm_compute_layout() and invoke it before
> stop_machines() invokes the fixups.

Nit: stop_machine() applies the alternatives.


> Add a __init prefix to
> kvm_compute_layout() because the caller has it, too (and so the code can
> be discarded after boot).


> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
> index b9f8d787eea9f..7532f044d713b 100644
> --- a/arch/arm64/include/asm/alternative.h
> +++ b/arch/arm64/include/asm/alternative.h
> @@ -35,6 +35,12 @@ void apply_alternatives_module(void *start, size_t length);
>  static inline void apply_alternatives_module(void *start, size_t length) { }
>  #endif
>  
> +#ifdef CONFIG_KVM_ARM_HOST
> +void kvm_compute_layout(void);
> +#else
> +static inline void kvm_compute_layout(void) { }
> +#endif

I don't think alternative.h is where this belongs... Could you move it to kvm_mmu.h, which
is where the kvm_update_va_mask macro that depends on it lives?

You can avoid the #ifdef if you use if (IS_ENABLED()) in the caller.
This has the advantage that the compiler will catch invalid C regardless of the build
options (and it's easier on the eye).


> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index d1757ef1b1e74..c28652ee06f64 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -238,6 +238,7 @@ static int __apply_alternatives_multi_stop(void *unused)
>  void __init apply_alternatives_all(void)
>  {
>  	/* better not try code patching on a live SMP system */
> +	kvm_compute_layout();
>  	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
>  }

This is a funny place to do this KVM check; it's not needed to apply the alternatives.

apply_alternatives_all() is only called from smp_cpus_done(), immediately before it calls
hyp_mode_check(); could we move it there to live with the 'started at EL2' message?

(to save you battling the header-jungle: To include asm/kvm_mmu.h, you need to include
linux/kvm_host.h first)


We end up calling it unconditionally, but I don't think that matters; both callers do the
right thing.

With that:
Reviewed-by: James Morse <james.morse@arm.com>


Thanks,

James


* [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-10-17 17:41 ` James Morse
@ 2019-11-28 19:58   ` Sebastian Andrzej Siewior
  2019-12-06 11:22     ` Marc Zyngier
  2019-12-06 11:57     ` Catalin Marinas
  0 siblings, 2 replies; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-11-28 19:58 UTC (permalink / raw)
  To: James Morse
  Cc: Catalin Marinas, linux-arm-kernel, Marc Zyngier, Thomas Gleixner,
	Will Deacon, kvmarm

compute_layout() is invoked as part of an alternative fixup under
stop_machine(). This function invokes get_random_long() which acquires a
sleeping lock on -RT which can not be acquired in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before
stop_machine() applies the alternatives. Add a __init prefix to
kvm_compute_layout() because the caller has it, too (and so the code can be
discarded after boot).

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---

v1…v2: Move code as suggested by James Morse. Didn't add his Reviewed-by
       (as suggested) because I'm not sure that I got everything
       correct.

 arch/arm64/include/asm/kvm_mmu.h | 1 +
 arch/arm64/kernel/smp.c          | 4 ++++
 arch/arm64/kvm/va_layout.c       | 8 +-------
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index befe37d4bc0e5..53d846f1bfe70 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -91,6 +91,7 @@ alternative_cb_end
 
 void kvm_update_va_mask(struct alt_instr *alt,
 			__le32 *origptr, __le32 *updptr, int nr_inst);
+void kvm_compute_layout(void);
 
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dc9fe879c2793..02d41eae3da86 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -31,6 +31,7 @@
 #include <linux/of.h>
 #include <linux/irq_work.h>
 #include <linux/kexec.h>
+#include <linux/kvm_host.h>
 
 #include <asm/alternative.h>
 #include <asm/atomic.h>
@@ -39,6 +40,7 @@
 #include <asm/cputype.h>
 #include <asm/cpu_ops.h>
 #include <asm/daifflags.h>
+#include <asm/kvm_mmu.h>
 #include <asm/mmu_context.h>
 #include <asm/numa.h>
 #include <asm/pgtable.h>
@@ -408,6 +410,8 @@ static void __init hyp_mode_check(void)
 			   "CPU: CPUs started in inconsistent modes");
 	else
 		pr_info("CPU: All CPU(s) started at EL1\n");
+	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
+		kvm_compute_layout();
 }
 
 void __init smp_cpus_done(unsigned int max_cpus)
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 2cf7d4b606c38..dab1fea4752aa 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -22,7 +22,7 @@ static u8 tag_lsb;
 static u64 tag_val;
 static u64 va_mask;
 
-static void compute_layout(void)
+__init void kvm_compute_layout(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 	u64 hyp_va_msb;
@@ -110,9 +110,6 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
 
 	BUG_ON(nr_inst != 5);
 
-	if (!has_vhe() && !va_mask)
-		compute_layout();
-
 	for (i = 0; i < nr_inst; i++) {
 		u32 rd, rn, insn, oinsn;
 
@@ -156,9 +153,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 		return;
 	}
 
-	if (!va_mask)
-		compute_layout();
-
 	/*
 	 * Compute HYP VA by using the same computation as kern_hyp_va()
 	 */
-- 
2.24.0



* Re: [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-11-28 19:58   ` [PATCH v2] " Sebastian Andrzej Siewior
@ 2019-12-06 11:22     ` Marc Zyngier
  2019-12-06 11:46       ` Catalin Marinas
  2019-12-06 11:57     ` Catalin Marinas
  1 sibling, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2019-12-06 11:22 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Catalin Marinas, linux-arm-kernel, Thomas Gleixner, Will Deacon, kvmarm

On 2019-11-28 19:58, Sebastian Andrzej Siewior wrote:
> compute_layout() is invoked as part of an alternative fixup under
> stop_machine(). This function invokes get_random_long() which acquires a
> sleeping lock on -RT which can not be acquired in this context.
>
> Rename compute_layout() to kvm_compute_layout() and invoke it before
> stop_machine() applies the alternatives. Add a __init prefix to
> kvm_compute_layout() because the caller has it, too (and so the code can be
> discarded after boot).
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Marc Zyngier <maz@kernel.org>

I think this should go via the arm64 tree, so I'll let Catalin
and Will pick this up (unless they think otherwise).

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-12-06 11:22     ` Marc Zyngier
@ 2019-12-06 11:46       ` Catalin Marinas
  0 siblings, 0 replies; 8+ messages in thread
From: Catalin Marinas @ 2019-12-06 11:46 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Sebastian Andrzej Siewior, linux-arm-kernel, Thomas Gleixner,
	Will Deacon, kvmarm

On Fri, Dec 06, 2019 at 11:22:02AM +0000, Marc Zyngier wrote:
> On 2019-11-28 19:58, Sebastian Andrzej Siewior wrote:
> > compute_layout() is invoked as part of an alternative fixup under
> > stop_machine(). This function invokes get_random_long() which acquires a
> > sleeping lock on -RT which can not be acquired in this context.
> > 
> > Rename compute_layout() to kvm_compute_layout() and invoke it before
> > stop_machine() applies the alternatives. Add a __init prefix to
> > kvm_compute_layout() because the caller has it, too (and so the code can be
> > discarded after boot).
> > 
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> Acked-by: Marc Zyngier <maz@kernel.org>
> 
> I think this should go via the arm64 tree, so I'll let Catalin
> and Will pick this up (unless they think otherwise).

I can pick this up. Thanks.

-- 
Catalin


* Re: [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-11-28 19:58   ` [PATCH v2] " Sebastian Andrzej Siewior
  2019-12-06 11:22     ` Marc Zyngier
@ 2019-12-06 11:57     ` Catalin Marinas
  2019-12-06 12:12       ` Marc Zyngier
  1 sibling, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2019-12-06 11:57 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Marc Zyngier, linux-arm-kernel, Thomas Gleixner, Will Deacon, kvmarm

On Thu, Nov 28, 2019 at 08:58:05PM +0100, Sebastian Andrzej Siewior wrote:
> @@ -408,6 +410,8 @@ static void __init hyp_mode_check(void)
>  			   "CPU: CPUs started in inconsistent modes");
>  	else
>  		pr_info("CPU: All CPU(s) started at EL1\n");
> +	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
> +		kvm_compute_layout();
>  }

It looks like we call this unconditionally here even if the kernel was
booted at EL1.

>  void __init smp_cpus_done(unsigned int max_cpus)
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index 2cf7d4b606c38..dab1fea4752aa 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -22,7 +22,7 @@ static u8 tag_lsb;
>  static u64 tag_val;
>  static u64 va_mask;
>  
> -static void compute_layout(void)
> +__init void kvm_compute_layout(void)
>  {
>  	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
>  	u64 hyp_va_msb;
> @@ -110,9 +110,6 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
>  
>  	BUG_ON(nr_inst != 5);
>  
> -	if (!has_vhe() && !va_mask)
> -		compute_layout();
> -
>  	for (i = 0; i < nr_inst; i++) {
>  		u32 rd, rn, insn, oinsn;
>  
> @@ -156,9 +153,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
>  		return;
>  	}
>  
> -	if (!va_mask)
> -		compute_layout();

And here we had a few more checks.

Maybe it's still correct but asking anyway.

-- 
Catalin


* Re: [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-12-06 11:57     ` Catalin Marinas
@ 2019-12-06 12:12       ` Marc Zyngier
  2019-12-06 12:21         ` Catalin Marinas
  0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2019-12-06 12:12 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Sebastian Andrzej Siewior, linux-arm-kernel, Thomas Gleixner,
	Will Deacon, kvmarm

On 2019-12-06 11:57, Catalin Marinas wrote:
> On Thu, Nov 28, 2019 at 08:58:05PM +0100, Sebastian Andrzej Siewior wrote:
>> @@ -408,6 +410,8 @@ static void __init hyp_mode_check(void)
>>  			   "CPU: CPUs started in inconsistent modes");
>>  	else
>>  		pr_info("CPU: All CPU(s) started at EL1\n");
>> +	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
>> +		kvm_compute_layout();
>>  }
>
> It looks like we call this unconditionally here even if the kernel was
> booted at EL1.

It has always been the case. My motivation was to be able to debug
this in a guest, because doing this on the host is... painful! ;-)

Feel free to condition it on !EL1 though.

>
>>  void __init smp_cpus_done(unsigned int max_cpus)
>> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
>> index 2cf7d4b606c38..dab1fea4752aa 100644
>> --- a/arch/arm64/kvm/va_layout.c
>> +++ b/arch/arm64/kvm/va_layout.c
>> @@ -22,7 +22,7 @@ static u8 tag_lsb;
>>  static u64 tag_val;
>>  static u64 va_mask;
>>
>> -static void compute_layout(void)
>> +__init void kvm_compute_layout(void)
>>  {
>>  	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
>>  	u64 hyp_va_msb;
>> @@ -110,9 +110,6 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
>>
>>  	BUG_ON(nr_inst != 5);
>>
>> -	if (!has_vhe() && !va_mask)
>> -		compute_layout();
>> -
>>  	for (i = 0; i < nr_inst; i++) {
>>  		u32 rd, rn, insn, oinsn;
>>
>> @@ -156,9 +153,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
>>  		return;
>>  	}
>>
>> -	if (!va_mask)
>> -		compute_layout();
>
> And here we had a few more checks.
>
> Maybe it's still correct but asking anyway.

It should be correct. These checks were there to ensure that we only computed
the layout once, and this now happens by construction (calling compute_layout
from a single location instead of doing it from the patch callbacks).

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v2] arm64: KVM: Invoke compute_layout() before alternatives are applied
  2019-12-06 12:12       ` Marc Zyngier
@ 2019-12-06 12:21         ` Catalin Marinas
  0 siblings, 0 replies; 8+ messages in thread
From: Catalin Marinas @ 2019-12-06 12:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Sebastian Andrzej Siewior, linux-arm-kernel, Thomas Gleixner,
	Will Deacon, kvmarm

On Fri, Dec 06, 2019 at 12:12:10PM +0000, Marc Zyngier wrote:
> On 2019-12-06 11:57, Catalin Marinas wrote:
> > On Thu, Nov 28, 2019 at 08:58:05PM +0100, Sebastian Andrzej Siewior
> > wrote:
> > > @@ -408,6 +410,8 @@ static void __init hyp_mode_check(void)
> > >  			   "CPU: CPUs started in inconsistent modes");
> > >  	else
> > >  		pr_info("CPU: All CPU(s) started at EL1\n");
> > > +	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
> > > +		kvm_compute_layout();
> > >  }
> > 
> > It looks like we call this unconditionally here even if the kernel was
> > booted at EL1.
> 
> It has always been the case. My motivation was to be able to debug
> this in a guest, because doing this on the host is... painful! ;-)
> 
> Feel free to condition it on !EL1 though.

I'll leave the patch as is. Thanks for confirming.

-- 
Catalin


end of thread, other threads:[~2019-12-06 12:21 UTC | newest]

Thread overview: 8+ messages
2019-10-16 16:59 [PATCH] arm64: KVM: Invoke compute_layout() before alternatives are applied Sebastian Andrzej Siewior
2019-10-17 17:41 ` James Morse
2019-11-28 19:58   ` [PATCH v2] " Sebastian Andrzej Siewior
2019-12-06 11:22     ` Marc Zyngier
2019-12-06 11:46       ` Catalin Marinas
2019-12-06 11:57     ` Catalin Marinas
2019-12-06 12:12       ` Marc Zyngier
2019-12-06 12:21         ` Catalin Marinas
