linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH 0/2] arm64: kvm: cache ID register trapping
@ 2018-12-17 15:02 Ard Biesheuvel
  2018-12-17 15:02 ` [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest Ard Biesheuvel
  2018-12-17 15:02 ` [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way Ard Biesheuvel
  0 siblings, 2 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-12-17 15:02 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: marc.zyngier, Ard Biesheuvel, christoffer.dall, Suzuki.Poulose

While looking into whether we could report the cache geometry as 1 set
and 1 way so that the 32-bit ARM kernel doesn't stall for 13 seconds at
boot when running as a KVM guest, I noticed that we don't expose the
sanitised version of CTR_EL0 to guests, so I fixed that first (#1).

Since that gives us most of the groundwork for overriding the cache
geometry, it is a fairly trivial change (#2) to clear the set/way
fields in the CCSIDR register so that it describes 1 set and 1 way.
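
For context: the stall comes from the guest iterating over every set and
way of every cache level, with each set/way op trapping to KVM. A rough
sketch of the guest-side loop for one level (hypothetical code, not
taken from any particular kernel; dccisw() stands in for the DCCISW /
DC CISW instruction):

	static void clean_inval_level_by_set_way(unsigned int level,
						 unsigned int nsets,
						 unsigned int nways)
	{
		unsigned int set, way;

		/* each dccisw traps to EL2, i.e. nsets * nways traps */
		for (set = 0; set < nsets; set++)
			for (way = 0; way < nways; way++)
				dccisw(level, set, way);
	}

With 1 set and 1 way reported, the inner body runs once per level.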

Notes:
- build-tested only
- 64-bit hosts only

Ard Biesheuvel (2):
  arm64: kvm: expose sanitised cache type register to guest
  arm64: kvm: describe data or unified caches as having 1 set and 1 way

 arch/arm64/include/asm/kvm_arm.h |  3 +-
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kvm/sys_regs.c        | 74 +++++++++++++++++++-
 3 files changed, 75 insertions(+), 3 deletions(-)

-- 
2.17.1



* [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest
  2018-12-17 15:02 [RFC PATCH 0/2] arm64: kvm: cache ID register trapping Ard Biesheuvel
@ 2018-12-17 15:02 ` Ard Biesheuvel
  2019-01-31 11:22   ` Marc Zyngier
  2018-12-17 15:02 ` [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way Ard Biesheuvel
  1 sibling, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2018-12-17 15:02 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: marc.zyngier, Ard Biesheuvel, christoffer.dall, Suzuki.Poulose

We currently permit CPUs in the same system to deviate in the exact
topology of the caches, and we subsequently hide this fact from user
space by exposing a sanitised value of the cache type register CTR_EL0.

However, guests running under KVM see the bare value of CTR_EL0, which
could potentially result in issues with, e.g., JITs or other pieces of
code that are sensitive to misreported cache line sizes.

So let's start trapping cache ID instructions, and expose the sanitised
version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant
to KVM user space, so update that part as well.
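
To illustrate what is at stake: guest code commonly derives its cache
maintenance stride from CTR_EL0, along the lines of the sketch below
(illustrative only, not part of this patch). If the bare CTR_EL0 was
read on a CPU with large cache lines and the vCPU is then scheduled on
a CPU with smaller ones, the stride is too big and the maintenance
loop skips lines.

	static inline u64 dcache_line_size(void)
	{
		u64 ctr;

		asm volatile("mrs %0, ctr_el0" : "=r" (ctr));

		/* DminLine (bits [19:16]) is log2(words); 4 << DminLine is bytes */
		return 4UL << ((ctr >> 16) & 0xf);
	}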

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/kvm_arm.h |  3 +-
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kvm/sys_regs.c        | 59 +++++++++++++++++++-
 3 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6f602af5263c..628dcb0cfea3 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -81,9 +81,10 @@
  * IMO:		Override CPSR.I and enable signaling with VI
  * FMO:		Override CPSR.F and enable signaling with VF
  * SWIO:	Turn set/way invalidates into set/way clean+invalidate
+ * TID2:	Trap cache identification instructions
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
-			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
+			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | HCR_TID2 | \
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 			 HCR_FMO | HCR_IMO)
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 842fb9572661..3b8e51874da4 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -342,6 +342,7 @@
 
 #define SYS_CNTKCTL_EL1			sys_reg(3, 0, 14, 1, 0)
 
+#define SYS_CCSIDR_EL1			sys_reg(3, 1, 0, 0, 0)
 #define SYS_CLIDR_EL1			sys_reg(3, 1, 0, 0, 1)
 #define SYS_AIDR_EL1			sys_reg(3, 1, 0, 0, 7)
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22fbbdbece3c..464e794b5bc5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1140,6 +1140,49 @@ static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return __set_id_reg(rd, uaddr, true);
 }
 
+static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+		       const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return write_to_read_only(vcpu, p, r);
+
+	p->regval = read_sanitised_ftr_reg(SYS_CTR_EL0);
+	return true;
+}
+
+static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return write_to_read_only(vcpu, p, r);
+
+	p->regval = read_sysreg(clidr_el1);
+	return true;
+}
+
+static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
+	else
+		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+	return true;
+}
+
+static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	u32 csselr;
+
+	if (p->is_write)
+		return write_to_read_only(vcpu, p, r);
+
+	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
+	p->regval = get_ccsidr(csselr);
+	return true;
+}
+
 /* sys_reg_desc initialiser for known cpufeature ID registers */
 #define ID_SANITISED(name) {			\
 	SYS_DESC(SYS_##name),			\
@@ -1357,7 +1400,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	{ SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
 
-	{ SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 },
+	{ SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
+	{ SYS_DESC(SYS_CLIDR_EL1), access_clidr },
+	{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
+	{ SYS_DESC(SYS_CTR_EL0), access_ctr },
 
 	{ SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
 	{ SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
@@ -1657,6 +1703,7 @@ static const struct sys_reg_desc cp14_64_regs[] = {
  * register).
  */
 static const struct sys_reg_desc cp15_regs[] = {
+	{ Op1( 0), CRn( 0), CRm( 0), Op2( 1), access_ctr },
 	{ Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_vm_reg, NULL, c1_SCTLR },
 	{ Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
 	{ Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
@@ -1774,6 +1821,10 @@ static const struct sys_reg_desc cp15_regs[] = {
 	PMU_PMEVTYPER(30),
 	/* PMCCFILTR */
 	{ Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper },
+
+	{ Op1(1), CRn( 0), CRm( 0), Op2(0), access_ccsidr },
+	{ Op1(1), CRn( 0), CRm( 0), Op2(1), access_clidr },
+	{ Op1(2), CRn( 0), CRm( 0), Op2(0), access_csselr, NULL, c0_CSSELR },
 };
 
 static const struct sys_reg_desc cp15_64_regs[] = {
@@ -2196,11 +2247,15 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 	}
 
 FUNCTION_INVARIANT(midr_el1)
-FUNCTION_INVARIANT(ctr_el0)
 FUNCTION_INVARIANT(revidr_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
+static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
+{
+	((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
+}
+
 /* ->val is filled in by kvm_sys_reg_table_init() */
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
-- 
2.17.1



* [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way
  2018-12-17 15:02 [RFC PATCH 0/2] arm64: kvm: cache ID register trapping Ard Biesheuvel
  2018-12-17 15:02 ` [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest Ard Biesheuvel
@ 2018-12-17 15:02 ` Ard Biesheuvel
  2019-01-08 11:02   ` Christoffer Dall
  1 sibling, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2018-12-17 15:02 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: marc.zyngier, Ard Biesheuvel, christoffer.dall, Suzuki.Poulose

On SMP ARM systems, cache maintenance by set/way should only ever be
done in the context of onlining or offlining CPUs, which is typically
done by bare metal firmware and never in a virtual machine. For this
reason, we trap set/way cache maintenance operations and replace them
with conditional flushing of the entire guest address space.

Due to this trapping, the set/way arguments passed into the set/way
ops are completely ignored, and thus irrelevant. This also means that
the set/way geometry is equally irrelevant, and we can simply report
it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e.,
the kind that only receives odd fixes) doesn't take a performance hit
due to the trapping when iterating over the cachelines.
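
For reference, CCSIDR_EL1 (in its pre-ARMv8.3 32-bit form) encodes the
geometry as follows; these field macros are illustrative only, not
existing kernel defines:

	#define CCSIDR_LINESIZE	GENMASK(2, 0)	/* log2(words per line) - 2 */
	#define CCSIDR_ASSOC	GENMASK(12, 3)	/* (number of ways) - 1 */
	#define CCSIDR_NUMSETS	GENMASK(27, 13)	/* (number of sets) - 1 */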

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 464e794b5bc5..eb244ff98dca 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1180,6 +1180,21 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
 	p->regval = get_ccsidr(csselr);
+
+	/*
+	 * Guests should not be doing cache operations by set/way at all, and
+	 * for this reason, we trap them and attempt to infer the intent, so
+	 * that we can flush the entire guest's address space at the appropriate
+	 * time.
+	 * To prevent this trapping from causing performance problems, let's
+	 * expose the geometry of all data and unified caches (which are
+	 * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
+	 * [If guests should attempt to infer aliasing properties from the
+	 * geometry (which is not permitted by the architecture), they would
+	 * only do so for virtually indexed caches.]
+	 */
+	if (!(csselr & 1)) // data or unified cache
+		p->regval &= ~GENMASK(27, 2);
 	return true;
 }
 
-- 
2.17.1



* Re: [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way
  2018-12-17 15:02 ` [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way Ard Biesheuvel
@ 2019-01-08 11:02   ` Christoffer Dall
  2019-01-08 11:11     ` Ard Biesheuvel
  0 siblings, 1 reply; 10+ messages in thread
From: Christoffer Dall @ 2019-01-08 11:02 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: marc.zyngier, linux-arm-kernel, Suzuki.Poulose

On Mon, Dec 17, 2018 at 04:02:05PM +0100, Ard Biesheuvel wrote:
> On SMP ARM systems, cache maintenance by set/way should only ever be
> done in the context of onlining or offlining CPUs, which is typically
> done by bare metal firmware and never in a virtual machine. For this
> reason, we trap set/way cache maintenance operations and replace them
> with conditional flushing of the entire guest address space.
> 
> Due to this trapping, the set/way arguments passed into the set/way
> ops are completely ignored, and thus irrelevant. This also means that
> the set/way geometry is equally irrelevant, and we can simply report
> it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e.,
> the kind that only receives odd fixes) doesn't take a performance hit
> due to the trapping when iterating over the cachelines.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 464e794b5bc5..eb244ff98dca 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1180,6 +1180,21 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  
>  	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
>  	p->regval = get_ccsidr(csselr);
> +
> +	/*
> +	 * Guests should not be doing cache operations by set/way at all, and
> +	 * for this reason, we trap them and attempt to infer the intent, so
> +	 * that we can flush the entire guest's address space at the appropriate
> +	 * time.
> +	 * To prevent this trapping from causing performance problems, let's
> +	 * expose the geometry of all data and unified caches (which are
> +	 * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
> +	 * [If guests should attempt to infer aliasing properties from the
> +	 * geometry (which is not permitted by the architecture), they would
> +	 * only do so for virtually indexed caches.]
> +	 */
> +	if (!(csselr & 1)) // data or unified cache
> +		p->regval &= ~GENMASK(27, 2);

Why are you clearing the upper bit of the LineSize field?

Thanks,

    Christoffer


* Re: [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way
  2019-01-08 11:02   ` Christoffer Dall
@ 2019-01-08 11:11     ` Ard Biesheuvel
  2019-01-08 11:14       ` Christoffer Dall
  0 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2019-01-08 11:11 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: Marc Zyngier, linux-arm-kernel, Suzuki K. Poulose

On Tue, 8 Jan 2019 at 12:02, Christoffer Dall <christoffer.dall@arm.com> wrote:
>
> On Mon, Dec 17, 2018 at 04:02:05PM +0100, Ard Biesheuvel wrote:
> > On SMP ARM systems, cache maintenance by set/way should only ever be
> > done in the context of onlining or offlining CPUs, which is typically
> > done by bare metal firmware and never in a virtual machine. For this
> > reason, we trap set/way cache maintenance operations and replace them
> > with conditional flushing of the entire guest address space.
> >
> > Due to this trapping, the set/way arguments passed into the set/way
> > ops are completely ignored, and thus irrelevant. This also means that
> > the set/way geometry is equally irrelevant, and we can simply report
> > it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e.,
> > the kind that only receives odd fixes) doesn't take a performance hit
> > due to the trapping when iterating over the cachelines.
> >
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 464e794b5bc5..eb244ff98dca 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1180,6 +1180,21 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >
> >       csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
> >       p->regval = get_ccsidr(csselr);
> > +
> > +     /*
> > +      * Guests should not be doing cache operations by set/way at all, and
> > +      * for this reason, we trap them and attempt to infer the intent, so
> > +      * that we can flush the entire guest's address space at the appropriate
> > +      * time.
> > +      * To prevent this trapping from causing performance problems, let's
> > +      * expose the geometry of all data and unified caches (which are
> > +      * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
> > +      * [If guests should attempt to infer aliasing properties from the
> > +      * geometry (which is not permitted by the architecture), they would
> > +      * only do so for virtually indexed caches.]
> > +      */
> > +     if (!(csselr & 1)) // data or unified cache
> > +             p->regval &= ~GENMASK(27, 2);
>
> Why are you clearing the upper bit of the LineSize field?
>

Ah, that needs to be ~GENMASK(27, 3), of course: the LineSize field
lives in bits [2:0], so the mask has to start at bit 3 to clear only
the NumSets and Associativity fields.


* Re: [RFC PATCH 2/2] arm64: kvm: describe data or unified caches as having 1 set and 1 way
  2019-01-08 11:11     ` Ard Biesheuvel
@ 2019-01-08 11:14       ` Christoffer Dall
  0 siblings, 0 replies; 10+ messages in thread
From: Christoffer Dall @ 2019-01-08 11:14 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Marc Zyngier, linux-arm-kernel, Suzuki K. Poulose

On Tue, Jan 08, 2019 at 12:11:33PM +0100, Ard Biesheuvel wrote:
> On Tue, 8 Jan 2019 at 12:02, Christoffer Dall <christoffer.dall@arm.com> wrote:
> >
> > On Mon, Dec 17, 2018 at 04:02:05PM +0100, Ard Biesheuvel wrote:
> > > On SMP ARM systems, cache maintenance by set/way should only ever be
> > > done in the context of onlining or offlining CPUs, which is typically
> > > done by bare metal firmware and never in a virtual machine. For this
> > > reason, we trap set/way cache maintenance operations and replace them
> > > with conditional flushing of the entire guest address space.
> > >
> > > Due to this trapping, the set/way arguments passed into the set/way
> > > ops are completely ignored, and thus irrelevant. This also means that
> > > the set/way geometry is equally irrelevant, and we can simply report
> > > it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e.,
> > > the kind that only receives odd fixes) doesn't take a performance hit
> > > due to the trapping when iterating over the cachelines.
> > >
> > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 15 +++++++++++++++
> > >  1 file changed, 15 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 464e794b5bc5..eb244ff98dca 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1180,6 +1180,21 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > >
> > >       csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
> > >       p->regval = get_ccsidr(csselr);
> > > +
> > > +     /*
> > > +      * Guests should not be doing cache operations by set/way at all, and
> > > +      * for this reason, we trap them and attempt to infer the intent, so
> > > +      * that we can flush the entire guest's address space at the appropriate
> > > +      * time.
> > > +      * To prevent this trapping from causing performance problems, let's
> > > +      * expose the geometry of all data and unified caches (which are
> > > +      * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
> > > +      * [If guests should attempt to infer aliasing properties from the
> > > +      * geometry (which is not permitted by the architecture), they would
> > > +      * only do so for virtually indexed caches.]
> > > +      */
> > > +     if (!(csselr & 1)) // data or unified cache
> > > +             p->regval &= ~GENMASK(27, 2);
> >
> > Why are you clearing the upper bit of the LineSize field?
> >
> 
> Ah, that needs to be ~GENMASK(27, 3), of course: the LineSize field
> lives in bits [2:0], so the mask has to start at bit 3 to clear only
> the NumSets and Associativity fields.

With that fixed, the patches look fine to me.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>


* Re: [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest
  2018-12-17 15:02 ` [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest Ard Biesheuvel
@ 2019-01-31 11:22   ` Marc Zyngier
  2019-01-31 11:24     ` Ard Biesheuvel
  0 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2019-01-31 11:22 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel; +Cc: christoffer.dall, Suzuki.Poulose

Hi Ard,

On 17/12/2018 15:02, Ard Biesheuvel wrote:
> We currently permit CPUs in the same system to deviate in the exact
> topology of the caches, and we subsequently hide this fact from user
> space by exposing a sanitised value of the cache type register CTR_EL0.
> 
> However, guests running under KVM see the bare value of CTR_EL0, which
> could potentially result in issues with, e.g., JITs or other pieces of
> code that are sensitive to misreported cache line sizes.
> 
> So let's start trapping cache ID instructions, and expose the sanitised
> version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant
> to KVM user space, so update that part as well.

I'm a bit uneasy with this. We rely on the kernel to perform this
sanitization for userspace when absolutely required, and this is so far
the exception.

If we start trapping it unconditionally, we're likely to introduce
performance regressions on system where there is no need to perform any
form of sanitization.

Could we instead only do this if ARM64_MISMATCHED_CACHE_TYPE is set?

Thanks,

	M.

> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/kvm_arm.h |  3 +-
>  arch/arm64/include/asm/sysreg.h  |  1 +
>  arch/arm64/kvm/sys_regs.c        | 59 +++++++++++++++++++-
>  3 files changed, 60 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 6f602af5263c..628dcb0cfea3 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -81,9 +81,10 @@
>   * IMO:		Override CPSR.I and enable signaling with VI
>   * FMO:		Override CPSR.F and enable signaling with VF
>   * SWIO:	Turn set/way invalidates into set/way clean+invalidate
> + * TID2:	Trap cache identification instructions
>   */
>  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> -			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> +			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | HCR_TID2 | \
>  			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
>  			 HCR_FMO | HCR_IMO)
>  #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 842fb9572661..3b8e51874da4 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -342,6 +342,7 @@
>  
>  #define SYS_CNTKCTL_EL1			sys_reg(3, 0, 14, 1, 0)
>  
> +#define SYS_CCSIDR_EL1			sys_reg(3, 1, 0, 0, 0)
>  #define SYS_CLIDR_EL1			sys_reg(3, 1, 0, 0, 1)
>  #define SYS_AIDR_EL1			sys_reg(3, 1, 0, 0, 7)
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 22fbbdbece3c..464e794b5bc5 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1140,6 +1140,49 @@ static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
>  	return __set_id_reg(rd, uaddr, true);
>  }
>  
> +static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +		       const struct sys_reg_desc *r)
> +{
> +	if (p->is_write)
> +		return write_to_read_only(vcpu, p, r);
> +
> +	p->regval = read_sanitised_ftr_reg(SYS_CTR_EL0);
> +	return true;
> +}
> +
> +static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			 const struct sys_reg_desc *r)
> +{
> +	if (p->is_write)
> +		return write_to_read_only(vcpu, p, r);
> +
> +	p->regval = read_sysreg(clidr_el1);
> +	return true;
> +}
> +
> +static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	if (p->is_write)
> +		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
> +	else
> +		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
> +	return true;
> +}
> +
> +static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	u32 csselr;
> +
> +	if (p->is_write)
> +		return write_to_read_only(vcpu, p, r);
> +
> +	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
> +	p->regval = get_ccsidr(csselr);
> +	return true;
> +}
> +
>  /* sys_reg_desc initialiser for known cpufeature ID registers */
>  #define ID_SANITISED(name) {			\
>  	SYS_DESC(SYS_##name),			\
> @@ -1357,7 +1400,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  
>  	{ SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
>  
> -	{ SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 },
> +	{ SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
> +	{ SYS_DESC(SYS_CLIDR_EL1), access_clidr },
> +	{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
> +	{ SYS_DESC(SYS_CTR_EL0), access_ctr },
>  
>  	{ SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
>  	{ SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
> @@ -1657,6 +1703,7 @@ static const struct sys_reg_desc cp14_64_regs[] = {
>   * register).
>   */
>  static const struct sys_reg_desc cp15_regs[] = {
> +	{ Op1( 0), CRn( 0), CRm( 0), Op2( 1), access_ctr },
>  	{ Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_vm_reg, NULL, c1_SCTLR },
>  	{ Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
>  	{ Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
> @@ -1774,6 +1821,10 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	PMU_PMEVTYPER(30),
>  	/* PMCCFILTR */
>  	{ Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper },
> +
> +	{ Op1(1), CRn( 0), CRm( 0), Op2(0), access_ccsidr },
> +	{ Op1(1), CRn( 0), CRm( 0), Op2(1), access_clidr },
> +	{ Op1(2), CRn( 0), CRm( 0), Op2(0), access_csselr, NULL, c0_CSSELR },
>  };
>  
>  static const struct sys_reg_desc cp15_64_regs[] = {
> @@ -2196,11 +2247,15 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  	}
>  
>  FUNCTION_INVARIANT(midr_el1)
> -FUNCTION_INVARIANT(ctr_el0)
>  FUNCTION_INVARIANT(revidr_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>  
> +static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
> +{
> +	((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
> +}
> +
>  /* ->val is filled in by kvm_sys_reg_table_init() */
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
> 


-- 
Jazz is not dead. It just smells funny...


* Re: [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest
  2019-01-31 11:22   ` Marc Zyngier
@ 2019-01-31 11:24     ` Ard Biesheuvel
  2019-01-31 11:44       ` Marc Zyngier
  0 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2019-01-31 11:24 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Christoffer Dall, linux-arm-kernel, Suzuki K. Poulose

On Thu, 31 Jan 2019 at 12:22, Marc Zyngier <marc.zyngier@arm.com> wrote:
>
> Hi Ard,
>
> On 17/12/2018 15:02, Ard Biesheuvel wrote:
> > We currently permit CPUs in the same system to deviate in the exact
> > topology of the caches, and we subsequently hide this fact from user
> > space by exposing a sanitised value of the cache type register CTR_EL0.
> >
> > However, guests running under KVM see the bare value of CTR_EL0, which
> > could potentially result in issues with, e.g., JITs or other pieces of
> > code that are sensitive to misreported cache line sizes.
> >
> > So let's start trapping cache ID instructions, and expose the sanitised
> > version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant
> > to KVM user space, so update that part as well.
>
> I'm a bit uneasy with this. We rely on the kernel to perform this
> sanitization for userspace when absolutely required, and this is so far
> the exception.
>
> If we start trapping it unconditionally, we're likely to introduce
> performance regressions on system where there is no need to perform any
> form of sanitization.
>
> Could we instead only do this if ARM64_MISMATCHED_CACHE_TYPE is set?
>

I suppose. Note that the next patch relies on the trapping as well,
but we could enable that piece for only 32-bit guests.
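
Something like the below, perhaps (untested sketch; it assumes TID2 is
dropped from HCR_GUEST_FLAGS again and set per vcpu instead, and the
function name is made up):

	static void vcpu_enable_cache_id_traps(struct kvm_vcpu *vcpu)
	{
		/*
		 * Only pay for the trapping where it buys us something:
		 * CTR_EL0 sanitisation on mismatched systems, and the
		 * 1 set / 1 way CCSIDR override for 32-bit guests.
		 */
		if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
		    vcpu_el1_is_32bit(vcpu))
			vcpu->arch.hcr_el2 |= HCR_TID2;
	}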


> >
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > ---
> >  arch/arm64/include/asm/kvm_arm.h |  3 +-
> >  arch/arm64/include/asm/sysreg.h  |  1 +
> >  arch/arm64/kvm/sys_regs.c        | 59 +++++++++++++++++++-
> >  3 files changed, 60 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index 6f602af5263c..628dcb0cfea3 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -81,9 +81,10 @@
> >   * IMO:              Override CPSR.I and enable signaling with VI
> >   * FMO:              Override CPSR.F and enable signaling with VF
> >   * SWIO:     Turn set/way invalidates into set/way clean+invalidate
> > + * TID2:     Trap cache identification instructions
> >   */
> >  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> > -                      HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> > +                      HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | HCR_TID2 | \
> >                        HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
> >                        HCR_FMO | HCR_IMO)
> >  #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 842fb9572661..3b8e51874da4 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -342,6 +342,7 @@
> >
> >  #define SYS_CNTKCTL_EL1                      sys_reg(3, 0, 14, 1, 0)
> >
> > +#define SYS_CCSIDR_EL1                       sys_reg(3, 1, 0, 0, 0)
> >  #define SYS_CLIDR_EL1                        sys_reg(3, 1, 0, 0, 1)
> >  #define SYS_AIDR_EL1                 sys_reg(3, 1, 0, 0, 7)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 22fbbdbece3c..464e794b5bc5 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1140,6 +1140,49 @@ static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> >       return __set_id_reg(rd, uaddr, true);
> >  }
> >
> > +static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > +                    const struct sys_reg_desc *r)
> > +{
> > +     if (p->is_write)
> > +             return write_to_read_only(vcpu, p, r);
> > +
> > +     p->regval = read_sanitised_ftr_reg(SYS_CTR_EL0);
> > +     return true;
> > +}
> > +
> > +static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > +                      const struct sys_reg_desc *r)
> > +{
> > +     if (p->is_write)
> > +             return write_to_read_only(vcpu, p, r);
> > +
> > +     p->regval = read_sysreg(clidr_el1);
> > +     return true;
> > +}
> > +
> > +static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > +                       const struct sys_reg_desc *r)
> > +{
> > +     if (p->is_write)
> > +             vcpu_write_sys_reg(vcpu, p->regval, r->reg);
> > +     else
> > +             p->regval = vcpu_read_sys_reg(vcpu, r->reg);
> > +     return true;
> > +}
> > +
> > +static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > +                       const struct sys_reg_desc *r)
> > +{
> > +     u32 csselr;
> > +
> > +     if (p->is_write)
> > +             return write_to_read_only(vcpu, p, r);
> > +
> > +     csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
> > +     p->regval = get_ccsidr(csselr);
> > +     return true;
> > +}
> > +
> >  /* sys_reg_desc initialiser for known cpufeature ID registers */
> >  #define ID_SANITISED(name) {                 \
> >       SYS_DESC(SYS_##name),                   \
> > @@ -1357,7 +1400,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >
> >       { SYS_DESC(SYS_CNTKCTL_EL1), NULL, reset_val, CNTKCTL_EL1, 0},
> >
> > -     { SYS_DESC(SYS_CSSELR_EL1), NULL, reset_unknown, CSSELR_EL1 },
> > +     { SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
> > +     { SYS_DESC(SYS_CLIDR_EL1), access_clidr },
> > +     { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
> > +     { SYS_DESC(SYS_CTR_EL0), access_ctr },
> >
> >       { SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, },
> >       { SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
> > @@ -1657,6 +1703,7 @@ static const struct sys_reg_desc cp14_64_regs[] = {
> >   * register).
> >   */
> >  static const struct sys_reg_desc cp15_regs[] = {
> > +     { Op1( 0), CRn( 0), CRm( 0), Op2( 1), access_ctr },
> >       { Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_vm_reg, NULL, c1_SCTLR },
> >       { Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
> >       { Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
> > @@ -1774,6 +1821,10 @@ static const struct sys_reg_desc cp15_regs[] = {
> >       PMU_PMEVTYPER(30),
> >       /* PMCCFILTR */
> >       { Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper },
> > +
> > +     { Op1(1), CRn( 0), CRm( 0), Op2(0), access_ccsidr },
> > +     { Op1(1), CRn( 0), CRm( 0), Op2(1), access_clidr },
> > +     { Op1(2), CRn( 0), CRm( 0), Op2(0), access_csselr, NULL, c0_CSSELR },
> >  };
> >
> >  static const struct sys_reg_desc cp15_64_regs[] = {
> > @@ -2196,11 +2247,15 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
> >       }
> >
> >  FUNCTION_INVARIANT(midr_el1)
> > -FUNCTION_INVARIANT(ctr_el0)
> >  FUNCTION_INVARIANT(revidr_el1)
> >  FUNCTION_INVARIANT(clidr_el1)
> >  FUNCTION_INVARIANT(aidr_el1)
> >
> > +static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
> > +{
> > +     ((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
> > +}
> > +
> >  /* ->val is filled in by kvm_sys_reg_table_init() */
> >  static struct sys_reg_desc invariant_sys_regs[] = {
> >       { SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
> >
>
>
> --
> Jazz is not dead. It just smells funny...


* Re: [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest
  2019-01-31 11:24     ` Ard Biesheuvel
@ 2019-01-31 11:44       ` Marc Zyngier
  2019-01-31 11:45         ` Ard Biesheuvel
  0 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2019-01-31 11:44 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Christoffer Dall, linux-arm-kernel, Suzuki K. Poulose

On 31/01/2019 11:24, Ard Biesheuvel wrote:
> On Thu, 31 Jan 2019 at 12:22, Marc Zyngier <marc.zyngier@arm.com> wrote:
>>
>> Hi Ard,
>>
>> On 17/12/2018 15:02, Ard Biesheuvel wrote:
>>> We currently permit CPUs in the same system to deviate in the exact
>>> topology of the caches, and we subsequently hide this fact from user
>>> space by exposing a sanitised value of the cache type register CTR_EL0.
>>>
>>> However, guests running under KVM see the bare value of CTR_EL0, which
>>> could potentially result in issues with, e.g., JITs or other pieces of
>>> code that are sensitive to misreported cache line sizes.
>>>
>>> So let's start trapping cache ID instructions, and expose the sanitised
>>> version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant
>>> to KVM user space, so update that part as well.
>>
>> I'm a bit uneasy with this. We rely on the kernel to perform this
>> sanitization for userspace when absolutely required, and this is so far
>> the exception.
>>
>> If we start trapping it unconditionally, we're likely to introduce
>> performance regressions on system where there is no need to perform any
>> form of sanitization.
>>
>> Could we instead only do this if ARM64_MISMATCHED_CACHE_TYPE is set?
>>
> 
> I suppose. Note that the next patch relies on the trapping as well,
> but we could enable that piece for only 32-bit guests.

I think that'd be fine. 32bit has no EL0 access to this, so it shouldn't
see any major hit (nor should it use S/W ops, but hey, odd fixes).

Do you mind trying to have a go at that?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [RFC PATCH 1/2] arm64: kvm: expose sanitised cache type register to guest
  2019-01-31 11:44       ` Marc Zyngier
@ 2019-01-31 11:45         ` Ard Biesheuvel
  0 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2019-01-31 11:45 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Christoffer Dall, linux-arm-kernel, Suzuki K. Poulose

On Thu, 31 Jan 2019 at 12:44, Marc Zyngier <marc.zyngier@arm.com> wrote:
>
> On 31/01/2019 11:24, Ard Biesheuvel wrote:
> > On Thu, 31 Jan 2019 at 12:22, Marc Zyngier <marc.zyngier@arm.com> wrote:
> >>
> >> Hi Ard,
> >>
> >> On 17/12/2018 15:02, Ard Biesheuvel wrote:
> >>> We currently permit CPUs in the same system to deviate in the exact
> >>> topology of the caches, and we subsequently hide this fact from user
> >>> space by exposing a sanitised value of the cache type register CTR_EL0.
> >>>
> >>> However, guests running under KVM see the bare value of CTR_EL0, which
> >>> could potentially result in issues with, e.g., JITs or other pieces of
> >>> code that are sensitive to misreported cache line sizes.
> >>>
> >>> So let's start trapping cache ID instructions, and expose the sanitised
> >>> version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant
> >>> to KVM user space, so update that part as well.
> >>
> >> I'm a bit uneasy with this. We rely on the kernel to perform this
> >> sanitization for userspace when absolutely required, and this is so far
> >> the exception.
> >>
> >> If we start trapping it unconditionally, we're likely to introduce
> >> performance regressions on system where there is no need to perform any
> >> form of sanitization.
> >>
> >> Could we instead only do this if ARM64_MISMATCHED_CACHE_TYPE is set?
> >>
> >
> > I suppose. Note that the next patch relies on the trapping as well,
> > but we could enable that piece for only 32-bit guests.
>
> I think that'd be fine. 32bit has no EL0 access to this, so it shouldn't
> see any major hit (nor should it use S/W ops, but hey, odd fixes).
>
> Do you mind trying to have a go at that?
>

Not at all.
