From: Marc Zyngier <>
To: Akihiko Odaki <>
	Mathieu Poirier <>,
	Oliver Upton <>,
	Suzuki K Poulose <>,
	Alexandru Elisei <>,
	James Morse <>, Will Deacon <>,
	Catalin Marinas <>, Alyssa Rosenzweig <>,
	Sven Peter <>, Hector Martin <>
Subject: Re: [PATCH 0/3] KVM: arm64: Handle CCSIDR associativity mismatches
Date: Fri, 02 Dec 2022 09:40:04 +0000	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Fri, 02 Dec 2022 05:17:12 +0000,
Akihiko Odaki <> wrote:
> >> On M2 MacBook Air, I have seen no other difference in the standard
> >> ID registers; the CCSIDRs are the exception. Perhaps Apple designed
> >> it this way so that macOS's hypervisor can freely migrate vCPUs, but
> >> I can't be sure of that without more analysis. It is still enough to
> >> migrate vCPUs running Linux, at least.
> > 
> > I guess that MacOS hides more of the underlying HW than KVM does. And
> > KVM definitely doesn't hide the MIDR_EL1 registers, which *are*
> > different between the two clusters.
> It seems KVM stores the MIDR value of one CPU and reuses it as the
> "invariant" value for ioctls, while it exposes to the vCPU the MIDR
> value that each physical CPU owns.

This only affects the VMM though, and not the guest, which sees the
MIDR of the CPU it runs on. The problem is that, short of pinning the
vcpus, you don't know where they will run. So any value is fair game.
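To be concrete about the userspace side, this is all that the
"invariant" value amounts to (untested sketch; error handling elided,
and vcpu_fd is assumed to be an already-created vcpu):

#include <linux/kvm.h>		/* ARM64_SYS_REG(), struct kvm_one_reg */
#include <sys/ioctl.h>
#include <stdint.h>

/* MIDR_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=0, op2=0 */
#define MIDR_EL1_ID	ARM64_SYS_REG(3, 0, 0, 0, 0)

uint64_t vmm_read_midr(int vcpu_fd)
{
	uint64_t midr = 0;
	struct kvm_one_reg reg = {
		.id   = MIDR_EL1_ID,
		.addr = (uint64_t)&midr,
	};

	/* Returns the value KVM sampled once at init time, no matter
	 * which physical CPU this thread happens to run on. */
	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	return midr;
}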

> This may be a problem worth fixing. My understanding is that while
> there is no serious application which requires vCPU migration among
> physical clusters,

Hey, I do that all the time with kvmtool! It's just that my guests do
not care about being run on one CPU or another.

> crosvm uses KVM on big.LITTLE processors by pinning
> vCPUs to physical CPUs, and it is a real-world application which
> needs to be supported.
> For an application like crosvm, you would expect the vCPU thread to
> get the MIDR value of the physical CPU the thread is pinned to when
> it calls the ioctl, but in reality it can get that of another,
> arbitrary CPU.

No. It will get the MIDR of the CPU it runs on. Check again. What you
are describing above is solely for userspace.
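The pinning itself is nothing KVM-specific, by the way. It boils down
to plain CPU affinity on the vcpu thread, along these lines (untested):

#define _GNU_SOURCE
#include <sched.h>

/* Tie the calling vcpu thread to one physical CPU before KVM_RUN,
 * so the MIDR the guest observes for this vcpu is stable. */
static int pin_self_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(0, sizeof(set), &set);	/* 0 == self */
}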

> Fixing this problem poses two design questions:
> 1. Should it expose a value consistent among clusters?
> For example, we can change the KVM initialization code so that it
> initializes VPIDR with the value stored as "invariant". This would
> help migrate vCPUs among clusters, but if you pin each vCPU thread to
> a distinct physical CPU, you may instead want the vCPU to see the
> MIDR value specific to each physical CPU, and to apply quirks or
> tuning parameters according to that value.

Which is what happens. Not at the cluster level, but at the CPU
level. The architecture doesn't describe what a *cluster* is.
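To spell the mechanics out (a simplified kernel-side sketch, not the
actual KVM code): a guest EL1 read of MIDR_EL1 returns whatever the
hypervisor last loaded into VPIDR_EL2, so the per-CPU behaviour is
simply:

	u64 midr = read_cpuid_id();	/* this physical CPU's MIDR_EL1 */
	write_sysreg(midr, vpidr_el2);	/* what the guest reads back    */

Loading the saved "invariant" value instead would give you the
cluster-independent behaviour you describe, at the cost of lying to
the guest about the CPU it is running on.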

> 2. Should it be invariant or variable?
> Fortunately, making it variable is easy. Arm provides the VPIDR_EL2
> register to specify the value exposed as MPIDR_EL0, so there is no
> trapping cost.

And if you do that, you make it impossible for the guest to mitigate
errata, as most of the errata handling is based on MIDR values.
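For illustration, a guest keys its errata roughly like this (the field
layout is straight from the architecture; the implementer/part values
are just an example):

/* MIDR_EL1 fields, as in the kernel's asm/cputype.h */
#define MIDR_IMPLEMENTOR(midr)	(((midr) >> 24) & 0xff)
#define MIDR_VARIANT(midr)	(((midr) >> 20) & 0xf)
#define MIDR_PARTNUM(midr)	(((midr) >> 4) & 0xfff)
#define MIDR_REVISION(midr)	((midr) & 0xf)

static bool cpu_needs_workaround(u64 midr)
{
	/* e.g. Arm Ltd (0x41), Neoverse N1 (0xd0c), any revision */
	return MIDR_IMPLEMENTOR(midr) == 0x41 &&
	       MIDR_PARTNUM(midr) == 0xd0c;
}

Feed that a made-up MIDR and the guest either applies workarounds it
doesn't need or, much worse, skips ones it does.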

> ...or we may just say the values of MPIDR_EL0 (and possibly other

I assume you meant MIDR_EL1 here, as MPIDR_EL1 is something else (and
it has no _EL0 equivalent).

> "invariant" registers) exposed via ioctl are useless and deprecated.

Useless? Not really. They are all meaningful to the guest, and a
change there will cause issues.

CTR_EL0 must, for example, be an invariant. Otherwise, you need to
trap all the CMOs when the {I,D}minLine values that are restored from
userspace are bigger than the ones the HW has. Even worse, when the
DIC/IDC bits are set from userspace while the HW has them cleared: you
cannot mitigate that one, and you'll end up with memory corruption.
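To see why, consider what any CMO-by-VA loop looks like. The stride
comes straight out of CTR_EL0 (untested sketch; assumes SCTLR_EL1.UCI
is set so that EL0 can issue 'dc cvau'):

#include <stdint.h>
#include <stddef.h>

static void clean_dcache_range(uintptr_t start, uintptr_t end)
{
	uint64_t ctr;
	size_t stride;
	uintptr_t va;

	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
	/* DminLine is CTR_EL0[19:16], log2 of the line size in words */
	stride = 4UL << ((ctr >> 16) & 0xf);

	for (va = start & ~(stride - 1); va < end; va += stride)
		asm volatile("dc cvau, %0" : : "r" (va) : "memory");
	asm volatile("dsb ish" ::: "memory");
}

Restore a DminLine larger than the HW one, and this loop strides over
lines it never cleans.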

I've been toying with the idea of exposing to guests the list of
MIDR/REVIDR values the guest is allowed to run on, as a PV service.
This would allow the guest to enable all the mitigations it wants in
one go.

Not sure I have time for this at the moment, but that'd be something
to explore.
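
In guest terms, it could look vaguely like this (everything here is
invented for the sake of the example: the function IDs, the calling
convention, the lot):

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical SMCCC function IDs, for illustration only */
#define PV_CPU_LIST_NR_FID	0xC6000040	/* number of entries    */
#define PV_CPU_LIST_ENTRY_FID	0xC6000041	/* fetch entry by index */

struct pv_cpu_entry {
	u64	midr;
	u64	revidr;
};

static int pv_get_cpu_entry(unsigned int idx, struct pv_cpu_entry *e)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(PV_CPU_LIST_ENTRY_FID, idx, &res);
	if ((long)res.a0 < 0)
		return -ENODEV;

	e->midr   = res.a1;
	e->revidr = res.a2;
	return 0;
}

The guest would iterate over the list at boot and enable the union of
the relevant workarounds.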


> > So let's first build on top of HCR_EL2.TID2, and only then, once we
> > have an idea of the overhead, add support for HCR_EL2.TID4 for the
> > systems that have FEAT_EVT.
> That sounds good, I'll write a new series according to this idea.
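
For reference, the trap side of it is a single bit either way (bit
numbers from the ARM ARM; the capability name below is made up, since
nothing detects FEAT_EVT yet):

#define HCR_TID2	BIT(17)	/* traps CTR_EL0, CCSIDRx, CLIDR, CSSELR */
#define HCR_TID4	BIT(49)	/* FEAT_EVT: as TID2, minus CTR_EL0      */

	if (cpus_have_final_cap(ARM64_HAS_EVT))	/* hypothetical cap */
		vcpu->arch.hcr_el2 |= HCR_TID4;
	else
		vcpu->arch.hcr_el2 |= HCR_TID2;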



Without deviation from the norm, progress is not possible.
