* [PATCH v2 00/15] x86: Merge cpuid and msr policy objects
@ 2023-04-04 9:52 Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpu_policy Andrew Cooper
` (14 more replies)
0 siblings, 15 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
This is in order to be able to put MSR_ARCH_CAPS in a featureset. In
hindsight, it was a mistake to split the CPUID and read-only MSR data into
separate structs.
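The motivation can be sketched in miniature (the names and layout below are
simplified illustrations, not the real Xen structures): once the CPUID and
read-only MSR data live in one object, MSR-sourced bits such as ARCH_CAPS can
be serialised into the same featureset array as the CPUID-sourced words.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the merged struct cpu_policy: a CPUID feature
 * word and read-only MSR data side by side in one object. */
struct merged_policy {
    uint32_t feat_7d0;    /* CPUID-sourced feature word */
    uint64_t arch_caps;   /* MSR_ARCH_CAPS, read-only MSR data */
};

enum { FS_7D0, FS_ARCH_CAPS_LO, FS_ARCH_CAPS_HI, FS_NR };

/* With one object, a single convertor can cover both sources. */
static inline void merged_to_featureset(const struct merged_policy *p,
                                        uint32_t fs[FS_NR])
{
    fs[FS_7D0]          = p->feat_7d0;
    fs[FS_ARCH_CAPS_LO] = (uint32_t)p->arch_caps;
    fs[FS_ARCH_CAPS_HI] = (uint32_t)(p->arch_caps >> 32);
}
```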
Patches 1-8 were posted previously and have had the feedback addressed.
Patches 9-15 are the result of splitting the older RFC patch 9 apart, with the
discussed adjustment to aliases accounted for.
Gitlab run showing the series to be buildable at each changeset:
https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4053607867
P.S. I'm not sure I believe the net diffstat below. I think the cpuid.c =>
cpu-policy.c rename is confusing it.
Andrew Cooper (15):
x86: Rename struct cpu_policy to struct old_cpu_policy
x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields
x86: Rename struct cpuid_policy to struct cpu_policy
x86: Merge struct msr_policy into struct cpu_policy
x86: Merge the system {cpuid,msr} policy objects
x86: Merge a domain's {cpuid,msr} policy objects
x86: Merge xc_cpu_policy's cpuid and msr objects
x86: Drop struct old_cpu_policy
x86: Out-of-inline the policy<->featureset convertors
x86/boot: Move MSR policy initialisation logic into cpu-policy.c
x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
x86/emul: Switch x86_emulate_ctxt to cpu_policy
tools/fuzz: Rework afl-policy-fuzzer
libx86: Update library API for cpu_policy
x86: Remove temporary {cpuid,msr}_policy defines
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 64 +-
.../fuzz/x86_instruction_emulator/fuzz-emul.c | 2 +-
tools/libs/guest/xg_cpuid_x86.c | 50 +-
tools/libs/guest/xg_private.h | 5 +-
tools/tests/cpu-policy/test-cpu-policy.c | 54 +-
tools/tests/tsx/test-tsx.c | 71 +-
tools/tests/x86_emulator/Makefile | 2 +-
tools/tests/x86_emulator/test_x86_emulator.c | 2 +-
tools/tests/x86_emulator/x86-emulate.c | 4 +-
tools/tests/x86_emulator/x86-emulate.h | 2 +-
xen/arch/x86/Makefile | 1 +
xen/arch/x86/{cpuid.c => cpu-policy.c} | 692 ++++----------
xen/arch/x86/cpu/common.c | 4 +-
xen/arch/x86/cpu/mcheck/mce_intel.c | 2 +-
xen/arch/x86/cpuid.c | 856 +-----------------
xen/arch/x86/domain.c | 17 +-
xen/arch/x86/domctl.c | 51 +-
xen/arch/x86/hvm/emulate.c | 4 +-
xen/arch/x86/hvm/hvm.c | 5 +-
xen/arch/x86/hvm/svm/svm.c | 2 +-
xen/arch/x86/hvm/vlapic.c | 2 +-
xen/arch/x86/hvm/vmx/vmx.c | 8 +-
xen/arch/x86/include/asm/cpu-policy.h | 27 +
xen/arch/x86/include/asm/cpuid.h | 21 +-
xen/arch/x86/include/asm/domain.h | 13 +-
xen/arch/x86/include/asm/msr.h | 14 +-
xen/arch/x86/mm/mem_sharing.c | 3 +-
xen/arch/x86/mm/shadow/hvm.c | 2 +-
xen/arch/x86/msr.c | 160 +---
xen/arch/x86/pv/domain.c | 3 +-
xen/arch/x86/pv/emul-priv-op.c | 6 +-
xen/arch/x86/pv/ro-page-fault.c | 2 +-
xen/arch/x86/setup.c | 5 +-
xen/arch/x86/sysctl.c | 79 +-
xen/arch/x86/traps.c | 2 +-
xen/arch/x86/x86_emulate/private.h | 4 +-
xen/arch/x86/x86_emulate/x86_emulate.c | 2 +-
xen/arch/x86/x86_emulate/x86_emulate.h | 9 +-
xen/arch/x86/xstate.c | 4 +-
xen/include/public/domctl.h | 4 +-
xen/include/public/sysctl.h | 4 +-
xen/include/xen/lib/x86/cpu-policy.h | 510 ++++++++++-
xen/include/xen/lib/x86/cpuid.h | 475 ----------
xen/include/xen/lib/x86/msr.h | 104 ---
xen/lib/x86/cpuid.c | 68 +-
xen/lib/x86/msr.c | 6 +-
xen/lib/x86/policy.c | 8 +-
47 files changed, 1005 insertions(+), 2430 deletions(-)
copy xen/arch/x86/{cpuid.c => cpu-policy.c} (52%)
create mode 100644 xen/arch/x86/include/asm/cpu-policy.h
delete mode 100644 xen/include/xen/lib/x86/cpuid.h
delete mode 100644 xen/include/xen/lib/x86/msr.h
--
2.30.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 02/15] x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields Andrew Cooper
` (13 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
We want to merge struct cpuid_policy and struct msr_policy, and the result
wants to be called struct cpu_policy.
The current struct cpu_policy, being a pair of pointers, isn't terribly
useful. Rename the type to struct old_cpu_policy; it will disappear
entirely once the merge is complete.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
tools/libs/guest/xg_cpuid_x86.c | 4 ++--
tools/tests/cpu-policy/test-cpu-policy.c | 4 ++--
xen/arch/x86/domctl.c | 4 ++--
xen/arch/x86/include/asm/cpuid.h | 2 +-
xen/arch/x86/sysctl.c | 4 ++--
xen/include/xen/lib/x86/cpu-policy.h | 6 +++---
xen/lib/x86/policy.c | 4 ++--
7 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 4542878bbe88..1b02bc987af7 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -868,8 +868,8 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
xc_cpu_policy_t *guest)
{
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
- struct cpu_policy h = { &host->cpuid, &host->msr };
- struct cpu_policy g = { &guest->cpuid, &guest->msr };
+ struct old_cpu_policy h = { &host->cpuid, &host->msr };
+ struct old_cpu_policy g = { &guest->cpuid, &guest->msr };
int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
if ( !rc )
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index d3f24fd6d274..909d6272f875 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -602,7 +602,7 @@ static void test_is_compatible_success(void)
for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
{
struct test *t = &tests[i];
- struct cpu_policy sys = {
+ struct old_cpu_policy sys = {
&t->host_cpuid,
&t->host_msr,
}, new = {
@@ -654,7 +654,7 @@ static void test_is_compatible_failure(void)
for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
{
struct test *t = &tests[i];
- struct cpu_policy sys = {
+ struct old_cpu_policy sys = {
&t->host_cpuid,
&t->host_msr,
}, new = {
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 2118fcad5dfe..0b41b279507e 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,8 +40,8 @@
static int update_domain_cpu_policy(struct domain *d,
xen_domctl_cpu_policy_t *xdpc)
{
- struct cpu_policy new = {};
- const struct cpu_policy *sys = is_pv_domain(d)
+ struct old_cpu_policy new = {};
+ const struct old_cpu_policy *sys = is_pv_domain(d)
? &system_policies[XEN_SYSCTL_cpu_policy_pv_max]
: &system_policies[XEN_SYSCTL_cpu_policy_hvm_max];
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 9c3637549a10..49b3128f06f9 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -51,7 +51,7 @@ extern struct cpuid_policy raw_cpuid_policy, host_cpuid_policy,
pv_max_cpuid_policy, pv_def_cpuid_policy,
hvm_max_cpuid_policy, hvm_def_cpuid_policy;
-extern const struct cpu_policy system_policies[];
+extern const struct old_cpu_policy system_policies[];
/* Check that all previously present features are still available. */
bool recheck_cpu_features(unsigned int cpu);
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 16625b57f01f..3f5b092df16a 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -32,7 +32,7 @@
#include <asm/psr.h>
#include <asm/cpuid.h>
-const struct cpu_policy system_policies[6] = {
+const struct old_cpu_policy system_policies[6] = {
[ XEN_SYSCTL_cpu_policy_raw ] = {
&raw_cpuid_policy,
&raw_msr_policy,
@@ -391,7 +391,7 @@ long arch_do_sysctl(
case XEN_SYSCTL_get_cpu_policy:
{
- const struct cpu_policy *policy;
+ const struct old_cpu_policy *policy;
/* Reserved field set, or bad policy index? */
if ( sysctl->u.cpu_policy._rsvd ||
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 5a2c4c7b2d90..3a5300d1078c 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -5,7 +5,7 @@
#include <xen/lib/x86/cpuid.h>
#include <xen/lib/x86/msr.h>
-struct cpu_policy
+struct old_cpu_policy
{
struct cpuid_policy *cpuid;
struct msr_policy *msr;
@@ -33,8 +33,8 @@ struct cpu_policy_errors
* incompatibility is detected, the optional err pointer may identify the
* problematic leaf/subleaf and/or MSR.
*/
-int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
- const struct cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
+ const struct old_cpu_policy *guest,
struct cpu_policy_errors *err);
#endif /* !XEN_LIB_X86_POLICIES_H */
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f6cea4e2f9bd..2975711d7c6c 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,8 +2,8 @@
#include <xen/lib/x86/cpu-policy.h>
-int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
- const struct cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
+ const struct old_cpu_policy *guest,
struct cpu_policy_errors *err)
{
struct cpu_policy_errors e = INIT_CPU_POLICY_ERRORS;
--
2.30.2
* [PATCH v2 02/15] x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpu_policy Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 03/15] x86: Rename struct cpuid_policy to struct cpu_policy Andrew Cooper
` (12 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
These weren't great names to begin with, and using {leaves,msrs} matches up
better with the existing nr_{leaves,msrs} parameters anyway.
Furthermore, renaming these fields lets us use some #define trickery so that
the struct {cpuid,msr}_policy merge doesn't need to happen in a single
changeset.
No functional change.
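One way to see why the rename has to come first: the compatibility alias the
later patches introduce is an object-like macro, and such a macro rewrites
every matching token, structure member names included, so hypercall-struct
fields still spelled cpuid_policy/msr_policy would be mangled by it. A
minimal sketch of that preprocessor behaviour (STR/XSTR are illustrative
helpers, not Xen code):

```c
#include <assert.h>
#include <string.h>

/* The compatibility alias introduced later in the series. */
#define cpuid_policy cpu_policy

/* Two-level stringification, so the argument is macro-expanded first. */
#define STR(x)  #x
#define XSTR(x) STR(x)
```

Any token spelled `cpuid_policy` after this point expands to `cpu_policy`,
whereas a field renamed to `leaves` or `msrs` is untouched.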
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Fix typo in commit message
---
tools/libs/guest/xg_cpuid_x86.c | 12 ++++++------
xen/arch/x86/domctl.c | 12 ++++++------
xen/arch/x86/sysctl.c | 8 ++++----
xen/include/public/domctl.h | 4 ++--
xen/include/public/sysctl.h | 4 ++--
5 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 1b02bc987af7..5fae06e77804 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -145,9 +145,9 @@ static int get_system_cpu_policy(xc_interface *xch, uint32_t index,
sysctl.cmd = XEN_SYSCTL_get_cpu_policy;
sysctl.u.cpu_policy.index = index;
sysctl.u.cpu_policy.nr_leaves = *nr_leaves;
- set_xen_guest_handle(sysctl.u.cpu_policy.cpuid_policy, leaves);
+ set_xen_guest_handle(sysctl.u.cpu_policy.leaves, leaves);
sysctl.u.cpu_policy.nr_msrs = *nr_msrs;
- set_xen_guest_handle(sysctl.u.cpu_policy.msr_policy, msrs);
+ set_xen_guest_handle(sysctl.u.cpu_policy.msrs, msrs);
ret = do_sysctl(xch, &sysctl);
@@ -183,9 +183,9 @@ static int get_domain_cpu_policy(xc_interface *xch, uint32_t domid,
domctl.cmd = XEN_DOMCTL_get_cpu_policy;
domctl.domain = domid;
domctl.u.cpu_policy.nr_leaves = *nr_leaves;
- set_xen_guest_handle(domctl.u.cpu_policy.cpuid_policy, leaves);
+ set_xen_guest_handle(domctl.u.cpu_policy.leaves, leaves);
domctl.u.cpu_policy.nr_msrs = *nr_msrs;
- set_xen_guest_handle(domctl.u.cpu_policy.msr_policy, msrs);
+ set_xen_guest_handle(domctl.u.cpu_policy.msrs, msrs);
ret = do_domctl(xch, &domctl);
@@ -232,9 +232,9 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
domctl.cmd = XEN_DOMCTL_set_cpu_policy;
domctl.domain = domid;
domctl.u.cpu_policy.nr_leaves = nr_leaves;
- set_xen_guest_handle(domctl.u.cpu_policy.cpuid_policy, leaves);
+ set_xen_guest_handle(domctl.u.cpu_policy.leaves, leaves);
domctl.u.cpu_policy.nr_msrs = nr_msrs;
- set_xen_guest_handle(domctl.u.cpu_policy.msr_policy, msrs);
+ set_xen_guest_handle(domctl.u.cpu_policy.msrs, msrs);
domctl.u.cpu_policy.err_leaf = -1;
domctl.u.cpu_policy.err_subleaf = -1;
domctl.u.cpu_policy.err_msr = -1;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 0b41b279507e..944af63e68d0 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,10 +54,10 @@ static int update_domain_cpu_policy(struct domain *d,
/* Merge the toolstack provided data. */
if ( (ret = x86_cpuid_copy_from_buffer(
- new.cpuid, xdpc->cpuid_policy, xdpc->nr_leaves,
+ new.cpuid, xdpc->leaves, xdpc->nr_leaves,
&err.leaf, &err.subleaf)) ||
(ret = x86_msr_copy_from_buffer(
- new.msr, xdpc->msr_policy, xdpc->nr_msrs, &err.msr)) )
+ new.msr, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
goto out;
/* Trim any newly-stale out-of-range leaves. */
@@ -1317,20 +1317,20 @@ long arch_do_domctl(
case XEN_DOMCTL_get_cpu_policy:
/* Process the CPUID leaves. */
- if ( guest_handle_is_null(domctl->u.cpu_policy.cpuid_policy) )
+ if ( guest_handle_is_null(domctl->u.cpu_policy.leaves) )
domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
else if ( (ret = x86_cpuid_copy_to_buffer(
d->arch.cpuid,
- domctl->u.cpu_policy.cpuid_policy,
+ domctl->u.cpu_policy.leaves,
&domctl->u.cpu_policy.nr_leaves)) )
break;
/* Process the MSR entries. */
- if ( guest_handle_is_null(domctl->u.cpu_policy.msr_policy) )
+ if ( guest_handle_is_null(domctl->u.cpu_policy.msrs) )
domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
else if ( (ret = x86_msr_copy_to_buffer(
d->arch.msr,
- domctl->u.cpu_policy.msr_policy,
+ domctl->u.cpu_policy.msrs,
&domctl->u.cpu_policy.nr_msrs)) )
break;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 3f5b092df16a..3ed7c69f4315 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -411,11 +411,11 @@ long arch_do_sysctl(
}
/* Process the CPUID leaves. */
- if ( guest_handle_is_null(sysctl->u.cpu_policy.cpuid_policy) )
+ if ( guest_handle_is_null(sysctl->u.cpu_policy.leaves) )
sysctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
else if ( (ret = x86_cpuid_copy_to_buffer(
policy->cpuid,
- sysctl->u.cpu_policy.cpuid_policy,
+ sysctl->u.cpu_policy.leaves,
&sysctl->u.cpu_policy.nr_leaves)) )
break;
@@ -427,11 +427,11 @@ long arch_do_sysctl(
}
/* Process the MSR entries. */
- if ( guest_handle_is_null(sysctl->u.cpu_policy.msr_policy) )
+ if ( guest_handle_is_null(sysctl->u.cpu_policy.msrs) )
sysctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
else if ( (ret = x86_msr_copy_to_buffer(
policy->msr,
- sysctl->u.cpu_policy.msr_policy,
+ sysctl->u.cpu_policy.msrs,
&sysctl->u.cpu_policy.nr_msrs)) )
break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 7280e9f96816..529801c89ba3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -683,8 +683,8 @@ struct xen_domctl_cpu_policy {
* 'cpuid_policy'. */
uint32_t nr_msrs; /* IN/OUT: Number of MSRs in/written to
* 'msr_policy' */
- XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* IN/OUT */
- XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_policy; /* IN/OUT */
+ XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves; /* IN/OUT */
+ XEN_GUEST_HANDLE_64(xen_msr_entry_t) msrs; /* IN/OUT */
/*
* OUT, set_policy only. Written in some (but not all) error cases to
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index e8dded9fb94a..2b24d6bfd00e 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1050,8 +1050,8 @@ struct xen_sysctl_cpu_policy {
* 'msr_policy', or the maximum number of MSRs if
* the guest handle is NULL. */
uint32_t _rsvd; /* Must be zero. */
- XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* OUT */
- XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_policy; /* OUT */
+ XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves; /* OUT */
+ XEN_GUEST_HANDLE_64(xen_msr_entry_t) msrs; /* OUT */
};
typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
--
2.30.2
* [PATCH v2 03/15] x86: Rename struct cpuid_policy to struct cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpu_policy Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 02/15] x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 04/15] x86: Merge struct msr_policy into " Andrew Cooper
` (11 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Also merge lib/x86/cpuid.h entirely into lib/x86/cpu-policy.h
Use a temporary define to make struct cpuid_policy still work.
There's one forward declaration of struct cpuid_policy in
tools/tests/x86_emulator/x86-emulate.h that isn't covered by the define, and
it's easier to rename that now than to rearrange the includes.
No functional change.
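A minimal sketch of how the temporary define works (simplified struct, not
the real one): code still written against the old type name keeps compiling
because the preprocessor rewrites the tag before the compiler sees it. This
only helps code compiled after the define is visible, which is why the bare
forward declaration in x86-emulate.h is renamed by hand instead.

```c
#include <assert.h>

/* The merged type, under its new name. */
struct cpu_policy {
    unsigned int x86_vendor;
};

/* Temporary alias, as introduced by this patch. */
#define cpuid_policy cpu_policy

/* "Old" code, still spelled against the previous type name: the
 * preprocessor turns this into struct cpu_policy. */
static inline unsigned int get_vendor(const struct cpuid_policy *p)
{
    return p->x86_vendor;
}
```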
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Retain test/x86_emulator handcoded dependency on cpuid-autogen.h
* Rebase over x86_emulate() split
---
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 2 +-
tools/tests/x86_emulator/Makefile | 2 +-
tools/tests/x86_emulator/x86-emulate.h | 2 +-
xen/arch/x86/include/asm/cpuid.h | 1 -
xen/arch/x86/x86_emulate/x86_emulate.h | 2 +-
xen/include/xen/lib/x86/cpu-policy.h | 463 ++++++++++++++++++++-
xen/include/xen/lib/x86/cpuid.h | 475 ----------------------
xen/lib/x86/cpuid.c | 2 +-
8 files changed, 467 insertions(+), 482 deletions(-)
delete mode 100644 xen/include/xen/lib/x86/cpuid.h
diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 7d0f274c6cdd..79e42e8bfd04 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -9,7 +9,7 @@
#include <getopt.h>
#include <xen-tools/common-macros.h>
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
#include <xen/lib/x86/msr.h>
#include <xen/domctl.h>
diff --git a/tools/tests/x86_emulator/Makefile b/tools/tests/x86_emulator/Makefile
index f5d88fb9f681..4b1f75de052e 100644
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -292,7 +292,7 @@ HOSTCFLAGS += $(CFLAGS_xeninclude) -I. $(HOSTCFLAGS-$(XEN_COMPILE_ARCH))
x86.h := $(addprefix $(XEN_ROOT)/tools/include/xen/asm/,\
x86-vendors.h x86-defns.h msr-index.h) \
$(addprefix $(XEN_ROOT)/tools/include/xen/lib/x86/, \
- cpuid.h cpuid-autogen.h)
+ cpu-policy.h cpuid-autogen.h)
x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h x86_emulate/private.h $(x86.h)
$(OBJS): %.o: %.c $(x86_emulate.h)
diff --git a/tools/tests/x86_emulator/x86-emulate.h b/tools/tests/x86_emulator/x86-emulate.h
index 942b4cdd47d1..02922d0c5a19 100644
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -77,7 +77,7 @@
#define is_canonical_address(x) (((int64_t)(x) >> 47) == ((int64_t)(x) >> 63))
extern uint32_t mxcsr_mask;
-extern struct cpuid_policy cp;
+extern struct cpu_policy cp;
#define MMAP_SZ 16384
bool emul_test_init(void);
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 49b3128f06f9..d418e8100dde 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -9,7 +9,6 @@
#include <xen/percpu.h>
#include <xen/lib/x86/cpu-policy.h>
-#include <xen/lib/x86/cpuid.h>
#include <public/sysctl.h>
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index bb7af967ffee..75015104fbdb 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -23,7 +23,7 @@
#ifndef __X86_EMULATE_H__
#define __X86_EMULATE_H__
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
#define MAX_INST_LEN 15
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 3a5300d1078c..666505964d00 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -2,9 +2,342 @@
#ifndef XEN_LIB_X86_POLICIES_H
#define XEN_LIB_X86_POLICIES_H
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpuid-autogen.h>
#include <xen/lib/x86/msr.h>
+#define FEATURESET_1d 0 /* 0x00000001.edx */
+#define FEATURESET_1c 1 /* 0x00000001.ecx */
+#define FEATURESET_e1d 2 /* 0x80000001.edx */
+#define FEATURESET_e1c 3 /* 0x80000001.ecx */
+#define FEATURESET_Da1 4 /* 0x0000000d:1.eax */
+#define FEATURESET_7b0 5 /* 0x00000007:0.ebx */
+#define FEATURESET_7c0 6 /* 0x00000007:0.ecx */
+#define FEATURESET_e7d 7 /* 0x80000007.edx */
+#define FEATURESET_e8b 8 /* 0x80000008.ebx */
+#define FEATURESET_7d0 9 /* 0x00000007:0.edx */
+#define FEATURESET_7a1 10 /* 0x00000007:1.eax */
+#define FEATURESET_e21a 11 /* 0x80000021.eax */
+#define FEATURESET_7b1 12 /* 0x00000007:1.ebx */
+#define FEATURESET_7d2 13 /* 0x00000007:2.edx */
+#define FEATURESET_7c1 14 /* 0x00000007:1.ecx */
+#define FEATURESET_7d1 15 /* 0x00000007:1.edx */
+
+struct cpuid_leaf
+{
+ uint32_t a, b, c, d;
+};
+
+/*
+ * Versions of GCC before 5 unconditionally reserve %rBX as the PIC hard
+ * register, and are unable to cope with spilling it. This results in a
+ * rather cryptic error:
+ * error: inconsistent operand constraints in an ‘asm’
+ *
+ * In affected situations, work around the issue by using a separate register
+ * to hold the %rBX output, and xchg twice to leave %rBX preserved around
+ * the asm() statement.
+ */
+#if defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && defined(__i386__)
+# define XCHG_BX "xchg %%ebx, %[bx];"
+# define BX_CON [bx] "=&r"
+#elif defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && \
+ defined(__x86_64__) && (defined(__code_model_medium__) || \
+ defined(__code_model_large__))
+# define XCHG_BX "xchg %%rbx, %q[bx];"
+# define BX_CON [bx] "=&r"
+#else
+# define XCHG_BX ""
+# define BX_CON "=&b"
+#endif
+
+static inline void cpuid_leaf(uint32_t leaf, struct cpuid_leaf *l)
+{
+ asm ( XCHG_BX
+ "cpuid;"
+ XCHG_BX
+ : "=a" (l->a), BX_CON (l->b), "=&c" (l->c), "=&d" (l->d)
+ : "a" (leaf) );
+}
+
+static inline void cpuid_count_leaf(
+ uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *l)
+{
+ asm ( XCHG_BX
+ "cpuid;"
+ XCHG_BX
+ : "=a" (l->a), BX_CON (l->b), "=c" (l->c), "=&d" (l->d)
+ : "a" (leaf), "c" (subleaf) );
+}
+
+#undef BX_CON
+#undef XCHG_BX
+
+/**
+ * Given the vendor id from CPUID leaf 0, look up Xen's internal integer
+ * vendor ID. Returns X86_VENDOR_UNKNOWN for any unknown vendor.
+ */
+unsigned int x86_cpuid_lookup_vendor(uint32_t ebx, uint32_t ecx, uint32_t edx);
+
+/**
+ * Given Xen's internal vendor ID, return a string suitable for printing.
+ * Returns "Unknown" for any unrecognised ID.
+ */
+const char *x86_cpuid_vendor_to_str(unsigned int vendor);
+
+#define CPUID_GUEST_NR_BASIC (0xdu + 1)
+#define CPUID_GUEST_NR_CACHE (5u + 1)
+#define CPUID_GUEST_NR_FEAT (2u + 1)
+#define CPUID_GUEST_NR_TOPO (1u + 1)
+#define CPUID_GUEST_NR_XSTATE (62u + 1)
+#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
+#define CPUID_GUEST_NR_EXTD_AMD (0x21u + 1)
+#define CPUID_GUEST_NR_EXTD MAX(CPUID_GUEST_NR_EXTD_INTEL, \
+ CPUID_GUEST_NR_EXTD_AMD)
+
+/*
+ * Maximum number of leaves a struct cpu_policy turns into when serialised for
+ * interaction with the toolstack. (Sum of all leaves in each union, less the
+ * entries in basic which sub-unions hang off of.)
+ */
+#define CPUID_MAX_SERIALISED_LEAVES \
+ (CPUID_GUEST_NR_BASIC + \
+ CPUID_GUEST_NR_FEAT - !!CPUID_GUEST_NR_FEAT + \
+ CPUID_GUEST_NR_CACHE - !!CPUID_GUEST_NR_CACHE + \
+ CPUID_GUEST_NR_TOPO - !!CPUID_GUEST_NR_TOPO + \
+ CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE + \
+ CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
+
+struct cpu_policy
+{
+#define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
+#define _DECL_BITFIELD(x) __DECL_BITFIELD(x)
+#define __DECL_BITFIELD(x) CPUID_BITFIELD_ ## x
+
+ /* Basic leaves: 0x000000xx */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
+ struct {
+ /* Leaf 0x0 - Max and vendor. */
+ uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
+
+ /* Leaf 0x1 - Family/model/stepping and features. */
+ uint32_t raw_fms;
+ uint8_t :8, /* Brand ID. */
+ clflush_size, /* Number of 8-byte blocks per cache line. */
+ lppp, /* Logical processors per package. */
+ apic_id; /* Initial APIC ID. */
+ union {
+ uint32_t _1c;
+ struct { DECL_BITFIELD(1c); };
+ };
+ union {
+ uint32_t _1d;
+ struct { DECL_BITFIELD(1d); };
+ };
+
+ /* Leaf 0x2 - TLB/Cache/Prefetch. */
+ uint8_t l2_nr_queries; /* Documented as fixed to 1. */
+ uint8_t l2_desc[15];
+
+ uint64_t :64, :64; /* Leaf 0x3 - PSN. */
+ uint64_t :64, :64; /* Leaf 0x4 - Structured Cache. */
+ uint64_t :64, :64; /* Leaf 0x5 - MONITOR. */
+ uint64_t :64, :64; /* Leaf 0x6 - Therm/Perf. */
+ uint64_t :64, :64; /* Leaf 0x7 - Structured Features. */
+ uint64_t :64, :64; /* Leaf 0x8 - rsvd */
+ uint64_t :64, :64; /* Leaf 0x9 - DCA */
+
+ /* Leaf 0xa - Intel PMU. */
+ uint8_t pmu_version, _pmu[15];
+
+ uint64_t :64, :64; /* Leaf 0xb - Topology. */
+ uint64_t :64, :64; /* Leaf 0xc - rsvd */
+ uint64_t :64, :64; /* Leaf 0xd - XSTATE. */
+ };
+ } basic;
+
+ /* Structured cache leaf: 0x00000004[xx] */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_CACHE];
+ struct cpuid_cache_leaf {
+ uint32_t /* a */ type:5, level:3;
+ bool self_init:1, fully_assoc:1;
+ uint32_t :4, threads_per_cache:12, cores_per_package:6;
+ uint32_t /* b */ line_size:12, partitions:10, ways:10;
+ uint32_t /* c */ sets;
+ bool /* d */ wbinvd:1, inclusive:1, complex:1;
+ } subleaf[CPUID_GUEST_NR_CACHE];
+ } cache;
+
+ /* Structured feature leaf: 0x00000007[xx] */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_FEAT];
+ struct {
+ /* Subleaf 0. */
+ uint32_t max_subleaf;
+ union {
+ uint32_t _7b0;
+ struct { DECL_BITFIELD(7b0); };
+ };
+ union {
+ uint32_t _7c0;
+ struct { DECL_BITFIELD(7c0); };
+ };
+ union {
+ uint32_t _7d0;
+ struct { DECL_BITFIELD(7d0); };
+ };
+
+ /* Subleaf 1. */
+ union {
+ uint32_t _7a1;
+ struct { DECL_BITFIELD(7a1); };
+ };
+ union {
+ uint32_t _7b1;
+ struct { DECL_BITFIELD(7b1); };
+ };
+ union {
+ uint32_t _7c1;
+ struct { DECL_BITFIELD(7c1); };
+ };
+ union {
+ uint32_t _7d1;
+ struct { DECL_BITFIELD(7d1); };
+ };
+
+ /* Subleaf 2. */
+ uint32_t /* a */:32, /* b */:32, /* c */:32;
+ union {
+ uint32_t _7d2;
+ struct { DECL_BITFIELD(7d2); };
+ };
+ };
+ } feat;
+
+ /* Extended topology enumeration: 0x0000000B[xx] */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_TOPO];
+ struct cpuid_topo_leaf {
+ uint32_t id_shift:5, :27;
+ uint16_t nr_logical, :16;
+ uint8_t level, type, :8, :8;
+ uint32_t x2apic_id;
+ } subleaf[CPUID_GUEST_NR_TOPO];
+ } topo;
+
+ /* Xstate feature leaf: 0x0000000D[xx] */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_XSTATE];
+
+ struct {
+ /* Subleaf 0. */
+ uint32_t xcr0_low, /* b */:32, max_size, xcr0_high;
+
+ /* Subleaf 1. */
+ union {
+ uint32_t Da1;
+ struct { DECL_BITFIELD(Da1); };
+ };
+ uint32_t /* b */:32, xss_low, xss_high;
+ };
+
+ /* Per-component common state. Valid for i >= 2. */
+ struct {
+ uint32_t size, offset;
+ bool xss:1, align:1;
+ uint32_t _res_d;
+ } comp[CPUID_GUEST_NR_XSTATE];
+ } xstate;
+
+ /* Extended leaves: 0x800000xx */
+ union {
+ struct cpuid_leaf raw[CPUID_GUEST_NR_EXTD];
+ struct {
+ /* Leaf 0x80000000 - Max and vendor. */
+ uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
+
+ /* Leaf 0x80000001 - Family/model/stepping and features. */
+ uint32_t raw_fms, /* b */:32;
+ union {
+ uint32_t e1c;
+ struct { DECL_BITFIELD(e1c); };
+ };
+ union {
+ uint32_t e1d;
+ struct { DECL_BITFIELD(e1d); };
+ };
+
+ uint64_t :64, :64; /* Brand string. */
+ uint64_t :64, :64; /* Brand string. */
+ uint64_t :64, :64; /* Brand string. */
+ uint64_t :64, :64; /* L1 cache/TLB. */
+ uint64_t :64, :64; /* L2/3 cache/TLB. */
+
+ /* Leaf 0x80000007 - Advanced Power Management. */
+ uint32_t /* a */:32, /* b */:32, /* c */:32;
+ union {
+ uint32_t e7d;
+ struct { DECL_BITFIELD(e7d); };
+ };
+
+ /* Leaf 0x80000008 - Misc addr/feature info. */
+ uint8_t maxphysaddr, maxlinaddr, :8, :8;
+ union {
+ uint32_t e8b;
+ struct { DECL_BITFIELD(e8b); };
+ };
+ uint32_t nc:8, :4, apic_id_size:4, :16;
+ uint32_t /* d */:32;
+
+ uint64_t :64, :64; /* Leaf 0x80000009. */
+ uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
+ uint64_t :64, :64; /* Leaf 0x8000000b. */
+ uint64_t :64, :64; /* Leaf 0x8000000c. */
+ uint64_t :64, :64; /* Leaf 0x8000000d. */
+ uint64_t :64, :64; /* Leaf 0x8000000e. */
+ uint64_t :64, :64; /* Leaf 0x8000000f. */
+ uint64_t :64, :64; /* Leaf 0x80000010. */
+ uint64_t :64, :64; /* Leaf 0x80000011. */
+ uint64_t :64, :64; /* Leaf 0x80000012. */
+ uint64_t :64, :64; /* Leaf 0x80000013. */
+ uint64_t :64, :64; /* Leaf 0x80000014. */
+ uint64_t :64, :64; /* Leaf 0x80000015. */
+ uint64_t :64, :64; /* Leaf 0x80000016. */
+ uint64_t :64, :64; /* Leaf 0x80000017. */
+ uint64_t :64, :64; /* Leaf 0x80000018. */
+ uint64_t :64, :64; /* Leaf 0x80000019 - TLB 1GB Identifiers. */
+ uint64_t :64, :64; /* Leaf 0x8000001a - Performance related info. */
+ uint64_t :64, :64; /* Leaf 0x8000001b - IBS feature information. */
+ uint64_t :64, :64; /* Leaf 0x8000001c. */
+ uint64_t :64, :64; /* Leaf 0x8000001d - Cache properties. */
+ uint64_t :64, :64; /* Leaf 0x8000001e - Extd APIC/Core/Node IDs. */
+ uint64_t :64, :64; /* Leaf 0x8000001f - AMD Secure Encryption. */
+ uint64_t :64, :64; /* Leaf 0x80000020 - Platform QoS. */
+
+ /* Leaf 0x80000021 - Extended Feature 2 */
+ union {
+ uint32_t e21a;
+ struct { DECL_BITFIELD(e21a); };
+ };
+ uint32_t /* b */:32, /* c */:32, /* d */:32;
+ };
+ } extd;
+
+#undef __DECL_BITFIELD
+#undef _DECL_BITFIELD
+#undef DECL_BITFIELD
+
+ /* Toolstack selected Hypervisor max_leaf (if non-zero). */
+ uint8_t hv_limit, hv2_limit;
+
+ /* Value calculated from raw data above. */
+ uint8_t x86_vendor;
+};
+
+/* Temporary */
+#define cpuid_policy cpu_policy
+
struct old_cpu_policy
{
struct cpuid_policy *cpuid;
@@ -19,6 +352,134 @@ struct cpu_policy_errors
#define INIT_CPU_POLICY_ERRORS { -1, -1, -1 }
+/* Fill in a featureset bitmap from a CPUID policy. */
+static inline void cpuid_policy_to_featureset(
+ const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
+{
+ fs[FEATURESET_1d] = p->basic._1d;
+ fs[FEATURESET_1c] = p->basic._1c;
+ fs[FEATURESET_e1d] = p->extd.e1d;
+ fs[FEATURESET_e1c] = p->extd.e1c;
+ fs[FEATURESET_Da1] = p->xstate.Da1;
+ fs[FEATURESET_7b0] = p->feat._7b0;
+ fs[FEATURESET_7c0] = p->feat._7c0;
+ fs[FEATURESET_e7d] = p->extd.e7d;
+ fs[FEATURESET_e8b] = p->extd.e8b;
+ fs[FEATURESET_7d0] = p->feat._7d0;
+ fs[FEATURESET_7a1] = p->feat._7a1;
+ fs[FEATURESET_e21a] = p->extd.e21a;
+ fs[FEATURESET_7b1] = p->feat._7b1;
+ fs[FEATURESET_7d2] = p->feat._7d2;
+ fs[FEATURESET_7c1] = p->feat._7c1;
+ fs[FEATURESET_7d1] = p->feat._7d1;
+}
+
+/* Fill in a CPUID policy from a featureset bitmap. */
+static inline void cpuid_featureset_to_policy(
+ const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
+{
+ p->basic._1d = fs[FEATURESET_1d];
+ p->basic._1c = fs[FEATURESET_1c];
+ p->extd.e1d = fs[FEATURESET_e1d];
+ p->extd.e1c = fs[FEATURESET_e1c];
+ p->xstate.Da1 = fs[FEATURESET_Da1];
+ p->feat._7b0 = fs[FEATURESET_7b0];
+ p->feat._7c0 = fs[FEATURESET_7c0];
+ p->extd.e7d = fs[FEATURESET_e7d];
+ p->extd.e8b = fs[FEATURESET_e8b];
+ p->feat._7d0 = fs[FEATURESET_7d0];
+ p->feat._7a1 = fs[FEATURESET_7a1];
+ p->extd.e21a = fs[FEATURESET_e21a];
+ p->feat._7b1 = fs[FEATURESET_7b1];
+ p->feat._7d2 = fs[FEATURESET_7d2];
+ p->feat._7c1 = fs[FEATURESET_7c1];
+ p->feat._7d1 = fs[FEATURESET_7d1];
+}
+
+static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+{
+ return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
+}
+
+static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+{
+ uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
+
+ return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
+}
+
+const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
+
+/**
+ * Recalculate the content in a CPUID policy which is derived from raw data.
+ */
+void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+
+/**
+ * Fill a CPUID policy using the native CPUID instruction.
+ *
+ * No sanitisation is performed, but synthesised values are calculated.
+ * Values may be influenced by a hypervisor or by masking/faulting
+ * configuration.
+ */
+void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+
+/**
+ * Clear leaf data beyond the policy's max leaf/subleaf settings.
+ *
+ * Policy serialisation purposefully omits out-of-range leaves, because there
+ * are a large number of them due to vendor differences. However, when
+ * constructing new policies (e.g. levelling down), it is possible to end up
+ * with out-of-range leaves with stale content in them. This helper clears
+ * them.
+ */
+void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+
+#ifdef __XEN__
+#include <public/arch-x86/xen.h>
+typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
+#else
+#include <xen/arch-x86/xen.h>
+typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
+#endif
+
+/**
+ * Serialise a cpuid_policy object into an array of cpuid leaves.
+ *
+ * @param policy The cpuid_policy to serialise.
+ * @param leaves The array of leaves to serialise into.
+ * @param nr_entries The number of entries in 'leaves'.
+ * @returns -errno
+ *
+ * Writes at most CPUID_MAX_SERIALISED_LEAVES. May fail with -ENOBUFS if the
+ * leaves array is too short. On success, nr_entries is updated with the
+ * actual number of leaves written.
+ */
+int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+ cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
+
+/**
+ * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ *
+ * @param policy The cpuid_policy to unserialise into.
+ * @param leaves The array of leaves to unserialise from.
+ * @param nr_entries The number of entries in 'leaves'.
+ * @param err_leaf Optional hint for error diagnostics.
+ * @param err_subleaf Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * Reads at most CPUID_MAX_SERIALISED_LEAVES. May return -ERANGE if an
+ * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * err_* pointers will identify the out-of-range indices.
+ *
+ * No content validation of in-range leaves is performed. Synthesised data is
+ * recalculated.
+ */
+int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+ const cpuid_leaf_buffer_t leaves,
+ uint32_t nr_entries, uint32_t *err_leaf,
+ uint32_t *err_subleaf);
+
/*
* Calculate whether two policies are compatible.
*
diff --git a/xen/include/xen/lib/x86/cpuid.h b/xen/include/xen/lib/x86/cpuid.h
deleted file mode 100644
index fa98b371eef4..000000000000
--- a/xen/include/xen/lib/x86/cpuid.h
+++ /dev/null
@@ -1,475 +0,0 @@
-/* Common data structures and functions consumed by hypervisor and toolstack */
-#ifndef XEN_LIB_X86_CPUID_H
-#define XEN_LIB_X86_CPUID_H
-
-#include <xen/lib/x86/cpuid-autogen.h>
-
-#define FEATURESET_1d 0 /* 0x00000001.edx */
-#define FEATURESET_1c 1 /* 0x00000001.ecx */
-#define FEATURESET_e1d 2 /* 0x80000001.edx */
-#define FEATURESET_e1c 3 /* 0x80000001.ecx */
-#define FEATURESET_Da1 4 /* 0x0000000d:1.eax */
-#define FEATURESET_7b0 5 /* 0x00000007:0.ebx */
-#define FEATURESET_7c0 6 /* 0x00000007:0.ecx */
-#define FEATURESET_e7d 7 /* 0x80000007.edx */
-#define FEATURESET_e8b 8 /* 0x80000008.ebx */
-#define FEATURESET_7d0 9 /* 0x00000007:0.edx */
-#define FEATURESET_7a1 10 /* 0x00000007:1.eax */
-#define FEATURESET_e21a 11 /* 0x80000021.eax */
-#define FEATURESET_7b1 12 /* 0x00000007:1.ebx */
-#define FEATURESET_7d2 13 /* 0x00000007:2.edx */
-#define FEATURESET_7c1 14 /* 0x00000007:1.ecx */
-#define FEATURESET_7d1 15 /* 0x00000007:1.edx */
-
-struct cpuid_leaf
-{
- uint32_t a, b, c, d;
-};
-
-/*
- * Versions of GCC before 5 unconditionally reserve %rBX as the PIC hard
- * register, and are unable to cope with spilling it. This results in a
- * rather cryptic error:
- * error: inconsistent operand constraints in an ‘asm’
- *
- * In affected situations, work around the issue by using a separate register
- * to hold the the %rBX output, and xchg twice to leave %rBX preserved around
- * the asm() statement.
- */
-#if defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && defined(__i386__)
-# define XCHG_BX "xchg %%ebx, %[bx];"
-# define BX_CON [bx] "=&r"
-#elif defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && \
- defined(__x86_64__) && (defined(__code_model_medium__) || \
- defined(__code_model_large__))
-# define XCHG_BX "xchg %%rbx, %q[bx];"
-# define BX_CON [bx] "=&r"
-#else
-# define XCHG_BX ""
-# define BX_CON "=&b"
-#endif
-
-static inline void cpuid_leaf(uint32_t leaf, struct cpuid_leaf *l)
-{
- asm ( XCHG_BX
- "cpuid;"
- XCHG_BX
- : "=a" (l->a), BX_CON (l->b), "=&c" (l->c), "=&d" (l->d)
- : "a" (leaf) );
-}
-
-static inline void cpuid_count_leaf(
- uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *l)
-{
- asm ( XCHG_BX
- "cpuid;"
- XCHG_BX
- : "=a" (l->a), BX_CON (l->b), "=c" (l->c), "=&d" (l->d)
- : "a" (leaf), "c" (subleaf) );
-}
-
-#undef BX_CON
-#undef XCHG
-
-/**
- * Given the vendor id from CPUID leaf 0, look up Xen's internal integer
- * vendor ID. Returns X86_VENDOR_UNKNOWN for any unknown vendor.
- */
-unsigned int x86_cpuid_lookup_vendor(uint32_t ebx, uint32_t ecx, uint32_t edx);
-
-/**
- * Given Xen's internal vendor ID, return a string suitable for printing.
- * Returns "Unknown" for any unrecognised ID.
- */
-const char *x86_cpuid_vendor_to_str(unsigned int vendor);
-
-#define CPUID_GUEST_NR_BASIC (0xdu + 1)
-#define CPUID_GUEST_NR_CACHE (5u + 1)
-#define CPUID_GUEST_NR_FEAT (2u + 1)
-#define CPUID_GUEST_NR_TOPO (1u + 1)
-#define CPUID_GUEST_NR_XSTATE (62u + 1)
-#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
-#define CPUID_GUEST_NR_EXTD_AMD (0x21u + 1)
-#define CPUID_GUEST_NR_EXTD MAX(CPUID_GUEST_NR_EXTD_INTEL, \
- CPUID_GUEST_NR_EXTD_AMD)
-
-/*
- * Maximum number of leaves a struct cpuid_policy turns into when serialised
- * for interaction with the toolstack. (Sum of all leaves in each union, less
- * the entries in basic which sub-unions hang off of.)
- */
-#define CPUID_MAX_SERIALISED_LEAVES \
- (CPUID_GUEST_NR_BASIC + \
- CPUID_GUEST_NR_FEAT - !!CPUID_GUEST_NR_FEAT + \
- CPUID_GUEST_NR_CACHE - !!CPUID_GUEST_NR_CACHE + \
- CPUID_GUEST_NR_TOPO - !!CPUID_GUEST_NR_TOPO + \
- CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE + \
- CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
-
-struct cpuid_policy
-{
-#define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
-#define _DECL_BITFIELD(x) __DECL_BITFIELD(x)
-#define __DECL_BITFIELD(x) CPUID_BITFIELD_ ## x
-
- /* Basic leaves: 0x000000xx */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
- struct {
- /* Leaf 0x0 - Max and vendor. */
- uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
-
- /* Leaf 0x1 - Family/model/stepping and features. */
- uint32_t raw_fms;
- uint8_t :8, /* Brand ID. */
- clflush_size, /* Number of 8-byte blocks per cache line. */
- lppp, /* Logical processors per package. */
- apic_id; /* Initial APIC ID. */
- union {
- uint32_t _1c;
- struct { DECL_BITFIELD(1c); };
- };
- union {
- uint32_t _1d;
- struct { DECL_BITFIELD(1d); };
- };
-
- /* Leaf 0x2 - TLB/Cache/Prefetch. */
- uint8_t l2_nr_queries; /* Documented as fixed to 1. */
- uint8_t l2_desc[15];
-
- uint64_t :64, :64; /* Leaf 0x3 - PSN. */
- uint64_t :64, :64; /* Leaf 0x4 - Structured Cache. */
- uint64_t :64, :64; /* Leaf 0x5 - MONITOR. */
- uint64_t :64, :64; /* Leaf 0x6 - Therm/Perf. */
- uint64_t :64, :64; /* Leaf 0x7 - Structured Features. */
- uint64_t :64, :64; /* Leaf 0x8 - rsvd */
- uint64_t :64, :64; /* Leaf 0x9 - DCA */
-
- /* Leaf 0xa - Intel PMU. */
- uint8_t pmu_version, _pmu[15];
-
- uint64_t :64, :64; /* Leaf 0xb - Topology. */
- uint64_t :64, :64; /* Leaf 0xc - rsvd */
- uint64_t :64, :64; /* Leaf 0xd - XSTATE. */
- };
- } basic;
-
- /* Structured cache leaf: 0x00000004[xx] */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_CACHE];
- struct cpuid_cache_leaf {
- uint32_t /* a */ type:5, level:3;
- bool self_init:1, fully_assoc:1;
- uint32_t :4, threads_per_cache:12, cores_per_package:6;
- uint32_t /* b */ line_size:12, partitions:10, ways:10;
- uint32_t /* c */ sets;
- bool /* d */ wbinvd:1, inclusive:1, complex:1;
- } subleaf[CPUID_GUEST_NR_CACHE];
- } cache;
-
- /* Structured feature leaf: 0x00000007[xx] */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_FEAT];
- struct {
- /* Subleaf 0. */
- uint32_t max_subleaf;
- union {
- uint32_t _7b0;
- struct { DECL_BITFIELD(7b0); };
- };
- union {
- uint32_t _7c0;
- struct { DECL_BITFIELD(7c0); };
- };
- union {
- uint32_t _7d0;
- struct { DECL_BITFIELD(7d0); };
- };
-
- /* Subleaf 1. */
- union {
- uint32_t _7a1;
- struct { DECL_BITFIELD(7a1); };
- };
- union {
- uint32_t _7b1;
- struct { DECL_BITFIELD(7b1); };
- };
- union {
- uint32_t _7c1;
- struct { DECL_BITFIELD(7c1); };
- };
- union {
- uint32_t _7d1;
- struct { DECL_BITFIELD(7d1); };
- };
-
- /* Subleaf 2. */
- uint32_t /* a */:32, /* b */:32, /* c */:32;
- union {
- uint32_t _7d2;
- struct { DECL_BITFIELD(7d2); };
- };
- };
- } feat;
-
- /* Extended topology enumeration: 0x0000000B[xx] */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_TOPO];
- struct cpuid_topo_leaf {
- uint32_t id_shift:5, :27;
- uint16_t nr_logical, :16;
- uint8_t level, type, :8, :8;
- uint32_t x2apic_id;
- } subleaf[CPUID_GUEST_NR_TOPO];
- } topo;
-
- /* Xstate feature leaf: 0x0000000D[xx] */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_XSTATE];
-
- struct {
- /* Subleaf 0. */
- uint32_t xcr0_low, /* b */:32, max_size, xcr0_high;
-
- /* Subleaf 1. */
- union {
- uint32_t Da1;
- struct { DECL_BITFIELD(Da1); };
- };
- uint32_t /* b */:32, xss_low, xss_high;
- };
-
- /* Per-component common state. Valid for i >= 2. */
- struct {
- uint32_t size, offset;
- bool xss:1, align:1;
- uint32_t _res_d;
- } comp[CPUID_GUEST_NR_XSTATE];
- } xstate;
-
- /* Extended leaves: 0x800000xx */
- union {
- struct cpuid_leaf raw[CPUID_GUEST_NR_EXTD];
- struct {
- /* Leaf 0x80000000 - Max and vendor. */
- uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
-
- /* Leaf 0x80000001 - Family/model/stepping and features. */
- uint32_t raw_fms, /* b */:32;
- union {
- uint32_t e1c;
- struct { DECL_BITFIELD(e1c); };
- };
- union {
- uint32_t e1d;
- struct { DECL_BITFIELD(e1d); };
- };
-
- uint64_t :64, :64; /* Brand string. */
- uint64_t :64, :64; /* Brand string. */
- uint64_t :64, :64; /* Brand string. */
- uint64_t :64, :64; /* L1 cache/TLB. */
- uint64_t :64, :64; /* L2/3 cache/TLB. */
-
- /* Leaf 0x80000007 - Advanced Power Management. */
- uint32_t /* a */:32, /* b */:32, /* c */:32;
- union {
- uint32_t e7d;
- struct { DECL_BITFIELD(e7d); };
- };
-
- /* Leaf 0x80000008 - Misc addr/feature info. */
- uint8_t maxphysaddr, maxlinaddr, :8, :8;
- union {
- uint32_t e8b;
- struct { DECL_BITFIELD(e8b); };
- };
- uint32_t nc:8, :4, apic_id_size:4, :16;
- uint32_t /* d */:32;
-
- uint64_t :64, :64; /* Leaf 0x80000009. */
- uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
- uint64_t :64, :64; /* Leaf 0x8000000b. */
- uint64_t :64, :64; /* Leaf 0x8000000c. */
- uint64_t :64, :64; /* Leaf 0x8000000d. */
- uint64_t :64, :64; /* Leaf 0x8000000e. */
- uint64_t :64, :64; /* Leaf 0x8000000f. */
- uint64_t :64, :64; /* Leaf 0x80000010. */
- uint64_t :64, :64; /* Leaf 0x80000011. */
- uint64_t :64, :64; /* Leaf 0x80000012. */
- uint64_t :64, :64; /* Leaf 0x80000013. */
- uint64_t :64, :64; /* Leaf 0x80000014. */
- uint64_t :64, :64; /* Leaf 0x80000015. */
- uint64_t :64, :64; /* Leaf 0x80000016. */
- uint64_t :64, :64; /* Leaf 0x80000017. */
- uint64_t :64, :64; /* Leaf 0x80000018. */
- uint64_t :64, :64; /* Leaf 0x80000019 - TLB 1GB Identifiers. */
- uint64_t :64, :64; /* Leaf 0x8000001a - Performance related info. */
- uint64_t :64, :64; /* Leaf 0x8000001b - IBS feature information. */
- uint64_t :64, :64; /* Leaf 0x8000001c. */
- uint64_t :64, :64; /* Leaf 0x8000001d - Cache properties. */
- uint64_t :64, :64; /* Leaf 0x8000001e - Extd APIC/Core/Node IDs. */
- uint64_t :64, :64; /* Leaf 0x8000001f - AMD Secure Encryption. */
- uint64_t :64, :64; /* Leaf 0x80000020 - Platform QoS. */
-
- /* Leaf 0x80000021 - Extended Feature 2 */
- union {
- uint32_t e21a;
- struct { DECL_BITFIELD(e21a); };
- };
- uint32_t /* b */:32, /* c */:32, /* d */:32;
- };
- } extd;
-
-#undef __DECL_BITFIELD
-#undef _DECL_BITFIELD
-#undef DECL_BITFIELD
-
- /* Toolstack selected Hypervisor max_leaf (if non-zero). */
- uint8_t hv_limit, hv2_limit;
-
- /* Value calculated from raw data above. */
- uint8_t x86_vendor;
-};
-
-/* Fill in a featureset bitmap from a CPUID policy. */
-static inline void cpuid_policy_to_featureset(
- const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
-{
- fs[FEATURESET_1d] = p->basic._1d;
- fs[FEATURESET_1c] = p->basic._1c;
- fs[FEATURESET_e1d] = p->extd.e1d;
- fs[FEATURESET_e1c] = p->extd.e1c;
- fs[FEATURESET_Da1] = p->xstate.Da1;
- fs[FEATURESET_7b0] = p->feat._7b0;
- fs[FEATURESET_7c0] = p->feat._7c0;
- fs[FEATURESET_e7d] = p->extd.e7d;
- fs[FEATURESET_e8b] = p->extd.e8b;
- fs[FEATURESET_7d0] = p->feat._7d0;
- fs[FEATURESET_7a1] = p->feat._7a1;
- fs[FEATURESET_e21a] = p->extd.e21a;
- fs[FEATURESET_7b1] = p->feat._7b1;
- fs[FEATURESET_7d2] = p->feat._7d2;
- fs[FEATURESET_7c1] = p->feat._7c1;
- fs[FEATURESET_7d1] = p->feat._7d1;
-}
-
-/* Fill in a CPUID policy from a featureset bitmap. */
-static inline void cpuid_featureset_to_policy(
- const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
-{
- p->basic._1d = fs[FEATURESET_1d];
- p->basic._1c = fs[FEATURESET_1c];
- p->extd.e1d = fs[FEATURESET_e1d];
- p->extd.e1c = fs[FEATURESET_e1c];
- p->xstate.Da1 = fs[FEATURESET_Da1];
- p->feat._7b0 = fs[FEATURESET_7b0];
- p->feat._7c0 = fs[FEATURESET_7c0];
- p->extd.e7d = fs[FEATURESET_e7d];
- p->extd.e8b = fs[FEATURESET_e8b];
- p->feat._7d0 = fs[FEATURESET_7d0];
- p->feat._7a1 = fs[FEATURESET_7a1];
- p->extd.e21a = fs[FEATURESET_e21a];
- p->feat._7b1 = fs[FEATURESET_7b1];
- p->feat._7d2 = fs[FEATURESET_7d2];
- p->feat._7c1 = fs[FEATURESET_7c1];
- p->feat._7d1 = fs[FEATURESET_7d1];
-}
-
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
-{
- return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
-}
-
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
-{
- uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
-
- return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
-}
-
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
-
-/**
- * Recalculate the content in a CPUID policy which is derived from raw data.
- */
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
-
-/**
- * Fill a CPUID policy using the native CPUID instruction.
- *
- * No sanitisation is performed, but synthesised values are calculated.
- * Values may be influenced by a hypervisor or from masking/faulting
- * configuration.
- */
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
-
-/**
- * Clear leaf data beyond the policies max leaf/subleaf settings.
- *
- * Policy serialisation purposefully omits out-of-range leaves, because there
- * are a large number of them due to vendor differences. However, when
- * constructing new policies (e.g. levelling down), it is possible to end up
- * with out-of-range leaves with stale content in them. This helper clears
- * them.
- */
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
-
-#ifdef __XEN__
-#include <public/arch-x86/xen.h>
-typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
-#else
-#include <xen/arch-x86/xen.h>
-typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
-#endif
-
-/**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
- *
- * @param policy The cpuid_policy to serialise.
- * @param leaves The array of leaves to serialise into.
- * @param nr_entries The number of entries in 'leaves'.
- * @returns -errno
- *
- * Writes at most CPUID_MAX_SERIALISED_LEAVES. May fail with -ENOBUFS if the
- * leaves array is too short. On success, nr_entries is updated with the
- * actual number of leaves written.
- */
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
- cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
-
-/**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
- *
- * @param policy The cpuid_policy to unserialise into.
- * @param leaves The array of leaves to unserialise from.
- * @param nr_entries The number of entries in 'leaves'.
- * @param err_leaf Optional hint for error diagnostics.
- * @param err_subleaf Optional hint for error diagnostics.
- * @returns -errno
- *
- * Reads at most CPUID_MAX_SERIALISED_LEAVES. May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
- * err_* pointers will identify the out-of-range indicies.
- *
- * No content validation of in-range leaves is performed. Synthesised data is
- * recalculated.
- */
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
- const cpuid_leaf_buffer_t leaves,
- uint32_t nr_entries, uint32_t *err_leaf,
- uint32_t *err_subleaf);
-
-#endif /* !XEN_LIB_X86_CPUID_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 8eb88314f53c..e81f76c779c0 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -1,6 +1,6 @@
#include "private.h"
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
static void zero_leaves(struct cpuid_leaf *l,
unsigned int first, unsigned int last)
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v2 04/15] x86: Merge struct msr_policy into struct cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (2 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 03/15] x86: Rename struct cpuid_policy to struct cpu_policy Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 05/15] x86: Merge the system {cpuid,msr} policy objects Andrew Cooper
` (10 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Jan Beulich, Roger Pau Monné, Wei Liu
As with the cpuid side, use a temporary define so that struct msr_policy
continues to work.
Note, this means that domains now have two separate struct cpu_policy
allocations with disjoint information, and system policies are in a similar
position, as well as xc_cpu_policy objects in libxenguest. All of these
duplications will be addressed in the following patches.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Alter msr_policy -> cpu_policy in comments.
---
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 1 -
xen/arch/x86/include/asm/msr.h | 3 +-
xen/include/xen/lib/x86/cpu-policy.h | 81 ++++++++++++++++-
xen/include/xen/lib/x86/msr.h | 104 ----------------------
xen/lib/x86/msr.c | 2 +-
5 files changed, 83 insertions(+), 108 deletions(-)
delete mode 100644 xen/include/xen/lib/x86/msr.h
diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 79e42e8bfd04..0ce3d8e16626 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -10,7 +10,6 @@
#include <xen-tools/common-macros.h>
#include <xen/lib/x86/cpu-policy.h>
-#include <xen/lib/x86/msr.h>
#include <xen/domctl.h>
static bool debug;
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 7946b6b24c11..02eddd919c27 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -6,8 +6,9 @@
#include <xen/types.h>
#include <xen/percpu.h>
#include <xen/errno.h>
+#include <xen/kernel.h>
-#include <xen/lib/x86/msr.h>
+#include <xen/lib/x86/cpu-policy.h>
#include <asm/asm_defns.h>
#include <asm/cpufeature.h>
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 666505964d00..53fffca55211 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -3,7 +3,6 @@
#define XEN_LIB_X86_POLICIES_H
#include <xen/lib/x86/cpuid-autogen.h>
-#include <xen/lib/x86/msr.h>
#define FEATURESET_1d 0 /* 0x00000001.edx */
#define FEATURESET_1c 1 /* 0x00000001.ecx */
@@ -107,6 +106,9 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor);
CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE + \
CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
+/* Maximum number of MSRs written when serialising a cpu_policy. */
+#define MSR_MAX_SERIALISED_ENTRIES 2
+
struct cpu_policy
{
#define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
@@ -324,6 +326,44 @@ struct cpu_policy
};
} extd;
+ /*
+ * 0x000000ce - MSR_INTEL_PLATFORM_INFO
+ *
+ * This MSR is non-architectural, but for simplicity we allow it to be read
+ * unconditionally. CPUID Faulting support can be fully emulated for HVM
+ * guests so can be offered unconditionally, while support for PV guests
+ * is dependent on real hardware support.
+ */
+ union {
+ uint32_t raw;
+ struct {
+ uint32_t :31;
+ bool cpuid_faulting:1;
+ };
+ } platform_info;
+
+ /*
+ * 0x0000010a - MSR_ARCH_CAPABILITIES
+ *
+ * This is an Intel-only MSR, which provides miscellaneous enumeration,
+ * including those which indicate that microarchitectural side channels
+ * are fixed in hardware.
+ */
+ union {
+ uint32_t raw;
+ struct {
+ bool rdcl_no:1;
+ bool ibrs_all:1;
+ bool rsba:1;
+ bool skip_l1dfl:1;
+ bool ssb_no:1;
+ bool mds_no:1;
+ bool if_pschange_mc_no:1;
+ bool tsx_ctrl:1;
+ bool taa_no:1;
+ };
+ } arch_caps;
+
#undef __DECL_BITFIELD
#undef _DECL_BITFIELD
#undef DECL_BITFIELD
@@ -337,6 +377,7 @@ struct cpu_policy
/* Temporary */
#define cpuid_policy cpu_policy
+#define msr_policy cpu_policy
struct old_cpu_policy
{
@@ -438,9 +479,11 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
#ifdef __XEN__
#include <public/arch-x86/xen.h>
typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
+typedef XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_entry_buffer_t;
#else
#include <xen/arch-x86/xen.h>
typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
+typedef xen_msr_entry_t msr_entry_buffer_t[];
#endif
/**
@@ -480,6 +523,42 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
uint32_t nr_entries, uint32_t *err_leaf,
uint32_t *err_subleaf);
+/**
+ * Serialise an msr_policy object into an array.
+ *
+ * @param policy The msr_policy to serialise.
+ * @param msrs The array of msrs to serialise into.
+ * @param nr_entries The number of entries in 'msrs'.
+ * @returns -errno
+ *
+ * Writes at most MSR_MAX_SERIALISED_ENTRIES. May fail with -ENOBUFS if the
+ * buffer array is too short. On success, nr_entries is updated with the
+ * actual number of msrs written.
+ */
+int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+ msr_entry_buffer_t msrs, uint32_t *nr_entries);
+
+/**
+ * Unserialise an msr_policy object from an array of msrs.
+ *
+ * @param policy The msr_policy object to unserialise into.
+ * @param msrs The array of msrs to unserialise from.
+ * @param nr_entries The number of entries in 'msrs'.
+ * @param err_msr Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * Reads at most MSR_MAX_SERIALISED_ENTRIES. May fail for a number of reasons
+ * based on the content in an individual 'msrs' entry, including the MSR index
+ * not being valid in the policy, the flags field being nonzero, or if the
+ * value provided would truncate when stored in the policy. In such cases,
+ * the optional err_* pointer will identify the problematic MSR.
+ *
+ * No content validation is performed on the data stored in the policy object.
+ */
+int x86_msr_copy_from_buffer(struct msr_policy *policy,
+ const msr_entry_buffer_t msrs, uint32_t nr_entries,
+ uint32_t *err_msr);
+
/*
* Calculate whether two policies are compatible.
*
diff --git a/xen/include/xen/lib/x86/msr.h b/xen/include/xen/lib/x86/msr.h
deleted file mode 100644
index 48ba4a59c036..000000000000
--- a/xen/include/xen/lib/x86/msr.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/* Common data structures and functions consumed by hypervisor and toolstack */
-#ifndef XEN_LIB_X86_MSR_H
-#define XEN_LIB_X86_MSR_H
-
-/* Maximum number of MSRs written when serialising msr_policy. */
-#define MSR_MAX_SERIALISED_ENTRIES 2
-
-/* MSR policy object for shared per-domain MSRs */
-struct msr_policy
-{
- /*
- * 0x000000ce - MSR_INTEL_PLATFORM_INFO
- *
- * This MSR is non-architectural, but for simplicy we allow it to be read
- * unconditionally. CPUID Faulting support can be fully emulated for HVM
- * guests so can be offered unconditionally, while support for PV guests
- * is dependent on real hardware support.
- */
- union {
- uint32_t raw;
- struct {
- uint32_t :31;
- bool cpuid_faulting:1;
- };
- } platform_info;
-
- /*
- * 0x0000010a - MSR_ARCH_CAPABILITIES
- *
- * This is an Intel-only MSR, which provides miscellaneous enumeration,
- * including those which indicate that microarchitectrual sidechannels are
- * fixed in hardware.
- */
- union {
- uint32_t raw;
- struct {
- bool rdcl_no:1;
- bool ibrs_all:1;
- bool rsba:1;
- bool skip_l1dfl:1;
- bool ssb_no:1;
- bool mds_no:1;
- bool if_pschange_mc_no:1;
- bool tsx_ctrl:1;
- bool taa_no:1;
- };
- } arch_caps;
-};
-
-#ifdef __XEN__
-#include <public/arch-x86/xen.h>
-typedef XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_entry_buffer_t;
-#else
-#include <xen/arch-x86/xen.h>
-typedef xen_msr_entry_t msr_entry_buffer_t[];
-#endif
-
-/**
- * Serialise an msr_policy object into an array.
- *
- * @param policy The msr_policy to serialise.
- * @param msrs The array of msrs to serialise into.
- * @param nr_entries The number of entries in 'msrs'.
- * @returns -errno
- *
- * Writes at most MSR_MAX_SERIALISED_ENTRIES. May fail with -ENOBUFS if the
- * buffer array is too short. On success, nr_entries is updated with the
- * actual number of msrs written.
- */
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
- msr_entry_buffer_t msrs, uint32_t *nr_entries);
-
-/**
- * Unserialise an msr_policy object from an array of msrs.
- *
- * @param policy The msr_policy object to unserialise into.
- * @param msrs The array of msrs to unserialise from.
- * @param nr_entries The number of entries in 'msrs'.
- * @param err_msr Optional hint for error diagnostics.
- * @returns -errno
- *
- * Reads at most MSR_MAX_SERIALISED_ENTRIES. May fail for a number of reasons
- * based on the content in an individual 'msrs' entry, including the MSR index
- * not being valid in the policy, the flags field being nonzero, or if the
- * value provided would truncate when stored in the policy. In such cases,
- * the optional err_* pointer will identify the problematic MSR.
- *
- * No content validation is performed on the data stored in the policy object.
- */
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
- const msr_entry_buffer_t msrs, uint32_t nr_entries,
- uint32_t *err_msr);
-
-#endif /* !XEN_LIB_X86_MSR_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index 7d71e92a380a..c4d885e7b568 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -1,6 +1,6 @@
#include "private.h"
-#include <xen/lib/x86/msr.h>
+#include <xen/lib/x86/cpu-policy.h>
/*
* Copy a single MSR into the provided msr_entry_buffer_t buffer, performing a
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v2 05/15] x86: Merge the system {cpuid,msr} policy objects
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (3 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 04/15] x86: Merge struct msr_policy into " Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 06/15] x86: Merge a domain's " Andrew Cooper
` (9 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Jan Beulich, Roger Pau Monné, Wei Liu
Right now, they're the same underlying type, containing disjoint information.
Introduce a new cpu-policy.{h,c} to be the new location for all policy
handling logic. Place the combined objects in __ro_after_init, which is new
since the original logic was written.
As we're trying to phase out the use of struct old_cpu_policy entirely, rework
update_domain_cpu_policy() to not pointer-chase through system_policies[].
This in turn allows system_policies[] in sysctl.c to become static and to be
reduced in scope to XEN_SYSCTL_get_cpu_policy.
No practical change. This undoes the transient doubling of storage space from
earlier patches.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Reword commit message
* Reintroduce dropped const from system_policies[]
---
xen/arch/x86/Makefile | 1 +
xen/arch/x86/cpu-policy.c | 18 +++++++
xen/arch/x86/cpu/common.c | 4 +-
xen/arch/x86/cpuid.c | 66 +++++++++++--------------
xen/arch/x86/domctl.c | 17 +++++--
xen/arch/x86/include/asm/cpu-policy.h | 14 ++++++
xen/arch/x86/include/asm/cpuid.h | 6 ---
xen/arch/x86/include/asm/msr.h | 7 ---
xen/arch/x86/msr.c | 38 ++++++--------
xen/arch/x86/sysctl.c | 71 ++++++++++-----------------
10 files changed, 116 insertions(+), 126 deletions(-)
create mode 100644 xen/arch/x86/cpu-policy.c
create mode 100644 xen/arch/x86/include/asm/cpu-policy.h
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 08b110592dcc..fc9487aa4023 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -19,6 +19,7 @@ obj-y += bitops.o
obj-bin-y += bzimage.init.o
obj-bin-y += clear_page.o
obj-bin-y += copy_page.o
+obj-y += cpu-policy.o
obj-y += cpuid.o
obj-$(CONFIG_PV) += compat.o
obj-$(CONFIG_PV32) += x86_64/compat.o
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
new file mode 100644
index 000000000000..663e9a084c53
--- /dev/null
+++ b/xen/arch/x86/cpu-policy.c
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include <xen/cache.h>
+#include <xen/kernel.h>
+
+#include <xen/lib/x86/cpu-policy.h>
+
+#include <asm/cpu-policy.h>
+
+struct cpu_policy __ro_after_init raw_cpu_policy;
+struct cpu_policy __ro_after_init host_cpu_policy;
+#ifdef CONFIG_PV
+struct cpu_policy __ro_after_init pv_max_cpu_policy;
+struct cpu_policy __ro_after_init pv_def_cpu_policy;
+#endif
+#ifdef CONFIG_HVM
+struct cpu_policy __ro_after_init hvm_max_cpu_policy;
+struct cpu_policy __ro_after_init hvm_def_cpu_policy;
+#endif
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 5ad347534a22..f11dcda57a69 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -3,6 +3,8 @@
#include <xen/delay.h>
#include <xen/param.h>
#include <xen/smp.h>
+
+#include <asm/cpu-policy.h>
#include <asm/current.h>
#include <asm/debugreg.h>
#include <asm/processor.h>
@@ -141,7 +143,7 @@ bool __init probe_cpuid_faulting(void)
return false;
if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
- raw_msr_policy.platform_info.cpuid_faulting =
+ raw_cpu_policy.platform_info.cpuid_faulting =
val & MSR_PLATFORM_INFO_CPUID_FAULTING;
if (rc ||
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index b22725c492e7..0916bfe175c8 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -4,6 +4,7 @@
#include <xen/sched.h>
#include <xen/nospec.h>
#include <asm/amd.h>
+#include <asm/cpu-policy.h>
#include <asm/cpuid.h>
#include <asm/hvm/hvm.h>
#include <asm/hvm/nestedhvm.h>
@@ -142,17 +143,6 @@ static void zero_leaves(struct cpuid_leaf *l,
memset(&l[first], 0, sizeof(*l) * (last - first + 1));
}
-struct cpuid_policy __read_mostly raw_cpuid_policy,
- __read_mostly host_cpuid_policy;
-#ifdef CONFIG_PV
-struct cpuid_policy __read_mostly pv_max_cpuid_policy;
-struct cpuid_policy __read_mostly pv_def_cpuid_policy;
-#endif
-#ifdef CONFIG_HVM
-struct cpuid_policy __read_mostly hvm_max_cpuid_policy;
-struct cpuid_policy __read_mostly hvm_def_cpuid_policy;
-#endif
-
static void sanitise_featureset(uint32_t *fs)
{
/* for_each_set_bit() uses unsigned longs. Extend with zeroes. */
@@ -344,7 +334,7 @@ static void recalculate_misc(struct cpuid_policy *p)
static void __init calculate_raw_policy(void)
{
- struct cpuid_policy *p = &raw_cpuid_policy;
+ struct cpuid_policy *p = &raw_cpu_policy;
x86_cpuid_policy_fill_native(p);
@@ -354,10 +344,10 @@ static void __init calculate_raw_policy(void)
static void __init calculate_host_policy(void)
{
- struct cpuid_policy *p = &host_cpuid_policy;
+ struct cpuid_policy *p = &host_cpu_policy;
unsigned int max_extd_leaf;
- *p = raw_cpuid_policy;
+ *p = raw_cpu_policy;
p->basic.max_leaf =
min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
@@ -449,17 +439,17 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
* of IBRS by using the AMD feature bit. An administrator may wish for
* performance reasons to offer IBPB without IBRS.
*/
- if ( host_cpuid_policy.feat.ibrsb )
+ if ( host_cpu_policy.feat.ibrsb )
__set_bit(X86_FEATURE_IBPB, fs);
}
static void __init calculate_pv_max_policy(void)
{
- struct cpuid_policy *p = &pv_max_cpuid_policy;
+ struct cpuid_policy *p = &pv_max_cpu_policy;
uint32_t pv_featureset[FSCAPINTS];
unsigned int i;
- *p = host_cpuid_policy;
+ *p = host_cpu_policy;
cpuid_policy_to_featureset(p, pv_featureset);
for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
@@ -486,11 +476,11 @@ static void __init calculate_pv_max_policy(void)
static void __init calculate_pv_def_policy(void)
{
- struct cpuid_policy *p = &pv_def_cpuid_policy;
+ struct cpuid_policy *p = &pv_def_cpu_policy;
uint32_t pv_featureset[FSCAPINTS];
unsigned int i;
- *p = pv_max_cpuid_policy;
+ *p = pv_max_cpu_policy;
cpuid_policy_to_featureset(p, pv_featureset);
for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
@@ -506,12 +496,12 @@ static void __init calculate_pv_def_policy(void)
static void __init calculate_hvm_max_policy(void)
{
- struct cpuid_policy *p = &hvm_max_cpuid_policy;
+ struct cpuid_policy *p = &hvm_max_cpu_policy;
uint32_t hvm_featureset[FSCAPINTS];
unsigned int i;
const uint32_t *hvm_featuremask;
- *p = host_cpuid_policy;
+ *p = host_cpu_policy;
cpuid_policy_to_featureset(p, hvm_featureset);
hvm_featuremask = hvm_hap_supported() ?
@@ -539,7 +529,7 @@ static void __init calculate_hvm_max_policy(void)
* HVM guests are able if running in protected mode.
*/
if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
- raw_cpuid_policy.basic.sep )
+ raw_cpu_policy.basic.sep )
__set_bit(X86_FEATURE_SEP, hvm_featureset);
/*
@@ -597,12 +587,12 @@ static void __init calculate_hvm_max_policy(void)
static void __init calculate_hvm_def_policy(void)
{
- struct cpuid_policy *p = &hvm_def_cpuid_policy;
+ struct cpuid_policy *p = &hvm_def_cpu_policy;
uint32_t hvm_featureset[FSCAPINTS];
unsigned int i;
const uint32_t *hvm_featuremask;
- *p = hvm_max_cpuid_policy;
+ *p = hvm_max_cpu_policy;
cpuid_policy_to_featureset(p, hvm_featureset);
hvm_featuremask = hvm_hap_supported() ?
@@ -670,8 +660,8 @@ void recalculate_cpuid_policy(struct domain *d)
{
struct cpuid_policy *p = d->arch.cpuid;
const struct cpuid_policy *max = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpuid_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpuid_policy : NULL);
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
unsigned int i;
@@ -746,7 +736,7 @@ void recalculate_cpuid_policy(struct domain *d)
/* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
- fs[FEATURESET_7b0] |= (host_cpuid_policy.feat._7b0 &
+ fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
@@ -797,8 +787,8 @@ void recalculate_cpuid_policy(struct domain *d)
int init_domain_cpuid_policy(struct domain *d)
{
struct cpuid_policy *p = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpuid_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpuid_policy : NULL);
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
if ( !p )
{
@@ -1102,7 +1092,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
if ( is_pv_domain(d) && is_hardware_domain(d) &&
guest_kernel_mode(v, regs) && cpu_has_monitor &&
regs->entry_vector == TRAP_gp_fault )
- *res = raw_cpuid_policy.basic.raw[5];
+ *res = raw_cpu_policy.basic.raw[5];
break;
case 0x7:
@@ -1234,14 +1224,14 @@ static void __init __maybe_unused build_assertions(void)
/* Find some more clever allocation scheme if this trips. */
BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
- BUILD_BUG_ON(sizeof(raw_cpuid_policy.basic) !=
- sizeof(raw_cpuid_policy.basic.raw));
- BUILD_BUG_ON(sizeof(raw_cpuid_policy.feat) !=
- sizeof(raw_cpuid_policy.feat.raw));
- BUILD_BUG_ON(sizeof(raw_cpuid_policy.xstate) !=
- sizeof(raw_cpuid_policy.xstate.raw));
- BUILD_BUG_ON(sizeof(raw_cpuid_policy.extd) !=
- sizeof(raw_cpuid_policy.extd.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
+ sizeof(raw_cpu_policy.basic.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
+ sizeof(raw_cpu_policy.feat.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
+ sizeof(raw_cpu_policy.xstate.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
+ sizeof(raw_cpu_policy.extd.raw));
}
/*
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 944af63e68d0..5800bb10bc4a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,18 +35,25 @@
#include <asm/mem_sharing.h>
#include <asm/xstate.h>
#include <asm/psr.h>
-#include <asm/cpuid.h>
+#include <asm/cpu-policy.h>
static int update_domain_cpu_policy(struct domain *d,
xen_domctl_cpu_policy_t *xdpc)
{
struct old_cpu_policy new = {};
- const struct old_cpu_policy *sys = is_pv_domain(d)
- ? &system_policies[XEN_SYSCTL_cpu_policy_pv_max]
- : &system_policies[XEN_SYSCTL_cpu_policy_hvm_max];
+ struct cpu_policy *sys = is_pv_domain(d)
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
+ struct old_cpu_policy old_sys = { sys, sys };
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
int ret = -ENOMEM;
+ if ( !sys )
+ {
+ ASSERT_UNREACHABLE();
+ return -EOPNOTSUPP;
+ }
+
/* Start by copying the domain's existing policies. */
if ( !(new.cpuid = xmemdup(d->arch.cpuid)) ||
!(new.msr = xmemdup(d->arch.msr)) )
@@ -64,7 +71,7 @@ static int update_domain_cpu_policy(struct domain *d,
x86_cpuid_policy_clear_out_of_range_leaves(new.cpuid);
/* Audit the combined dataset. */
- ret = x86_cpu_policies_are_compatible(sys, &new, &err);
+ ret = x86_cpu_policies_are_compatible(&old_sys, &new, &err);
if ( ret )
goto out;
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
new file mode 100644
index 000000000000..eef14bb4267e
--- /dev/null
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef X86_CPU_POLICY_H
+#define X86_CPU_POLICY_H
+
+struct cpu_policy;
+
+extern struct cpu_policy raw_cpu_policy;
+extern struct cpu_policy host_cpu_policy;
+extern struct cpu_policy pv_max_cpu_policy;
+extern struct cpu_policy pv_def_cpu_policy;
+extern struct cpu_policy hvm_max_cpu_policy;
+extern struct cpu_policy hvm_def_cpu_policy;
+
+#endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index d418e8100dde..ea0586277331 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -46,12 +46,6 @@ DECLARE_PER_CPU(struct cpuidmasks, cpuidmasks);
/* Default masking MSR values, calculated at boot. */
extern struct cpuidmasks cpuidmask_defaults;
-extern struct cpuid_policy raw_cpuid_policy, host_cpuid_policy,
- pv_max_cpuid_policy, pv_def_cpuid_policy,
- hvm_max_cpuid_policy, hvm_def_cpuid_policy;
-
-extern const struct old_cpu_policy system_policies[];
-
/* Check that all previously present features are still available. */
bool recheck_cpu_features(unsigned int cpu);
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 02eddd919c27..022230acc0af 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -292,13 +292,6 @@ static inline void wrmsr_tsc_aux(uint32_t val)
uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp);
-extern struct msr_policy raw_msr_policy,
- host_msr_policy,
- pv_max_msr_policy,
- pv_def_msr_policy,
- hvm_max_msr_policy,
- hvm_def_msr_policy;
-
/* Container object for per-vCPU MSRs */
struct vcpu_msrs
{
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 7ddf0078c3a2..bff26bc4e2b5 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -25,6 +25,7 @@
#include <xen/sched.h>
#include <asm/amd.h>
+#include <asm/cpu-policy.h>
#include <asm/debugreg.h>
#include <asm/hvm/nestedhvm.h>
#include <asm/hvm/viridian.h>
@@ -37,20 +38,9 @@
DEFINE_PER_CPU(uint32_t, tsc_aux);
-struct msr_policy __read_mostly raw_msr_policy,
- __read_mostly host_msr_policy;
-#ifdef CONFIG_PV
-struct msr_policy __read_mostly pv_max_msr_policy;
-struct msr_policy __read_mostly pv_def_msr_policy;
-#endif
-#ifdef CONFIG_HVM
-struct msr_policy __read_mostly hvm_max_msr_policy;
-struct msr_policy __read_mostly hvm_def_msr_policy;
-#endif
-
static void __init calculate_raw_policy(void)
{
- struct msr_policy *mp = &raw_msr_policy;
+ struct msr_policy *mp = &raw_cpu_policy;
/* 0x000000ce MSR_INTEL_PLATFORM_INFO */
/* Was already added by probe_cpuid_faulting() */
@@ -61,9 +51,9 @@ static void __init calculate_raw_policy(void)
static void __init calculate_host_policy(void)
{
- struct msr_policy *mp = &host_msr_policy;
+ struct msr_policy *mp = &host_cpu_policy;
- *mp = raw_msr_policy;
+ *mp = raw_cpu_policy;
/* 0x000000ce MSR_INTEL_PLATFORM_INFO */
/* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
@@ -81,25 +71,25 @@ static void __init calculate_host_policy(void)
static void __init calculate_pv_max_policy(void)
{
- struct msr_policy *mp = &pv_max_msr_policy;
+ struct msr_policy *mp = &pv_max_cpu_policy;
- *mp = host_msr_policy;
+ *mp = host_cpu_policy;
mp->arch_caps.raw = 0; /* Not supported yet. */
}
static void __init calculate_pv_def_policy(void)
{
- struct msr_policy *mp = &pv_def_msr_policy;
+ struct msr_policy *mp = &pv_def_cpu_policy;
- *mp = pv_max_msr_policy;
+ *mp = pv_max_cpu_policy;
}
static void __init calculate_hvm_max_policy(void)
{
- struct msr_policy *mp = &hvm_max_msr_policy;
+ struct msr_policy *mp = &hvm_max_cpu_policy;
- *mp = host_msr_policy;
+ *mp = host_cpu_policy;
/* It's always possible to emulate CPUID faulting for HVM guests */
mp->platform_info.cpuid_faulting = true;
@@ -109,9 +99,9 @@ static void __init calculate_hvm_max_policy(void)
static void __init calculate_hvm_def_policy(void)
{
- struct msr_policy *mp = &hvm_def_msr_policy;
+ struct msr_policy *mp = &hvm_def_cpu_policy;
- *mp = hvm_max_msr_policy;
+ *mp = hvm_max_cpu_policy;
}
void __init init_guest_msr_policy(void)
@@ -135,8 +125,8 @@ void __init init_guest_msr_policy(void)
int init_domain_msr_policy(struct domain *d)
{
struct msr_policy *mp = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_def_msr_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_msr_policy : NULL);
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
if ( !mp )
{
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 3ed7c69f4315..43a241f2090f 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -30,38 +30,7 @@
#include <xen/cpu.h>
#include <xsm/xsm.h>
#include <asm/psr.h>
-#include <asm/cpuid.h>
-
-const struct old_cpu_policy system_policies[6] = {
- [ XEN_SYSCTL_cpu_policy_raw ] = {
- &raw_cpuid_policy,
- &raw_msr_policy,
- },
- [ XEN_SYSCTL_cpu_policy_host ] = {
- &host_cpuid_policy,
- &host_msr_policy,
- },
-#ifdef CONFIG_PV
- [ XEN_SYSCTL_cpu_policy_pv_max ] = {
- &pv_max_cpuid_policy,
- &pv_max_msr_policy,
- },
- [ XEN_SYSCTL_cpu_policy_pv_default ] = {
- &pv_def_cpuid_policy,
- &pv_def_msr_policy,
- },
-#endif
-#ifdef CONFIG_HVM
- [ XEN_SYSCTL_cpu_policy_hvm_max ] = {
- &hvm_max_cpuid_policy,
- &hvm_max_msr_policy,
- },
- [ XEN_SYSCTL_cpu_policy_hvm_default ] = {
- &hvm_def_cpuid_policy,
- &hvm_def_msr_policy,
- },
-#endif
-};
+#include <asm/cpu-policy.h>
struct l3_cache_info {
int ret;
@@ -326,19 +295,19 @@ long arch_do_sysctl(
case XEN_SYSCTL_get_cpu_featureset:
{
- static const struct cpuid_policy *const policy_table[6] = {
- [XEN_SYSCTL_cpu_featureset_raw] = &raw_cpuid_policy,
- [XEN_SYSCTL_cpu_featureset_host] = &host_cpuid_policy,
+ static const struct cpu_policy *const policy_table[6] = {
+ [XEN_SYSCTL_cpu_featureset_raw] = &raw_cpu_policy,
+ [XEN_SYSCTL_cpu_featureset_host] = &host_cpu_policy,
#ifdef CONFIG_PV
- [XEN_SYSCTL_cpu_featureset_pv] = &pv_def_cpuid_policy,
- [XEN_SYSCTL_cpu_featureset_pv_max] = &pv_max_cpuid_policy,
+ [XEN_SYSCTL_cpu_featureset_pv] = &pv_def_cpu_policy,
+ [XEN_SYSCTL_cpu_featureset_pv_max] = &pv_max_cpu_policy,
#endif
#ifdef CONFIG_HVM
- [XEN_SYSCTL_cpu_featureset_hvm] = &hvm_def_cpuid_policy,
- [XEN_SYSCTL_cpu_featureset_hvm_max] = &hvm_max_cpuid_policy,
+ [XEN_SYSCTL_cpu_featureset_hvm] = &hvm_def_cpu_policy,
+ [XEN_SYSCTL_cpu_featureset_hvm_max] = &hvm_max_cpu_policy,
#endif
};
- const struct cpuid_policy *p = NULL;
+ const struct cpu_policy *p = NULL;
uint32_t featureset[FSCAPINTS];
unsigned int nr;
@@ -391,7 +360,19 @@ long arch_do_sysctl(
case XEN_SYSCTL_get_cpu_policy:
{
- const struct old_cpu_policy *policy;
+ static const struct cpu_policy *const system_policies[6] = {
+ [XEN_SYSCTL_cpu_policy_raw] = &raw_cpu_policy,
+ [XEN_SYSCTL_cpu_policy_host] = &host_cpu_policy,
+#ifdef CONFIG_PV
+ [XEN_SYSCTL_cpu_policy_pv_max] = &pv_max_cpu_policy,
+ [XEN_SYSCTL_cpu_policy_pv_default] = &pv_def_cpu_policy,
+#endif
+#ifdef CONFIG_HVM
+ [XEN_SYSCTL_cpu_policy_hvm_max] = &hvm_max_cpu_policy,
+ [XEN_SYSCTL_cpu_policy_hvm_default] = &hvm_def_cpu_policy,
+#endif
+ };
+ const struct cpu_policy *policy;
/* Reserved field set, or bad policy index? */
if ( sysctl->u.cpu_policy._rsvd ||
@@ -400,11 +381,11 @@ long arch_do_sysctl(
ret = -EINVAL;
break;
}
- policy = &system_policies[
+ policy = system_policies[
array_index_nospec(sysctl->u.cpu_policy.index,
ARRAY_SIZE(system_policies))];
- if ( !policy->cpuid || !policy->msr )
+ if ( !policy )
{
ret = -EOPNOTSUPP;
break;
@@ -414,7 +395,7 @@ long arch_do_sysctl(
if ( guest_handle_is_null(sysctl->u.cpu_policy.leaves) )
sysctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
else if ( (ret = x86_cpuid_copy_to_buffer(
- policy->cpuid,
+ policy,
sysctl->u.cpu_policy.leaves,
&sysctl->u.cpu_policy.nr_leaves)) )
break;
@@ -430,7 +411,7 @@ long arch_do_sysctl(
if ( guest_handle_is_null(sysctl->u.cpu_policy.msrs) )
sysctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
else if ( (ret = x86_msr_copy_to_buffer(
- policy->msr,
+ policy,
sysctl->u.cpu_policy.msrs,
&sysctl->u.cpu_policy.nr_msrs)) )
break;
--
2.30.2
* [PATCH v2 06/15] x86: Merge a domain's {cpuid,msr} policy objects
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (4 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 05/15] x86: Merge the system {cpuid,msr} policy objects Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 07/15] x86: Merge xc_cpu_policy's cpuid and msr objects Andrew Cooper
` (8 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Right now, they're the same underlying type, containing disjoint information.
Drop the d->arch.msr pointer, and union d->arch.cpuid to give it a second name
of cpu_policy in the interim.
Merge init_domain_{cpuid,msr}_policy() into a single init_domain_cpu_policy(),
moving the implementation into cpu-policy.c
No practical change. This undoes the transient doubling of storage space from
earlier patches.
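The interim aliasing described above — one canonical pointer with two extra names via an anonymous union — can be illustrated with a minimal sketch (the struct fields here are invented for illustration and are not the real cpu_policy layout):

```c
#include <assert.h>

struct cpu_policy { unsigned int max_leaf; unsigned long long arch_caps; };

struct arch_domain {
    /*
     * One allocation, three names: "cpu_policy" is canonical, while the
     * "cpuid" and "msr" aliases exist for local code clarity.  All three
     * members occupy the same storage.
     */
    union {
        struct cpu_policy *cpu_policy;
        struct cpu_policy *cpuid;
        struct cpu_policy *msr;
    };
};

static struct cpu_policy pol = { .max_leaf = 7 };
static struct arch_domain d = { .cpu_policy = &pol };
```

Because every member of the union has the same pointer type, reading through any alias is well-defined, and a single xfree() of `cpu_policy` releases what used to be two separate allocations.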
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Reword commit message.
* Undo accidental deletion of v->arch.msrs.
---
xen/arch/x86/cpu-policy.c | 49 +++++++++++++++++++++++++++
xen/arch/x86/cpuid.c | 23 -------------
xen/arch/x86/domain.c | 15 +++-----
xen/arch/x86/domctl.c | 35 ++++++++++---------
xen/arch/x86/include/asm/cpu-policy.h | 4 +++
xen/arch/x86/include/asm/cpuid.h | 3 --
xen/arch/x86/include/asm/domain.h | 13 +++++--
xen/arch/x86/include/asm/msr.h | 1 -
xen/arch/x86/mm/mem_sharing.c | 3 +-
xen/arch/x86/msr.c | 44 ------------------------
10 files changed, 86 insertions(+), 104 deletions(-)
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 663e9a084c53..e9ac1269c35a 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -1,10 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include <xen/cache.h>
#include <xen/kernel.h>
+#include <xen/sched.h>
#include <xen/lib/x86/cpu-policy.h>
#include <asm/cpu-policy.h>
+#include <asm/msr-index.h>
+#include <asm/setup.h>
struct cpu_policy __ro_after_init raw_cpu_policy;
struct cpu_policy __ro_after_init host_cpu_policy;
@@ -16,3 +19,49 @@ struct cpu_policy __ro_after_init pv_def_cpu_policy;
struct cpu_policy __ro_after_init hvm_max_cpu_policy;
struct cpu_policy __ro_after_init hvm_def_cpu_policy;
#endif
+
+int init_domain_cpu_policy(struct domain *d)
+{
+ struct cpu_policy *p = is_pv_domain(d)
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
+
+ if ( !p )
+ {
+ ASSERT_UNREACHABLE();
+ return -EOPNOTSUPP;
+ }
+
+ p = xmemdup(p);
+ if ( !p )
+ return -ENOMEM;
+
+ /* See comment in ctxt_switch_levelling() */
+ if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
+ p->platform_info.cpuid_faulting = false;
+
+ /*
+ * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
+ * so dom0 can turn off workarounds as appropriate. Temporary, until the
+ * domain policy logic gains a better understanding of MSRs.
+ */
+ if ( is_hardware_domain(d) && cpu_has_arch_caps )
+ {
+ uint64_t val;
+
+ rdmsrl(MSR_ARCH_CAPABILITIES, val);
+
+ p->arch_caps.raw = val &
+ (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+ ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
+ ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
+ ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
+ ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
+ }
+
+ d->arch.cpu_policy = p;
+
+ recalculate_cpuid_policy(d);
+
+ return 0;
+}
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 0916bfe175c8..df3e503ced9d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -784,29 +784,6 @@ void recalculate_cpuid_policy(struct domain *d)
p->extd.raw[0x19] = EMPTY_LEAF;
}
-int init_domain_cpuid_policy(struct domain *d)
-{
- struct cpuid_policy *p = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpu_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
-
- if ( !p )
- {
- ASSERT_UNREACHABLE();
- return -EOPNOTSUPP;
- }
-
- p = xmemdup(p);
- if ( !p )
- return -ENOMEM;
-
- d->arch.cpuid = p;
-
- recalculate_cpuid_policy(d);
-
- return 0;
-}
-
void __init init_dom0_cpuid_policy(struct domain *d)
{
struct cpuid_policy *p = d->arch.cpuid;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index d5847f70f890..b23e5014d1d3 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -66,6 +66,7 @@
#ifdef CONFIG_COMPAT
#include <compat/vcpu.h>
#endif
+#include <asm/cpu-policy.h>
#include <asm/psr.h>
#include <asm/pv/domain.h>
#include <asm/pv/mm.h>
@@ -743,8 +744,7 @@ int arch_domain_create(struct domain *d,
d->arch.ctxt_switch = &idle_csw;
- d->arch.cpuid = ZERO_BLOCK_PTR; /* Catch stray misuses. */
- d->arch.msr = ZERO_BLOCK_PTR;
+ d->arch.cpu_policy = ZERO_BLOCK_PTR; /* Catch stray misuses. */
return 0;
}
@@ -799,10 +799,7 @@ int arch_domain_create(struct domain *d,
goto fail;
paging_initialised = true;
- if ( (rc = init_domain_cpuid_policy(d)) )
- goto fail;
-
- if ( (rc = init_domain_msr_policy(d)) )
+ if ( (rc = init_domain_cpu_policy(d)) )
goto fail;
d->arch.ioport_caps =
@@ -873,8 +870,7 @@ int arch_domain_create(struct domain *d,
iommu_domain_destroy(d);
cleanup_domain_irq_mapping(d);
free_xenheap_page(d->shared_info);
- xfree(d->arch.cpuid);
- xfree(d->arch.msr);
+ XFREE(d->arch.cpu_policy);
if ( paging_initialised )
paging_final_teardown(d);
free_perdomain_mappings(d);
@@ -888,8 +884,7 @@ void arch_domain_destroy(struct domain *d)
hvm_domain_destroy(d);
xfree(d->arch.e820);
- xfree(d->arch.cpuid);
- xfree(d->arch.msr);
+ XFREE(d->arch.cpu_policy);
free_domain_pirqs(d);
if ( !is_idle_domain(d) )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 5800bb10bc4a..81be25c67731 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,11 +40,11 @@
static int update_domain_cpu_policy(struct domain *d,
xen_domctl_cpu_policy_t *xdpc)
{
- struct old_cpu_policy new = {};
+ struct cpu_policy *new;
struct cpu_policy *sys = is_pv_domain(d)
? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
: (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
- struct old_cpu_policy old_sys = { sys, sys };
+ struct old_cpu_policy old_sys = { sys, sys }, old_new;
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
int ret = -ENOMEM;
@@ -54,33 +54,33 @@ static int update_domain_cpu_policy(struct domain *d,
return -EOPNOTSUPP;
}
- /* Start by copying the domain's existing policies. */
- if ( !(new.cpuid = xmemdup(d->arch.cpuid)) ||
- !(new.msr = xmemdup(d->arch.msr)) )
+ /* Start by copying the domain's existing policy. */
+ if ( !(new = xmemdup(d->arch.cpu_policy)) )
goto out;
+ old_new = (struct old_cpu_policy){ new, new };
+
/* Merge the toolstack provided data. */
if ( (ret = x86_cpuid_copy_from_buffer(
- new.cpuid, xdpc->leaves, xdpc->nr_leaves,
+ new, xdpc->leaves, xdpc->nr_leaves,
&err.leaf, &err.subleaf)) ||
(ret = x86_msr_copy_from_buffer(
- new.msr, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
+ new, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
goto out;
/* Trim any newly-stale out-of-range leaves. */
- x86_cpuid_policy_clear_out_of_range_leaves(new.cpuid);
+ x86_cpuid_policy_clear_out_of_range_leaves(new);
/* Audit the combined dataset. */
- ret = x86_cpu_policies_are_compatible(&old_sys, &new, &err);
+ ret = x86_cpu_policies_are_compatible(&old_sys, &old_new, &err);
if ( ret )
goto out;
/*
- * Audit was successful. Replace existing policies, leaving the old
- * policies to be freed.
+ * Audit was successful. Replace the existing policy, leaving the old one
+ * to be freed.
*/
- SWAP(new.cpuid, d->arch.cpuid);
- SWAP(new.msr, d->arch.msr);
+ SWAP(new, d->arch.cpu_policy);
/* TODO: Drop when x86_cpu_policies_are_compatible() is completed. */
recalculate_cpuid_policy(d);
@@ -89,9 +89,8 @@ static int update_domain_cpu_policy(struct domain *d,
domain_cpu_policy_changed(d);
out:
- /* Free whichever cpuid/msr structs are not installed in struct domain. */
- xfree(new.cpuid);
- xfree(new.msr);
+ /* Free whichever struct is not installed in struct domain. */
+ xfree(new);
if ( ret )
{
@@ -1327,7 +1326,7 @@ long arch_do_domctl(
if ( guest_handle_is_null(domctl->u.cpu_policy.leaves) )
domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
else if ( (ret = x86_cpuid_copy_to_buffer(
- d->arch.cpuid,
+ d->arch.cpu_policy,
domctl->u.cpu_policy.leaves,
&domctl->u.cpu_policy.nr_leaves)) )
break;
@@ -1336,7 +1335,7 @@ long arch_do_domctl(
if ( guest_handle_is_null(domctl->u.cpu_policy.msrs) )
domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
else if ( (ret = x86_msr_copy_to_buffer(
- d->arch.msr,
+ d->arch.cpu_policy,
domctl->u.cpu_policy.msrs,
&domctl->u.cpu_policy.nr_msrs)) )
break;
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index eef14bb4267e..9ba34bbf5ea1 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -3,6 +3,7 @@
#define X86_CPU_POLICY_H
struct cpu_policy;
+struct domain;
extern struct cpu_policy raw_cpu_policy;
extern struct cpu_policy host_cpu_policy;
@@ -11,4 +12,7 @@ extern struct cpu_policy pv_def_cpu_policy;
extern struct cpu_policy hvm_max_cpu_policy;
extern struct cpu_policy hvm_def_cpu_policy;
+/* Allocate and initialise a CPU policy suitable for the domain. */
+int init_domain_cpu_policy(struct domain *d);
+
#endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index ea0586277331..7f81b998ce01 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -49,9 +49,6 @@ extern struct cpuidmasks cpuidmask_defaults;
/* Check that all previously present features are still available. */
bool recheck_cpu_features(unsigned int cpu);
-/* Allocate and initialise a CPUID policy suitable for the domain. */
-int init_domain_cpuid_policy(struct domain *d);
-
/* Apply dom0-specific tweaks to the CPUID policy. */
void init_dom0_cpuid_policy(struct domain *d);
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 17780ad9db2f..466388a98e12 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -386,9 +386,16 @@ struct arch_domain
*/
uint8_t x87_fip_width;
- /* CPUID and MSR policy objects. */
- struct cpuid_policy *cpuid;
- struct msr_policy *msr;
+ /*
+ * The domain's CPU Policy. "cpu_policy" is considered the canonical
+ * pointer, but the "cpuid" and "msr" aliases exist so the most
+ * appropriate one can be used for local code clarity.
+ */
+ union {
+ struct cpu_policy *cpu_policy;
+ struct cpu_policy *cpuid;
+ struct cpu_policy *msr;
+ };
struct PITState vpit;
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 022230acc0af..b59a51d238a7 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -419,7 +419,6 @@ struct vcpu_msrs
};
void init_guest_msr_policy(void);
-int init_domain_msr_policy(struct domain *d);
int init_vcpu_msr_policy(struct vcpu *v);
/*
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 649d93dc5444..5b3449db7a11 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1902,8 +1902,7 @@ static int fork(struct domain *cd, struct domain *d)
domain_pause(d);
cd->max_pages = d->max_pages;
- *cd->arch.cpuid = *d->arch.cpuid;
- *cd->arch.msr = *d->arch.msr;
+ *cd->arch.cpu_policy = *d->arch.cpu_policy;
cd->vmtrace_size = d->vmtrace_size;
cd->parent = d;
}
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index bff26bc4e2b5..93bd93feb644 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -122,50 +122,6 @@ void __init init_guest_msr_policy(void)
}
}
-int init_domain_msr_policy(struct domain *d)
-{
- struct msr_policy *mp = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_def_cpu_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
-
- if ( !mp )
- {
- ASSERT_UNREACHABLE();
- return -EOPNOTSUPP;
- }
-
- mp = xmemdup(mp);
- if ( !mp )
- return -ENOMEM;
-
- /* See comment in ctxt_switch_levelling() */
- if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
- mp->platform_info.cpuid_faulting = false;
-
- /*
- * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
- * so dom0 can turn off workarounds as appropriate. Temporary, until the
- * domain policy logic gains a better understanding of MSRs.
- */
- if ( is_hardware_domain(d) && cpu_has_arch_caps )
- {
- uint64_t val;
-
- rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
- mp->arch_caps.raw = val &
- (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
- ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
- ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
- ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
- ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
- }
-
- d->arch.msr = mp;
-
- return 0;
-}
-
int init_vcpu_msr_policy(struct vcpu *v)
{
struct vcpu_msrs *msrs = xzalloc(struct vcpu_msrs);
--
2.30.2
* [PATCH v2 07/15] x86: Merge xc_cpu_policy's cpuid and msr objects
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (5 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 06/15] x86: Merge a domain's " Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 08/15] x86: Drop struct old_cpu_policy Andrew Cooper
` (7 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Right now, they're the same underlying type, containing disjoint information.
Use a single object instead. Also take the opportunity to rename 'entries' to
'msrs' which is more descriptive, and more in line with nr_msrs being the
count of MSR entries in the API.
test-tsx uses xg_private.h to access the internals of xc_cpu_policy, so needs
updating at the same time. Take the opportunity to improve the code clarity
by passing a cpu_policy rather than an xc_cpu_policy into some functions.
No practical change. This undoes the transient doubling of storage space from
earlier patches.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Reword the commit message.
* Clean up test-tsx a bit more.
---
tools/libs/guest/xg_cpuid_x86.c | 36 ++++++++---------
tools/libs/guest/xg_private.h | 5 +--
tools/tests/tsx/test-tsx.c | 71 +++++++++++++++------------------
3 files changed, 53 insertions(+), 59 deletions(-)
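To illustrate the xg_private.h change below, the merged layout of struct
xc_cpu_policy can be sketched with simplified stand-in types (the real
xen_cpuid_leaf_t, xen_msr_entry_t, cpu_policy and the buffer sizes live in
the Xen headers; the values and fields here are illustrative only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the real Xen types (illustrative only). */
typedef struct { uint32_t leaf, subleaf, a, b, c, d; } xen_cpuid_leaf_t;
typedef struct { uint32_t idx, flags; uint64_t val; } xen_msr_entry_t;

#define CPUID_MAX_SERIALISED_LEAVES 64 /* placeholder, not the real value */
#define MSR_MAX_SERIALISED_ENTRIES   2 /* placeholder, not the real value */

/* Stand-in for the merged struct cpu_policy (the real one is far larger). */
struct cpu_policy {
    struct { uint64_t raw; } arch_caps;
};

/*
 * After this patch: one policy object plus the two serialisation buffers,
 * replacing the previous separate cpuid/msr members and 'entries' array.
 */
struct xc_cpu_policy {
    struct cpu_policy policy;
    xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
    xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
};
```

Callers such as deserialize_policy() then pass &policy->policy to both the
CPUID and MSR copy routines, rather than two distinct sub-objects.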
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 5fae06e77804..5061fe357767 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -431,7 +431,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
xc_dominfo_t di;
unsigned int i, nr_leaves, nr_msrs;
xen_cpuid_leaf_t *leaves = NULL;
- struct cpuid_policy *p = NULL;
+ struct cpu_policy *p = NULL;
uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
uint32_t len = ARRAY_SIZE(host_featureset);
@@ -692,7 +692,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
int rc;
- rc = x86_cpuid_copy_from_buffer(&policy->cpuid, policy->leaves,
+ rc = x86_cpuid_copy_from_buffer(&policy->policy, policy->leaves,
nr_leaves, &err_leaf, &err_subleaf);
if ( rc )
{
@@ -702,7 +702,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
return rc;
}
- rc = x86_msr_copy_from_buffer(&policy->msr, policy->entries,
+ rc = x86_msr_copy_from_buffer(&policy->policy, policy->msrs,
nr_entries, &err_msr);
if ( rc )
{
@@ -719,18 +719,18 @@ int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
xc_cpu_policy_t *policy)
{
unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
- unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+ unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
int rc;
rc = get_system_cpu_policy(xch, policy_idx, &nr_leaves, policy->leaves,
- &nr_entries, policy->entries);
+ &nr_msrs, policy->msrs);
if ( rc )
{
PERROR("Failed to obtain %u policy", policy_idx);
return rc;
}
- rc = deserialize_policy(xch, policy, nr_leaves, nr_entries);
+ rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
if ( rc )
{
errno = -rc;
@@ -744,18 +744,18 @@ int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
xc_cpu_policy_t *policy)
{
unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
- unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+ unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
int rc;
rc = get_domain_cpu_policy(xch, domid, &nr_leaves, policy->leaves,
- &nr_entries, policy->entries);
+ &nr_msrs, policy->msrs);
if ( rc )
{
PERROR("Failed to obtain domain %u policy", domid);
return rc;
}
- rc = deserialize_policy(xch, policy, nr_leaves, nr_entries);
+ rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
if ( rc )
{
errno = -rc;
@@ -770,16 +770,16 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
{
uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
- unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+ unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
int rc;
rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
- policy->entries, &nr_entries);
+ policy->msrs, &nr_msrs);
if ( rc )
return rc;
rc = xc_set_domain_cpu_policy(xch, domid, nr_leaves, policy->leaves,
- nr_entries, policy->entries,
+ nr_msrs, policy->msrs,
&err_leaf, &err_subleaf, &err_msr);
if ( rc )
{
@@ -802,7 +802,7 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
if ( leaves )
{
- rc = x86_cpuid_copy_to_buffer(&p->cpuid, leaves, nr_leaves);
+ rc = x86_cpuid_copy_to_buffer(&p->policy, leaves, nr_leaves);
if ( rc )
{
ERROR("Failed to serialize CPUID policy");
@@ -813,7 +813,7 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
if ( msrs )
{
- rc = x86_msr_copy_to_buffer(&p->msr, msrs, nr_msrs);
+ rc = x86_msr_copy_to_buffer(&p->policy, msrs, nr_msrs);
if ( rc )
{
ERROR("Failed to serialize MSR policy");
@@ -831,7 +831,7 @@ int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
uint32_t nr)
{
unsigned int err_leaf = -1, err_subleaf = -1;
- int rc = x86_cpuid_copy_from_buffer(&policy->cpuid, leaves, nr,
+ int rc = x86_cpuid_copy_from_buffer(&policy->policy, leaves, nr,
&err_leaf, &err_subleaf);
if ( rc )
@@ -850,7 +850,7 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
const xen_msr_entry_t *msrs, uint32_t nr)
{
unsigned int err_msr = -1;
- int rc = x86_msr_copy_from_buffer(&policy->msr, msrs, nr, &err_msr);
+ int rc = x86_msr_copy_from_buffer(&policy->policy, msrs, nr, &err_msr);
if ( rc )
{
@@ -868,8 +868,8 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
xc_cpu_policy_t *guest)
{
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
- struct old_cpu_policy h = { &host->cpuid, &host->msr };
- struct old_cpu_policy g = { &guest->cpuid, &guest->msr };
+ struct old_cpu_policy h = { &host->policy, &host->policy };
+ struct old_cpu_policy g = { &guest->policy, &guest->policy };
int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
if ( !rc )
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 09e24f122760..e729a8106c3e 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -173,10 +173,9 @@ int pin_table(xc_interface *xch, unsigned int type, unsigned long mfn,
#include <xen/lib/x86/cpu-policy.h>
struct xc_cpu_policy {
- struct cpuid_policy cpuid;
- struct msr_policy msr;
+ struct cpu_policy policy;
xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
- xen_msr_entry_t entries[MSR_MAX_SERIALISED_ENTRIES];
+ xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
};
#endif /* x86 */
diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
index d6d98c299bf9..b7e1972ce8a7 100644
--- a/tools/tests/tsx/test-tsx.c
+++ b/tools/tests/tsx/test-tsx.c
@@ -151,15 +151,15 @@ static void test_tsx_msrs(void)
{
printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
test_tsx_msr_consistency(
- MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
+ MSR_TSX_FORCE_ABORT, host.policy.feat.tsx_force_abort);
printf("Testing MSR_TSX_CTRL consistency\n");
test_tsx_msr_consistency(
- MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
+ MSR_TSX_CTRL, host.policy.arch_caps.tsx_ctrl);
printf("Testing MSR_MCU_OPT_CTRL consistency\n");
test_tsx_msr_consistency(
- MSR_MCU_OPT_CTRL, host.cpuid.feat.srbds_ctrl);
+ MSR_MCU_OPT_CTRL, host.policy.feat.srbds_ctrl);
}
/*
@@ -281,7 +281,7 @@ static void test_rtm_behaviour(void)
else
return fail(" Got unexpected behaviour %d\n", rtm_behaviour);
- if ( host.cpuid.feat.rtm )
+ if ( host.policy.feat.rtm )
{
if ( rtm_behaviour == RTM_UD )
fail(" Host reports RTM, but appears unavailable\n");
@@ -293,57 +293,52 @@ static void test_rtm_behaviour(void)
}
}
-static void dump_tsx_details(const struct xc_cpu_policy *p, const char *pref)
+static void dump_tsx_details(const struct cpu_policy *p, const char *pref)
{
printf(" %s RTM %u, HLE %u, TSX_FORCE_ABORT %u, RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
pref,
- p->cpuid.feat.rtm,
- p->cpuid.feat.hle,
- p->cpuid.feat.tsx_force_abort,
- p->cpuid.feat.rtm_always_abort,
- p->msr.arch_caps.tsx_ctrl);
+ p->feat.rtm,
+ p->feat.hle,
+ p->feat.tsx_force_abort,
+ p->feat.rtm_always_abort,
+ p->arch_caps.tsx_ctrl);
}
/* Sanity test various invariants we expect in the default/max policies. */
-static void test_guest_policies(const struct xc_cpu_policy *max,
- const struct xc_cpu_policy *def)
+static void test_guest_policies(const struct cpu_policy *max,
+ const struct cpu_policy *def)
{
- const struct cpuid_policy *cm = &max->cpuid;
- const struct cpuid_policy *cd = &def->cpuid;
- const struct msr_policy *mm = &max->msr;
- const struct msr_policy *md = &def->msr;
-
dump_tsx_details(max, "Max:");
dump_tsx_details(def, "Def:");
- if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
+ if ( ((max->feat.raw[0].d | def->feat.raw[0].d) &
(bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT) |
bitmaskof(X86_FEATURE_SRBDS_CTRL))) ||
- ((mm->arch_caps.raw | md->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
+ ((max->arch_caps.raw | def->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
fail(" Xen-only TSX controls offered to guest\n");
switch ( rtm_behaviour )
{
case RTM_UD:
- if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
+ if ( (max->feat.raw[0].b | def->feat.raw[0].b) &
(bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
fail(" HLE/RTM offered to guests despite not being available\n");
break;
case RTM_ABORT:
- if ( cd->feat.raw[0].b &
+ if ( def->feat.raw[0].b &
(bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
fail(" HLE/RTM offered to guests by default despite not being usable\n");
break;
case RTM_OK:
- if ( !cm->feat.rtm || !cd->feat.rtm )
+ if ( !max->feat.rtm || !def->feat.rtm )
fail(" RTM not offered to guests despite being available\n");
break;
}
- if ( cd->feat.hle )
+ if ( def->feat.hle )
fail(" Fail: HLE offered in default policy\n");
}
@@ -352,13 +347,13 @@ static void test_def_max_policies(void)
if ( xen_has_pv )
{
printf("Testing PV default/max policies\n");
- test_guest_policies(&pv_max, &pv_default);
+ test_guest_policies(&pv_max.policy, &pv_default.policy);
}
if ( xen_has_hvm )
{
printf("Testing HVM default/max policies\n");
- test_guest_policies(&hvm_max, &hvm_default);
+ test_guest_policies(&hvm_max.policy, &hvm_default.policy);
}
}
@@ -382,23 +377,23 @@ static void test_guest(struct xen_domctl_createdomain *c)
goto out;
}
- dump_tsx_details(&guest_policy, "Cur:");
+ dump_tsx_details(&guest_policy.policy, "Cur:");
/*
* Check defaults given to the guest.
*/
- if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
+ if ( guest_policy.policy.feat.rtm != (rtm_behaviour == RTM_OK) )
fail(" RTM %u in guest, despite rtm behaviour\n",
- guest_policy.cpuid.feat.rtm);
+ guest_policy.policy.feat.rtm);
- if ( guest_policy.cpuid.feat.hle ||
- guest_policy.cpuid.feat.tsx_force_abort ||
- guest_policy.cpuid.feat.rtm_always_abort ||
- guest_policy.cpuid.feat.srbds_ctrl ||
- guest_policy.msr.arch_caps.tsx_ctrl )
+ if ( guest_policy.policy.feat.hle ||
+ guest_policy.policy.feat.tsx_force_abort ||
+ guest_policy.policy.feat.rtm_always_abort ||
+ guest_policy.policy.feat.srbds_ctrl ||
+ guest_policy.policy.arch_caps.tsx_ctrl )
fail(" Unexpected features advertised\n");
- if ( host.cpuid.feat.rtm )
+ if ( host.policy.feat.rtm )
{
unsigned int _7b0;
@@ -406,7 +401,7 @@ static void test_guest(struct xen_domctl_createdomain *c)
* If host RTM is available, all combinations of guest flags should be
* possible. Flip both HLE/RTM to check non-default settings.
*/
- _7b0 = (guest_policy.cpuid.feat.raw[0].b ^=
+ _7b0 = (guest_policy.policy.feat.raw[0].b ^=
(bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)));
/* Set the new policy. */
@@ -427,12 +422,12 @@ static void test_guest(struct xen_domctl_createdomain *c)
goto out;
}
- dump_tsx_details(&guest_policy, "Cur:");
+ dump_tsx_details(&guest_policy.policy, "Cur:");
- if ( guest_policy.cpuid.feat.raw[0].b != _7b0 )
+ if ( guest_policy.policy.feat.raw[0].b != _7b0 )
{
fail(" Expected CPUID.7[1].b 0x%08x differs from actual 0x%08x\n",
- _7b0, guest_policy.cpuid.feat.raw[0].b);
+ _7b0, guest_policy.policy.feat.raw[0].b);
goto out;
}
}
--
2.30.2
* [PATCH v2 08/15] x86: Drop struct old_cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (6 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 07/15] x86: Merge xc_cpu_policy's cpuid and msr objects Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors Andrew Cooper
` (6 subsequent siblings)
14 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
With all the complicated callers of x86_cpu_policies_are_compatible() updated
to use a single cpu_policy object, we can drop the final user of struct
old_cpu_policy.
Update x86_cpu_policies_are_compatible() to take (new) cpu_policy pointers,
reducing the amount of internal pointer chasing, and update all callers to
pass their cpu_policy objects directly.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Fix host/guest typo in xc_cpu_policy_is_compatible()
---
tools/libs/guest/xg_cpuid_x86.c | 4 +-
tools/tests/cpu-policy/test-cpu-policy.c | 50 +++++++-----------------
xen/arch/x86/domctl.c | 7 +---
xen/include/xen/lib/x86/cpu-policy.h | 12 ++----
xen/lib/x86/policy.c | 12 +++---
5 files changed, 27 insertions(+), 58 deletions(-)
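The simplified compatibility check that results from this patch can be
sketched as follows, using a cut-down stand-in for struct cpu_policy (field
names follow the xen/lib/x86/policy.c hunk below; error reporting via struct
cpu_policy_errors is omitted for brevity):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for struct cpu_policy; only the audited fields. */
struct cpu_policy {
    struct { uint32_t max_leaf; } basic;
    struct { uint32_t max_subleaf; } feat;
    struct { uint32_t max_leaf; } extd;
    struct { uint64_t raw; } platform_info;
};

/* Sketch of the post-patch check: the guest may not exceed the host. */
static int policies_are_compatible(const struct cpu_policy *host,
                                   const struct cpu_policy *guest)
{
    if ( guest->basic.max_leaf > host->basic.max_leaf )
        return -1;

    if ( guest->feat.max_subleaf > host->feat.max_subleaf )
        return -1;

    if ( guest->extd.max_leaf > host->extd.max_leaf )
        return -1;

    /* The guest may not have MSR bits the host lacks. */
    if ( ~host->platform_info.raw & guest->platform_info.raw )
        return -1;

    return 0;
}
```

Note how the single-object form needs one level of pointer dereference per
field, where the old_cpu_policy form needed two (host->cpuid->basic...).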
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 5061fe357767..259029be8b36 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -868,9 +868,7 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
xc_cpu_policy_t *guest)
{
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
- struct old_cpu_policy h = { &host->policy, &host->policy };
- struct old_cpu_policy g = { &guest->policy, &guest->policy };
- int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
+ int rc = x86_cpu_policies_are_compatible(&host->policy, &guest->policy, &err);
if ( !rc )
return true;
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 909d6272f875..a4ca07f33973 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -98,7 +98,7 @@ static bool msrs_are_sorted(const xen_msr_entry_t *entries, unsigned int nr)
static void test_cpuid_current(void)
{
- struct cpuid_policy p;
+ struct cpu_policy p;
xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
unsigned int nr = ARRAY_SIZE(leaves);
int rc;
@@ -118,7 +118,7 @@ static void test_cpuid_current(void)
static void test_cpuid_serialise_success(void)
{
static const struct test {
- struct cpuid_policy p;
+ struct cpu_policy p;
const char *name;
unsigned int nr_leaves;
} tests[] = {
@@ -242,7 +242,7 @@ static void test_cpuid_serialise_success(void)
static void test_msr_serialise_success(void)
{
static const struct test {
- struct msr_policy p;
+ struct cpu_policy p;
const char *name;
unsigned int nr_msrs;
} tests[] = {
@@ -430,7 +430,7 @@ static void test_cpuid_out_of_range_clearing(void)
static const struct test {
const char *name;
unsigned int nr_markers;
- struct cpuid_policy p;
+ struct cpu_policy p;
} tests[] = {
{
.name = "basic",
@@ -550,7 +550,7 @@ static void test_cpuid_out_of_range_clearing(void)
for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
{
const struct test *t = &tests[i];
- struct cpuid_policy *p = memdup(&t->p);
+ struct cpu_policy *p = memdup(&t->p);
void *ptr;
unsigned int nr_markers;
@@ -574,23 +574,20 @@ static void test_is_compatible_success(void)
{
static struct test {
const char *name;
- struct cpuid_policy host_cpuid;
- struct cpuid_policy guest_cpuid;
- struct msr_policy host_msr;
- struct msr_policy guest_msr;
+ struct cpu_policy host, guest;
} tests[] = {
{
.name = "Host CPUID faulting, Guest not",
- .host_msr = {
+ .host = {
.platform_info.cpuid_faulting = true,
},
},
{
.name = "Host CPUID faulting, Guest wanted",
- .host_msr = {
+ .host = {
.platform_info.cpuid_faulting = true,
},
- .guest_msr = {
+ .guest = {
.platform_info.cpuid_faulting = true,
},
},
@@ -602,15 +599,8 @@ static void test_is_compatible_success(void)
for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
{
struct test *t = &tests[i];
- struct old_cpu_policy sys = {
- &t->host_cpuid,
- &t->host_msr,
- }, new = {
- &t->guest_cpuid,
- &t->guest_msr,
- };
struct cpu_policy_errors e;
- int res = x86_cpu_policies_are_compatible(&sys, &new, &e);
+ int res = x86_cpu_policies_are_compatible(&t->host, &t->guest, &e);
/* Check the expected error output. */
if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
@@ -624,25 +614,22 @@ static void test_is_compatible_failure(void)
{
static struct test {
const char *name;
- struct cpuid_policy host_cpuid;
- struct cpuid_policy guest_cpuid;
- struct msr_policy host_msr;
- struct msr_policy guest_msr;
+ struct cpu_policy host, guest;
struct cpu_policy_errors e;
} tests[] = {
{
.name = "Host basic.max_leaf out of range",
- .guest_cpuid.basic.max_leaf = 1,
+ .guest.basic.max_leaf = 1,
.e = { 0, -1, -1 },
},
{
.name = "Host extd.max_leaf out of range",
- .guest_cpuid.extd.max_leaf = 1,
+ .guest.extd.max_leaf = 1,
.e = { 0x80000000, -1, -1 },
},
{
.name = "Host no CPUID faulting, Guest wanted",
- .guest_msr = {
+ .guest = {
.platform_info.cpuid_faulting = true,
},
.e = { -1, -1, 0xce },
@@ -654,15 +641,8 @@ static void test_is_compatible_failure(void)
for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
{
struct test *t = &tests[i];
- struct old_cpu_policy sys = {
- &t->host_cpuid,
- &t->host_msr,
- }, new = {
- &t->guest_cpuid,
- &t->guest_msr,
- };
struct cpu_policy_errors e;
- int res = x86_cpu_policies_are_compatible(&sys, &new, &e);
+ int res = x86_cpu_policies_are_compatible(&t->host, &t->guest, &e);
/* Check the expected error output. */
if ( res == 0 || memcmp(&t->e, &e, sizeof(t->e)) )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 81be25c67731..c02528594102 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -41,10 +41,9 @@ static int update_domain_cpu_policy(struct domain *d,
xen_domctl_cpu_policy_t *xdpc)
{
struct cpu_policy *new;
- struct cpu_policy *sys = is_pv_domain(d)
+ const struct cpu_policy *sys = is_pv_domain(d)
? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
: (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
- struct old_cpu_policy old_sys = { sys, sys }, old_new;
struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
int ret = -ENOMEM;
@@ -58,8 +57,6 @@ static int update_domain_cpu_policy(struct domain *d,
if ( !(new = xmemdup(d->arch.cpu_policy)) )
goto out;
- old_new = (struct old_cpu_policy){ new, new };
-
/* Merge the toolstack provided data. */
if ( (ret = x86_cpuid_copy_from_buffer(
new, xdpc->leaves, xdpc->nr_leaves,
@@ -72,7 +69,7 @@ static int update_domain_cpu_policy(struct domain *d,
x86_cpuid_policy_clear_out_of_range_leaves(new);
/* Audit the combined dataset. */
- ret = x86_cpu_policies_are_compatible(&old_sys, &old_new, &err);
+ ret = x86_cpu_policies_are_compatible(sys, new, &err);
if ( ret )
goto out;
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 53fffca55211..8b27a0725b8e 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -379,12 +379,6 @@ struct cpu_policy
#define cpuid_policy cpu_policy
#define msr_policy cpu_policy
-struct old_cpu_policy
-{
- struct cpuid_policy *cpuid;
- struct msr_policy *msr;
-};
-
struct cpu_policy_errors
{
uint32_t leaf, subleaf;
@@ -559,7 +553,7 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
const msr_entry_buffer_t msrs, uint32_t nr_entries,
uint32_t *err_msr);
-/*
+/**
* Calculate whether two policies are compatible.
*
* i.e. Can a VM configured with @guest run on a CPU supporting @host.
@@ -573,8 +567,8 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
* incompatibility is detected, the optional err pointer may identify the
* problematic leaf/subleaf and/or MSR.
*/
-int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
- const struct old_cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
+ const struct cpu_policy *guest,
struct cpu_policy_errors *err);
#endif /* !XEN_LIB_X86_POLICIES_H */
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index 2975711d7c6c..a9c60000af9d 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,8 +2,8 @@
#include <xen/lib/x86/cpu-policy.h>
-int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
- const struct old_cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
+ const struct cpu_policy *guest,
struct cpu_policy_errors *err)
{
struct cpu_policy_errors e = INIT_CPU_POLICY_ERRORS;
@@ -15,18 +15,18 @@ int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
#define FAIL_MSR(m) \
do { e.msr = (m); goto out; } while ( 0 )
- if ( guest->cpuid->basic.max_leaf > host->cpuid->basic.max_leaf )
+ if ( guest->basic.max_leaf > host->basic.max_leaf )
FAIL_CPUID(0, NA);
- if ( guest->cpuid->feat.max_subleaf > host->cpuid->feat.max_subleaf )
+ if ( guest->feat.max_subleaf > host->feat.max_subleaf )
FAIL_CPUID(7, 0);
- if ( guest->cpuid->extd.max_leaf > host->cpuid->extd.max_leaf )
+ if ( guest->extd.max_leaf > host->extd.max_leaf )
FAIL_CPUID(0x80000000, NA);
/* TODO: Audit more CPUID data. */
- if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
+ if ( ~host->platform_info.raw & guest->platform_info.raw )
FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
#undef FAIL_MSR
--
2.30.2
* [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (7 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 08/15] x86: Drop struct old_cpu_policy Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:01 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c Andrew Cooper
` (5 subsequent siblings)
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
These are already getting over-large for being inline functions, and are only
going to grow further over time. Move them out of line, yielding the following
net delta from bloat-o-meter:
add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
Switch to the newer cpu_policy terminology while doing so.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
---
tools/libs/guest/xg_cpuid_x86.c | 2 +-
xen/arch/x86/cpuid.c | 28 +++++++--------
xen/arch/x86/sysctl.c | 2 +-
xen/include/xen/lib/x86/cpu-policy.h | 52 ++++++----------------------
xen/lib/x86/cpuid.c | 42 ++++++++++++++++++++++
5 files changed, 68 insertions(+), 58 deletions(-)
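The shape of the two convertors being moved out of line is a pair of inverse
copies between the policy's named leaf words and a flat featureset array. A
reduced two-word sketch (the real functions cover all FEATURESET_NR_ENTRIES
words, as the xen/lib/x86/cpuid.c hunk below shows) makes the round-trip
property explicit:

```c
#include <assert.h>
#include <stdint.h>

#define FEATURESET_NR_ENTRIES 2  /* reduced from the real count for brevity */

/* Two-word stand-in for the policy's feature leaves. */
struct cpu_policy {
    uint32_t _1d;   /* CPUID.1.edx */
    uint32_t _7b0;  /* CPUID.7[0].ebx */
};

/* Copy the featureset words out of a policy object. */
static void policy_to_featureset(const struct cpu_policy *p,
                                 uint32_t fs[FEATURESET_NR_ENTRIES])
{
    fs[0] = p->_1d;
    fs[1] = p->_7b0;
}

/* Copy the featureset words back into a policy object. */
static void featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
                                 struct cpu_policy *p)
{
    p->_1d = fs[0];
    p->_7b0 = fs[1];
}
```

Because the mapping is a straight word-for-word copy, converting a policy to
a featureset and back must reproduce the original leaf values exactly; the
callers in cpuid.c rely on this when masking featuresets and writing them
back.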
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 259029be8b36..33d366a8eb43 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -565,7 +565,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
}
}
- cpuid_featureset_to_policy(feat, p);
+ x86_cpu_featureset_to_policy(feat, p);
}
else
{
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index df3e503ced9d..5eb5f1893516 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -368,7 +368,7 @@ static void __init calculate_host_policy(void)
p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
ARRAY_SIZE(p->extd.raw) - 1);
- cpuid_featureset_to_policy(boot_cpu_data.x86_capability, p);
+ x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
recalculate_xstate(p);
recalculate_misc(p);
@@ -450,7 +450,7 @@ static void __init calculate_pv_max_policy(void)
unsigned int i;
*p = host_cpu_policy;
- cpuid_policy_to_featureset(p, pv_featureset);
+ x86_cpu_policy_to_featureset(p, pv_featureset);
for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
pv_featureset[i] &= pv_max_featuremask[i];
@@ -468,7 +468,7 @@ static void __init calculate_pv_max_policy(void)
guest_common_feature_adjustments(pv_featureset);
sanitise_featureset(pv_featureset);
- cpuid_featureset_to_policy(pv_featureset, p);
+ x86_cpu_featureset_to_policy(pv_featureset, p);
recalculate_xstate(p);
p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
@@ -481,7 +481,7 @@ static void __init calculate_pv_def_policy(void)
unsigned int i;
*p = pv_max_cpu_policy;
- cpuid_policy_to_featureset(p, pv_featureset);
+ x86_cpu_policy_to_featureset(p, pv_featureset);
for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
pv_featureset[i] &= pv_def_featuremask[i];
@@ -490,7 +490,7 @@ static void __init calculate_pv_def_policy(void)
guest_common_default_feature_adjustments(pv_featureset);
sanitise_featureset(pv_featureset);
- cpuid_featureset_to_policy(pv_featureset, p);
+ x86_cpu_featureset_to_policy(pv_featureset, p);
recalculate_xstate(p);
}
@@ -502,7 +502,7 @@ static void __init calculate_hvm_max_policy(void)
const uint32_t *hvm_featuremask;
*p = host_cpu_policy;
- cpuid_policy_to_featureset(p, hvm_featureset);
+ x86_cpu_policy_to_featureset(p, hvm_featureset);
hvm_featuremask = hvm_hap_supported() ?
hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
@@ -581,7 +581,7 @@ static void __init calculate_hvm_max_policy(void)
guest_common_feature_adjustments(hvm_featureset);
sanitise_featureset(hvm_featureset);
- cpuid_featureset_to_policy(hvm_featureset, p);
+ x86_cpu_featureset_to_policy(hvm_featureset, p);
recalculate_xstate(p);
}
@@ -593,7 +593,7 @@ static void __init calculate_hvm_def_policy(void)
const uint32_t *hvm_featuremask;
*p = hvm_max_cpu_policy;
- cpuid_policy_to_featureset(p, hvm_featureset);
+ x86_cpu_policy_to_featureset(p, hvm_featureset);
hvm_featuremask = hvm_hap_supported() ?
hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
@@ -612,7 +612,7 @@ static void __init calculate_hvm_def_policy(void)
__set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
sanitise_featureset(hvm_featureset);
- cpuid_featureset_to_policy(hvm_featureset, p);
+ x86_cpu_featureset_to_policy(hvm_featureset, p);
recalculate_xstate(p);
}
@@ -682,8 +682,8 @@ void recalculate_cpuid_policy(struct domain *d)
? CPUID_GUEST_NR_EXTD_AMD
: CPUID_GUEST_NR_EXTD_INTEL) - 1);
- cpuid_policy_to_featureset(p, fs);
- cpuid_policy_to_featureset(max, max_fs);
+ x86_cpu_policy_to_featureset(p, fs);
+ x86_cpu_policy_to_featureset(max, max_fs);
if ( is_hvm_domain(d) )
{
@@ -740,7 +740,7 @@ void recalculate_cpuid_policy(struct domain *d)
(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
- cpuid_featureset_to_policy(fs, p);
+ x86_cpu_featureset_to_policy(fs, p);
/* Pass host cacheline size through to guests. */
p->basic.clflush_size = max->basic.clflush_size;
@@ -806,7 +806,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
uint32_t fs[FSCAPINTS];
unsigned int i;
- cpuid_policy_to_featureset(p, fs);
+ x86_cpu_policy_to_featureset(p, fs);
for ( i = 0; i < ARRAY_SIZE(fs); ++i )
{
@@ -814,7 +814,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
fs[i] &= ~dom0_disable_feat[i];
}
- cpuid_featureset_to_policy(fs, p);
+ x86_cpu_featureset_to_policy(fs, p);
recalculate_cpuid_policy(d);
}
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 43a241f2090f..c107f40c6283 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -338,7 +338,7 @@ long arch_do_sysctl(
ret = -EINVAL;
if ( !ret )
- cpuid_policy_to_featureset(p, featureset);
+ x86_cpu_policy_to_featureset(p, featureset);
/* Copy the requested featureset into place. */
if ( !ret && copy_to_guest(sysctl->u.cpu_featureset.features,
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 8b27a0725b8e..57b4633c861e 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -387,49 +387,17 @@ struct cpu_policy_errors
#define INIT_CPU_POLICY_ERRORS { -1, -1, -1 }
-/* Fill in a featureset bitmap from a CPUID policy. */
-static inline void cpuid_policy_to_featureset(
- const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
-{
- fs[FEATURESET_1d] = p->basic._1d;
- fs[FEATURESET_1c] = p->basic._1c;
- fs[FEATURESET_e1d] = p->extd.e1d;
- fs[FEATURESET_e1c] = p->extd.e1c;
- fs[FEATURESET_Da1] = p->xstate.Da1;
- fs[FEATURESET_7b0] = p->feat._7b0;
- fs[FEATURESET_7c0] = p->feat._7c0;
- fs[FEATURESET_e7d] = p->extd.e7d;
- fs[FEATURESET_e8b] = p->extd.e8b;
- fs[FEATURESET_7d0] = p->feat._7d0;
- fs[FEATURESET_7a1] = p->feat._7a1;
- fs[FEATURESET_e21a] = p->extd.e21a;
- fs[FEATURESET_7b1] = p->feat._7b1;
- fs[FEATURESET_7d2] = p->feat._7d2;
- fs[FEATURESET_7c1] = p->feat._7c1;
- fs[FEATURESET_7d1] = p->feat._7d1;
-}
+/**
+ * Copy the featureset words out of a cpu_policy object.
+ */
+void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
+ uint32_t fs[FEATURESET_NR_ENTRIES]);
-/* Fill in a CPUID policy from a featureset bitmap. */
-static inline void cpuid_featureset_to_policy(
- const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
-{
- p->basic._1d = fs[FEATURESET_1d];
- p->basic._1c = fs[FEATURESET_1c];
- p->extd.e1d = fs[FEATURESET_e1d];
- p->extd.e1c = fs[FEATURESET_e1c];
- p->xstate.Da1 = fs[FEATURESET_Da1];
- p->feat._7b0 = fs[FEATURESET_7b0];
- p->feat._7c0 = fs[FEATURESET_7c0];
- p->extd.e7d = fs[FEATURESET_e7d];
- p->extd.e8b = fs[FEATURESET_e8b];
- p->feat._7d0 = fs[FEATURESET_7d0];
- p->feat._7a1 = fs[FEATURESET_7a1];
- p->extd.e21a = fs[FEATURESET_e21a];
- p->feat._7b1 = fs[FEATURESET_7b1];
- p->feat._7d2 = fs[FEATURESET_7d2];
- p->feat._7c1 = fs[FEATURESET_7c1];
- p->feat._7d1 = fs[FEATURESET_7d1];
-}
+/**
+ * Copy the featureset words back into a cpu_policy object.
+ */
+void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
+ struct cpu_policy *p);
static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
{
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index e81f76c779c0..734e90823a63 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
}
}
+void x86_cpu_policy_to_featureset(
+ const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
+{
+ fs[FEATURESET_1d] = p->basic._1d;
+ fs[FEATURESET_1c] = p->basic._1c;
+ fs[FEATURESET_e1d] = p->extd.e1d;
+ fs[FEATURESET_e1c] = p->extd.e1c;
+ fs[FEATURESET_Da1] = p->xstate.Da1;
+ fs[FEATURESET_7b0] = p->feat._7b0;
+ fs[FEATURESET_7c0] = p->feat._7c0;
+ fs[FEATURESET_e7d] = p->extd.e7d;
+ fs[FEATURESET_e8b] = p->extd.e8b;
+ fs[FEATURESET_7d0] = p->feat._7d0;
+ fs[FEATURESET_7a1] = p->feat._7a1;
+ fs[FEATURESET_e21a] = p->extd.e21a;
+ fs[FEATURESET_7b1] = p->feat._7b1;
+ fs[FEATURESET_7d2] = p->feat._7d2;
+ fs[FEATURESET_7c1] = p->feat._7c1;
+ fs[FEATURESET_7d1] = p->feat._7d1;
+}
+
+void x86_cpu_featureset_to_policy(
+ const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
+{
+ p->basic._1d = fs[FEATURESET_1d];
+ p->basic._1c = fs[FEATURESET_1c];
+ p->extd.e1d = fs[FEATURESET_e1d];
+ p->extd.e1c = fs[FEATURESET_e1c];
+ p->xstate.Da1 = fs[FEATURESET_Da1];
+ p->feat._7b0 = fs[FEATURESET_7b0];
+ p->feat._7c0 = fs[FEATURESET_7c0];
+ p->extd.e7d = fs[FEATURESET_e7d];
+ p->extd.e8b = fs[FEATURESET_e8b];
+ p->feat._7d0 = fs[FEATURESET_7d0];
+ p->feat._7a1 = fs[FEATURESET_7a1];
+ p->extd.e21a = fs[FEATURESET_e21a];
+ p->feat._7b1 = fs[FEATURESET_7b1];
+ p->feat._7d2 = fs[FEATURESET_7d2];
+ p->feat._7c1 = fs[FEATURESET_7c1];
+ p->feat._7d1 = fs[FEATURESET_7d1];
+}
+
void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
{
p->x86_vendor = x86_cpuid_lookup_vendor(
--
2.30.2
* [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (8 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:04 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 11/15] x86/boot: Merge CPUID " Andrew Cooper
` (4 subsequent siblings)
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Switch to the newer cpu_policy nomenclature.
No practical change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
---
xen/arch/x86/cpu-policy.c | 84 +++++++++++++++++++++++++++
xen/arch/x86/include/asm/cpu-policy.h | 3 +
xen/arch/x86/include/asm/msr.h | 1 -
xen/arch/x86/msr.c | 84 ---------------------------
xen/arch/x86/setup.c | 3 +-
5 files changed, 89 insertions(+), 86 deletions(-)
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index e9ac1269c35a..f6a2317ed7bd 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -20,6 +20,90 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
struct cpu_policy __ro_after_init hvm_def_cpu_policy;
#endif
+static void __init calculate_raw_policy(void)
+{
+ struct cpu_policy *p = &raw_cpu_policy;
+
+ /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
+ /* Was already added by probe_cpuid_faulting() */
+
+ if ( cpu_has_arch_caps )
+ rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
+}
+
+static void __init calculate_host_policy(void)
+{
+ struct cpu_policy *p = &host_cpu_policy;
+
+ *p = raw_cpu_policy;
+
+ /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
+ /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
+ p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
+
+ /* Temporary, until we have known_features[] for feature bits in MSRs. */
+ p->arch_caps.raw &=
+ (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+ ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
+ ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
+ ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
+ ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
+ ARCH_CAPS_PBRSB_NO);
+}
+
+static void __init calculate_pv_max_policy(void)
+{
+ struct cpu_policy *p = &pv_max_cpu_policy;
+
+ *p = host_cpu_policy;
+
+ p->arch_caps.raw = 0; /* Not supported yet. */
+}
+
+static void __init calculate_pv_def_policy(void)
+{
+ struct cpu_policy *p = &pv_def_cpu_policy;
+
+ *p = pv_max_cpu_policy;
+}
+
+static void __init calculate_hvm_max_policy(void)
+{
+ struct cpu_policy *p = &hvm_max_cpu_policy;
+
+ *p = host_cpu_policy;
+
+ /* It's always possible to emulate CPUID faulting for HVM guests */
+ p->platform_info.cpuid_faulting = true;
+
+ p->arch_caps.raw = 0; /* Not supported yet. */
+}
+
+static void __init calculate_hvm_def_policy(void)
+{
+ struct cpu_policy *p = &hvm_def_cpu_policy;
+
+ *p = hvm_max_cpu_policy;
+}
+
+void __init init_guest_cpu_policies(void)
+{
+ calculate_raw_policy();
+ calculate_host_policy();
+
+ if ( IS_ENABLED(CONFIG_PV) )
+ {
+ calculate_pv_max_policy();
+ calculate_pv_def_policy();
+ }
+
+ if ( hvm_enabled )
+ {
+ calculate_hvm_max_policy();
+ calculate_hvm_def_policy();
+ }
+}
+
int init_domain_cpu_policy(struct domain *d)
{
struct cpu_policy *p = is_pv_domain(d)
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index 9ba34bbf5ea1..13e2a1f86d13 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -12,6 +12,9 @@ extern struct cpu_policy pv_def_cpu_policy;
extern struct cpu_policy hvm_max_cpu_policy;
extern struct cpu_policy hvm_def_cpu_policy;
+/* Initialise the guest cpu_policy objects. */
+void init_guest_cpu_policies(void);
+
/* Allocate and initialise a CPU policy suitable for the domain. */
int init_domain_cpu_policy(struct domain *d);
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index b59a51d238a7..458841733e18 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -418,7 +418,6 @@ struct vcpu_msrs
uint32_t dr_mask[4];
};
-void init_guest_msr_policy(void);
int init_vcpu_msr_policy(struct vcpu *v);
/*
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 93bd93feb644..802fc60baf81 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -38,90 +38,6 @@
DEFINE_PER_CPU(uint32_t, tsc_aux);
-static void __init calculate_raw_policy(void)
-{
- struct msr_policy *mp = &raw_cpu_policy;
-
- /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
- /* Was already added by probe_cpuid_faulting() */
-
- if ( cpu_has_arch_caps )
- rdmsrl(MSR_ARCH_CAPABILITIES, mp->arch_caps.raw);
-}
-
-static void __init calculate_host_policy(void)
-{
- struct msr_policy *mp = &host_cpu_policy;
-
- *mp = raw_cpu_policy;
-
- /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
- /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
- mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
-
- /* Temporary, until we have known_features[] for feature bits in MSRs. */
- mp->arch_caps.raw &=
- (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
- ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
- ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
- ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
- ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
- ARCH_CAPS_PBRSB_NO);
-}
-
-static void __init calculate_pv_max_policy(void)
-{
- struct msr_policy *mp = &pv_max_cpu_policy;
-
- *mp = host_cpu_policy;
-
- mp->arch_caps.raw = 0; /* Not supported yet. */
-}
-
-static void __init calculate_pv_def_policy(void)
-{
- struct msr_policy *mp = &pv_def_cpu_policy;
-
- *mp = pv_max_cpu_policy;
-}
-
-static void __init calculate_hvm_max_policy(void)
-{
- struct msr_policy *mp = &hvm_max_cpu_policy;
-
- *mp = host_cpu_policy;
-
- /* It's always possible to emulate CPUID faulting for HVM guests */
- mp->platform_info.cpuid_faulting = true;
-
- mp->arch_caps.raw = 0; /* Not supported yet. */
-}
-
-static void __init calculate_hvm_def_policy(void)
-{
- struct msr_policy *mp = &hvm_def_cpu_policy;
-
- *mp = hvm_max_cpu_policy;
-}
-
-void __init init_guest_msr_policy(void)
-{
- calculate_raw_policy();
- calculate_host_policy();
-
- if ( IS_ENABLED(CONFIG_PV) )
- {
- calculate_pv_max_policy();
- calculate_pv_def_policy();
- }
-
- if ( hvm_enabled )
- {
- calculate_hvm_max_policy();
- calculate_hvm_def_policy();
- }
-}
-
int init_vcpu_msr_policy(struct vcpu *v)
{
struct vcpu_msrs *msrs = xzalloc(struct vcpu_msrs);
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b29229933d8c..51a19b9019eb 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -50,6 +50,7 @@
#include <asm/nmi.h>
#include <asm/alternative.h>
#include <asm/mc146818rtc.h>
+#include <asm/cpu-policy.h>
#include <asm/cpuid.h>
#include <asm/spec_ctrl.h>
#include <asm/guest.h>
@@ -1991,7 +1992,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
panic("Could not protect TXT memory regions\n");
init_guest_cpuid();
- init_guest_msr_policy();
+ init_guest_cpu_policies();
if ( xen_cpuidle )
xen_processor_pmbits |= XEN_PROCESSOR_PM_CX;
--
2.30.2
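[Editor's note] The derivation chain the moved code implements — raw is read from hardware, host is a restricted copy of raw, and the pv/hvm max policies are restricted copies of host — can be modelled in miniature. This is a hypothetical sketch, not Xen's real structures: a single `arch_caps` word and one flag stand in for the full struct cpu_policy, and `KNOWN_ARCH_CAPS` is a placeholder for the known-features mask:

```c
#include <stdint.h>
#include <stdbool.h>

struct cpu_policy {
    uint64_t arch_caps;
    bool cpuid_faulting;
};

static struct cpu_policy raw_policy, host_policy,
                         pv_max_policy, hvm_max_policy;

/* Placeholder for the "Temporary, until known_features[]" mask. */
#define KNOWN_ARCH_CAPS 0xffffULL

static void calculate_policies(uint64_t hw_arch_caps, bool hw_faulting)
{
    /* calculate_raw_policy(): read straight from hardware (rdmsrl stand-in). */
    raw_policy.arch_caps = hw_arch_caps;

    /* calculate_host_policy(): structure copy of the parent, then restrict. */
    host_policy = raw_policy;
    host_policy.arch_caps &= KNOWN_ARCH_CAPS;
    host_policy.cpuid_faulting = hw_faulting;

    /* calculate_pv_max_policy(): copy host, drop what PV can't have yet. */
    pv_max_policy = host_policy;
    pv_max_policy.arch_caps = 0;

    /* calculate_hvm_max_policy(): faulting is always emulatable for HVM. */
    hvm_max_policy = host_policy;
    hvm_max_policy.cpuid_faulting = true;
    hvm_max_policy.arch_caps = 0;
}
```

The key design point the patch preserves is the ordering: each `calculate_*` reads only policies computed before it, so `init_guest_cpu_policies()` can simply call them in sequence.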
* [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (9 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:16 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy Andrew Cooper
` (3 subsequent siblings)
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Switch to the newer cpu_policy nomenclature. Do some easy cleanup of
includes.
No practical change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
---
xen/arch/x86/cpu-policy.c | 752 ++++++++++++++++++++++++
xen/arch/x86/cpuid.c | 817 +-------------------------
xen/arch/x86/hvm/hvm.c | 1 -
xen/arch/x86/include/asm/cpu-policy.h | 6 +
xen/arch/x86/include/asm/cpuid.h | 11 +-
xen/arch/x86/pv/domain.c | 1 +
xen/arch/x86/setup.c | 2 -
7 files changed, 764 insertions(+), 826 deletions(-)
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index f6a2317ed7bd..83186e940ca7 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -1,13 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include <xen/cache.h>
#include <xen/kernel.h>
+#include <xen/param.h>
#include <xen/sched.h>
#include <xen/lib/x86/cpu-policy.h>
+#include <asm/amd.h>
#include <asm/cpu-policy.h>
+#include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/svm/svm.h>
#include <asm/msr-index.h>
+#include <asm/paging.h>
#include <asm/setup.h>
+#include <asm/xstate.h>
struct cpu_policy __ro_after_init raw_cpu_policy;
struct cpu_policy __ro_after_init host_cpu_policy;
@@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
struct cpu_policy __ro_after_init hvm_def_cpu_policy;
#endif
+const uint32_t known_features[] = INIT_KNOWN_FEATURES;
+
+static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
+static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
+static const uint32_t __initconst hvm_hap_max_featuremask[] =
+ INIT_HVM_HAP_MAX_FEATURES;
+static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
+static const uint32_t __initconst hvm_shadow_def_featuremask[] =
+ INIT_HVM_SHADOW_DEF_FEATURES;
+static const uint32_t __initconst hvm_hap_def_featuremask[] =
+ INIT_HVM_HAP_DEF_FEATURES;
+static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
+
+static const struct feature_name {
+ const char *name;
+ unsigned int bit;
+} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
+
+/*
+ * Parse a list of cpuid feature names -> bool, calling the callback for any
+ * matches found.
+ *
+ * always_inline, because this is init code only and we really don't want a
+ * function pointer call in the middle of the loop.
+ */
+static int __init always_inline parse_cpuid(
+ const char *s, void (*callback)(unsigned int feat, bool val))
+{
+ const char *ss;
+ int val, rc = 0;
+
+ do {
+ const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
+ const char *feat;
+
+ ss = strchr(s, ',');
+ if ( !ss )
+ ss = strchr(s, '\0');
+
+ /* Skip the 'no-' prefix for name comparisons. */
+ feat = s;
+ if ( strncmp(s, "no-", 3) == 0 )
+ feat += 3;
+
+ /* (Re)initialise lhs and rhs for binary search. */
+ lhs = feature_names;
+ rhs = feature_names + ARRAY_SIZE(feature_names);
+
+ while ( lhs < rhs )
+ {
+ int res;
+
+ mid = lhs + (rhs - lhs) / 2;
+ res = cmdline_strcmp(feat, mid->name);
+
+ if ( res < 0 )
+ {
+ rhs = mid;
+ continue;
+ }
+ if ( res > 0 )
+ {
+ lhs = mid + 1;
+ continue;
+ }
+
+ if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
+ {
+ callback(mid->bit, val);
+ mid = NULL;
+ }
+
+ break;
+ }
+
+ /*
+ * Mid being NULL means that the name and boolean were successfully
+ * identified. Everything else is an error.
+ */
+ if ( mid )
+ rc = -EINVAL;
+
+ s = ss + 1;
+ } while ( *ss );
+
+ return rc;
+}
+
+static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
+{
+ if ( !val )
+ setup_clear_cpu_cap(feat);
+ else if ( feat == X86_FEATURE_RDRAND &&
+ (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
+ setup_force_cpu_cap(X86_FEATURE_RDRAND);
+}
+
+static int __init cf_check parse_xen_cpuid(const char *s)
+{
+ return parse_cpuid(s, _parse_xen_cpuid);
+}
+custom_param("cpuid", parse_xen_cpuid);
+
+static bool __initdata dom0_cpuid_cmdline;
+static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
+static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
+
+static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
+{
+ __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
+ __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
+}
+
+static int __init cf_check parse_dom0_cpuid(const char *s)
+{
+ dom0_cpuid_cmdline = true;
+
+ return parse_cpuid(s, _parse_dom0_cpuid);
+}
+custom_param("dom0-cpuid", parse_dom0_cpuid);
+
+#define EMPTY_LEAF ((struct cpuid_leaf){})
+static void zero_leaves(struct cpuid_leaf *l,
+ unsigned int first, unsigned int last)
+{
+ memset(&l[first], 0, sizeof(*l) * (last - first + 1));
+}
+
+static void sanitise_featureset(uint32_t *fs)
+{
+ /* for_each_set_bit() uses unsigned longs. Extend with zeroes. */
+ uint32_t disabled_features[
+ ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
+ unsigned int i;
+
+ for ( i = 0; i < FSCAPINTS; ++i )
+ {
+ /* Clamp to known mask. */
+ fs[i] &= known_features[i];
+
+ /*
+ * Identify which features with deep dependencies have been
+ * disabled.
+ */
+ disabled_features[i] = ~fs[i] & deep_features[i];
+ }
+
+ for_each_set_bit(i, (void *)disabled_features,
+ sizeof(disabled_features) * 8)
+ {
+ const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
+ unsigned int j;
+
+ ASSERT(dfs); /* deep_features[] should guarantee this. */
+
+ for ( j = 0; j < FSCAPINTS; ++j )
+ {
+ fs[j] &= ~dfs[j];
+ disabled_features[j] &= ~dfs[j];
+ }
+ }
+}
+
+static void recalculate_xstate(struct cpu_policy *p)
+{
+ uint64_t xstates = XSTATE_FP_SSE;
+ uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
+ unsigned int i, Da1 = p->xstate.Da1;
+
+ /*
+ * The Da1 leaf is the only piece of information preserved in the common
+ * case. Everything else is derived from other feature state.
+ */
+ memset(&p->xstate, 0, sizeof(p->xstate));
+
+ if ( !p->basic.xsave )
+ return;
+
+ if ( p->basic.avx )
+ {
+ xstates |= X86_XCR0_YMM;
+ xstate_size = max(xstate_size,
+ xstate_offsets[X86_XCR0_YMM_POS] +
+ xstate_sizes[X86_XCR0_YMM_POS]);
+ }
+
+ if ( p->feat.mpx )
+ {
+ xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
+ xstate_size = max(xstate_size,
+ xstate_offsets[X86_XCR0_BNDCSR_POS] +
+ xstate_sizes[X86_XCR0_BNDCSR_POS]);
+ }
+
+ if ( p->feat.avx512f )
+ {
+ xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
+ xstate_size = max(xstate_size,
+ xstate_offsets[X86_XCR0_HI_ZMM_POS] +
+ xstate_sizes[X86_XCR0_HI_ZMM_POS]);
+ }
+
+ if ( p->feat.pku )
+ {
+ xstates |= X86_XCR0_PKRU;
+ xstate_size = max(xstate_size,
+ xstate_offsets[X86_XCR0_PKRU_POS] +
+ xstate_sizes[X86_XCR0_PKRU_POS]);
+ }
+
+ p->xstate.max_size = xstate_size;
+ p->xstate.xcr0_low = xstates & ~XSTATE_XSAVES_ONLY;
+ p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
+
+ p->xstate.Da1 = Da1;
+ if ( p->xstate.xsaves )
+ {
+ p->xstate.xss_low = xstates & XSTATE_XSAVES_ONLY;
+ p->xstate.xss_high = (xstates & XSTATE_XSAVES_ONLY) >> 32;
+ }
+ else
+ xstates &= ~XSTATE_XSAVES_ONLY;
+
+ for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
+ {
+ uint64_t curr_xstate = 1ul << i;
+
+ if ( !(xstates & curr_xstate) )
+ continue;
+
+ p->xstate.comp[i].size = xstate_sizes[i];
+ p->xstate.comp[i].offset = xstate_offsets[i];
+ p->xstate.comp[i].xss = curr_xstate & XSTATE_XSAVES_ONLY;
+ p->xstate.comp[i].align = curr_xstate & xstate_align;
+ }
+}
+
+/*
+ * Misc adjustments to the policy. Mostly clobbering reserved fields and
+ * duplicating shared fields. Intentionally hidden fields are annotated.
+ */
+static void recalculate_misc(struct cpu_policy *p)
+{
+ p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
+ p->basic.apic_id = 0; /* Dynamic. */
+
+ p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
+ p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
+
+ p->basic.raw[0x8] = EMPTY_LEAF;
+
+ /* TODO: Rework topology logic. */
+ memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+ p->basic.raw[0xc] = EMPTY_LEAF;
+
+ p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
+
+ /* Most of Power/RAS hidden from guests. */
+ p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
+
+ p->extd.raw[0x8].d = 0;
+
+ switch ( p->x86_vendor )
+ {
+ case X86_VENDOR_INTEL:
+ p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
+ p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
+ p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
+
+ p->extd.vendor_ebx = 0;
+ p->extd.vendor_ecx = 0;
+ p->extd.vendor_edx = 0;
+
+ p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
+
+ p->extd.raw[0x5] = EMPTY_LEAF;
+ p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
+
+ p->extd.raw[0x8].a &= 0x0000ffff;
+ p->extd.raw[0x8].c = 0;
+ break;
+
+ case X86_VENDOR_AMD:
+ case X86_VENDOR_HYGON:
+ zero_leaves(p->basic.raw, 0x2, 0x3);
+ memset(p->cache.raw, 0, sizeof(p->cache.raw));
+ zero_leaves(p->basic.raw, 0x9, 0xa);
+
+ p->extd.vendor_ebx = p->basic.vendor_ebx;
+ p->extd.vendor_ecx = p->basic.vendor_ecx;
+ p->extd.vendor_edx = p->basic.vendor_edx;
+
+ p->extd.raw_fms = p->basic.raw_fms;
+ p->extd.raw[0x1].b &= 0xff00ffff;
+ p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
+
+ p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
+ p->extd.raw[0x8].c &= 0x0003f0ff;
+
+ p->extd.raw[0x9] = EMPTY_LEAF;
+
+ zero_leaves(p->extd.raw, 0xb, 0x18);
+
+ /* 0x19 - TLB details. Pass through. */
+ /* 0x1a - Perf hints. Pass through. */
+
+ p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
+ p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
+ p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
+ p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
+ p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
+ p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
+ break;
+ }
+}
+
static void __init calculate_raw_policy(void)
{
struct cpu_policy *p = &raw_cpu_policy;
+ x86_cpuid_policy_fill_native(p);
+
+ /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
+ ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
+
/* 0x000000ce MSR_INTEL_PLATFORM_INFO */
/* Was already added by probe_cpuid_faulting() */
@@ -34,9 +362,50 @@ static void __init calculate_raw_policy(void)
static void __init calculate_host_policy(void)
{
struct cpu_policy *p = &host_cpu_policy;
+ unsigned int max_extd_leaf;
*p = raw_cpu_policy;
+ p->basic.max_leaf =
+ min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
+ p->feat.max_subleaf =
+ min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
+
+ max_extd_leaf = p->extd.max_leaf;
+
+ /*
+ * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
+ * dispatch serialising for Spectre mitigations. Extend max_extd_leaf
+ * beyond what hardware supports, to include the feature leaf containing
+ * this information.
+ */
+ if ( cpu_has_lfence_dispatch )
+ max_extd_leaf = max(max_extd_leaf, 0x80000021);
+
+ p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
+ ARRAY_SIZE(p->extd.raw) - 1);
+
+ x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
+ recalculate_xstate(p);
+ recalculate_misc(p);
+
+ /* When vPMU is disabled, drop it from the host policy. */
+ if ( vpmu_mode == XENPMU_MODE_OFF )
+ p->basic.raw[0xa] = EMPTY_LEAF;
+
+ if ( p->extd.svm )
+ {
+ /* Clamp to implemented features which require hardware support. */
+ p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
+ (1u << SVM_FEATURE_LBRV) |
+ (1u << SVM_FEATURE_NRIPS) |
+ (1u << SVM_FEATURE_PAUSEFILTER) |
+ (1u << SVM_FEATURE_DECODEASSISTS));
+ /* Enable features which are always emulated. */
+ p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
+ (1u << SVM_FEATURE_TSCRATEMSR));
+ }
+
/* 0x000000ce MSR_INTEL_PLATFORM_INFO */
/* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
@@ -51,11 +420,88 @@ static void __init calculate_host_policy(void)
ARCH_CAPS_PBRSB_NO);
}
+static void __init guest_common_default_feature_adjustments(uint32_t *fs)
+{
+ /*
+ * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
+ * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
+ * compensate.
+ *
+ * Mitigate by hiding RDRAND from guests by default, unless explicitly
+ * overridden on the Xen command line (cpuid=rdrand). Irrespective of the
+ * default setting, guests can use RDRAND if explicitly enabled
+ * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
+ * previously using RDRAND can migrate in.
+ */
+ if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+ boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
+ cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
+ __clear_bit(X86_FEATURE_RDRAND, fs);
+
+ /*
+ * On certain hardware, speculative or errata workarounds can result in
+ * TSX being placed in "force-abort" mode, where it doesn't actually
+ * function as expected, but is technically compatible with the ISA.
+ *
+ * Do not advertise RTM to guests by default if it won't actually work.
+ */
+ if ( rtm_disabled )
+ __clear_bit(X86_FEATURE_RTM, fs);
+}
+
+static void __init guest_common_feature_adjustments(uint32_t *fs)
+{
+ /* Unconditionally claim to be able to set the hypervisor bit. */
+ __set_bit(X86_FEATURE_HYPERVISOR, fs);
+
+ /*
+ * If IBRS is offered to the guest, unconditionally offer STIBP. It is a
+ * nop on non-HT hardware, and has this behaviour to make heterogeneous
+ * setups easier to manage.
+ */
+ if ( test_bit(X86_FEATURE_IBRSB, fs) )
+ __set_bit(X86_FEATURE_STIBP, fs);
+ if ( test_bit(X86_FEATURE_IBRS, fs) )
+ __set_bit(X86_FEATURE_AMD_STIBP, fs);
+
+ /*
+ * On hardware which supports IBRS/IBPB, we can offer IBPB independently
+ * of IBRS by using the AMD feature bit. An administrator may wish for
+ * performance reasons to offer IBPB without IBRS.
+ */
+ if ( host_cpu_policy.feat.ibrsb )
+ __set_bit(X86_FEATURE_IBPB, fs);
+}
+
static void __init calculate_pv_max_policy(void)
{
struct cpu_policy *p = &pv_max_cpu_policy;
+ uint32_t fs[FSCAPINTS];
+ unsigned int i;
*p = host_cpu_policy;
+ x86_cpu_policy_to_featureset(p, fs);
+
+ for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+ fs[i] &= pv_max_featuremask[i];
+
+ /*
+ * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
+ * availability, or admin choice), hide the feature.
+ */
+ if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
+ {
+ __clear_bit(X86_FEATURE_IBRSB, fs);
+ __clear_bit(X86_FEATURE_IBRS, fs);
+ }
+
+ guest_common_feature_adjustments(fs);
+
+ sanitise_featureset(fs);
+ x86_cpu_featureset_to_policy(fs, p);
+ recalculate_xstate(p);
+
+ p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
p->arch_caps.raw = 0; /* Not supported yet. */
}
@@ -63,15 +509,112 @@ static void __init calculate_pv_max_policy(void)
static void __init calculate_pv_def_policy(void)
{
struct cpu_policy *p = &pv_def_cpu_policy;
+ uint32_t fs[FSCAPINTS];
+ unsigned int i;
*p = pv_max_cpu_policy;
+ x86_cpu_policy_to_featureset(p, fs);
+
+ for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+ fs[i] &= pv_def_featuremask[i];
+
+ guest_common_feature_adjustments(fs);
+ guest_common_default_feature_adjustments(fs);
+
+ sanitise_featureset(fs);
+ x86_cpu_featureset_to_policy(fs, p);
+ recalculate_xstate(p);
}
static void __init calculate_hvm_max_policy(void)
{
struct cpu_policy *p = &hvm_max_cpu_policy;
+ uint32_t fs[FSCAPINTS];
+ unsigned int i;
+ const uint32_t *mask;
*p = host_cpu_policy;
+ x86_cpu_policy_to_featureset(p, fs);
+
+ mask = hvm_hap_supported() ?
+ hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
+
+ for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+ fs[i] &= mask[i];
+
+ /*
+ * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
+ * (x2)APIC isn't enabled.
+ */
+ __set_bit(X86_FEATURE_APIC, fs);
+ __set_bit(X86_FEATURE_X2APIC, fs);
+
+ /*
+ * We don't support EFER.LMSLE at all. AMD has dropped the feature from
+ * hardware and allocated a CPUID bit to indicate its absence.
+ */
+ __set_bit(X86_FEATURE_NO_LMSL, fs);
+
+ /*
+ * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
+ * long mode (and init_amd() has cleared it out of host capabilities), but
+ * HVM guests are able if running in protected mode.
+ */
+ if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
+ raw_cpu_policy.basic.sep )
+ __set_bit(X86_FEATURE_SEP, fs);
+
+ /*
+ * VIRT_SSBD is exposed in the default policy as a result of
+ * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
+ */
+ if ( amd_virt_spec_ctrl )
+ __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+ /*
+ * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
+ * availability, or admin choice), hide the feature.
+ */
+ if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
+ {
+ __clear_bit(X86_FEATURE_IBRSB, fs);
+ __clear_bit(X86_FEATURE_IBRS, fs);
+ }
+ else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+ /*
+ * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
+ * and implemented using the former. Expose in the max policy only as
+ * the preference is for guests to use SPEC_CTRL.SSBD if available.
+ */
+ __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+ /*
+ * With VT-x, some features are only supported by Xen if dedicated
+ * hardware support is also available.
+ */
+ if ( cpu_has_vmx )
+ {
+ if ( !cpu_has_vmx_mpx )
+ __clear_bit(X86_FEATURE_MPX, fs);
+
+ if ( !cpu_has_vmx_xsaves )
+ __clear_bit(X86_FEATURE_XSAVES, fs);
+ }
+
+ /*
+ * Xen doesn't use PKS, so the guest support for it has opted to not use
+ * the VMCS load/save controls for efficiency reasons. This depends on
+ * the exact vmentry/exit behaviour, so don't expose PKS in other
+ * situations until someone has cross-checked the behaviour for safety.
+ */
+ if ( !cpu_has_vmx )
+ __clear_bit(X86_FEATURE_PKS, fs);
+
+ guest_common_feature_adjustments(fs);
+
+ sanitise_featureset(fs);
+ x86_cpu_featureset_to_policy(fs, p);
+ recalculate_xstate(p);
/* It's always possible to emulate CPUID faulting for HVM guests */
p->platform_info.cpuid_faulting = true;
@@ -82,8 +625,32 @@ static void __init calculate_hvm_max_policy(void)
static void __init calculate_hvm_def_policy(void)
{
struct cpu_policy *p = &hvm_def_cpu_policy;
+ uint32_t fs[FSCAPINTS];
+ unsigned int i;
+ const uint32_t *mask;
*p = hvm_max_cpu_policy;
+ x86_cpu_policy_to_featureset(p, fs);
+
+ mask = hvm_hap_supported() ?
+ hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
+
+ for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+ fs[i] &= mask[i];
+
+ guest_common_feature_adjustments(fs);
+ guest_common_default_feature_adjustments(fs);
+
+ /*
+ * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
+ * amd_virt_spec_ctrl is set.
+ */
+ if ( amd_virt_spec_ctrl )
+ __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+ sanitise_featureset(fs);
+ x86_cpu_featureset_to_policy(fs, p);
+ recalculate_xstate(p);
}
void __init init_guest_cpu_policies(void)
@@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
return 0;
}
+
+void recalculate_cpuid_policy(struct domain *d)
+{
+ struct cpu_policy *p = d->arch.cpuid;
+ const struct cpu_policy *max = is_pv_domain(d)
+ ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
+ : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
+ uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
+ unsigned int i;
+
+ if ( !max )
+ {
+ ASSERT_UNREACHABLE();
+ return;
+ }
+
+ p->x86_vendor = x86_cpuid_lookup_vendor(
+ p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
+
+ p->basic.max_leaf = min(p->basic.max_leaf, max->basic.max_leaf);
+ p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
+ p->extd.max_leaf = 0x80000000 | min(p->extd.max_leaf & 0xffff,
+ ((p->x86_vendor & (X86_VENDOR_AMD |
+ X86_VENDOR_HYGON))
+ ? CPUID_GUEST_NR_EXTD_AMD
+ : CPUID_GUEST_NR_EXTD_INTEL) - 1);
+
+ x86_cpu_policy_to_featureset(p, fs);
+ x86_cpu_policy_to_featureset(max, max_fs);
+
+ if ( is_hvm_domain(d) )
+ {
+ /*
+ * HVM domains using Shadow paging have further restrictions on their
+ * available paging features.
+ */
+ if ( !hap_enabled(d) )
+ {
+ for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
+ max_fs[i] &= hvm_shadow_max_featuremask[i];
+ }
+
+ /* Hide nested-virt if it hasn't been explicitly configured. */
+ if ( !nestedhvm_enabled(d) )
+ {
+ __clear_bit(X86_FEATURE_VMX, max_fs);
+ __clear_bit(X86_FEATURE_SVM, max_fs);
+ }
+ }
+
+ /*
+ * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY. These bits
+ * affect how to interpret topology information in other cpuid leaves.
+ */
+ __set_bit(X86_FEATURE_HTT, max_fs);
+ __set_bit(X86_FEATURE_X2APIC, max_fs);
+ __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
+
+ /*
+ * 32bit PV domains can't use any Long Mode features, and cannot use
+ * SYSCALL on non-AMD hardware.
+ */
+ if ( is_pv_32bit_domain(d) )
+ {
+ __clear_bit(X86_FEATURE_LM, max_fs);
+ if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+ __clear_bit(X86_FEATURE_SYSCALL, max_fs);
+ }
+
+ /* Clamp the toolstack's choices to reality. */
+ for ( i = 0; i < ARRAY_SIZE(fs); i++ )
+ fs[i] &= max_fs[i];
+
+ if ( p->basic.max_leaf < XSTATE_CPUID )
+ __clear_bit(X86_FEATURE_XSAVE, fs);
+
+ sanitise_featureset(fs);
+
+ /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
+ fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+ cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
+ fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
+ (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+ cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
+
+ x86_cpu_featureset_to_policy(fs, p);
+
+ /* Pass host cacheline size through to guests. */
+ p->basic.clflush_size = max->basic.clflush_size;
+
+ p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
+ p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
+ paging_max_paddr_bits(d));
+ p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
+ (p->basic.pae || p->basic.pse36) ? 36 : 32);
+
+ p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
+
+ recalculate_xstate(p);
+ recalculate_misc(p);
+
+ for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
+ {
+ if ( p->cache.subleaf[i].type >= 1 &&
+ p->cache.subleaf[i].type <= 3 )
+ {
+ /* Subleaf has a valid cache type. Zero reserved fields. */
+ p->cache.raw[i].a &= 0xffffc3ffu;
+ p->cache.raw[i].d &= 0x00000007u;
+ }
+ else
+ {
+ /* Subleaf is not valid. Zero the rest of the union. */
+ zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
+ break;
+ }
+ }
+
+ if ( vpmu_mode == XENPMU_MODE_OFF ||
+ ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
+ p->basic.raw[0xa] = EMPTY_LEAF;
+
+ if ( !p->extd.svm )
+ p->extd.raw[0xa] = EMPTY_LEAF;
+
+ if ( !p->extd.page1gb )
+ p->extd.raw[0x19] = EMPTY_LEAF;
+}
+
+void __init init_dom0_cpuid_policy(struct domain *d)
+{
+ struct cpu_policy *p = d->arch.cpuid;
+
+ /* dom0 can't migrate. Give it ITSC if available. */
+ if ( cpu_has_itsc )
+ p->extd.itsc = true;
+
+ /*
+ * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
+ * so dom0 can turn off workarounds as appropriate. Temporary, until the
+ * domain policy logic gains a better understanding of MSRs.
+ */
+ if ( cpu_has_arch_caps )
+ p->feat.arch_caps = true;
+
+ /* Apply dom0-cpuid= command line settings, if provided. */
+ if ( dom0_cpuid_cmdline )
+ {
+ uint32_t fs[FSCAPINTS];
+ unsigned int i;
+
+ x86_cpu_policy_to_featureset(p, fs);
+
+ for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+ {
+ fs[i] |= dom0_enable_feat [i];
+ fs[i] &= ~dom0_disable_feat[i];
+ }
+
+ x86_cpu_featureset_to_policy(fs, p);
+
+ recalculate_cpuid_policy(d);
+ }
+}
+
+static void __init __maybe_unused build_assertions(void)
+{
+ BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
+ BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
+ BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
+ BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
+ BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
+
+ /* Find some more clever allocation scheme if this trips. */
+ BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
+
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
+ sizeof(raw_cpu_policy.basic.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
+ sizeof(raw_cpu_policy.feat.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
+ sizeof(raw_cpu_policy.xstate.raw));
+ BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
+ sizeof(raw_cpu_policy.extd.raw));
+}
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 5eb5f1893516..3f20c342fde8 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1,638 +1,14 @@
-#include <xen/init.h>
-#include <xen/lib.h>
-#include <xen/param.h>
#include <xen/sched.h>
-#include <xen/nospec.h>
-#include <asm/amd.h>
+#include <xen/types.h>
+
+#include <public/hvm/params.h>
+
#include <asm/cpu-policy.h>
#include <asm/cpuid.h>
-#include <asm/hvm/hvm.h>
-#include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/svm/svm.h>
#include <asm/hvm/viridian.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/paging.h>
-#include <asm/processor.h>
#include <asm/xstate.h>
-const uint32_t known_features[] = INIT_KNOWN_FEATURES;
-
-static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
-static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
-static const uint32_t __initconst hvm_hap_max_featuremask[] =
- INIT_HVM_HAP_MAX_FEATURES;
-static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
-static const uint32_t __initconst hvm_shadow_def_featuremask[] =
- INIT_HVM_SHADOW_DEF_FEATURES;
-static const uint32_t __initconst hvm_hap_def_featuremask[] =
- INIT_HVM_HAP_DEF_FEATURES;
-static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
-
-static const struct feature_name {
- const char *name;
- unsigned int bit;
-} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
-
-/*
- * Parse a list of cpuid feature names -> bool, calling the callback for any
- * matches found.
- *
- * always_inline, because this is init code only and we really don't want a
- * function pointer call in the middle of the loop.
- */
-static int __init always_inline parse_cpuid(
- const char *s, void (*callback)(unsigned int feat, bool val))
-{
- const char *ss;
- int val, rc = 0;
-
- do {
- const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
- const char *feat;
-
- ss = strchr(s, ',');
- if ( !ss )
- ss = strchr(s, '\0');
-
- /* Skip the 'no-' prefix for name comparisons. */
- feat = s;
- if ( strncmp(s, "no-", 3) == 0 )
- feat += 3;
-
- /* (Re)initalise lhs and rhs for binary search. */
- lhs = feature_names;
- rhs = feature_names + ARRAY_SIZE(feature_names);
-
- while ( lhs < rhs )
- {
- int res;
-
- mid = lhs + (rhs - lhs) / 2;
- res = cmdline_strcmp(feat, mid->name);
-
- if ( res < 0 )
- {
- rhs = mid;
- continue;
- }
- if ( res > 0 )
- {
- lhs = mid + 1;
- continue;
- }
-
- if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
- {
- callback(mid->bit, val);
- mid = NULL;
- }
-
- break;
- }
-
- /*
- * Mid being NULL means that the name and boolean were successfully
- * identified. Everything else is an error.
- */
- if ( mid )
- rc = -EINVAL;
-
- s = ss + 1;
- } while ( *ss );
-
- return rc;
-}
-
-static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
-{
- if ( !val )
- setup_clear_cpu_cap(feat);
- else if ( feat == X86_FEATURE_RDRAND &&
- (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
- setup_force_cpu_cap(X86_FEATURE_RDRAND);
-}
-
-static int __init cf_check parse_xen_cpuid(const char *s)
-{
- return parse_cpuid(s, _parse_xen_cpuid);
-}
-custom_param("cpuid", parse_xen_cpuid);
-
-static bool __initdata dom0_cpuid_cmdline;
-static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
-static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
-
-static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
-{
- __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
- __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
-}
-
-static int __init cf_check parse_dom0_cpuid(const char *s)
-{
- dom0_cpuid_cmdline = true;
-
- return parse_cpuid(s, _parse_dom0_cpuid);
-}
-custom_param("dom0-cpuid", parse_dom0_cpuid);
-
#define EMPTY_LEAF ((struct cpuid_leaf){})
-static void zero_leaves(struct cpuid_leaf *l,
- unsigned int first, unsigned int last)
-{
- memset(&l[first], 0, sizeof(*l) * (last - first + 1));
-}
-
-static void sanitise_featureset(uint32_t *fs)
-{
- /* for_each_set_bit() uses unsigned longs. Extend with zeroes. */
- uint32_t disabled_features[
- ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
- unsigned int i;
-
- for ( i = 0; i < FSCAPINTS; ++i )
- {
- /* Clamp to known mask. */
- fs[i] &= known_features[i];
-
- /*
- * Identify which features with deep dependencies have been
- * disabled.
- */
- disabled_features[i] = ~fs[i] & deep_features[i];
- }
-
- for_each_set_bit(i, (void *)disabled_features,
- sizeof(disabled_features) * 8)
- {
- const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
- unsigned int j;
-
- ASSERT(dfs); /* deep_features[] should guarentee this. */
-
- for ( j = 0; j < FSCAPINTS; ++j )
- {
- fs[j] &= ~dfs[j];
- disabled_features[j] &= ~dfs[j];
- }
- }
-}
-
-static void recalculate_xstate(struct cpuid_policy *p)
-{
- uint64_t xstates = XSTATE_FP_SSE;
- uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
- unsigned int i, Da1 = p->xstate.Da1;
-
- /*
- * The Da1 leaf is the only piece of information preserved in the common
- * case. Everything else is derived from other feature state.
- */
- memset(&p->xstate, 0, sizeof(p->xstate));
-
- if ( !p->basic.xsave )
- return;
-
- if ( p->basic.avx )
- {
- xstates |= X86_XCR0_YMM;
- xstate_size = max(xstate_size,
- xstate_offsets[X86_XCR0_YMM_POS] +
- xstate_sizes[X86_XCR0_YMM_POS]);
- }
-
- if ( p->feat.mpx )
- {
- xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
- xstate_size = max(xstate_size,
- xstate_offsets[X86_XCR0_BNDCSR_POS] +
- xstate_sizes[X86_XCR0_BNDCSR_POS]);
- }
-
- if ( p->feat.avx512f )
- {
- xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
- xstate_size = max(xstate_size,
- xstate_offsets[X86_XCR0_HI_ZMM_POS] +
- xstate_sizes[X86_XCR0_HI_ZMM_POS]);
- }
-
- if ( p->feat.pku )
- {
- xstates |= X86_XCR0_PKRU;
- xstate_size = max(xstate_size,
- xstate_offsets[X86_XCR0_PKRU_POS] +
- xstate_sizes[X86_XCR0_PKRU_POS]);
- }
-
- p->xstate.max_size = xstate_size;
- p->xstate.xcr0_low = xstates & ~XSTATE_XSAVES_ONLY;
- p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
-
- p->xstate.Da1 = Da1;
- if ( p->xstate.xsaves )
- {
- p->xstate.xss_low = xstates & XSTATE_XSAVES_ONLY;
- p->xstate.xss_high = (xstates & XSTATE_XSAVES_ONLY) >> 32;
- }
- else
- xstates &= ~XSTATE_XSAVES_ONLY;
-
- for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
- {
- uint64_t curr_xstate = 1ul << i;
-
- if ( !(xstates & curr_xstate) )
- continue;
-
- p->xstate.comp[i].size = xstate_sizes[i];
- p->xstate.comp[i].offset = xstate_offsets[i];
- p->xstate.comp[i].xss = curr_xstate & XSTATE_XSAVES_ONLY;
- p->xstate.comp[i].align = curr_xstate & xstate_align;
- }
-}
-
-/*
- * Misc adjustments to the policy. Mostly clobbering reserved fields and
- * duplicating shared fields. Intentionally hidden fields are annotated.
- */
-static void recalculate_misc(struct cpuid_policy *p)
-{
- p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
- p->basic.apic_id = 0; /* Dynamic. */
-
- p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
- p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
-
- p->basic.raw[0x8] = EMPTY_LEAF;
-
- /* TODO: Rework topology logic. */
- memset(p->topo.raw, 0, sizeof(p->topo.raw));
-
- p->basic.raw[0xc] = EMPTY_LEAF;
-
- p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
-
- /* Most of Power/RAS hidden from guests. */
- p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
-
- p->extd.raw[0x8].d = 0;
-
- switch ( p->x86_vendor )
- {
- case X86_VENDOR_INTEL:
- p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
- p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
- p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
-
- p->extd.vendor_ebx = 0;
- p->extd.vendor_ecx = 0;
- p->extd.vendor_edx = 0;
-
- p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
-
- p->extd.raw[0x5] = EMPTY_LEAF;
- p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
-
- p->extd.raw[0x8].a &= 0x0000ffff;
- p->extd.raw[0x8].c = 0;
- break;
-
- case X86_VENDOR_AMD:
- case X86_VENDOR_HYGON:
- zero_leaves(p->basic.raw, 0x2, 0x3);
- memset(p->cache.raw, 0, sizeof(p->cache.raw));
- zero_leaves(p->basic.raw, 0x9, 0xa);
-
- p->extd.vendor_ebx = p->basic.vendor_ebx;
- p->extd.vendor_ecx = p->basic.vendor_ecx;
- p->extd.vendor_edx = p->basic.vendor_edx;
-
- p->extd.raw_fms = p->basic.raw_fms;
- p->extd.raw[0x1].b &= 0xff00ffff;
- p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
-
- p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
- p->extd.raw[0x8].c &= 0x0003f0ff;
-
- p->extd.raw[0x9] = EMPTY_LEAF;
-
- zero_leaves(p->extd.raw, 0xb, 0x18);
-
- /* 0x19 - TLB details. Pass through. */
- /* 0x1a - Perf hints. Pass through. */
-
- p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
- p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
- p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
- p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
- p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
- p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
- break;
- }
-}
-
-static void __init calculate_raw_policy(void)
-{
- struct cpuid_policy *p = &raw_cpu_policy;
-
- x86_cpuid_policy_fill_native(p);
-
- /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
- ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
-}
-
-static void __init calculate_host_policy(void)
-{
- struct cpuid_policy *p = &host_cpu_policy;
- unsigned int max_extd_leaf;
-
- *p = raw_cpu_policy;
-
- p->basic.max_leaf =
- min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
- p->feat.max_subleaf =
- min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
-
- max_extd_leaf = p->extd.max_leaf;
-
- /*
- * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
- * dispatch serialising for Spectre mitigations. Extend max_extd_leaf
- * beyond what hardware supports, to include the feature leaf containing
- * this information.
- */
- if ( cpu_has_lfence_dispatch )
- max_extd_leaf = max(max_extd_leaf, 0x80000021);
-
- p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
- ARRAY_SIZE(p->extd.raw) - 1);
-
- x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
- recalculate_xstate(p);
- recalculate_misc(p);
-
- /* When vPMU is disabled, drop it from the host policy. */
- if ( vpmu_mode == XENPMU_MODE_OFF )
- p->basic.raw[0xa] = EMPTY_LEAF;
-
- if ( p->extd.svm )
- {
- /* Clamp to implemented features which require hardware support. */
- p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
- (1u << SVM_FEATURE_LBRV) |
- (1u << SVM_FEATURE_NRIPS) |
- (1u << SVM_FEATURE_PAUSEFILTER) |
- (1u << SVM_FEATURE_DECODEASSISTS));
- /* Enable features which are always emulated. */
- p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
- (1u << SVM_FEATURE_TSCRATEMSR));
- }
-}
-
-static void __init guest_common_default_feature_adjustments(uint32_t *fs)
-{
- /*
- * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
- * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
- * compensate.
- *
- * Mitigate by hiding RDRAND from guests by default, unless explicitly
- * overridden on the Xen command line (cpuid=rdrand). Irrespective of the
- * default setting, guests can use RDRAND if explicitly enabled
- * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
- * previously using RDRAND can migrate in.
- */
- if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
- boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
- cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
- __clear_bit(X86_FEATURE_RDRAND, fs);
-
- /*
- * On certain hardware, speculative or errata workarounds can result in
- * TSX being placed in "force-abort" mode, where it doesn't actually
- * function as expected, but is technically compatible with the ISA.
- *
- * Do not advertise RTM to guests by default if it won't actually work.
- */
- if ( rtm_disabled )
- __clear_bit(X86_FEATURE_RTM, fs);
-}
-
-static void __init guest_common_feature_adjustments(uint32_t *fs)
-{
- /* Unconditionally claim to be able to set the hypervisor bit. */
- __set_bit(X86_FEATURE_HYPERVISOR, fs);
-
- /*
- * If IBRS is offered to the guest, unconditionally offer STIBP. It is a
- * nop on non-HT hardware, and has this behaviour to make heterogeneous
- * setups easier to manage.
- */
- if ( test_bit(X86_FEATURE_IBRSB, fs) )
- __set_bit(X86_FEATURE_STIBP, fs);
- if ( test_bit(X86_FEATURE_IBRS, fs) )
- __set_bit(X86_FEATURE_AMD_STIBP, fs);
-
- /*
- * On hardware which supports IBRS/IBPB, we can offer IBPB independently
- * of IBRS by using the AMD feature bit. An administrator may wish for
- * performance reasons to offer IBPB without IBRS.
- */
- if ( host_cpu_policy.feat.ibrsb )
- __set_bit(X86_FEATURE_IBPB, fs);
-}
-
-static void __init calculate_pv_max_policy(void)
-{
- struct cpuid_policy *p = &pv_max_cpu_policy;
- uint32_t pv_featureset[FSCAPINTS];
- unsigned int i;
-
- *p = host_cpu_policy;
- x86_cpu_policy_to_featureset(p, pv_featureset);
-
- for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
- pv_featureset[i] &= pv_max_featuremask[i];
-
- /*
- * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
- * availability, or admin choice), hide the feature.
- */
- if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
- {
- __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
- __clear_bit(X86_FEATURE_IBRS, pv_featureset);
- }
-
- guest_common_feature_adjustments(pv_featureset);
-
- sanitise_featureset(pv_featureset);
- x86_cpu_featureset_to_policy(pv_featureset, p);
- recalculate_xstate(p);
-
- p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
-}
-
-static void __init calculate_pv_def_policy(void)
-{
- struct cpuid_policy *p = &pv_def_cpu_policy;
- uint32_t pv_featureset[FSCAPINTS];
- unsigned int i;
-
- *p = pv_max_cpu_policy;
- x86_cpu_policy_to_featureset(p, pv_featureset);
-
- for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
- pv_featureset[i] &= pv_def_featuremask[i];
-
- guest_common_feature_adjustments(pv_featureset);
- guest_common_default_feature_adjustments(pv_featureset);
-
- sanitise_featureset(pv_featureset);
- x86_cpu_featureset_to_policy(pv_featureset, p);
- recalculate_xstate(p);
-}
-
-static void __init calculate_hvm_max_policy(void)
-{
- struct cpuid_policy *p = &hvm_max_cpu_policy;
- uint32_t hvm_featureset[FSCAPINTS];
- unsigned int i;
- const uint32_t *hvm_featuremask;
-
- *p = host_cpu_policy;
- x86_cpu_policy_to_featureset(p, hvm_featureset);
-
- hvm_featuremask = hvm_hap_supported() ?
- hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
-
- for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
- hvm_featureset[i] &= hvm_featuremask[i];
-
- /*
- * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
- * (x2)APIC isn't enabled.
- */
- __set_bit(X86_FEATURE_APIC, hvm_featureset);
- __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
-
- /*
- * We don't support EFER.LMSLE at all. AMD has dropped the feature from
- * hardware and allocated a CPUID bit to indicate its absence.
- */
- __set_bit(X86_FEATURE_NO_LMSL, hvm_featureset);
-
- /*
- * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
- * long mode (and init_amd() has cleared it out of host capabilities), but
- * HVM guests are able if running in protected mode.
- */
- if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
- raw_cpu_policy.basic.sep )
- __set_bit(X86_FEATURE_SEP, hvm_featureset);
-
- /*
- * VIRT_SSBD is exposed in the default policy as a result of
- * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
- */
- if ( amd_virt_spec_ctrl )
- __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
- /*
- * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
- * availability, or admin choice), hide the feature.
- */
- if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
- {
- __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
- __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
- }
- else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
- /*
- * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
- * and implemented using the former. Expose in the max policy only as
- * the preference is for guests to use SPEC_CTRL.SSBD if available.
- */
- __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
- /*
- * With VT-x, some features are only supported by Xen if dedicated
- * hardware support is also available.
- */
- if ( cpu_has_vmx )
- {
- if ( !cpu_has_vmx_mpx )
- __clear_bit(X86_FEATURE_MPX, hvm_featureset);
-
- if ( !cpu_has_vmx_xsaves )
- __clear_bit(X86_FEATURE_XSAVES, hvm_featureset);
- }
-
- /*
- * Xen doesn't use PKS, so the guest support for it has opted to not use
- * the VMCS load/save controls for efficiency reasons. This depends on
- * the exact vmentry/exit behaviour, so don't expose PKS in other
- * situations until someone has cross-checked the behaviour for safety.
- */
- if ( !cpu_has_vmx )
- __clear_bit(X86_FEATURE_PKS, hvm_featureset);
-
- guest_common_feature_adjustments(hvm_featureset);
-
- sanitise_featureset(hvm_featureset);
- x86_cpu_featureset_to_policy(hvm_featureset, p);
- recalculate_xstate(p);
-}
-
-static void __init calculate_hvm_def_policy(void)
-{
- struct cpuid_policy *p = &hvm_def_cpu_policy;
- uint32_t hvm_featureset[FSCAPINTS];
- unsigned int i;
- const uint32_t *hvm_featuremask;
-
- *p = hvm_max_cpu_policy;
- x86_cpu_policy_to_featureset(p, hvm_featureset);
-
- hvm_featuremask = hvm_hap_supported() ?
- hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
-
- for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
- hvm_featureset[i] &= hvm_featuremask[i];
-
- guest_common_feature_adjustments(hvm_featureset);
- guest_common_default_feature_adjustments(hvm_featureset);
-
- /*
- * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
- * amd_virt_spec_ctrl is set.
- */
- if ( amd_virt_spec_ctrl )
- __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
- sanitise_featureset(hvm_featureset);
- x86_cpu_featureset_to_policy(hvm_featureset, p);
- recalculate_xstate(p);
-}
-
-void __init init_guest_cpuid(void)
-{
- calculate_raw_policy();
- calculate_host_policy();
-
- if ( IS_ENABLED(CONFIG_PV) )
- {
- calculate_pv_max_policy();
- calculate_pv_def_policy();
- }
-
- if ( hvm_enabled )
- {
- calculate_hvm_max_policy();
- calculate_hvm_def_policy();
- }
-}
bool recheck_cpu_features(unsigned int cpu)
{
@@ -656,170 +32,6 @@ bool recheck_cpu_features(unsigned int cpu)
return okay;
}
-void recalculate_cpuid_policy(struct domain *d)
-{
- struct cpuid_policy *p = d->arch.cpuid;
- const struct cpuid_policy *max = is_pv_domain(d)
- ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
- : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
- uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
- unsigned int i;
-
- if ( !max )
- {
- ASSERT_UNREACHABLE();
- return;
- }
-
- p->x86_vendor = x86_cpuid_lookup_vendor(
- p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
-
- p->basic.max_leaf = min(p->basic.max_leaf, max->basic.max_leaf);
- p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
- p->extd.max_leaf = 0x80000000 | min(p->extd.max_leaf & 0xffff,
- ((p->x86_vendor & (X86_VENDOR_AMD |
- X86_VENDOR_HYGON))
- ? CPUID_GUEST_NR_EXTD_AMD
- : CPUID_GUEST_NR_EXTD_INTEL) - 1);
-
- x86_cpu_policy_to_featureset(p, fs);
- x86_cpu_policy_to_featureset(max, max_fs);
-
- if ( is_hvm_domain(d) )
- {
- /*
- * HVM domains using Shadow paging have further restrictions on their
- * available paging features.
- */
- if ( !hap_enabled(d) )
- {
- for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
- max_fs[i] &= hvm_shadow_max_featuremask[i];
- }
-
- /* Hide nested-virt if it hasn't been explicitly configured. */
- if ( !nestedhvm_enabled(d) )
- {
- __clear_bit(X86_FEATURE_VMX, max_fs);
- __clear_bit(X86_FEATURE_SVM, max_fs);
- }
- }
-
- /*
- * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY. These bits
- * affect how to interpret topology information in other cpuid leaves.
- */
- __set_bit(X86_FEATURE_HTT, max_fs);
- __set_bit(X86_FEATURE_X2APIC, max_fs);
- __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
-
- /*
- * 32bit PV domains can't use any Long Mode features, and cannot use
- * SYSCALL on non-AMD hardware.
- */
- if ( is_pv_32bit_domain(d) )
- {
- __clear_bit(X86_FEATURE_LM, max_fs);
- if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
- __clear_bit(X86_FEATURE_SYSCALL, max_fs);
- }
-
- /* Clamp the toolstacks choices to reality. */
- for ( i = 0; i < ARRAY_SIZE(fs); i++ )
- fs[i] &= max_fs[i];
-
- if ( p->basic.max_leaf < XSTATE_CPUID )
- __clear_bit(X86_FEATURE_XSAVE, fs);
-
- sanitise_featureset(fs);
-
- /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
- fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
- cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
- fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
- (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
- cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
-
- x86_cpu_featureset_to_policy(fs, p);
-
- /* Pass host cacheline size through to guests. */
- p->basic.clflush_size = max->basic.clflush_size;
-
- p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
- p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
- paging_max_paddr_bits(d));
- p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
- (p->basic.pae || p->basic.pse36) ? 36 : 32);
-
- p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
-
- recalculate_xstate(p);
- recalculate_misc(p);
-
- for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
- {
- if ( p->cache.subleaf[i].type >= 1 &&
- p->cache.subleaf[i].type <= 3 )
- {
- /* Subleaf has a valid cache type. Zero reserved fields. */
- p->cache.raw[i].a &= 0xffffc3ffu;
- p->cache.raw[i].d &= 0x00000007u;
- }
- else
- {
- /* Subleaf is not valid. Zero the rest of the union. */
- zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
- break;
- }
- }
-
- if ( vpmu_mode == XENPMU_MODE_OFF ||
- ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
- p->basic.raw[0xa] = EMPTY_LEAF;
-
- if ( !p->extd.svm )
- p->extd.raw[0xa] = EMPTY_LEAF;
-
- if ( !p->extd.page1gb )
- p->extd.raw[0x19] = EMPTY_LEAF;
-}
-
-void __init init_dom0_cpuid_policy(struct domain *d)
-{
- struct cpuid_policy *p = d->arch.cpuid;
-
- /* dom0 can't migrate. Give it ITSC if available. */
- if ( cpu_has_itsc )
- p->extd.itsc = true;
-
- /*
- * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
- * so dom0 can turn off workarounds as appropriate. Temporary, until the
- * domain policy logic gains a better understanding of MSRs.
- */
- if ( cpu_has_arch_caps )
- p->feat.arch_caps = true;
-
- /* Apply dom0-cpuid= command line settings, if provided. */
- if ( dom0_cpuid_cmdline )
- {
- uint32_t fs[FSCAPINTS];
- unsigned int i;
-
- x86_cpu_policy_to_featureset(p, fs);
-
- for ( i = 0; i < ARRAY_SIZE(fs); ++i )
- {
- fs[i] |= dom0_enable_feat [i];
- fs[i] &= ~dom0_disable_feat[i];
- }
-
- x86_cpu_featureset_to_policy(fs, p);
-
- recalculate_cpuid_policy(d);
- }
-}
-
void guest_cpuid(const struct vcpu *v, uint32_t leaf,
uint32_t subleaf, struct cpuid_leaf *res)
{
@@ -1190,27 +402,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
}
}
-static void __init __maybe_unused build_assertions(void)
-{
- BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
- BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
- BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
- BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
- BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
-
- /* Find some more clever allocation scheme if this trips. */
- BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
-
- BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
- sizeof(raw_cpu_policy.basic.raw));
- BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
- sizeof(raw_cpu_policy.feat.raw));
- BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
- sizeof(raw_cpu_policy.xstate.raw));
- BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
- sizeof(raw_cpu_policy.extd.raw));
-}
-
/*
* Local variables:
* mode: C
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d326fa1c0136..675c523d9909 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -77,7 +77,6 @@
#include <public/memory.h>
#include <public/vm_event.h>
#include <public/arch-x86/cpuid.h>
-#include <asm/cpuid.h>
#include <compat/hvm/hvm_op.h>
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index 13e2a1f86d13..b361537a602b 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -18,4 +18,10 @@ void init_guest_cpu_policies(void);
/* Allocate and initialise a CPU policy suitable for the domain. */
int init_domain_cpu_policy(struct domain *d);
+/* Apply dom0-specific tweaks to the CPUID policy. */
+void init_dom0_cpuid_policy(struct domain *d);
+
+/* Clamp the CPUID policy to reality. */
+void recalculate_cpuid_policy(struct domain *d);
+
#endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 7f81b998ce01..b32ba0bbfe5c 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -8,14 +8,10 @@
#include <xen/kernel.h>
#include <xen/percpu.h>
-#include <xen/lib/x86/cpu-policy.h>
-
#include <public/sysctl.h>
extern const uint32_t known_features[FSCAPINTS];
-void init_guest_cpuid(void);
-
/*
* Expected levelling capabilities (given cpuid vendor/family information),
* and levelling capabilities actually available (given MSR probing).
@@ -49,13 +45,8 @@ extern struct cpuidmasks cpuidmask_defaults;
/* Check that all previously present features are still available. */
bool recheck_cpu_features(unsigned int cpu);
-/* Apply dom0-specific tweaks to the CPUID policy. */
-void init_dom0_cpuid_policy(struct domain *d);
-
-/* Clamp the CPUID policy to reality. */
-void recalculate_cpuid_policy(struct domain *d);
-
struct vcpu;
+struct cpuid_leaf;
void guest_cpuid(const struct vcpu *v, uint32_t leaf,
uint32_t subleaf, struct cpuid_leaf *res);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index f94f28c8e271..95492715d8ad 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -10,6 +10,7 @@
#include <xen/param.h>
#include <xen/sched.h>
+#include <asm/cpu-policy.h>
#include <asm/cpufeature.h>
#include <asm/invpcid.h>
#include <asm/spec_ctrl.h>
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 51a19b9019eb..08ade715a3ce 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -51,7 +51,6 @@
#include <asm/alternative.h>
#include <asm/mc146818rtc.h>
#include <asm/cpu-policy.h>
-#include <asm/cpuid.h>
#include <asm/spec_ctrl.h>
#include <asm/guest.h>
#include <asm/microcode.h>
@@ -1991,7 +1990,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
if ( !tboot_protect_mem_regions() )
panic("Could not protect TXT memory regions\n");
- init_guest_cpuid();
init_guest_cpu_policies();
if ( xen_cpuidle )
--
2.30.2
* [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (10 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 11/15] x86/boot: Merge CPUID " Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:22 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer Andrew Cooper
` (2 subsequent siblings)
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
As with struct domain, retain cpuid as a valid alias for local code clarity.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Retain cpuid compatibility alias.
* Split out of RFC patch.
---
tools/fuzz/x86_instruction_emulator/fuzz-emul.c | 2 +-
tools/tests/x86_emulator/test_x86_emulator.c | 2 +-
tools/tests/x86_emulator/x86-emulate.c | 2 +-
xen/arch/x86/hvm/emulate.c | 4 ++--
xen/arch/x86/mm/shadow/hvm.c | 2 +-
xen/arch/x86/pv/emul-priv-op.c | 2 +-
xen/arch/x86/pv/ro-page-fault.c | 2 +-
xen/arch/x86/x86_emulate/private.h | 4 ++--
xen/arch/x86/x86_emulate/x86_emulate.h | 7 +++++--
9 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index 966e46bee199..4885a68210d0 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -893,7 +893,7 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
struct x86_emulate_ctxt ctxt = {
.data = &state,
.regs = &input.regs,
- .cpuid = &cp,
+ .cpu_policy = &cp,
.addr_size = 8 * sizeof(void *),
.sp_size = 8 * sizeof(void *),
};
diff --git a/tools/tests/x86_emulator/test_x86_emulator.c b/tools/tests/x86_emulator/test_x86_emulator.c
index 31586f805726..7b7fbaaf45ec 100644
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -909,7 +909,7 @@ int main(int argc, char **argv)
ctxt.regs = &regs;
ctxt.force_writeback = 0;
- ctxt.cpuid = &cp;
+ ctxt.cpu_policy = &cp;
ctxt.lma = sizeof(void *) == 8;
ctxt.addr_size = 8 * sizeof(void *);
ctxt.sp_size = 8 * sizeof(void *);
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index f6ee09439751..2692404df906 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -25,7 +25,7 @@
#endif
uint32_t mxcsr_mask = 0x0000ffbf;
-struct cpuid_policy cp;
+struct cpu_policy cp;
static char fpu_save_area[0x4000] __attribute__((__aligned__((64))));
static bool use_xsave;
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 95364deb1996..5691725d6c6f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2771,7 +2771,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
unsigned int errcode)
{
- struct hvm_emulate_ctxt ctx = {{ 0 }};
+ struct hvm_emulate_ctxt ctx = {};
int rc;
hvm_emulate_init_once(&ctx, NULL, guest_cpu_user_regs());
@@ -2846,7 +2846,7 @@ void hvm_emulate_init_once(
hvmemul_ctxt->validate = validate;
hvmemul_ctxt->ctxt.regs = regs;
- hvmemul_ctxt->ctxt.cpuid = curr->domain->arch.cpuid;
+ hvmemul_ctxt->ctxt.cpu_policy = curr->domain->arch.cpu_policy;
hvmemul_ctxt->ctxt.force_writeback = true;
}
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index e2ee1c77056f..cc84af01925a 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -319,7 +319,7 @@ const struct x86_emulate_ops *shadow_init_emulation(
memset(sh_ctxt, 0, sizeof(*sh_ctxt));
sh_ctxt->ctxt.regs = regs;
- sh_ctxt->ctxt.cpuid = curr->domain->arch.cpuid;
+ sh_ctxt->ctxt.cpu_policy = curr->domain->arch.cpu_policy;
sh_ctxt->ctxt.lma = hvm_long_mode_active(curr);
/* Segment cache initialisation. Primed with CS. */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 5da00e24e4ff..ab52768271c5 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1327,7 +1327,7 @@ int pv_emulate_privileged_op(struct cpu_user_regs *regs)
struct domain *currd = curr->domain;
struct priv_op_ctxt ctxt = {
.ctxt.regs = regs,
- .ctxt.cpuid = currd->arch.cpuid,
+ .ctxt.cpu_policy = currd->arch.cpu_policy,
.ctxt.lma = !is_pv_32bit_domain(currd),
};
int rc;
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 5963f5ee2d51..0d02c7d2ab10 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -356,7 +356,7 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
unsigned int addr_size = is_pv_32bit_domain(currd) ? 32 : BITS_PER_LONG;
struct x86_emulate_ctxt ctxt = {
.regs = regs,
- .cpuid = currd->arch.cpuid,
+ .cpu_policy = currd->arch.cpu_policy,
.addr_size = addr_size,
.sp_size = addr_size,
.lma = addr_size > 32,
diff --git a/xen/arch/x86/x86_emulate/private.h b/xen/arch/x86/x86_emulate/private.h
index 653a298c705b..8dee019731ae 100644
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -505,7 +505,7 @@ in_protmode(
})
static inline bool
-_amd_like(const struct cpuid_policy *cp)
+_amd_like(const struct cpu_policy *cp)
{
return cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON);
}
@@ -513,7 +513,7 @@ _amd_like(const struct cpuid_policy *cp)
static inline bool
amd_like(const struct x86_emulate_ctxt *ctxt)
{
- return _amd_like(ctxt->cpuid);
+ return _amd_like(ctxt->cpu_policy);
}
#define vcpu_has_fpu() (ctxt->cpuid->basic.fpu)
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 75015104fbdb..0139d16da70c 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -565,8 +565,11 @@ struct x86_emulate_ctxt
* Input-only state:
*/
- /* CPUID Policy for the domain. */
- const struct cpuid_policy *cpuid;
+ /* CPU policy for the domain. Allow aliases for local code clarity. */
+ union {
+ struct cpu_policy *cpu_policy;
+ struct cpu_policy *cpuid;
+ };
/* Set this if writes may have side effects. */
bool force_writeback;
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (11 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:25 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 14/15] libx86: Update library API for cpu_policy Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines Andrew Cooper
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
With cpuid_policy and msr_policy merged to form cpu_policy, merge the
respective fuzzing logic.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
---
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 57 ++++++++---------------
1 file changed, 20 insertions(+), 37 deletions(-)
diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 0ce3d8e16626..466bdbb1d91a 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -16,16 +16,19 @@ static bool debug;
#define EMPTY_LEAF ((struct cpuid_leaf){})
-static void check_cpuid(struct cpuid_policy *cp)
+static void check_policy(struct cpu_policy *cp)
{
- struct cpuid_policy new = {};
+ struct cpu_policy new = {};
size_t data_end;
xen_cpuid_leaf_t *leaves = malloc(CPUID_MAX_SERIALISED_LEAVES *
sizeof(xen_cpuid_leaf_t));
- unsigned int nr = CPUID_MAX_SERIALISED_LEAVES;
+ xen_msr_entry_t *msrs = malloc(MSR_MAX_SERIALISED_ENTRIES *
+ sizeof(xen_msr_entry_t));
+ unsigned int nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
+ unsigned int nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
int rc;
- if ( !leaves )
+ if ( !leaves || !msrs )
return;
/*
@@ -49,12 +52,19 @@ static void check_cpuid(struct cpuid_policy *cp)
x86_cpuid_policy_recalc_synth(cp);
/* Serialise... */
- rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr);
+ rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
+ assert(rc == 0);
+ assert(nr_leaves <= CPUID_MAX_SERIALISED_LEAVES);
+
+ rc = x86_msr_copy_to_buffer(cp, msrs, &nr_msrs);
assert(rc == 0);
- assert(nr <= CPUID_MAX_SERIALISED_LEAVES);
+ assert(nr_msrs <= MSR_MAX_SERIALISED_ENTRIES);
/* ... and deserialise. */
- rc = x86_cpuid_copy_from_buffer(&new, leaves, nr, NULL, NULL);
+ rc = x86_cpuid_copy_from_buffer(&new, leaves, nr_leaves, NULL, NULL);
+ assert(rc == 0);
+
+ rc = x86_msr_copy_from_buffer(&new, msrs, nr_msrs, NULL);
assert(rc == 0);
/* The result after serialisation/deserialisation should be identical... */
@@ -76,28 +86,6 @@ static void check_cpuid(struct cpuid_policy *cp)
free(leaves);
+ free(msrs);
}
-static void check_msr(struct msr_policy *mp)
-{
- struct msr_policy new = {};
- xen_msr_entry_t *msrs = malloc(MSR_MAX_SERIALISED_ENTRIES *
- sizeof(xen_msr_entry_t));
- unsigned int nr = MSR_MAX_SERIALISED_ENTRIES;
- int rc;
-
- if ( !msrs )
- return;
-
- rc = x86_msr_copy_to_buffer(mp, msrs, &nr);
- assert(rc == 0);
- assert(nr <= MSR_MAX_SERIALISED_ENTRIES);
-
- rc = x86_msr_copy_from_buffer(&new, msrs, nr, NULL);
- assert(rc == 0);
- assert(memcmp(mp, &new, sizeof(*mp)) == 0);
-
- free(msrs);
-}
-
int main(int argc, char **argv)
{
FILE *fp = NULL;
@@ -144,8 +132,7 @@ int main(int argc, char **argv)
while ( __AFL_LOOP(1000) )
#endif
{
- struct cpuid_policy *cp = NULL;
- struct msr_policy *mp = NULL;
+ struct cpu_policy *cp = NULL;
if ( fp != stdin )
{
@@ -160,22 +147,18 @@ int main(int argc, char **argv)
}
cp = calloc(1, sizeof(*cp));
- mp = calloc(1, sizeof(*mp));
- if ( !cp || !mp )
+ if ( !cp )
goto skip;
fread(cp, sizeof(*cp), 1, fp);
- fread(mp, sizeof(*mp), 1, fp);
if ( !feof(fp) )
goto skip;
- check_cpuid(cp);
- check_msr(mp);
+ check_policy(cp);
skip:
free(cp);
- free(mp);
if ( fp != stdin )
{
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v2 14/15] libx86: Update library API for cpu_policy
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (12 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:34 ` Jan Beulich
2023-04-04 21:06 ` [PATCH v3] " Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines Andrew Cooper
14 siblings, 2 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Adjust the API and comments appropriately.
x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
TODO in the short term.
No practical change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
---
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 4 +--
tools/tests/cpu-policy/test-cpu-policy.c | 4 +--
tools/tests/x86_emulator/x86-emulate.c | 2 +-
xen/arch/x86/cpu-policy.c | 2 +-
xen/arch/x86/domctl.c | 2 +-
xen/arch/x86/xstate.c | 4 +--
xen/include/xen/lib/x86/cpu-policy.h | 42 ++++++++++++-----------
xen/lib/x86/cpuid.c | 24 +++++++------
xen/lib/x86/msr.c | 4 +--
9 files changed, 46 insertions(+), 42 deletions(-)
diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 466bdbb1d91a..7d8467b4b258 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -48,8 +48,8 @@ static void check_policy(struct cpu_policy *cp)
* Fix up the data in the source policy which isn't expected to survive
* serialisation.
*/
- x86_cpuid_policy_clear_out_of_range_leaves(cp);
- x86_cpuid_policy_recalc_synth(cp);
+ x86_cpu_policy_clear_out_of_range_leaves(cp);
+ x86_cpu_policy_recalc_synth(cp);
/* Serialise... */
rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index a4ca07f33973..f1d968adfc39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -105,7 +105,7 @@ static void test_cpuid_current(void)
printf("Testing CPUID on current CPU\n");
- x86_cpuid_policy_fill_native(&p);
+ x86_cpu_policy_fill_native(&p);
rc = x86_cpuid_copy_to_buffer(&p, leaves, &nr);
if ( rc != 0 )
@@ -554,7 +554,7 @@ static void test_cpuid_out_of_range_clearing(void)
void *ptr;
unsigned int nr_markers;
- x86_cpuid_policy_clear_out_of_range_leaves(p);
+ x86_cpu_policy_clear_out_of_range_leaves(p);
/* Count the number of 0xc2's still remaining. */
for ( ptr = p, nr_markers = 0;
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index 2692404df906..7d2d57f7591a 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -75,7 +75,7 @@ bool emul_test_init(void)
unsigned long sp;
- x86_cpuid_policy_fill_native(&cp);
+ x86_cpu_policy_fill_native(&cp);
/*
* The emulator doesn't use these instructions, so can always emulate
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 83186e940ca7..1140f0b365cd 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -347,7 +347,7 @@ static void __init calculate_raw_policy(void)
{
struct cpu_policy *p = &raw_cpu_policy;
- x86_cpuid_policy_fill_native(p);
+ x86_cpu_policy_fill_native(p);
/* Nothing good will come from Xen and libx86 disagreeing on vendor. */
ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index c02528594102..1a8b4cff48ee 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -66,7 +66,7 @@ static int update_domain_cpu_policy(struct domain *d,
goto out;
/* Trim any newly-stale out-of-range leaves. */
- x86_cpuid_policy_clear_out_of_range_leaves(new);
+ x86_cpu_policy_clear_out_of_range_leaves(new);
/* Audit the combined dataset. */
ret = x86_cpu_policies_are_compatible(sys, new, &err);
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index d481e1db3e7e..92496f379546 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -684,7 +684,7 @@ void xstate_init(struct cpuinfo_x86 *c)
int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
const struct xsave_hdr *hdr)
{
- uint64_t xcr0_max = cpuid_policy_xcr0_max(d->arch.cpuid);
+ uint64_t xcr0_max = cpu_policy_xcr0_max(d->arch.cpuid);
unsigned int i;
if ( (hdr->xstate_bv & ~xcr0_accum) ||
@@ -708,7 +708,7 @@ int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
int handle_xsetbv(u32 index, u64 new_bv)
{
struct vcpu *curr = current;
- uint64_t xcr0_max = cpuid_policy_xcr0_max(curr->domain->arch.cpuid);
+ uint64_t xcr0_max = cpu_policy_xcr0_max(curr->domain->arch.cpuid);
u64 mask;
if ( index != XCR_XFEATURE_ENABLED_MASK )
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 57b4633c861e..dee46adeff17 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -399,12 +399,12 @@ void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
struct cpu_policy *p);
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xcr0_max(const struct cpu_policy *p)
{
return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
}
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xstates(const struct cpu_policy *p)
{
uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
@@ -414,18 +414,18 @@ static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
/**
- * Recalculate the content in a CPUID policy which is derived from raw data.
+ * Recalculate the content in a CPU policy which is derived from raw data.
*/
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p);
/**
- * Fill a CPUID policy using the native CPUID instruction.
+ * Fill a CPU policy using the native CPUID/RDMSR instructions.
*
* No sanitisation is performed, but synthesised values are calculated.
* Values may be influenced by a hypervisor or from masking/faulting
* configuration.
*/
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+void x86_cpu_policy_fill_native(struct cpu_policy *p);
/**
* Clear leaf data beyond the policies max leaf/subleaf settings.
@@ -436,7 +436,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
* with out-of-range leaves with stale content in them. This helper clears
* them.
*/
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p);
#ifdef __XEN__
#include <public/arch-x86/xen.h>
@@ -449,9 +449,10 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
#endif
/**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
+ * Serialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
*
- * @param policy The cpuid_policy to serialise.
+ * @param policy The cpu_policy to serialise.
* @param leaves The array of leaves to serialise into.
* @param nr_entries The number of entries in 'leaves'.
* @returns -errno
@@ -460,13 +461,14 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
* leaves array is too short. On success, nr_entries is updated with the
* actual number of leaves written.
*/
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *policy,
cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
/**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ * Unserialise the CPUID leaves of a cpu_policy object from an array of cpuid
+ * leaves.
*
- * @param policy The cpuid_policy to unserialise into.
+ * @param policy The cpu_policy to unserialise into.
* @param leaves The array of leaves to unserialise from.
* @param nr_entries The number of entries in 'leaves'.
* @param err_leaf Optional hint for error diagnostics.
@@ -474,21 +476,21 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
* @returns -errno
*
* Reads at most CPUID_MAX_SERIALISED_LEAVES. May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * incoming leaf is out of range of cpu_policy, in which case the optional
* err_* pointers will identify the out-of-range indices.
*
* No content validation of in-range leaves is performed. Synthesised data is
* recalculated.
*/
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *policy,
const cpuid_leaf_buffer_t leaves,
uint32_t nr_entries, uint32_t *err_leaf,
uint32_t *err_subleaf);
/**
- * Serialise an msr_policy object into an array.
+ * Serialise the MSRs of a cpu_policy object into an array.
*
- * @param policy The msr_policy to serialise.
+ * @param policy The cpu_policy to serialise.
* @param msrs The array of msrs to serialise into.
* @param nr_entries The number of entries in 'msrs'.
* @returns -errno
@@ -497,13 +499,13 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
* buffer array is too short. On success, nr_entries is updated with the
* actual number of msrs written.
*/
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+int x86_msr_copy_to_buffer(const struct cpu_policy *policy,
msr_entry_buffer_t msrs, uint32_t *nr_entries);
/**
- * Unserialise an msr_policy object from an array of msrs.
+ * Unserialise the MSRs of a cpu_policy object from an array of msrs.
*
- * @param policy The msr_policy object to unserialise into.
+ * @param policy The cpu_policy object to unserialise into.
* @param msrs The array of msrs to unserialise from.
* @param nr_entries The number of entries in 'msrs'.
* @param err_msr Optional hint for error diagnostics.
@@ -517,7 +519,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *policy,
*
* No content validation is performed on the data stored in the policy object.
*/
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
+int x86_msr_copy_from_buffer(struct cpu_policy *policy,
const msr_entry_buffer_t msrs, uint32_t nr_entries,
uint32_t *err_msr);
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 734e90823a63..7c7b092736ff 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -102,13 +102,13 @@ void x86_cpu_featureset_to_policy(
p->feat._7d1 = fs[FEATURESET_7d1];
}
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
{
p->x86_vendor = x86_cpuid_lookup_vendor(
p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
}
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
+void x86_cpu_policy_fill_native(struct cpu_policy *p)
{
unsigned int i;
@@ -199,7 +199,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
- xstates = cpuid_policy_xstates(p);
+ xstates = cpu_policy_xstates(p);
/* This logic will probably need adjusting when XCR0[63] gets used. */
BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
@@ -222,10 +222,12 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
p->hv_limit = 0;
p->hv2_limit = 0;
- x86_cpuid_policy_recalc_synth(p);
+ /* TODO MSRs */
+
+ x86_cpu_policy_recalc_synth(p);
}
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p)
{
unsigned int i;
@@ -260,7 +262,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
zero_leaves(p->topo.raw, i, ARRAY_SIZE(p->topo.raw) - 1);
}
- if ( p->basic.max_leaf < 0xd || !cpuid_policy_xstates(p) )
+ if ( p->basic.max_leaf < 0xd || !cpu_policy_xstates(p) )
memset(p->xstate.raw, 0, sizeof(p->xstate.raw));
else
{
@@ -268,7 +270,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
/* First two leaves always valid. Rest depend on xstates. */
- i = max(2, 64 - __builtin_clzll(cpuid_policy_xstates(p)));
+ i = max(2, 64 - __builtin_clzll(cpu_policy_xstates(p)));
zero_leaves(p->xstate.raw, i,
ARRAY_SIZE(p->xstate.raw) - 1);
@@ -333,7 +335,7 @@ static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
return 0;
}
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *p,
cpuid_leaf_buffer_t leaves, uint32_t *nr_entries_p)
{
const uint32_t nr_entries = *nr_entries_p;
@@ -383,7 +385,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
case 0xd:
{
- uint64_t xstates = cpuid_policy_xstates(p);
+ uint64_t xstates = cpu_policy_xstates(p);
COPY_LEAF(leaf, 0, &p->xstate.raw[0]);
COPY_LEAF(leaf, 1, &p->xstate.raw[1]);
@@ -419,7 +421,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
return 0;
}
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *p,
const cpuid_leaf_buffer_t leaves,
uint32_t nr_entries, uint32_t *err_leaf,
uint32_t *err_subleaf)
@@ -522,7 +524,7 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
}
}
- x86_cpuid_policy_recalc_synth(p);
+ x86_cpu_policy_recalc_synth(p);
return 0;
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index c4d885e7b568..e04b9ca01302 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -23,7 +23,7 @@ static int copy_msr_to_buffer(uint32_t idx, uint64_t val,
return 0;
}
-int x86_msr_copy_to_buffer(const struct msr_policy *p,
+int x86_msr_copy_to_buffer(const struct cpu_policy *p,
msr_entry_buffer_t msrs, uint32_t *nr_entries_p)
{
const uint32_t nr_entries = *nr_entries_p;
@@ -48,7 +48,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *p,
return 0;
}
-int x86_msr_copy_from_buffer(struct msr_policy *p,
+int x86_msr_copy_from_buffer(struct cpu_policy *p,
const msr_entry_buffer_t msrs, uint32_t nr_entries,
uint32_t *err_msr)
{
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
` (13 preceding siblings ...)
2023-04-04 9:52 ` [PATCH v2 14/15] libx86: Update library API for cpu_policy Andrew Cooper
@ 2023-04-04 9:52 ` Andrew Cooper
2023-04-04 15:37 ` Jan Beulich
14 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 9:52 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
With all code areas updated, drop the temporary defines and adjust all
remaining users.
No practical change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* Split out of RFC patch
---
xen/arch/x86/cpu/mcheck/mce_intel.c | 2 +-
xen/arch/x86/cpuid.c | 2 +-
xen/arch/x86/domain.c | 2 +-
xen/arch/x86/hvm/hvm.c | 4 ++--
xen/arch/x86/hvm/svm/svm.c | 2 +-
xen/arch/x86/hvm/vlapic.c | 2 +-
xen/arch/x86/hvm/vmx/vmx.c | 8 ++++----
xen/arch/x86/include/asm/msr.h | 2 +-
xen/arch/x86/msr.c | 20 +++++++++-----------
xen/arch/x86/pv/domain.c | 2 +-
xen/arch/x86/pv/emul-priv-op.c | 4 ++--
xen/arch/x86/traps.c | 2 +-
xen/arch/x86/x86_emulate/x86_emulate.c | 2 +-
xen/include/xen/lib/x86/cpu-policy.h | 4 ----
14 files changed, 26 insertions(+), 32 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 301533722d1a..2f23f02923d2 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -1008,7 +1008,7 @@ int vmce_intel_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
int vmce_intel_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
{
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
unsigned int bank = msr - MSR_IA32_MC0_CTL2;
switch ( msr )
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 3f20c342fde8..f311372cdf1f 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -36,7 +36,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
uint32_t subleaf, struct cpuid_leaf *res)
{
const struct domain *d = v->domain;
- const struct cpuid_policy *p = d->arch.cpuid;
+ const struct cpu_policy *p = d->arch.cpu_policy;
*res = EMPTY_LEAF;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b23e5014d1d3..91f57e3a3b17 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -283,7 +283,7 @@ void update_guest_memory_policy(struct vcpu *v,
void domain_cpu_policy_changed(struct domain *d)
{
- const struct cpuid_policy *p = d->arch.cpuid;
+ const struct cpu_policy *p = d->arch.cpu_policy;
struct vcpu *v;
if ( is_pv_domain(d) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 675c523d9909..7020fdce995c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -924,7 +924,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
signed int cr0_pg)
{
const struct domain *d = v->domain;
- const struct cpuid_policy *p = d->arch.cpuid;
+ const struct cpu_policy *p = d->arch.cpu_policy;
if ( value & ~EFER_KNOWN_MASK )
return "Unknown bits set";
@@ -961,7 +961,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
/* These bits in CR4 can be set by the guest. */
unsigned long hvm_cr4_guest_valid_bits(const struct domain *d)
{
- const struct cpuid_policy *p = d->arch.cpuid;
+ const struct cpu_policy *p = d->arch.cpu_policy;
bool mce, vmxe, cet;
/* Logic broken out simply to aid readability below. */
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 02563e4b7027..b8fe759db456 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -583,7 +583,7 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
{
struct svm_vcpu *svm = &v->arch.hvm.svm;
struct vmcb_struct *vmcb = svm->vmcb;
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
u32 bitmap = vmcb_get_exception_intercepts(vmcb);
if ( opt_hvm_fep ||
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index dc93b5e930b1..f4f5ffc673e5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1083,7 +1083,7 @@ static void set_x2apic_id(struct vlapic *vlapic)
int guest_wrmsr_apic_base(struct vcpu *v, uint64_t value)
{
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
struct vlapic *vlapic = vcpu_vlapic(v);
if ( !has_vlapic(v->domain) )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e05588505871..ee4c41628cc3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -710,7 +710,7 @@ static void vmx_restore_host_msrs(void)
static void vmx_save_guest_msrs(struct vcpu *v)
{
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
struct vcpu_msrs *msrs = v->arch.msrs;
/*
@@ -731,7 +731,7 @@ static void vmx_save_guest_msrs(struct vcpu *v)
static void vmx_restore_guest_msrs(struct vcpu *v)
{
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
const struct vcpu_msrs *msrs = v->arch.msrs;
write_gs_shadow(v->arch.hvm.vmx.shadow_gs);
@@ -784,7 +784,7 @@ void vmx_update_exception_bitmap(struct vcpu *v)
static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
{
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
int rc = 0;
if ( opt_hvm_fep ||
@@ -3521,7 +3521,7 @@ static int cf_check vmx_msr_write_intercept(
unsigned int msr, uint64_t msr_content)
{
struct vcpu *v = current;
- const struct cpuid_policy *cp = v->domain->arch.cpuid;
+ const struct cpu_policy *cp = v->domain->arch.cpu_policy;
HVM_DBG_LOG(DBG_LEVEL_MSR, "ecx=%#x, msr_value=%#"PRIx64, msr, msr_content);
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 458841733e18..1d8ea9f26faa 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -290,7 +290,7 @@ static inline void wrmsr_tsc_aux(uint32_t val)
}
}
-uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp);
+uint64_t msr_spec_ctrl_valid_bits(const struct cpu_policy *cp);
/* Container object for per-vCPU MSRs */
struct vcpu_msrs
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 802fc60baf81..2e16818bf509 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -54,8 +54,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
{
const struct vcpu *curr = current;
const struct domain *d = v->domain;
- const struct cpuid_policy *cp = d->arch.cpuid;
- const struct msr_policy *mp = d->arch.msr;
+ const struct cpu_policy *cp = d->arch.cpu_policy;
const struct vcpu_msrs *msrs = v->arch.msrs;
int ret = X86EMUL_OKAY;
@@ -139,13 +138,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
goto get_reg;
case MSR_INTEL_PLATFORM_INFO:
- *val = mp->platform_info.raw;
+ *val = cp->platform_info.raw;
break;
case MSR_ARCH_CAPABILITIES:
if ( !cp->feat.arch_caps )
goto gp_fault;
- *val = mp->arch_caps.raw;
+ *val = cp->arch_caps.raw;
break;
case MSR_INTEL_MISC_FEATURES_ENABLES:
@@ -326,7 +325,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
* separate CPUID features for this functionality, but only set will be
* active.
*/
-uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp)
+uint64_t msr_spec_ctrl_valid_bits(const struct cpu_policy *cp)
{
bool ssbd = cp->feat.ssbd || cp->extd.amd_ssbd;
bool psfd = cp->feat.intel_psfd || cp->extd.psfd;
@@ -345,8 +344,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
{
const struct vcpu *curr = current;
struct domain *d = v->domain;
- const struct cpuid_policy *cp = d->arch.cpuid;
- const struct msr_policy *mp = d->arch.msr;
+ const struct cpu_policy *cp = d->arch.cpu_policy;
struct vcpu_msrs *msrs = v->arch.msrs;
int ret = X86EMUL_OKAY;
@@ -387,7 +385,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
* for backwards compatibility, the OS should write 0 to it before
* trying to access the current microcode version.
*/
- if ( d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL || val != 0 )
+ if ( cp->x86_vendor != X86_VENDOR_INTEL || val != 0 )
goto gp_fault;
break;
@@ -397,7 +395,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
* to AMD CPUs as well (at least the architectural/CPUID part does).
*/
if ( is_pv_domain(d) ||
- d->arch.cpuid->x86_vendor != X86_VENDOR_AMD )
+ cp->x86_vendor != X86_VENDOR_AMD )
goto gp_fault;
break;
@@ -409,7 +407,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
* by any CPUID bit.
*/
if ( is_pv_domain(d) ||
- d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL )
+ cp->x86_vendor != X86_VENDOR_INTEL )
goto gp_fault;
break;
@@ -446,7 +444,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
bool old_cpuid_faulting = msrs->misc_features_enables.cpuid_faulting;
rsvd = ~0ull;
- if ( mp->platform_info.cpuid_faulting )
+ if ( cp->platform_info.cpuid_faulting )
rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
if ( val & rsvd )
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 95492715d8ad..5c92812dc67a 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -146,7 +146,7 @@ static void release_compat_l4(struct vcpu *v)
unsigned long pv_fixup_guest_cr4(const struct vcpu *v, unsigned long cr4)
{
- const struct cpuid_policy *p = v->domain->arch.cpuid;
+ const struct cpu_policy *p = v->domain->arch.cpu_policy;
/* Discard attempts to set guest controllable bits outside of the policy. */
cr4 &= ~((p->basic.tsc ? 0 : X86_CR4_TSD) |
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index ab52768271c5..04416f197951 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -885,7 +885,7 @@ static int cf_check read_msr(
{
struct vcpu *curr = current;
const struct domain *currd = curr->domain;
- const struct cpuid_policy *cp = currd->arch.cpuid;
+ const struct cpu_policy *cp = currd->arch.cpu_policy;
bool vpmu_msr = false, warn = false;
uint64_t tmp;
int ret;
@@ -1034,7 +1034,7 @@ static int cf_check write_msr(
{
struct vcpu *curr = current;
const struct domain *currd = curr->domain;
- const struct cpuid_policy *cp = currd->arch.cpuid;
+ const struct cpu_policy *cp = currd->arch.cpu_policy;
bool vpmu_msr = false;
int ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index c36e3f855bd9..e4f8b158e1ed 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1036,7 +1036,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
uint32_t subleaf, struct cpuid_leaf *res)
{
const struct domain *d = v->domain;
- const struct cpuid_policy *p = d->arch.cpuid;
+ const struct cpu_policy *p = d->arch.cpu_policy;
uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
uint32_t idx = leaf - base;
unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 5a0ec5900a93..c69f7c65f526 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -848,7 +848,7 @@ protmode_load_seg(
struct x86_emulate_ctxt *ctxt,
const struct x86_emulate_ops *ops)
{
- const struct cpuid_policy *cp = ctxt->cpuid;
+ const struct cpu_policy *cp = ctxt->cpu_policy;
enum x86_segment sel_seg = (sel & 4) ? x86_seg_ldtr : x86_seg_gdtr;
struct { uint32_t a, b; } desc, desc_hi = {};
uint8_t dpl, rpl;
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index dee46adeff17..182cf77cffaf 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -375,10 +375,6 @@ struct cpu_policy
uint8_t x86_vendor;
};
-/* Temporary */
-#define cpuid_policy cpu_policy
-#define msr_policy cpu_policy
-
struct cpu_policy_errors
{
uint32_t leaf, subleaf;
--
2.30.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors
2023-04-04 9:52 ` [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors Andrew Cooper
@ 2023-04-04 15:01 ` Jan Beulich
2023-04-04 15:26 ` Andrew Cooper
0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:01 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> These are already getting over-large for being inline functions, and are only
> going to grow more over time. Out of line them, yielding the following net
> delta from bloat-o-meter:
>
> add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
>
> Switch to the newer cpu_policy terminology while doing so.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
I take it you have a reason to ...
> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
> }
> }
>
> +void x86_cpu_policy_to_featureset(
> + const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
> +{
> + fs[FEATURESET_1d] = p->basic._1d;
> + fs[FEATURESET_1c] = p->basic._1c;
> + fs[FEATURESET_e1d] = p->extd.e1d;
> + fs[FEATURESET_e1c] = p->extd.e1c;
> + fs[FEATURESET_Da1] = p->xstate.Da1;
> + fs[FEATURESET_7b0] = p->feat._7b0;
> + fs[FEATURESET_7c0] = p->feat._7c0;
> + fs[FEATURESET_e7d] = p->extd.e7d;
> + fs[FEATURESET_e8b] = p->extd.e8b;
> + fs[FEATURESET_7d0] = p->feat._7d0;
> + fs[FEATURESET_7a1] = p->feat._7a1;
> + fs[FEATURESET_e21a] = p->extd.e21a;
> + fs[FEATURESET_7b1] = p->feat._7b1;
> + fs[FEATURESET_7d2] = p->feat._7d2;
> + fs[FEATURESET_7c1] = p->feat._7c1;
> + fs[FEATURESET_7d1] = p->feat._7d1;
> +}
> +
> +void x86_cpu_featureset_to_policy(
> + const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
> +{
> + p->basic._1d = fs[FEATURESET_1d];
> + p->basic._1c = fs[FEATURESET_1c];
> + p->extd.e1d = fs[FEATURESET_e1d];
> + p->extd.e1c = fs[FEATURESET_e1c];
> + p->xstate.Da1 = fs[FEATURESET_Da1];
> + p->feat._7b0 = fs[FEATURESET_7b0];
> + p->feat._7c0 = fs[FEATURESET_7c0];
> + p->extd.e7d = fs[FEATURESET_e7d];
> + p->extd.e8b = fs[FEATURESET_e8b];
> + p->feat._7d0 = fs[FEATURESET_7d0];
> + p->feat._7a1 = fs[FEATURESET_7a1];
> + p->extd.e21a = fs[FEATURESET_e21a];
> + p->feat._7b1 = fs[FEATURESET_7b1];
> + p->feat._7d2 = fs[FEATURESET_7d2];
> + p->feat._7c1 = fs[FEATURESET_7c1];
> + p->feat._7d1 = fs[FEATURESET_7d1];
> +}
... add quite a few padding blanks in here, unlike in the originals?
Jan
* Re: [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c
2023-04-04 9:52 ` [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c Andrew Cooper
@ 2023-04-04 15:04 ` Jan Beulich
0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:04 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> Switch to the newer cpu_policy nomenclature.
>
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
* Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
2023-04-04 9:52 ` [PATCH v2 11/15] x86/boot: Merge CPUID " Andrew Cooper
@ 2023-04-04 15:16 ` Jan Beulich
2023-04-04 15:45 ` Andrew Cooper
0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:16 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> Switch to the newer cpu_policy nomenclature. Do some easy cleanup of
> includes.
>
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> v2:
> * New
> ---
> xen/arch/x86/cpu-policy.c | 752 ++++++++++++++++++++++++
> xen/arch/x86/cpuid.c | 817 +-------------------------
> xen/arch/x86/hvm/hvm.c | 1 -
> xen/arch/x86/include/asm/cpu-policy.h | 6 +
> xen/arch/x86/include/asm/cpuid.h | 11 +-
> xen/arch/x86/pv/domain.c | 1 +
> xen/arch/x86/setup.c | 2 -
> 7 files changed, 764 insertions(+), 826 deletions(-)
>
> diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
> index f6a2317ed7bd..83186e940ca7 100644
> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -1,13 +1,19 @@
> /* SPDX-License-Identifier: GPL-2.0-or-later */
> #include <xen/cache.h>
> #include <xen/kernel.h>
> +#include <xen/param.h>
> #include <xen/sched.h>
>
> #include <xen/lib/x86/cpu-policy.h>
>
> +#include <asm/amd.h>
> #include <asm/cpu-policy.h>
> +#include <asm/hvm/nestedhvm.h>
> +#include <asm/hvm/svm/svm.h>
> #include <asm/msr-index.h>
> +#include <asm/paging.h>
> #include <asm/setup.h>
> +#include <asm/xstate.h>
>
> struct cpu_policy __ro_after_init raw_cpu_policy;
> struct cpu_policy __ro_after_init host_cpu_policy;
> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
> struct cpu_policy __ro_after_init hvm_def_cpu_policy;
> #endif
>
> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
> +
> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
> + INIT_HVM_HAP_MAX_FEATURES;
> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
> + INIT_HVM_SHADOW_DEF_FEATURES;
> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
> + INIT_HVM_HAP_DEF_FEATURES;
> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
> +
> +static const struct feature_name {
> + const char *name;
> + unsigned int bit;
> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
> +
> +/*
> + * Parse a list of cpuid feature names -> bool, calling the callback for any
> + * matches found.
> + *
> + * always_inline, because this is init code only and we really don't want a
> + * function pointer call in the middle of the loop.
> + */
> +static int __init always_inline parse_cpuid(
> + const char *s, void (*callback)(unsigned int feat, bool val))
> +{
> + const char *ss;
> + int val, rc = 0;
> +
> + do {
> + const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
> + const char *feat;
> +
> + ss = strchr(s, ',');
> + if ( !ss )
> + ss = strchr(s, '\0');
> +
> + /* Skip the 'no-' prefix for name comparisons. */
> + feat = s;
> + if ( strncmp(s, "no-", 3) == 0 )
> + feat += 3;
> +
> + /* (Re)initialise lhs and rhs for binary search. */
> + lhs = feature_names;
> + rhs = feature_names + ARRAY_SIZE(feature_names);
> +
> + while ( lhs < rhs )
> + {
> + int res;
> +
> + mid = lhs + (rhs - lhs) / 2;
> + res = cmdline_strcmp(feat, mid->name);
> +
> + if ( res < 0 )
> + {
> + rhs = mid;
> + continue;
> + }
> + if ( res > 0 )
> + {
> + lhs = mid + 1;
> + continue;
> + }
> +
> + if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
> + {
> + callback(mid->bit, val);
> + mid = NULL;
> + }
> +
> + break;
> + }
> +
> + /*
> + * Mid being NULL means that the name and boolean were successfully
> + * identified. Everything else is an error.
> + */
> + if ( mid )
> + rc = -EINVAL;
> +
> + s = ss + 1;
> + } while ( *ss );
> +
> + return rc;
> +}
> +
> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
> +{
> + if ( !val )
> + setup_clear_cpu_cap(feat);
> + else if ( feat == X86_FEATURE_RDRAND &&
> + (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
> + setup_force_cpu_cap(X86_FEATURE_RDRAND);
> +}
> +
> +static int __init cf_check parse_xen_cpuid(const char *s)
> +{
> + return parse_cpuid(s, _parse_xen_cpuid);
> +}
> +custom_param("cpuid", parse_xen_cpuid);
> +
> +static bool __initdata dom0_cpuid_cmdline;
> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
> +
> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
> +{
> + __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
> + __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
> +}
> +
> +static int __init cf_check parse_dom0_cpuid(const char *s)
> +{
> + dom0_cpuid_cmdline = true;
> +
> + return parse_cpuid(s, _parse_dom0_cpuid);
> +}
> +custom_param("dom0-cpuid", parse_dom0_cpuid);
Unless the plan is to completely remove cpuid.c, this command line
handling would imo better fit there. I understand that to keep
dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
would then need to be exposed (under a different name), but I think
that's quite okay, the more that it's an __init function.
> +#define EMPTY_LEAF ((struct cpuid_leaf){})
> +static void zero_leaves(struct cpuid_leaf *l,
> + unsigned int first, unsigned int last)
> +{
> + memset(&l[first], 0, sizeof(*l) * (last - first + 1));
> +}
> +
> +static void sanitise_featureset(uint32_t *fs)
> +{
> + /* for_each_set_bit() uses unsigned longs. Extend with zeroes. */
> + uint32_t disabled_features[
> + ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
> + unsigned int i;
> +
> + for ( i = 0; i < FSCAPINTS; ++i )
> + {
> + /* Clamp to known mask. */
> + fs[i] &= known_features[i];
> +
> + /*
> + * Identify which features with deep dependencies have been
> + * disabled.
> + */
> + disabled_features[i] = ~fs[i] & deep_features[i];
> + }
> +
> + for_each_set_bit(i, (void *)disabled_features,
> + sizeof(disabled_features) * 8)
> + {
> + const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
> + unsigned int j;
> +
> + ASSERT(dfs); /* deep_features[] should guarantee this. */
> +
> + for ( j = 0; j < FSCAPINTS; ++j )
> + {
> + fs[j] &= ~dfs[j];
> + disabled_features[j] &= ~dfs[j];
> + }
> + }
> +}
> +
> +static void recalculate_xstate(struct cpu_policy *p)
> +{
> + uint64_t xstates = XSTATE_FP_SSE;
> + uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
> + unsigned int i, Da1 = p->xstate.Da1;
> +
> + /*
> + * The Da1 leaf is the only piece of information preserved in the common
> + * case. Everything else is derived from other feature state.
> + */
> + memset(&p->xstate, 0, sizeof(p->xstate));
> +
> + if ( !p->basic.xsave )
> + return;
> +
> + if ( p->basic.avx )
> + {
> + xstates |= X86_XCR0_YMM;
> + xstate_size = max(xstate_size,
> + xstate_offsets[X86_XCR0_YMM_POS] +
> + xstate_sizes[X86_XCR0_YMM_POS]);
> + }
> +
> + if ( p->feat.mpx )
> + {
> + xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
> + xstate_size = max(xstate_size,
> + xstate_offsets[X86_XCR0_BNDCSR_POS] +
> + xstate_sizes[X86_XCR0_BNDCSR_POS]);
> + }
> +
> + if ( p->feat.avx512f )
> + {
> + xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
> + xstate_size = max(xstate_size,
> + xstate_offsets[X86_XCR0_HI_ZMM_POS] +
> + xstate_sizes[X86_XCR0_HI_ZMM_POS]);
> + }
> +
> + if ( p->feat.pku )
> + {
> + xstates |= X86_XCR0_PKRU;
> + xstate_size = max(xstate_size,
> + xstate_offsets[X86_XCR0_PKRU_POS] +
> + xstate_sizes[X86_XCR0_PKRU_POS]);
> + }
> +
> + p->xstate.max_size = xstate_size;
> + p->xstate.xcr0_low = xstates & ~XSTATE_XSAVES_ONLY;
> + p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
> +
> + p->xstate.Da1 = Da1;
> + if ( p->xstate.xsaves )
> + {
> + p->xstate.xss_low = xstates & XSTATE_XSAVES_ONLY;
> + p->xstate.xss_high = (xstates & XSTATE_XSAVES_ONLY) >> 32;
> + }
> + else
> + xstates &= ~XSTATE_XSAVES_ONLY;
> +
> + for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
> + {
> + uint64_t curr_xstate = 1ul << i;
> +
> + if ( !(xstates & curr_xstate) )
> + continue;
> +
> + p->xstate.comp[i].size = xstate_sizes[i];
> + p->xstate.comp[i].offset = xstate_offsets[i];
> + p->xstate.comp[i].xss = curr_xstate & XSTATE_XSAVES_ONLY;
> + p->xstate.comp[i].align = curr_xstate & xstate_align;
> + }
> +}
> +
> +/*
> + * Misc adjustments to the policy. Mostly clobbering reserved fields and
> + * duplicating shared fields. Intentionally hidden fields are annotated.
> + */
> +static void recalculate_misc(struct cpu_policy *p)
> +{
> + p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
> + p->basic.apic_id = 0; /* Dynamic. */
> +
> + p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
> + p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
> +
> + p->basic.raw[0x8] = EMPTY_LEAF;
> +
> + /* TODO: Rework topology logic. */
> + memset(p->topo.raw, 0, sizeof(p->topo.raw));
> +
> + p->basic.raw[0xc] = EMPTY_LEAF;
> +
> + p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
> +
> + /* Most of Power/RAS hidden from guests. */
> + p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
> +
> + p->extd.raw[0x8].d = 0;
> +
> + switch ( p->x86_vendor )
> + {
> + case X86_VENDOR_INTEL:
> + p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
> + p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
> + p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
> +
> + p->extd.vendor_ebx = 0;
> + p->extd.vendor_ecx = 0;
> + p->extd.vendor_edx = 0;
> +
> + p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
> +
> + p->extd.raw[0x5] = EMPTY_LEAF;
> + p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
> +
> + p->extd.raw[0x8].a &= 0x0000ffff;
> + p->extd.raw[0x8].c = 0;
> + break;
> +
> + case X86_VENDOR_AMD:
> + case X86_VENDOR_HYGON:
> + zero_leaves(p->basic.raw, 0x2, 0x3);
> + memset(p->cache.raw, 0, sizeof(p->cache.raw));
> + zero_leaves(p->basic.raw, 0x9, 0xa);
> +
> + p->extd.vendor_ebx = p->basic.vendor_ebx;
> + p->extd.vendor_ecx = p->basic.vendor_ecx;
> + p->extd.vendor_edx = p->basic.vendor_edx;
> +
> + p->extd.raw_fms = p->basic.raw_fms;
> + p->extd.raw[0x1].b &= 0xff00ffff;
> + p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
> +
> + p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
> + p->extd.raw[0x8].c &= 0x0003f0ff;
> +
> + p->extd.raw[0x9] = EMPTY_LEAF;
> +
> + zero_leaves(p->extd.raw, 0xb, 0x18);
> +
> + /* 0x19 - TLB details. Pass through. */
> + /* 0x1a - Perf hints. Pass through. */
> +
> + p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
> + p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
> + p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
> + p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
> + p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
> + p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
> + break;
> + }
> +}
> +
> static void __init calculate_raw_policy(void)
> {
> struct cpu_policy *p = &raw_cpu_policy;
>
> + x86_cpuid_policy_fill_native(p);
> +
> + /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
> + ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
> +
> /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
> /* Was already added by probe_cpuid_faulting() */
>
> @@ -34,9 +362,50 @@ static void __init calculate_raw_policy(void)
> static void __init calculate_host_policy(void)
> {
> struct cpu_policy *p = &host_cpu_policy;
> + unsigned int max_extd_leaf;
>
> *p = raw_cpu_policy;
>
> + p->basic.max_leaf =
> + min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
> + p->feat.max_subleaf =
> + min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
> +
> + max_extd_leaf = p->extd.max_leaf;
> +
> + /*
> + * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
> + * dispatch serialising for Spectre mitigations. Extend max_extd_leaf
> + * beyond what hardware supports, to include the feature leaf containing
> + * this information.
> + */
> + if ( cpu_has_lfence_dispatch )
> + max_extd_leaf = max(max_extd_leaf, 0x80000021);
> +
> + p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
> + ARRAY_SIZE(p->extd.raw) - 1);
> +
> + x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
> + recalculate_xstate(p);
> + recalculate_misc(p);
> +
> + /* When vPMU is disabled, drop it from the host policy. */
> + if ( vpmu_mode == XENPMU_MODE_OFF )
> + p->basic.raw[0xa] = EMPTY_LEAF;
> +
> + if ( p->extd.svm )
> + {
> + /* Clamp to implemented features which require hardware support. */
> + p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
> + (1u << SVM_FEATURE_LBRV) |
> + (1u << SVM_FEATURE_NRIPS) |
> + (1u << SVM_FEATURE_PAUSEFILTER) |
> + (1u << SVM_FEATURE_DECODEASSISTS));
> + /* Enable features which are always emulated. */
> + p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
> + (1u << SVM_FEATURE_TSCRATEMSR));
> + }
> +
> /* 0x000000ce MSR_INTEL_PLATFORM_INFO */
> /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
> p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
> @@ -51,11 +420,88 @@ static void __init calculate_host_policy(void)
> ARCH_CAPS_PBRSB_NO);
> }
>
> +static void __init guest_common_default_feature_adjustments(uint32_t *fs)
> +{
> + /*
> + * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
> + * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
> + * compensate.
> + *
> + * Mitigate by hiding RDRAND from guests by default, unless explicitly
> + * overridden on the Xen command line (cpuid=rdrand). Irrespective of the
> + * default setting, guests can use RDRAND if explicitly enabled
> + * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
> + * previously using RDRAND can migrate in.
> + */
> + if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> + boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
> + cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
> + __clear_bit(X86_FEATURE_RDRAND, fs);
> +
> + /*
> + * On certain hardware, speculative or errata workarounds can result in
> + * TSX being placed in "force-abort" mode, where it doesn't actually
> + * function as expected, but is technically compatible with the ISA.
> + *
> + * Do not advertise RTM to guests by default if it won't actually work.
> + */
> + if ( rtm_disabled )
> + __clear_bit(X86_FEATURE_RTM, fs);
> +}
> +
> +static void __init guest_common_feature_adjustments(uint32_t *fs)
> +{
> + /* Unconditionally claim to be able to set the hypervisor bit. */
> + __set_bit(X86_FEATURE_HYPERVISOR, fs);
> +
> + /*
> + * If IBRS is offered to the guest, unconditionally offer STIBP. It is a
> + * nop on non-HT hardware, and has this behaviour to make heterogeneous
> + * setups easier to manage.
> + */
> + if ( test_bit(X86_FEATURE_IBRSB, fs) )
> + __set_bit(X86_FEATURE_STIBP, fs);
> + if ( test_bit(X86_FEATURE_IBRS, fs) )
> + __set_bit(X86_FEATURE_AMD_STIBP, fs);
> +
> + /*
> + * On hardware which supports IBRS/IBPB, we can offer IBPB independently
> + * of IBRS by using the AMD feature bit. An administrator may wish for
> + * performance reasons to offer IBPB without IBRS.
> + */
> + if ( host_cpu_policy.feat.ibrsb )
> + __set_bit(X86_FEATURE_IBPB, fs);
> +}
> +
> static void __init calculate_pv_max_policy(void)
> {
> struct cpu_policy *p = &pv_max_cpu_policy;
> + uint32_t fs[FSCAPINTS];
> + unsigned int i;
>
> *p = host_cpu_policy;
> + x86_cpu_policy_to_featureset(p, fs);
> +
> + for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> + fs[i] &= pv_max_featuremask[i];
> +
> + /*
> + * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
> + * availability, or admin choice), hide the feature.
> + */
> + if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
> + {
> + __clear_bit(X86_FEATURE_IBRSB, fs);
> + __clear_bit(X86_FEATURE_IBRS, fs);
> + }
> +
> + guest_common_feature_adjustments(fs);
> +
> + sanitise_featureset(fs);
> + x86_cpu_featureset_to_policy(fs, p);
> + recalculate_xstate(p);
> +
> + p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
>
> p->arch_caps.raw = 0; /* Not supported yet. */
> }
> @@ -63,15 +509,112 @@ static void __init calculate_pv_max_policy(void)
> static void __init calculate_pv_def_policy(void)
> {
> struct cpu_policy *p = &pv_def_cpu_policy;
> + uint32_t fs[FSCAPINTS];
> + unsigned int i;
>
> *p = pv_max_cpu_policy;
> + x86_cpu_policy_to_featureset(p, fs);
> +
> + for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> + fs[i] &= pv_def_featuremask[i];
> +
> + guest_common_feature_adjustments(fs);
> + guest_common_default_feature_adjustments(fs);
> +
> + sanitise_featureset(fs);
> + x86_cpu_featureset_to_policy(fs, p);
> + recalculate_xstate(p);
> }
>
> static void __init calculate_hvm_max_policy(void)
> {
> struct cpu_policy *p = &hvm_max_cpu_policy;
> + uint32_t fs[FSCAPINTS];
> + unsigned int i;
> + const uint32_t *mask;
>
> *p = host_cpu_policy;
> + x86_cpu_policy_to_featureset(p, fs);
> +
> + mask = hvm_hap_supported() ?
> + hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
> +
> + for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> + fs[i] &= mask[i];
> +
> + /*
> + * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
> + * (x2)APIC isn't enabled.
> + */
> + __set_bit(X86_FEATURE_APIC, fs);
> + __set_bit(X86_FEATURE_X2APIC, fs);
> +
> + /*
> + * We don't support EFER.LMSLE at all. AMD has dropped the feature from
> + * hardware and allocated a CPUID bit to indicate its absence.
> + */
> + __set_bit(X86_FEATURE_NO_LMSL, fs);
> +
> + /*
> + * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
> + * long mode (and init_amd() has cleared it out of host capabilities), but
> + * HVM guests are able if running in protected mode.
> + */
> + if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
> + raw_cpu_policy.basic.sep )
> + __set_bit(X86_FEATURE_SEP, fs);
> +
> + /*
> + * VIRT_SSBD is exposed in the default policy as a result of
> + * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
> + */
> + if ( amd_virt_spec_ctrl )
> + __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> + /*
> + * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
> + * availability, or admin choice), hide the feature.
> + */
> + if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
> + {
> + __clear_bit(X86_FEATURE_IBRSB, fs);
> + __clear_bit(X86_FEATURE_IBRS, fs);
> + }
> + else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
> + /*
> + * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
> + * and implemented using the former. Expose in the max policy only as
> + * the preference is for guests to use SPEC_CTRL.SSBD if available.
> + */
> + __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> + /*
> + * With VT-x, some features are only supported by Xen if dedicated
> + * hardware support is also available.
> + */
> + if ( cpu_has_vmx )
> + {
> + if ( !cpu_has_vmx_mpx )
> + __clear_bit(X86_FEATURE_MPX, fs);
> +
> + if ( !cpu_has_vmx_xsaves )
> + __clear_bit(X86_FEATURE_XSAVES, fs);
> + }
> +
> + /*
> + * Xen doesn't use PKS, so the guest support for it has opted to not use
> + * the VMCS load/save controls for efficiency reasons. This depends on
> + * the exact vmentry/exit behaviour, so don't expose PKS in other
> + * situations until someone has cross-checked the behaviour for safety.
> + */
> + if ( !cpu_has_vmx )
> + __clear_bit(X86_FEATURE_PKS, fs);
> +
> + guest_common_feature_adjustments(fs);
> +
> + sanitise_featureset(fs);
> + x86_cpu_featureset_to_policy(fs, p);
> + recalculate_xstate(p);
>
> /* It's always possible to emulate CPUID faulting for HVM guests */
> p->platform_info.cpuid_faulting = true;
> @@ -82,8 +625,32 @@ static void __init calculate_hvm_max_policy(void)
> static void __init calculate_hvm_def_policy(void)
> {
> struct cpu_policy *p = &hvm_def_cpu_policy;
> + uint32_t fs[FSCAPINTS];
> + unsigned int i;
> + const uint32_t *mask;
>
> *p = hvm_max_cpu_policy;
> + x86_cpu_policy_to_featureset(p, fs);
> +
> + mask = hvm_hap_supported() ?
> + hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
> +
> + for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> + fs[i] &= mask[i];
> +
> + guest_common_feature_adjustments(fs);
> + guest_common_default_feature_adjustments(fs);
> +
> + /*
> + * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
> + * amd_virt_spec_ctrl is set.
> + */
> + if ( amd_virt_spec_ctrl )
> + __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> + sanitise_featureset(fs);
> + x86_cpu_featureset_to_policy(fs, p);
> + recalculate_xstate(p);
> }
>
> void __init init_guest_cpu_policies(void)
> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>
> return 0;
> }
> +
> +void recalculate_cpuid_policy(struct domain *d)
> +{
> + struct cpu_policy *p = d->arch.cpuid;
> + const struct cpu_policy *max = is_pv_domain(d)
> + ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
> + : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
While this is how the original code was, wouldn't this want to use
hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
> + uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
> + unsigned int i;
> +
> + if ( !max )
> + {
> + ASSERT_UNREACHABLE();
> + return;
> + }
> +
> + p->x86_vendor = x86_cpuid_lookup_vendor(
> + p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
> +
> + p->basic.max_leaf = min(p->basic.max_leaf, max->basic.max_leaf);
> + p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
> + p->extd.max_leaf = 0x80000000 | min(p->extd.max_leaf & 0xffff,
> + ((p->x86_vendor & (X86_VENDOR_AMD |
> + X86_VENDOR_HYGON))
> + ? CPUID_GUEST_NR_EXTD_AMD
> + : CPUID_GUEST_NR_EXTD_INTEL) - 1);
> +
> + x86_cpu_policy_to_featureset(p, fs);
> + x86_cpu_policy_to_featureset(max, max_fs);
> +
> + if ( is_hvm_domain(d) )
> + {
> + /*
> + * HVM domains using Shadow paging have further restrictions on their
> + * available paging features.
> + */
> + if ( !hap_enabled(d) )
> + {
> + for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
> + max_fs[i] &= hvm_shadow_max_featuremask[i];
> + }
> +
> + /* Hide nested-virt if it hasn't been explicitly configured. */
> + if ( !nestedhvm_enabled(d) )
> + {
> + __clear_bit(X86_FEATURE_VMX, max_fs);
> + __clear_bit(X86_FEATURE_SVM, max_fs);
> + }
> + }
> +
> + /*
> + * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY. These bits
> + * affect how to interpret topology information in other cpuid leaves.
> + */
> + __set_bit(X86_FEATURE_HTT, max_fs);
> + __set_bit(X86_FEATURE_X2APIC, max_fs);
> + __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
> +
> + /*
> + * 32bit PV domains can't use any Long Mode features, and cannot use
> + * SYSCALL on non-AMD hardware.
> + */
> + if ( is_pv_32bit_domain(d) )
> + {
> + __clear_bit(X86_FEATURE_LM, max_fs);
> + if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> + __clear_bit(X86_FEATURE_SYSCALL, max_fs);
> + }
> +
> + /* Clamp the toolstack's choices to reality. */
> + for ( i = 0; i < ARRAY_SIZE(fs); i++ )
> + fs[i] &= max_fs[i];
> +
> + if ( p->basic.max_leaf < XSTATE_CPUID )
> + __clear_bit(X86_FEATURE_XSAVE, fs);
> +
> + sanitise_featureset(fs);
> +
> + /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
> + fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> + cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
> + fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
> + (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> + cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
> +
> + x86_cpu_featureset_to_policy(fs, p);
> +
> + /* Pass host cacheline size through to guests. */
> + p->basic.clflush_size = max->basic.clflush_size;
> +
> + p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
> + p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
> + paging_max_paddr_bits(d));
> + p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
> + (p->basic.pae || p->basic.pse36) ? 36 : 32);
> +
> + p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
> +
> + recalculate_xstate(p);
> + recalculate_misc(p);
> +
> + for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
> + {
> + if ( p->cache.subleaf[i].type >= 1 &&
> + p->cache.subleaf[i].type <= 3 )
> + {
> + /* Subleaf has a valid cache type. Zero reserved fields. */
> + p->cache.raw[i].a &= 0xffffc3ffu;
> + p->cache.raw[i].d &= 0x00000007u;
> + }
> + else
> + {
> + /* Subleaf is not valid. Zero the rest of the union. */
> + zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
> + break;
> + }
> + }
> +
> + if ( vpmu_mode == XENPMU_MODE_OFF ||
> + ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
> + p->basic.raw[0xa] = EMPTY_LEAF;
> +
> + if ( !p->extd.svm )
> + p->extd.raw[0xa] = EMPTY_LEAF;
> +
> + if ( !p->extd.page1gb )
> + p->extd.raw[0x19] = EMPTY_LEAF;
> +}
> +
> +void __init init_dom0_cpuid_policy(struct domain *d)
> +{
> + struct cpu_policy *p = d->arch.cpuid;
> +
> + /* dom0 can't migrate. Give it ITSC if available. */
> + if ( cpu_has_itsc )
> + p->extd.itsc = true;
> +
> + /*
> + * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
> + * so dom0 can turn off workarounds as appropriate. Temporary, until the
> + * domain policy logic gains a better understanding of MSRs.
> + */
> + if ( cpu_has_arch_caps )
> + p->feat.arch_caps = true;
> +
> + /* Apply dom0-cpuid= command line settings, if provided. */
> + if ( dom0_cpuid_cmdline )
> + {
> + uint32_t fs[FSCAPINTS];
> + unsigned int i;
> +
> + x86_cpu_policy_to_featureset(p, fs);
> +
> + for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> + {
> + fs[i] |= dom0_enable_feat [i];
> + fs[i] &= ~dom0_disable_feat[i];
> + }
> +
> + x86_cpu_featureset_to_policy(fs, p);
> +
> + recalculate_cpuid_policy(d);
> + }
> +}
> +
> +static void __init __maybe_unused build_assertions(void)
> +{
> + BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
> + BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
> + BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
> + BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
> + BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
> +
> + /* Find some more clever allocation scheme if this trips. */
> + BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
> +
> + BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
> + sizeof(raw_cpu_policy.basic.raw));
> + BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
> + sizeof(raw_cpu_policy.feat.raw));
> + BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
> + sizeof(raw_cpu_policy.xstate.raw));
> + BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
> + sizeof(raw_cpu_policy.extd.raw));
> +}
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index 5eb5f1893516..3f20c342fde8 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -1,638 +1,14 @@
> -#include <xen/init.h>
> -#include <xen/lib.h>
> -#include <xen/param.h>
> #include <xen/sched.h>
> -#include <xen/nospec.h>
> -#include <asm/amd.h>
> +#include <xen/types.h>
> +
> +#include <public/hvm/params.h>
> +
> #include <asm/cpu-policy.h>
> #include <asm/cpuid.h>
> -#include <asm/hvm/hvm.h>
> -#include <asm/hvm/nestedhvm.h>
> -#include <asm/hvm/svm/svm.h>
> #include <asm/hvm/viridian.h>
> -#include <asm/hvm/vmx/vmcs.h>
> -#include <asm/paging.h>
> -#include <asm/processor.h>
> #include <asm/xstate.h>
>
> -const uint32_t known_features[] = INIT_KNOWN_FEATURES;
> -
> -static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> -static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> -static const uint32_t __initconst hvm_hap_max_featuremask[] =
> - INIT_HVM_HAP_MAX_FEATURES;
> -static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> -static const uint32_t __initconst hvm_shadow_def_featuremask[] =
> - INIT_HVM_SHADOW_DEF_FEATURES;
> -static const uint32_t __initconst hvm_hap_def_featuremask[] =
> - INIT_HVM_HAP_DEF_FEATURES;
> -static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
> -
> -static const struct feature_name {
> - const char *name;
> - unsigned int bit;
> -} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
> -
> -/*
> - * Parse a list of cpuid feature names -> bool, calling the callback for any
> - * matches found.
> - *
> - * always_inline, because this is init code only and we really don't want a
> - * function pointer call in the middle of the loop.
> - */
> -static int __init always_inline parse_cpuid(
> - const char *s, void (*callback)(unsigned int feat, bool val))
> -{
> - const char *ss;
> - int val, rc = 0;
> -
> - do {
> - const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
> - const char *feat;
> -
> - ss = strchr(s, ',');
> - if ( !ss )
> - ss = strchr(s, '\0');
> -
> - /* Skip the 'no-' prefix for name comparisons. */
> - feat = s;
> - if ( strncmp(s, "no-", 3) == 0 )
> - feat += 3;
> -
> - /* (Re)initalise lhs and rhs for binary search. */
> - lhs = feature_names;
> - rhs = feature_names + ARRAY_SIZE(feature_names);
> -
> - while ( lhs < rhs )
> - {
> - int res;
> -
> - mid = lhs + (rhs - lhs) / 2;
> - res = cmdline_strcmp(feat, mid->name);
> -
> - if ( res < 0 )
> - {
> - rhs = mid;
> - continue;
> - }
> - if ( res > 0 )
> - {
> - lhs = mid + 1;
> - continue;
> - }
> -
> - if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
> - {
> - callback(mid->bit, val);
> - mid = NULL;
> - }
> -
> - break;
> - }
> -
> - /*
> - * Mid being NULL means that the name and boolean were successfully
> - * identified. Everything else is an error.
> - */
> - if ( mid )
> - rc = -EINVAL;
> -
> - s = ss + 1;
> - } while ( *ss );
> -
> - return rc;
> -}
> -
> -static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
> -{
> - if ( !val )
> - setup_clear_cpu_cap(feat);
> - else if ( feat == X86_FEATURE_RDRAND &&
> - (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
> - setup_force_cpu_cap(X86_FEATURE_RDRAND);
> -}
> -
> -static int __init cf_check parse_xen_cpuid(const char *s)
> -{
> - return parse_cpuid(s, _parse_xen_cpuid);
> -}
> -custom_param("cpuid", parse_xen_cpuid);
> -
> -static bool __initdata dom0_cpuid_cmdline;
> -static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
> -static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
> -
> -static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
> -{
> - __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
> - __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
> -}
> -
> -static int __init cf_check parse_dom0_cpuid(const char *s)
> -{
> - dom0_cpuid_cmdline = true;
> -
> - return parse_cpuid(s, _parse_dom0_cpuid);
> -}
> -custom_param("dom0-cpuid", parse_dom0_cpuid);
> -
> #define EMPTY_LEAF ((struct cpuid_leaf){})
> -static void zero_leaves(struct cpuid_leaf *l,
> - unsigned int first, unsigned int last)
> -{
> - memset(&l[first], 0, sizeof(*l) * (last - first + 1));
> -}
> -
> -static void sanitise_featureset(uint32_t *fs)
> -{
> - /* for_each_set_bit() uses unsigned longs. Extend with zeroes. */
> - uint32_t disabled_features[
> - ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
> - unsigned int i;
> -
> - for ( i = 0; i < FSCAPINTS; ++i )
> - {
> - /* Clamp to known mask. */
> - fs[i] &= known_features[i];
> -
> - /*
> - * Identify which features with deep dependencies have been
> - * disabled.
> - */
> - disabled_features[i] = ~fs[i] & deep_features[i];
> - }
> -
> - for_each_set_bit(i, (void *)disabled_features,
> - sizeof(disabled_features) * 8)
> - {
> - const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
> - unsigned int j;
> -
> - ASSERT(dfs); /* deep_features[] should guarentee this. */
> -
> - for ( j = 0; j < FSCAPINTS; ++j )
> - {
> - fs[j] &= ~dfs[j];
> - disabled_features[j] &= ~dfs[j];
> - }
> - }
> -}
> -
> -static void recalculate_xstate(struct cpuid_policy *p)
> -{
> - uint64_t xstates = XSTATE_FP_SSE;
> - uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
> - unsigned int i, Da1 = p->xstate.Da1;
> -
> - /*
> - * The Da1 leaf is the only piece of information preserved in the common
> - * case. Everything else is derived from other feature state.
> - */
> - memset(&p->xstate, 0, sizeof(p->xstate));
> -
> - if ( !p->basic.xsave )
> - return;
> -
> - if ( p->basic.avx )
> - {
> - xstates |= X86_XCR0_YMM;
> - xstate_size = max(xstate_size,
> - xstate_offsets[X86_XCR0_YMM_POS] +
> - xstate_sizes[X86_XCR0_YMM_POS]);
> - }
> -
> - if ( p->feat.mpx )
> - {
> - xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
> - xstate_size = max(xstate_size,
> - xstate_offsets[X86_XCR0_BNDCSR_POS] +
> - xstate_sizes[X86_XCR0_BNDCSR_POS]);
> - }
> -
> - if ( p->feat.avx512f )
> - {
> - xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
> - xstate_size = max(xstate_size,
> - xstate_offsets[X86_XCR0_HI_ZMM_POS] +
> - xstate_sizes[X86_XCR0_HI_ZMM_POS]);
> - }
> -
> - if ( p->feat.pku )
> - {
> - xstates |= X86_XCR0_PKRU;
> - xstate_size = max(xstate_size,
> - xstate_offsets[X86_XCR0_PKRU_POS] +
> - xstate_sizes[X86_XCR0_PKRU_POS]);
> - }
> -
> - p->xstate.max_size = xstate_size;
> - p->xstate.xcr0_low = xstates & ~XSTATE_XSAVES_ONLY;
> - p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
> -
> - p->xstate.Da1 = Da1;
> - if ( p->xstate.xsaves )
> - {
> - p->xstate.xss_low = xstates & XSTATE_XSAVES_ONLY;
> - p->xstate.xss_high = (xstates & XSTATE_XSAVES_ONLY) >> 32;
> - }
> - else
> - xstates &= ~XSTATE_XSAVES_ONLY;
> -
> - for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
> - {
> - uint64_t curr_xstate = 1ul << i;
> -
> - if ( !(xstates & curr_xstate) )
> - continue;
> -
> - p->xstate.comp[i].size = xstate_sizes[i];
> - p->xstate.comp[i].offset = xstate_offsets[i];
> - p->xstate.comp[i].xss = curr_xstate & XSTATE_XSAVES_ONLY;
> - p->xstate.comp[i].align = curr_xstate & xstate_align;
> - }
> -}
> -
> -/*
> - * Misc adjustments to the policy. Mostly clobbering reserved fields and
> - * duplicating shared fields. Intentionally hidden fields are annotated.
> - */
> -static void recalculate_misc(struct cpuid_policy *p)
> -{
> - p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
> - p->basic.apic_id = 0; /* Dynamic. */
> -
> - p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
> - p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
> -
> - p->basic.raw[0x8] = EMPTY_LEAF;
> -
> - /* TODO: Rework topology logic. */
> - memset(p->topo.raw, 0, sizeof(p->topo.raw));
> -
> - p->basic.raw[0xc] = EMPTY_LEAF;
> -
> - p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
> -
> - /* Most of Power/RAS hidden from guests. */
> - p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
> -
> - p->extd.raw[0x8].d = 0;
> -
> - switch ( p->x86_vendor )
> - {
> - case X86_VENDOR_INTEL:
> - p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
> - p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
> - p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
> -
> - p->extd.vendor_ebx = 0;
> - p->extd.vendor_ecx = 0;
> - p->extd.vendor_edx = 0;
> -
> - p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
> -
> - p->extd.raw[0x5] = EMPTY_LEAF;
> - p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
> -
> - p->extd.raw[0x8].a &= 0x0000ffff;
> - p->extd.raw[0x8].c = 0;
> - break;
> -
> - case X86_VENDOR_AMD:
> - case X86_VENDOR_HYGON:
> - zero_leaves(p->basic.raw, 0x2, 0x3);
> - memset(p->cache.raw, 0, sizeof(p->cache.raw));
> - zero_leaves(p->basic.raw, 0x9, 0xa);
> -
> - p->extd.vendor_ebx = p->basic.vendor_ebx;
> - p->extd.vendor_ecx = p->basic.vendor_ecx;
> - p->extd.vendor_edx = p->basic.vendor_edx;
> -
> - p->extd.raw_fms = p->basic.raw_fms;
> - p->extd.raw[0x1].b &= 0xff00ffff;
> - p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
> -
> - p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
> - p->extd.raw[0x8].c &= 0x0003f0ff;
> -
> - p->extd.raw[0x9] = EMPTY_LEAF;
> -
> - zero_leaves(p->extd.raw, 0xb, 0x18);
> -
> - /* 0x19 - TLB details. Pass through. */
> - /* 0x1a - Perf hints. Pass through. */
> -
> - p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
> - p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
> - p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
> - p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
> - p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
> - p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
> - break;
> - }
> -}
> -
> -static void __init calculate_raw_policy(void)
> -{
> - struct cpuid_policy *p = &raw_cpu_policy;
> -
> - x86_cpuid_policy_fill_native(p);
> -
> - /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
> - ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
> -}
> -
> -static void __init calculate_host_policy(void)
> -{
> - struct cpuid_policy *p = &host_cpu_policy;
> - unsigned int max_extd_leaf;
> -
> - *p = raw_cpu_policy;
> -
> - p->basic.max_leaf =
> - min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
> - p->feat.max_subleaf =
> - min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
> -
> - max_extd_leaf = p->extd.max_leaf;
> -
> - /*
> - * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
> - * dispatch serialising for Spectre mitigations. Extend max_extd_leaf
> - * beyond what hardware supports, to include the feature leaf containing
> - * this information.
> - */
> - if ( cpu_has_lfence_dispatch )
> - max_extd_leaf = max(max_extd_leaf, 0x80000021);
> -
> - p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
> - ARRAY_SIZE(p->extd.raw) - 1);
> -
> - x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
> - recalculate_xstate(p);
> - recalculate_misc(p);
> -
> - /* When vPMU is disabled, drop it from the host policy. */
> - if ( vpmu_mode == XENPMU_MODE_OFF )
> - p->basic.raw[0xa] = EMPTY_LEAF;
> -
> - if ( p->extd.svm )
> - {
> - /* Clamp to implemented features which require hardware support. */
> - p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
> - (1u << SVM_FEATURE_LBRV) |
> - (1u << SVM_FEATURE_NRIPS) |
> - (1u << SVM_FEATURE_PAUSEFILTER) |
> - (1u << SVM_FEATURE_DECODEASSISTS));
> - /* Enable features which are always emulated. */
> - p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
> - (1u << SVM_FEATURE_TSCRATEMSR));
> - }
> -}
> -
> -static void __init guest_common_default_feature_adjustments(uint32_t *fs)
> -{
> - /*
> - * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
> - * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
> - * compensate.
> - *
> - * Mitigate by hiding RDRAND from guests by default, unless explicitly
> - * overridden on the Xen command line (cpuid=rdrand). Irrespective of the
> - * default setting, guests can use RDRAND if explicitly enabled
> - * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
> - * previously using RDRAND can migrate in.
> - */
> - if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> - boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
> - cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
> - __clear_bit(X86_FEATURE_RDRAND, fs);
> -
> - /*
> - * On certain hardware, speculative or errata workarounds can result in
> - * TSX being placed in "force-abort" mode, where it doesn't actually
> - * function as expected, but is technically compatible with the ISA.
> - *
> - * Do not advertise RTM to guests by default if it won't actually work.
> - */
> - if ( rtm_disabled )
> - __clear_bit(X86_FEATURE_RTM, fs);
> -}
> -
> -static void __init guest_common_feature_adjustments(uint32_t *fs)
> -{
> - /* Unconditionally claim to be able to set the hypervisor bit. */
> - __set_bit(X86_FEATURE_HYPERVISOR, fs);
> -
> - /*
> - * If IBRS is offered to the guest, unconditionally offer STIBP. It is a
> - * nop on non-HT hardware, and has this behaviour to make heterogeneous
> - * setups easier to manage.
> - */
> - if ( test_bit(X86_FEATURE_IBRSB, fs) )
> - __set_bit(X86_FEATURE_STIBP, fs);
> - if ( test_bit(X86_FEATURE_IBRS, fs) )
> - __set_bit(X86_FEATURE_AMD_STIBP, fs);
> -
> - /*
> - * On hardware which supports IBRS/IBPB, we can offer IBPB independently
> - * of IBRS by using the AMD feature bit. An administrator may wish for
> - * performance reasons to offer IBPB without IBRS.
> - */
> - if ( host_cpu_policy.feat.ibrsb )
> - __set_bit(X86_FEATURE_IBPB, fs);
> -}
> -
> -static void __init calculate_pv_max_policy(void)
> -{
> - struct cpuid_policy *p = &pv_max_cpu_policy;
> - uint32_t pv_featureset[FSCAPINTS];
> - unsigned int i;
> -
> - *p = host_cpu_policy;
> - x86_cpu_policy_to_featureset(p, pv_featureset);
> -
> - for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
> - pv_featureset[i] &= pv_max_featuremask[i];
> -
> - /*
> - * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
> - * availability, or admin choice), hide the feature.
> - */
> - if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
> - {
> - __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
> - __clear_bit(X86_FEATURE_IBRS, pv_featureset);
> - }
> -
> - guest_common_feature_adjustments(pv_featureset);
> -
> - sanitise_featureset(pv_featureset);
> - x86_cpu_featureset_to_policy(pv_featureset, p);
> - recalculate_xstate(p);
> -
> - p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
> -}
> -
> -static void __init calculate_pv_def_policy(void)
> -{
> - struct cpuid_policy *p = &pv_def_cpu_policy;
> - uint32_t pv_featureset[FSCAPINTS];
> - unsigned int i;
> -
> - *p = pv_max_cpu_policy;
> - x86_cpu_policy_to_featureset(p, pv_featureset);
> -
> - for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
> - pv_featureset[i] &= pv_def_featuremask[i];
> -
> - guest_common_feature_adjustments(pv_featureset);
> - guest_common_default_feature_adjustments(pv_featureset);
> -
> - sanitise_featureset(pv_featureset);
> - x86_cpu_featureset_to_policy(pv_featureset, p);
> - recalculate_xstate(p);
> -}
> -
> -static void __init calculate_hvm_max_policy(void)
> -{
> - struct cpuid_policy *p = &hvm_max_cpu_policy;
> - uint32_t hvm_featureset[FSCAPINTS];
> - unsigned int i;
> - const uint32_t *hvm_featuremask;
> -
> - *p = host_cpu_policy;
> - x86_cpu_policy_to_featureset(p, hvm_featureset);
> -
> - hvm_featuremask = hvm_hap_supported() ?
> - hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
> -
> - for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
> - hvm_featureset[i] &= hvm_featuremask[i];
> -
> - /*
> - * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
> - * (x2)APIC isn't enabled.
> - */
> - __set_bit(X86_FEATURE_APIC, hvm_featureset);
> - __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
> -
> - /*
> - * We don't support EFER.LMSLE at all. AMD has dropped the feature from
> - * hardware and allocated a CPUID bit to indicate its absence.
> - */
> - __set_bit(X86_FEATURE_NO_LMSL, hvm_featureset);
> -
> - /*
> - * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
> - * long mode (and init_amd() has cleared it out of host capabilities), but
> - * HVM guests are able if running in protected mode.
> - */
> - if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
> - raw_cpu_policy.basic.sep )
> - __set_bit(X86_FEATURE_SEP, hvm_featureset);
> -
> - /*
> - * VIRT_SSBD is exposed in the default policy as a result of
> - * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
> - */
> - if ( amd_virt_spec_ctrl )
> - __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> - /*
> - * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
> - * availability, or admin choice), hide the feature.
> - */
> - if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
> - {
> - __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
> - __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
> - }
> - else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
> - /*
> - * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
> - * and implemented using the former. Expose in the max policy only as
> - * the preference is for guests to use SPEC_CTRL.SSBD if available.
> - */
> - __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> - /*
> - * With VT-x, some features are only supported by Xen if dedicated
> - * hardware support is also available.
> - */
> - if ( cpu_has_vmx )
> - {
> - if ( !cpu_has_vmx_mpx )
> - __clear_bit(X86_FEATURE_MPX, hvm_featureset);
> -
> - if ( !cpu_has_vmx_xsaves )
> - __clear_bit(X86_FEATURE_XSAVES, hvm_featureset);
> - }
> -
> - /*
> - * Xen doesn't use PKS, so the guest support for it has opted to not use
> - * the VMCS load/save controls for efficiency reasons. This depends on
> - * the exact vmentry/exit behaviour, so don't expose PKS in other
> - * situations until someone has cross-checked the behaviour for safety.
> - */
> - if ( !cpu_has_vmx )
> - __clear_bit(X86_FEATURE_PKS, hvm_featureset);
> -
> - guest_common_feature_adjustments(hvm_featureset);
> -
> - sanitise_featureset(hvm_featureset);
> - x86_cpu_featureset_to_policy(hvm_featureset, p);
> - recalculate_xstate(p);
> -}
> -
> -static void __init calculate_hvm_def_policy(void)
> -{
> - struct cpuid_policy *p = &hvm_def_cpu_policy;
> - uint32_t hvm_featureset[FSCAPINTS];
> - unsigned int i;
> - const uint32_t *hvm_featuremask;
> -
> - *p = hvm_max_cpu_policy;
> - x86_cpu_policy_to_featureset(p, hvm_featureset);
> -
> - hvm_featuremask = hvm_hap_supported() ?
> - hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
> -
> - for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
> - hvm_featureset[i] &= hvm_featuremask[i];
> -
> - guest_common_feature_adjustments(hvm_featureset);
> - guest_common_default_feature_adjustments(hvm_featureset);
> -
> - /*
> - * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
> - * amd_virt_spec_ctrl is set.
> - */
> - if ( amd_virt_spec_ctrl )
> - __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> - sanitise_featureset(hvm_featureset);
> - x86_cpu_featureset_to_policy(hvm_featureset, p);
> - recalculate_xstate(p);
> -}
> -
> -void __init init_guest_cpuid(void)
> -{
> - calculate_raw_policy();
> - calculate_host_policy();
> -
> - if ( IS_ENABLED(CONFIG_PV) )
> - {
> - calculate_pv_max_policy();
> - calculate_pv_def_policy();
> - }
> -
> - if ( hvm_enabled )
> - {
> - calculate_hvm_max_policy();
> - calculate_hvm_def_policy();
> - }
> -}
>
> bool recheck_cpu_features(unsigned int cpu)
> {
> @@ -656,170 +32,6 @@ bool recheck_cpu_features(unsigned int cpu)
> return okay;
> }
>
> -void recalculate_cpuid_policy(struct domain *d)
> -{
> - struct cpuid_policy *p = d->arch.cpuid;
> - const struct cpuid_policy *max = is_pv_domain(d)
> - ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
> - : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
> - uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
> - unsigned int i;
> -
> - if ( !max )
> - {
> - ASSERT_UNREACHABLE();
> - return;
> - }
> -
> - p->x86_vendor = x86_cpuid_lookup_vendor(
> - p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
> -
> - p->basic.max_leaf = min(p->basic.max_leaf, max->basic.max_leaf);
> - p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
> - p->extd.max_leaf = 0x80000000 | min(p->extd.max_leaf & 0xffff,
> - ((p->x86_vendor & (X86_VENDOR_AMD |
> - X86_VENDOR_HYGON))
> - ? CPUID_GUEST_NR_EXTD_AMD
> - : CPUID_GUEST_NR_EXTD_INTEL) - 1);
> -
> - x86_cpu_policy_to_featureset(p, fs);
> - x86_cpu_policy_to_featureset(max, max_fs);
> -
> - if ( is_hvm_domain(d) )
> - {
> - /*
> - * HVM domains using Shadow paging have further restrictions on their
> - * available paging features.
> - */
> - if ( !hap_enabled(d) )
> - {
> - for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
> - max_fs[i] &= hvm_shadow_max_featuremask[i];
> - }
> -
> - /* Hide nested-virt if it hasn't been explicitly configured. */
> - if ( !nestedhvm_enabled(d) )
> - {
> - __clear_bit(X86_FEATURE_VMX, max_fs);
> - __clear_bit(X86_FEATURE_SVM, max_fs);
> - }
> - }
> -
> - /*
> - * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY. These bits
> - * affect how to interpret topology information in other cpuid leaves.
> - */
> - __set_bit(X86_FEATURE_HTT, max_fs);
> - __set_bit(X86_FEATURE_X2APIC, max_fs);
> - __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
> -
> - /*
> - * 32bit PV domains can't use any Long Mode features, and cannot use
> - * SYSCALL on non-AMD hardware.
> - */
> - if ( is_pv_32bit_domain(d) )
> - {
> - __clear_bit(X86_FEATURE_LM, max_fs);
> - if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> - __clear_bit(X86_FEATURE_SYSCALL, max_fs);
> - }
> -
> - /* Clamp the toolstacks choices to reality. */
> - for ( i = 0; i < ARRAY_SIZE(fs); i++ )
> - fs[i] &= max_fs[i];
> -
> - if ( p->basic.max_leaf < XSTATE_CPUID )
> - __clear_bit(X86_FEATURE_XSAVE, fs);
> -
> - sanitise_featureset(fs);
> -
> - /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
> - fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> - cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
> - fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
> - (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> - cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
> -
> - x86_cpu_featureset_to_policy(fs, p);
> -
> - /* Pass host cacheline size through to guests. */
> - p->basic.clflush_size = max->basic.clflush_size;
> -
> - p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
> - p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
> - paging_max_paddr_bits(d));
> - p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
> - (p->basic.pae || p->basic.pse36) ? 36 : 32);
> -
> - p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
> -
> - recalculate_xstate(p);
> - recalculate_misc(p);
> -
> - for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
> - {
> - if ( p->cache.subleaf[i].type >= 1 &&
> - p->cache.subleaf[i].type <= 3 )
> - {
> - /* Subleaf has a valid cache type. Zero reserved fields. */
> - p->cache.raw[i].a &= 0xffffc3ffu;
> - p->cache.raw[i].d &= 0x00000007u;
> - }
> - else
> - {
> - /* Subleaf is not valid. Zero the rest of the union. */
> - zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
> - break;
> - }
> - }
> -
> - if ( vpmu_mode == XENPMU_MODE_OFF ||
> - ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
> - p->basic.raw[0xa] = EMPTY_LEAF;
> -
> - if ( !p->extd.svm )
> - p->extd.raw[0xa] = EMPTY_LEAF;
> -
> - if ( !p->extd.page1gb )
> - p->extd.raw[0x19] = EMPTY_LEAF;
> -}
> -
> -void __init init_dom0_cpuid_policy(struct domain *d)
> -{
> - struct cpuid_policy *p = d->arch.cpuid;
> -
> - /* dom0 can't migrate. Give it ITSC if available. */
> - if ( cpu_has_itsc )
> - p->extd.itsc = true;
> -
> - /*
> - * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
> - * so dom0 can turn off workarounds as appropriate. Temporary, until the
> - * domain policy logic gains a better understanding of MSRs.
> - */
> - if ( cpu_has_arch_caps )
> - p->feat.arch_caps = true;
> -
> - /* Apply dom0-cpuid= command line settings, if provided. */
> - if ( dom0_cpuid_cmdline )
> - {
> - uint32_t fs[FSCAPINTS];
> - unsigned int i;
> -
> - x86_cpu_policy_to_featureset(p, fs);
> -
> - for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> - {
> - fs[i] |= dom0_enable_feat [i];
> - fs[i] &= ~dom0_disable_feat[i];
> - }
> -
> - x86_cpu_featureset_to_policy(fs, p);
> -
> - recalculate_cpuid_policy(d);
> - }
> -}
> -
> void guest_cpuid(const struct vcpu *v, uint32_t leaf,
> uint32_t subleaf, struct cpuid_leaf *res)
> {
> @@ -1190,27 +402,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
> }
> }
>
> -static void __init __maybe_unused build_assertions(void)
> -{
> - BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
> - BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
> - BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
> - BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
> - BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
> -
> - /* Find some more clever allocation scheme if this trips. */
> - BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
> -
> - BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
> - sizeof(raw_cpu_policy.basic.raw));
> - BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
> - sizeof(raw_cpu_policy.feat.raw));
> - BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
> - sizeof(raw_cpu_policy.xstate.raw));
> - BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
> - sizeof(raw_cpu_policy.extd.raw));
> -}
> -
> /*
> * Local variables:
> * mode: C
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d326fa1c0136..675c523d9909 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -77,7 +77,6 @@
> #include <public/memory.h>
> #include <public/vm_event.h>
> #include <public/arch-x86/cpuid.h>
> -#include <asm/cpuid.h>
>
> #include <compat/hvm/hvm_op.h>
>
> diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
> index 13e2a1f86d13..b361537a602b 100644
> --- a/xen/arch/x86/include/asm/cpu-policy.h
> +++ b/xen/arch/x86/include/asm/cpu-policy.h
> @@ -18,4 +18,10 @@ void init_guest_cpu_policies(void);
> /* Allocate and initialise a CPU policy suitable for the domain. */
> int init_domain_cpu_policy(struct domain *d);
>
> +/* Apply dom0-specific tweaks to the CPUID policy. */
> +void init_dom0_cpuid_policy(struct domain *d);
> +
> +/* Clamp the CPUID policy to reality. */
> +void recalculate_cpuid_policy(struct domain *d);
> +
> #endif /* X86_CPU_POLICY_H */
> diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
> index 7f81b998ce01..b32ba0bbfe5c 100644
> --- a/xen/arch/x86/include/asm/cpuid.h
> +++ b/xen/arch/x86/include/asm/cpuid.h
> @@ -8,14 +8,10 @@
> #include <xen/kernel.h>
> #include <xen/percpu.h>
>
> -#include <xen/lib/x86/cpu-policy.h>
> -
> #include <public/sysctl.h>
>
> extern const uint32_t known_features[FSCAPINTS];
>
> -void init_guest_cpuid(void);
> -
> /*
> * Expected levelling capabilities (given cpuid vendor/family information),
> * and levelling capabilities actually available (given MSR probing).
> @@ -49,13 +45,8 @@ extern struct cpuidmasks cpuidmask_defaults;
> /* Check that all previously present features are still available. */
> bool recheck_cpu_features(unsigned int cpu);
>
> -/* Apply dom0-specific tweaks to the CPUID policy. */
> -void init_dom0_cpuid_policy(struct domain *d);
> -
> -/* Clamp the CPUID policy to reality. */
> -void recalculate_cpuid_policy(struct domain *d);
> -
> struct vcpu;
> +struct cpuid_leaf;
> void guest_cpuid(const struct vcpu *v, uint32_t leaf,
> uint32_t subleaf, struct cpuid_leaf *res);
>
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index f94f28c8e271..95492715d8ad 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -10,6 +10,7 @@
> #include <xen/param.h>
> #include <xen/sched.h>
>
> +#include <asm/cpu-policy.h>
> #include <asm/cpufeature.h>
> #include <asm/invpcid.h>
> #include <asm/spec_ctrl.h>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 51a19b9019eb..08ade715a3ce 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -51,7 +51,6 @@
> #include <asm/alternative.h>
> #include <asm/mc146818rtc.h>
> #include <asm/cpu-policy.h>
> -#include <asm/cpuid.h>
> #include <asm/spec_ctrl.h>
> #include <asm/guest.h>
> #include <asm/microcode.h>
> @@ -1991,7 +1990,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
> if ( !tboot_protect_mem_regions() )
> panic("Could not protect TXT memory regions\n");
>
> - init_guest_cpuid();
> init_guest_cpu_policies();
>
> if ( xen_cpuidle )
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy
2023-04-04 9:52 ` [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy Andrew Cooper
@ 2023-04-04 15:22 ` Jan Beulich
0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:22 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> As with struct domain, retain cpuid as a valid alias for local code clarity.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
* Re: [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer
2023-04-04 9:52 ` [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer Andrew Cooper
@ 2023-04-04 15:25 ` Jan Beulich
0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:25 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> With cpuid_policy and msr_policy merged to form cpu_policy, merge the
> respective fuzzing logic.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
* Re: [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors
2023-04-04 15:01 ` Jan Beulich
@ 2023-04-04 15:26 ` Andrew Cooper
0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 15:26 UTC (permalink / raw)
To: Jan Beulich; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04/04/2023 4:01 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> These are already getting over-large for being inline functions, and are only
>> going to grow more over time. Out of line them, yielding the following net
>> delta from bloat-o-meter:
>>
>> add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
>>
>> Switch to the newer cpu_policy terminology while doing so.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
Thanks.
>
> I take it you have a reason to ...
>
>> --- a/xen/lib/x86/cpuid.c
>> +++ b/xen/lib/x86/cpuid.c
>> @@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
>> }
>> }
>>
>> +void x86_cpu_policy_to_featureset(
>> + const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
>> +{
>> + fs[FEATURESET_1d] = p->basic._1d;
>> + fs[FEATURESET_1c] = p->basic._1c;
>> + fs[FEATURESET_e1d] = p->extd.e1d;
>> + fs[FEATURESET_e1c] = p->extd.e1c;
>> + fs[FEATURESET_Da1] = p->xstate.Da1;
>> + fs[FEATURESET_7b0] = p->feat._7b0;
>> + fs[FEATURESET_7c0] = p->feat._7c0;
>> + fs[FEATURESET_e7d] = p->extd.e7d;
>> + fs[FEATURESET_e8b] = p->extd.e8b;
>> + fs[FEATURESET_7d0] = p->feat._7d0;
>> + fs[FEATURESET_7a1] = p->feat._7a1;
>> + fs[FEATURESET_e21a] = p->extd.e21a;
>> + fs[FEATURESET_7b1] = p->feat._7b1;
>> + fs[FEATURESET_7d2] = p->feat._7d2;
>> + fs[FEATURESET_7c1] = p->feat._7c1;
>> + fs[FEATURESET_7d1] = p->feat._7d1;
>> +}
>> +
>> +void x86_cpu_featureset_to_policy(
>> + const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
>> +{
>> + p->basic._1d = fs[FEATURESET_1d];
>> + p->basic._1c = fs[FEATURESET_1c];
>> + p->extd.e1d = fs[FEATURESET_e1d];
>> + p->extd.e1c = fs[FEATURESET_e1c];
>> + p->xstate.Da1 = fs[FEATURESET_Da1];
>> + p->feat._7b0 = fs[FEATURESET_7b0];
>> + p->feat._7c0 = fs[FEATURESET_7c0];
>> + p->extd.e7d = fs[FEATURESET_e7d];
>> + p->extd.e8b = fs[FEATURESET_e8b];
>> + p->feat._7d0 = fs[FEATURESET_7d0];
>> + p->feat._7a1 = fs[FEATURESET_7a1];
>> + p->extd.e21a = fs[FEATURESET_e21a];
>> + p->feat._7b1 = fs[FEATURESET_7b1];
>> + p->feat._7d2 = fs[FEATURESET_7d2];
>> + p->feat._7c1 = fs[FEATURESET_7c1];
>> + p->feat._7d1 = fs[FEATURESET_7d1];
>> +}
> ... add quite a few padding blanks in here, unlike in the originals?
Yeah. There was already one misalignment, and I haven't quite decided
on the MSR syntax yet but it's going to be longer still.
Here specifically, we've got p->arch_caps.{a,d} at a minimum, so column
width is based on the MSR name.
This is just a guestimate of "plenty for now".
~Andrew
* Re: [PATCH v2 14/15] libx86: Update library API for cpu_policy
2023-04-04 9:52 ` [PATCH v2 14/15] libx86: Update library API for cpu_policy Andrew Cooper
@ 2023-04-04 15:34 ` Jan Beulich
2023-04-04 15:36 ` Andrew Cooper
2023-04-04 21:06 ` [PATCH v3] " Andrew Cooper
1 sibling, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:34 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> Adjust the API and comments appropriately.
>
> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
> TODO in the short term.
That'll then require passing in a callback function anyway, such that
different environments can use different ways of getting at the wanted
MSR values. (IOW a bigger change anyway.)
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
What about x86_cpuid_lookup_deep_deps()? That'll be looking at more than
just CPUID bits as well, won't it?
Jan
* Re: [PATCH v2 14/15] libx86: Update library API for cpu_policy
2023-04-04 15:34 ` Jan Beulich
@ 2023-04-04 15:36 ` Andrew Cooper
0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 15:36 UTC (permalink / raw)
To: Jan Beulich; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04/04/2023 4:34 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> Adjust the API and comments appropriately.
>>
>> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
>> TODO in the short term.
> That'll then require passing in a callback function anyway, such that
> different environments can use different ways of getting at the wanted
> MSR values. (IOW a bigger change anyway.)
We've already got #if __XEN__'s in there; I was going to add one more in
the short term.
>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> What about x86_cpuid_lookup_deep_deps()? That'll be looking at more than
> just CPUID bits as well, won't it?
Good point. I'll adjust.
~Andrew
* Re: [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines
2023-04-04 9:52 ` [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines Andrew Cooper
@ 2023-04-04 15:37 ` Jan Beulich
0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 15:37 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 11:52, Andrew Cooper wrote:
> With all code areas updated, drop the temporary defines and adjust all
> remaining users.
>
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
* Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
2023-04-04 15:16 ` Jan Beulich
@ 2023-04-04 15:45 ` Andrew Cooper
2023-04-04 16:14 ` Jan Beulich
0 siblings, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 15:45 UTC (permalink / raw)
To: Jan Beulich; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04/04/2023 4:16 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> Switch to the newer cpu_policy nomenclature. Do some easy cleanup of
>> includes.
>>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> v2:
>> * New
>> ---
>> xen/arch/x86/cpu-policy.c | 752 ++++++++++++++++++++++++
>> xen/arch/x86/cpuid.c | 817 +-------------------------
>> xen/arch/x86/hvm/hvm.c | 1 -
>> xen/arch/x86/include/asm/cpu-policy.h | 6 +
>> xen/arch/x86/include/asm/cpuid.h | 11 +-
>> xen/arch/x86/pv/domain.c | 1 +
>> xen/arch/x86/setup.c | 2 -
>> 7 files changed, 764 insertions(+), 826 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
>> index f6a2317ed7bd..83186e940ca7 100644
>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -1,13 +1,19 @@
>> /* SPDX-License-Identifier: GPL-2.0-or-later */
>> #include <xen/cache.h>
>> #include <xen/kernel.h>
>> +#include <xen/param.h>
>> #include <xen/sched.h>
>>
>> #include <xen/lib/x86/cpu-policy.h>
>>
>> +#include <asm/amd.h>
>> #include <asm/cpu-policy.h>
>> +#include <asm/hvm/nestedhvm.h>
>> +#include <asm/hvm/svm/svm.h>
>> #include <asm/msr-index.h>
>> +#include <asm/paging.h>
>> #include <asm/setup.h>
>> +#include <asm/xstate.h>
>>
>> struct cpu_policy __ro_after_init raw_cpu_policy;
>> struct cpu_policy __ro_after_init host_cpu_policy;
>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>> struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>> #endif
>>
>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>> +
>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>> + INIT_HVM_HAP_MAX_FEATURES;
>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>> + INIT_HVM_SHADOW_DEF_FEATURES;
>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>> + INIT_HVM_HAP_DEF_FEATURES;
>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>> +
>> +static const struct feature_name {
>> + const char *name;
>> + unsigned int bit;
>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>> +
>> +/*
>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>> + * matches found.
>> + *
>> + * always_inline, because this is init code only and we really don't want a
>> + * function pointer call in the middle of the loop.
>> + */
>> +static int __init always_inline parse_cpuid(
>> + const char *s, void (*callback)(unsigned int feat, bool val))
>> +{
>> + const char *ss;
>> + int val, rc = 0;
>> +
>> + do {
>> + const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>> + const char *feat;
>> +
>> + ss = strchr(s, ',');
>> + if ( !ss )
>> + ss = strchr(s, '\0');
>> +
>> + /* Skip the 'no-' prefix for name comparisons. */
>> + feat = s;
>> + if ( strncmp(s, "no-", 3) == 0 )
>> + feat += 3;
>> +
>> + /* (Re)initialise lhs and rhs for binary search. */
>> + lhs = feature_names;
>> + rhs = feature_names + ARRAY_SIZE(feature_names);
>> +
>> + while ( lhs < rhs )
>> + {
>> + int res;
>> +
>> + mid = lhs + (rhs - lhs) / 2;
>> + res = cmdline_strcmp(feat, mid->name);
>> +
>> + if ( res < 0 )
>> + {
>> + rhs = mid;
>> + continue;
>> + }
>> + if ( res > 0 )
>> + {
>> + lhs = mid + 1;
>> + continue;
>> + }
>> +
>> + if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>> + {
>> + callback(mid->bit, val);
>> + mid = NULL;
>> + }
>> +
>> + break;
>> + }
>> +
>> + /*
>> + * Mid being NULL means that the name and boolean were successfully
>> + * identified. Everything else is an error.
>> + */
>> + if ( mid )
>> + rc = -EINVAL;
>> +
>> + s = ss + 1;
>> + } while ( *ss );
>> +
>> + return rc;
>> +}
>> +
>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>> +{
>> + if ( !val )
>> + setup_clear_cpu_cap(feat);
>> + else if ( feat == X86_FEATURE_RDRAND &&
>> + (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>> + setup_force_cpu_cap(X86_FEATURE_RDRAND);
>> +}
>> +
>> +static int __init cf_check parse_xen_cpuid(const char *s)
>> +{
>> + return parse_cpuid(s, _parse_xen_cpuid);
>> +}
>> +custom_param("cpuid", parse_xen_cpuid);
>> +
>> +static bool __initdata dom0_cpuid_cmdline;
>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>> +
>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>> +{
>> + __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
>> + __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>> +}
>> +
>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>> +{
>> + dom0_cpuid_cmdline = true;
>> +
>> + return parse_cpuid(s, _parse_dom0_cpuid);
>> +}
>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
> Unless the plan is to completely remove cpuid.c, this command line
> handling would imo better fit there. I understand that to keep
> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
> would then need to be exposed (under a different name), but I think
> that's quite okay, the more that it's an __init function.
I'm not sure I agree. (I did debate this for a while before moving the
cmdline parsing.)
I do have some cleanup plans which will move code into cpuid.c, and
guest_cpuid() absolutely still lives there, but for these options
specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
bit names will work here too.
So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
is absolutely more cpu-policy.c content than cpuid.c content.
We can't get rid of the existing cmdline names, and I think documenting
our way out of the "it's not only CPUID bits any more" is better than
adding yet a new name.
>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>
>> return 0;
>> }
>> +
>> +void recalculate_cpuid_policy(struct domain *d)
>> +{
>> + struct cpu_policy *p = d->arch.cpuid;
>> + const struct cpu_policy *max = is_pv_domain(d)
>> + ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
>> + : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
> While this is how the original code was, wouldn't this want to use
> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
No. That will fail to link.
This trickery is necessary to drop the compiler-visible reference to
hvm_max_cpu_policy in !CONFIG_HVM builds.
This function is only called after the domain type has already been
established, which precludes calling it in a case where max will
evaluate to NULL, hence the ASSERT_UNREACHABLE() just below.
~Andrew
* Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
2023-04-04 15:45 ` Andrew Cooper
@ 2023-04-04 16:14 ` Jan Beulich
2023-04-04 20:50 ` Andrew Cooper
0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2023-04-04 16:14 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 17:45, Andrew Cooper wrote:
> On 04/04/2023 4:16 pm, Jan Beulich wrote:
>> On 04.04.2023 11:52, Andrew Cooper wrote:
>>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>>> struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>>> #endif
>>>
>>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>>> +
>>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>>> + INIT_HVM_HAP_MAX_FEATURES;
>>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>>> + INIT_HVM_SHADOW_DEF_FEATURES;
>>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>>> + INIT_HVM_HAP_DEF_FEATURES;
>>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>>> +
>>> +static const struct feature_name {
>>> + const char *name;
>>> + unsigned int bit;
>>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>>> +
>>> +/*
>>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>>> + * matches found.
>>> + *
>>> + * always_inline, because this is init code only and we really don't want a
>>> + * function pointer call in the middle of the loop.
>>> + */
>>> +static int __init always_inline parse_cpuid(
>>> + const char *s, void (*callback)(unsigned int feat, bool val))
>>> +{
>>> + const char *ss;
>>> + int val, rc = 0;
>>> +
>>> + do {
>>> + const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>>> + const char *feat;
>>> +
>>> + ss = strchr(s, ',');
>>> + if ( !ss )
>>> + ss = strchr(s, '\0');
>>> +
>>> + /* Skip the 'no-' prefix for name comparisons. */
>>> + feat = s;
>>> + if ( strncmp(s, "no-", 3) == 0 )
>>> + feat += 3;
>>> +
>>> + /* (Re)initialise lhs and rhs for binary search. */
>>> + lhs = feature_names;
>>> + rhs = feature_names + ARRAY_SIZE(feature_names);
>>> +
>>> + while ( lhs < rhs )
>>> + {
>>> + int res;
>>> +
>>> + mid = lhs + (rhs - lhs) / 2;
>>> + res = cmdline_strcmp(feat, mid->name);
>>> +
>>> + if ( res < 0 )
>>> + {
>>> + rhs = mid;
>>> + continue;
>>> + }
>>> + if ( res > 0 )
>>> + {
>>> + lhs = mid + 1;
>>> + continue;
>>> + }
>>> +
>>> + if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>>> + {
>>> + callback(mid->bit, val);
>>> + mid = NULL;
>>> + }
>>> +
>>> + break;
>>> + }
>>> +
>>> + /*
>>> + * Mid being NULL means that the name and boolean were successfully
>>> + * identified. Everything else is an error.
>>> + */
>>> + if ( mid )
>>> + rc = -EINVAL;
>>> +
>>> + s = ss + 1;
>>> + } while ( *ss );
>>> +
>>> + return rc;
>>> +}
>>> +
>>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>>> +{
>>> + if ( !val )
>>> + setup_clear_cpu_cap(feat);
>>> + else if ( feat == X86_FEATURE_RDRAND &&
>>> + (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>>> + setup_force_cpu_cap(X86_FEATURE_RDRAND);
>>> +}
>>> +
>>> +static int __init cf_check parse_xen_cpuid(const char *s)
>>> +{
>>> + return parse_cpuid(s, _parse_xen_cpuid);
>>> +}
>>> +custom_param("cpuid", parse_xen_cpuid);
>>> +
>>> +static bool __initdata dom0_cpuid_cmdline;
>>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>>> +
>>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>>> +{
>>> + __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
>>> + __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>>> +}
>>> +
>>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>>> +{
>>> + dom0_cpuid_cmdline = true;
>>> +
>>> + return parse_cpuid(s, _parse_dom0_cpuid);
>>> +}
>>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
>> Unless the plan is to completely remove cpuid.c, this command line
>> handling would imo better fit there. I understand that to keep
>> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
>> would then need to be exposed (under a different name), but I think
>> that's quite okay, the more that it's an __init function.
>
> I'm not sure I agree. (I did debate this for a while before moving the
> cmdline parsing.)
>
> I do have some cleanup plans which will move code into cpuid.c, and
> guest_cpuid() absolutely still lives there, but for these options
> specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
> bit names will work here too.
>
> So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
> is absolutely more cpu-policy.c content than cpuid.c content.
>
> We can't get rid of the existing cmdline names, and I think documenting
> our way out of the "it's not only CPUID bits any more" is better than
> adding yet a new name.
Hmm, yes:
Acked-by: Jan Beulich <jbeulich@suse.com>
>>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>>
>>> return 0;
>>> }
>>> +
>>> +void recalculate_cpuid_policy(struct domain *d)
>>> +{
>>> + struct cpu_policy *p = d->arch.cpuid;
>>> + const struct cpu_policy *max = is_pv_domain(d)
>>> + ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
>>> + : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
>> While this is how the original code was, wouldn't this want to use
>> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
>
> No. That will fail to link.
Why? hvm_enabled is a #define (to false) only when !HVM.
> This trickery is necessary to drop the compiler-visible reference to
> hvm_max_cpu_policy in !CONFIG_HVM builds.
>
> This function is only called after the domain type has already been
> established, which precludes calling it in a case where max will
> evaluate to NULL, hence the ASSERT_UNREACHABLE() just below.
Right, and this will hold when HVM=y but no VMX/SVM was found.
Jan
* Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
2023-04-04 16:14 ` Jan Beulich
@ 2023-04-04 20:50 ` Andrew Cooper
0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 20:50 UTC (permalink / raw)
To: Jan Beulich; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04/04/2023 5:14 pm, Jan Beulich wrote:
> On 04.04.2023 17:45, Andrew Cooper wrote:
>> On 04/04/2023 4:16 pm, Jan Beulich wrote:
>>> On 04.04.2023 11:52, Andrew Cooper wrote:
>>>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>>>> struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>>>> #endif
>>>>
>>>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>>>> +
>>>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>>>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>>>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>>>> + INIT_HVM_HAP_MAX_FEATURES;
>>>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>>>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>>>> + INIT_HVM_SHADOW_DEF_FEATURES;
>>>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>>>> + INIT_HVM_HAP_DEF_FEATURES;
>>>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>>>> +
>>>> +static const struct feature_name {
>>>> + const char *name;
>>>> + unsigned int bit;
>>>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>>>> +
>>>> +/*
>>>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>>>> + * matches found.
>>>> + *
>>>> + * always_inline, because this is init code only and we really don't want a
>>>> + * function pointer call in the middle of the loop.
>>>> + */
>>>> +static int __init always_inline parse_cpuid(
>>>> + const char *s, void (*callback)(unsigned int feat, bool val))
>>>> +{
>>>> + const char *ss;
>>>> + int val, rc = 0;
>>>> +
>>>> + do {
>>>> + const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>>>> + const char *feat;
>>>> +
>>>> + ss = strchr(s, ',');
>>>> + if ( !ss )
>>>> + ss = strchr(s, '\0');
>>>> +
>>>> + /* Skip the 'no-' prefix for name comparisons. */
>>>> + feat = s;
>>>> + if ( strncmp(s, "no-", 3) == 0 )
>>>> + feat += 3;
>>>> +
>>>> + /* (Re)initialise lhs and rhs for binary search. */
>>>> + lhs = feature_names;
>>>> + rhs = feature_names + ARRAY_SIZE(feature_names);
>>>> +
>>>> + while ( lhs < rhs )
>>>> + {
>>>> + int res;
>>>> +
>>>> + mid = lhs + (rhs - lhs) / 2;
>>>> + res = cmdline_strcmp(feat, mid->name);
>>>> +
>>>> + if ( res < 0 )
>>>> + {
>>>> + rhs = mid;
>>>> + continue;
>>>> + }
>>>> + if ( res > 0 )
>>>> + {
>>>> + lhs = mid + 1;
>>>> + continue;
>>>> + }
>>>> +
>>>> + if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>>>> + {
>>>> + callback(mid->bit, val);
>>>> + mid = NULL;
>>>> + }
>>>> +
>>>> + break;
>>>> + }
>>>> +
>>>> + /*
>>>> + * Mid being NULL means that the name and boolean were successfully
>>>> + * identified. Everything else is an error.
>>>> + */
>>>> + if ( mid )
>>>> + rc = -EINVAL;
>>>> +
>>>> + s = ss + 1;
>>>> + } while ( *ss );
>>>> +
>>>> + return rc;
>>>> +}
>>>> +
>>>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>>>> +{
>>>> + if ( !val )
>>>> + setup_clear_cpu_cap(feat);
>>>> + else if ( feat == X86_FEATURE_RDRAND &&
>>>> + (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>>>> + setup_force_cpu_cap(X86_FEATURE_RDRAND);
>>>> +}
>>>> +
>>>> +static int __init cf_check parse_xen_cpuid(const char *s)
>>>> +{
>>>> + return parse_cpuid(s, _parse_xen_cpuid);
>>>> +}
>>>> +custom_param("cpuid", parse_xen_cpuid);
>>>> +
>>>> +static bool __initdata dom0_cpuid_cmdline;
>>>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>>>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>>>> +
>>>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>>>> +{
>>>> + __set_bit (feat, val ? dom0_enable_feat : dom0_disable_feat);
>>>> + __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>>>> +}
>>>> +
>>>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>>>> +{
>>>> + dom0_cpuid_cmdline = true;
>>>> +
>>>> + return parse_cpuid(s, _parse_dom0_cpuid);
>>>> +}
>>>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
>>> Unless the plan is to completely remove cpuid.c, this command line
>>> handling would imo better fit there. I understand that to keep
>>> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
>>> would then need to be exposed (under a different name), but I think
>>> that's quite okay, the more that it's an __init function.
>> I'm not sure I agree. (I did debate this for a while before moving the
>> cmdline parsing.)
>>
>> I do have some cleanup plans which will move code into cpuid.c, and
>> guest_cpuid() absolutely still lives there, but for these options
>> specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
>> bit names will work here too.
>>
>> So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
>> is absolutely more cpu-policy.c content than cpuid.c content.
>>
>> We can't get rid of the existing cmdline names, and I think documenting
>> our way out of the "it's not only CPUID bits any more" is better than
>> adding yet a new name.
> Hmm, yes:
> Acked-by: Jan Beulich <jbeulich@suse.com>
Thanks.
>
>>>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>>>
>>>> return 0;
>>>> }
>>>> +
>>>> +void recalculate_cpuid_policy(struct domain *d)
>>>> +{
>>>> + struct cpu_policy *p = d->arch.cpuid;
>>>> + const struct cpu_policy *max = is_pv_domain(d)
>>>> + ? (IS_ENABLED(CONFIG_PV) ? &pv_max_cpu_policy : NULL)
>>>> + : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
>>> While this is how the original code was, wouldn't this want to use
>>> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
>> No. That will fail to link.
> Why? hvm_enabled is a #define (to false) only when !HVM.
Hmm, maybe.
But honestly, I want to keep the code as it is, because this is intended
to be code movement only, and because it's currently symmetric between the
two cases.
~Andrew
* [PATCH v3] libx86: Update library API for cpu_policy
2023-04-04 9:52 ` [PATCH v2 14/15] libx86: Update library API for cpu_policy Andrew Cooper
2023-04-04 15:34 ` Jan Beulich
@ 2023-04-04 21:06 ` Andrew Cooper
2023-04-05 8:19 ` Jan Beulich
1 sibling, 1 reply; 30+ messages in thread
From: Andrew Cooper @ 2023-04-04 21:06 UTC (permalink / raw)
To: Xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Adjust the API and comments appropriately.
x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
TODO in the short term.
No practical change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
v2:
* New
v3:
* Update x86_cpu_policy_lookup_deep_deps() too and write an API doc.
---
tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 4 +-
tools/tests/cpu-policy/test-cpu-policy.c | 4 +-
tools/tests/x86_emulator/x86-emulate.c | 2 +-
xen/arch/x86/cpu-policy.c | 4 +-
xen/arch/x86/cpu/common.c | 2 +-
xen/arch/x86/domctl.c | 2 +-
xen/arch/x86/xstate.c | 4 +-
xen/include/xen/lib/x86/cpu-policy.h | 49 +++++++++++++----------
xen/lib/x86/cpuid.c | 26 ++++++------
xen/lib/x86/msr.c | 4 +-
10 files changed, 55 insertions(+), 46 deletions(-)
diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 466bdbb1d91a..7d8467b4b258 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -48,8 +48,8 @@ static void check_policy(struct cpu_policy *cp)
* Fix up the data in the source policy which isn't expected to survive
* serialisation.
*/
- x86_cpuid_policy_clear_out_of_range_leaves(cp);
- x86_cpuid_policy_recalc_synth(cp);
+ x86_cpu_policy_clear_out_of_range_leaves(cp);
+ x86_cpu_policy_recalc_synth(cp);
/* Serialise... */
rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index a4ca07f33973..f1d968adfc39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -105,7 +105,7 @@ static void test_cpuid_current(void)
printf("Testing CPUID on current CPU\n");
- x86_cpuid_policy_fill_native(&p);
+ x86_cpu_policy_fill_native(&p);
rc = x86_cpuid_copy_to_buffer(&p, leaves, &nr);
if ( rc != 0 )
@@ -554,7 +554,7 @@ static void test_cpuid_out_of_range_clearing(void)
void *ptr;
unsigned int nr_markers;
- x86_cpuid_policy_clear_out_of_range_leaves(p);
+ x86_cpu_policy_clear_out_of_range_leaves(p);
/* Count the number of 0xc2's still remaining. */
for ( ptr = p, nr_markers = 0;
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index 2692404df906..7d2d57f7591a 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -75,7 +75,7 @@ bool emul_test_init(void)
unsigned long sp;
- x86_cpuid_policy_fill_native(&cp);
+ x86_cpu_policy_fill_native(&cp);
/*
* The emulator doesn't use these instructions, so can always emulate
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 83186e940ca7..a58bf6cad54e 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -176,7 +176,7 @@ static void sanitise_featureset(uint32_t *fs)
for_each_set_bit(i, (void *)disabled_features,
sizeof(disabled_features) * 8)
{
- const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
+ const uint32_t *dfs = x86_cpu_policy_lookup_deep_deps(i);
unsigned int j;
ASSERT(dfs); /* deep_features[] should guarantee this. */
@@ -347,7 +347,7 @@ static void __init calculate_raw_policy(void)
{
struct cpu_policy *p = &raw_cpu_policy;
- x86_cpuid_policy_fill_native(p);
+ x86_cpu_policy_fill_native(p);
/* Nothing good will come from Xen and libx86 disagreeing on vendor. */
ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index f11dcda57a69..edc4db1335eb 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -75,7 +75,7 @@ void __init setup_clear_cpu_cap(unsigned int cap)
__builtin_return_address(0), cap);
__clear_bit(cap, boot_cpu_data.x86_capability);
- dfs = x86_cpuid_lookup_deep_deps(cap);
+ dfs = x86_cpu_policy_lookup_deep_deps(cap);
if (!dfs)
return;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index c02528594102..1a8b4cff48ee 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -66,7 +66,7 @@ static int update_domain_cpu_policy(struct domain *d,
goto out;
/* Trim any newly-stale out-of-range leaves. */
- x86_cpuid_policy_clear_out_of_range_leaves(new);
+ x86_cpu_policy_clear_out_of_range_leaves(new);
/* Audit the combined dataset. */
ret = x86_cpu_policies_are_compatible(sys, new, &err);
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index d481e1db3e7e..92496f379546 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -684,7 +684,7 @@ void xstate_init(struct cpuinfo_x86 *c)
int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
const struct xsave_hdr *hdr)
{
- uint64_t xcr0_max = cpuid_policy_xcr0_max(d->arch.cpuid);
+ uint64_t xcr0_max = cpu_policy_xcr0_max(d->arch.cpuid);
unsigned int i;
if ( (hdr->xstate_bv & ~xcr0_accum) ||
@@ -708,7 +708,7 @@ int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
int handle_xsetbv(u32 index, u64 new_bv)
{
struct vcpu *curr = current;
- uint64_t xcr0_max = cpuid_policy_xcr0_max(curr->domain->arch.cpuid);
+ uint64_t xcr0_max = cpu_policy_xcr0_max(curr->domain->arch.cpuid);
u64 mask;
if ( index != XCR_XFEATURE_ENABLED_MASK )
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 57b4633c861e..cf7de0f29ccd 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -399,33 +399,38 @@ void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
struct cpu_policy *p);
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xcr0_max(const struct cpu_policy *p)
{
return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
}
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xstates(const struct cpu_policy *p)
{
uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
}
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
+/**
+ * For a specific feature, look up the dependent features. Returns NULL if
+ * this feature has no dependencies. Otherwise return a featureset of
+ * dependent features, which has been recursively flattened.
+ */
+const uint32_t *x86_cpu_policy_lookup_deep_deps(uint32_t feature);
/**
- * Recalculate the content in a CPUID policy which is derived from raw data.
+ * Recalculate the content in a CPU policy which is derived from raw data.
*/
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p);
/**
- * Fill a CPUID policy using the native CPUID instruction.
+ * Fill a CPU policy using the native CPUID/RDMSR instructions.
*
* No sanitisation is performed, but synthesised values are calculated.
* Values may be influenced by a hypervisor or from masking/faulting
* configuration.
*/
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+void x86_cpu_policy_fill_native(struct cpu_policy *p);
/**
* Clear leaf data beyond the policies max leaf/subleaf settings.
@@ -436,7 +441,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
* with out-of-range leaves with stale content in them. This helper clears
* them.
*/
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p);
#ifdef __XEN__
#include <public/arch-x86/xen.h>
@@ -449,9 +454,10 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
#endif
/**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
+ * Serialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
*
- * @param policy The cpuid_policy to serialise.
+ * @param policy The cpu_policy to serialise.
* @param leaves The array of leaves to serialise into.
* @param nr_entries The number of entries in 'leaves'.
* @returns -errno
@@ -460,13 +466,14 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
* leaves array is too short. On success, nr_entries is updated with the
* actual number of leaves written.
*/
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *policy,
cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
/**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ * Unserialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
*
- * @param policy The cpuid_policy to unserialise into.
+ * @param policy The cpu_policy to unserialise into.
* @param leaves The array of leaves to unserialise from.
* @param nr_entries The number of entries in 'leaves'.
* @param err_leaf Optional hint for error diagnostics.
@@ -474,21 +481,21 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
* @returns -errno
*
* Reads at most CPUID_MAX_SERIALISED_LEAVES. May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * incoming leaf is out of range of cpu_policy, in which case the optional
* err_* pointers will identify the out-of-range indices.
*
* No content validation of in-range leaves is performed. Synthesised data is
* recalculated.
*/
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *policy,
const cpuid_leaf_buffer_t leaves,
uint32_t nr_entries, uint32_t *err_leaf,
uint32_t *err_subleaf);
/**
- * Serialise an msr_policy object into an array.
+ * Serialise the MSRs of a cpu_policy object into an array.
*
- * @param policy The msr_policy to serialise.
+ * @param policy The cpu_policy to serialise.
* @param msrs The array of msrs to serialise into.
* @param nr_entries The number of entries in 'msrs'.
* @returns -errno
@@ -497,13 +504,13 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
* buffer array is too short. On success, nr_entries is updated with the
* actual number of msrs written.
*/
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+int x86_msr_copy_to_buffer(const struct cpu_policy *policy,
msr_entry_buffer_t msrs, uint32_t *nr_entries);
/**
- * Unserialise an msr_policy object from an array of msrs.
+ * Unserialise the MSRs of a cpu_policy object from an array of msrs.
*
- * @param policy The msr_policy object to unserialise into.
+ * @param policy The cpu_policy object to unserialise into.
* @param msrs The array of msrs to unserialise from.
* @param nr_entries The number of entries in 'msrs'.
* @param err_msr Optional hint for error diagnostics.
@@ -517,7 +524,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *policy,
*
* No content validation is performed on the data stored in the policy object.
*/
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
+int x86_msr_copy_from_buffer(struct cpu_policy *policy,
const msr_entry_buffer_t msrs, uint32_t nr_entries,
uint32_t *err_msr);
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 734e90823a63..68aafb404927 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -102,13 +102,13 @@ void x86_cpu_featureset_to_policy(
p->feat._7d1 = fs[FEATURESET_7d1];
}
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
{
p->x86_vendor = x86_cpuid_lookup_vendor(
p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
}
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
+void x86_cpu_policy_fill_native(struct cpu_policy *p)
{
unsigned int i;
@@ -199,7 +199,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
- xstates = cpuid_policy_xstates(p);
+ xstates = cpu_policy_xstates(p);
/* This logic will probably need adjusting when XCR0[63] gets used. */
BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
@@ -222,10 +222,12 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
p->hv_limit = 0;
p->hv2_limit = 0;
- x86_cpuid_policy_recalc_synth(p);
+ /* TODO MSRs */
+
+ x86_cpu_policy_recalc_synth(p);
}
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p)
{
unsigned int i;
@@ -260,7 +262,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
zero_leaves(p->topo.raw, i, ARRAY_SIZE(p->topo.raw) - 1);
}
- if ( p->basic.max_leaf < 0xd || !cpuid_policy_xstates(p) )
+ if ( p->basic.max_leaf < 0xd || !cpu_policy_xstates(p) )
memset(p->xstate.raw, 0, sizeof(p->xstate.raw));
else
{
@@ -268,7 +270,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
/* First two leaves always valid. Rest depend on xstates. */
- i = max(2, 64 - __builtin_clzll(cpuid_policy_xstates(p)));
+ i = max(2, 64 - __builtin_clzll(cpu_policy_xstates(p)));
zero_leaves(p->xstate.raw, i,
ARRAY_SIZE(p->xstate.raw) - 1);
@@ -278,7 +280,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
ARRAY_SIZE(p->extd.raw) - 1);
}
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
+const uint32_t *x86_cpu_policy_lookup_deep_deps(uint32_t feature)
{
static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
static const struct {
@@ -333,7 +335,7 @@ static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
return 0;
}
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *p,
cpuid_leaf_buffer_t leaves, uint32_t *nr_entries_p)
{
const uint32_t nr_entries = *nr_entries_p;
@@ -383,7 +385,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
case 0xd:
{
- uint64_t xstates = cpuid_policy_xstates(p);
+ uint64_t xstates = cpu_policy_xstates(p);
COPY_LEAF(leaf, 0, &p->xstate.raw[0]);
COPY_LEAF(leaf, 1, &p->xstate.raw[1]);
@@ -419,7 +421,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
return 0;
}
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *p,
const cpuid_leaf_buffer_t leaves,
uint32_t nr_entries, uint32_t *err_leaf,
uint32_t *err_subleaf)
@@ -522,7 +524,7 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
}
}
- x86_cpuid_policy_recalc_synth(p);
+ x86_cpu_policy_recalc_synth(p);
return 0;
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index c4d885e7b568..e04b9ca01302 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -23,7 +23,7 @@ static int copy_msr_to_buffer(uint32_t idx, uint64_t val,
return 0;
}
-int x86_msr_copy_to_buffer(const struct msr_policy *p,
+int x86_msr_copy_to_buffer(const struct cpu_policy *p,
msr_entry_buffer_t msrs, uint32_t *nr_entries_p)
{
const uint32_t nr_entries = *nr_entries_p;
@@ -48,7 +48,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *p,
return 0;
}
-int x86_msr_copy_from_buffer(struct msr_policy *p,
+int x86_msr_copy_from_buffer(struct cpu_policy *p,
const msr_entry_buffer_t msrs, uint32_t nr_entries,
uint32_t *err_msr)
{
--
2.30.2
* Re: [PATCH v3] libx86: Update library API for cpu_policy
2023-04-04 21:06 ` [PATCH v3] " Andrew Cooper
@ 2023-04-05 8:19 ` Jan Beulich
0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2023-04-05 8:19 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Xen-devel
On 04.04.2023 23:06, Andrew Cooper wrote:
> Adjust the API and comments appropriately.
>
> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
> TODO in the short term.
>
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Thread overview: 30+ messages
2023-04-04 9:52 [PATCH v2 00/15] x86: Merge cpuid and msr policy objects Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpuid_policy Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 02/15] x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 03/15] x86: Rename struct cpuid_policy to struct cpu_policy Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 04/15] x86: Merge struct msr_policy into " Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 05/15] x86: Merge the system {cpuid,msr} policy objects Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 06/15] x86: Merge a domain's " Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 07/15] x86: Merge xc_cpu_policy's cpuid and msr objects Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 08/15] x86: Drop struct old_cpu_policy Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors Andrew Cooper
2023-04-04 15:01 ` Jan Beulich
2023-04-04 15:26 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c Andrew Cooper
2023-04-04 15:04 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 11/15] x86/boot: Merge CPUID " Andrew Cooper
2023-04-04 15:16 ` Jan Beulich
2023-04-04 15:45 ` Andrew Cooper
2023-04-04 16:14 ` Jan Beulich
2023-04-04 20:50 ` Andrew Cooper
2023-04-04 9:52 ` [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy Andrew Cooper
2023-04-04 15:22 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer Andrew Cooper
2023-04-04 15:25 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 14/15] libx86: Update library API for cpu_policy Andrew Cooper
2023-04-04 15:34 ` Jan Beulich
2023-04-04 15:36 ` Andrew Cooper
2023-04-04 21:06 ` [PATCH v3] " Andrew Cooper
2023-04-05 8:19 ` Jan Beulich
2023-04-04 9:52 ` [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines Andrew Cooper
2023-04-04 15:37 ` Jan Beulich