* [RFC 00/10] x86: Make cpu virtualization support configurable
@ 2023-02-13 14:57 Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options Xenia Ragiadakou
                   ` (9 more replies)
  0 siblings, 10 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant, Jun Nakajima, Kevin Tian

This series aims to make CPU virtualization technology support in Xen
configurable.
Currently, both the AMD-V and Intel VT-x drivers are built irrespective of the
target platform.
The series adds two new Kconfig controls, AMD_SVM and INTEL_VMX, that can be
used to switch to a finer-grained configuration for a given platform, and
reduce dead code.

The code separation is done using the new config guards.
The series is sent as an RFC to gather feedback in case the changes are going
in the wrong direction (I'm mostly concerned about hap (ept/npt)).

It depends on patch
"x86/vlapic: call vmx_vlapic_msr_changed through an hvm_function callback"
(otherwise it won't compile with Intel VT-x support disabled)
and on the series "Make x86 IOMMU driver support configurable" (AFAIU, the
series has been reviewed but not merged yet) because Intel IOMMU depends on
Intel VT-x for posted interrupts. 

Xenia Ragiadakou (10):
  x86: introduce AMD-V and Intel VT-x Kconfig options
  x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers
  x86/p2m: guard vmx specific ept functions with INTEL_VMX
  x86/vpmu: separate AMD-V and Intel VT-x init arch_vpmu_ops
    initializers
  x86/traps: guard vmx specific functions with INTEL_VMX
  x86/domain: guard svm specific functions with AMD_SVM
  x86/oprofile: guard svm specific symbols with AMD_SVM
  x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled
  x86/ioreq: guard VIO_realmode_completion with INTEL_VMX
  x86/hvm: make AMD-V and Intel VT-x support configurable

 xen/arch/x86/Kconfig                    | 20 ++++++++++++++
 xen/arch/x86/cpu/Makefile               |  4 ++-
 xen/arch/x86/domain.c                   |  4 +--
 xen/arch/x86/hvm/Makefile               |  4 +--
 xen/arch/x86/hvm/ioreq.c                |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      |  8 ++++++
 xen/arch/x86/include/asm/hvm/svm/svm.h  | 17 ++++++++++++
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 35 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/vmx/vmx.h  | 18 +++++++++++++
 xen/arch/x86/include/asm/vpmu.h         |  9 +++++++
 xen/arch/x86/mm/Makefile                |  3 ++-
 xen/arch/x86/mm/hap/Makefile            |  2 +-
 xen/arch/x86/mm/p2m.h                   | 16 +++++++++++
 xen/arch/x86/oprofile/op_model_athlon.c |  2 +-
 xen/arch/x86/traps.c                    |  4 +--
 xen/drivers/passthrough/Kconfig         |  2 +-
 16 files changed, 139 insertions(+), 11 deletions(-)

-- 
2.37.2




* [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-13 15:11   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers Xenia Ragiadakou
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant

Introduce two new Kconfig options, AMD_SVM and INTEL_VMX, to allow code
specific to each virtualization technology to be separated and, when not
required, stripped.
AMD_SVM will be used to enable virtual machine extensions on platforms that
implement the AMD Virtualization Technology (AMD-V).
INTEL_VMX will be used to enable virtual machine extensions on platforms that
implement the Intel Virtualization Technology (Intel VT-x).

Both features depend on HVM support.

Since, at this point, disabling either of them would cause Xen to fail to
compile, the options default to enabled when HVM is enabled and are not
selectable by the user.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/Kconfig            | 6 ++++++
 xen/arch/x86/cpu/Makefile       | 4 +++-
 xen/arch/x86/hvm/Makefile       | 4 ++--
 xen/arch/x86/mm/Makefile        | 3 ++-
 xen/arch/x86/mm/hap/Makefile    | 2 +-
 xen/drivers/passthrough/Kconfig | 2 +-
 6 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6a7825f4ba..2a72111c23 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -117,6 +117,12 @@ config HVM
 
 	  If unsure, say Y.
 
+config AMD_SVM
+	def_bool y if HVM
+
+config INTEL_VMX
+	def_bool y if HVM
+
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
 	depends on HAS_AS_CET_SS
diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
index 35561fe51d..08bdf4abb8 100644
--- a/xen/arch/x86/cpu/Makefile
+++ b/xen/arch/x86/cpu/Makefile
@@ -10,4 +10,6 @@ obj-y += intel.o
 obj-y += intel_cacheinfo.o
 obj-y += mwait-idle.o
 obj-y += shanghai.o
-obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
+obj-y += vpmu.o
+obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
+obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191544..4c1fa5c6c2 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,5 +1,5 @@
-obj-y += svm/
-obj-y += vmx/
+obj-$(CONFIG_AMD_SVM) += svm/
+obj-$(CONFIG_INTEL_VMX) += vmx/
 obj-y += viridian/
 
 obj-y += asid.o
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 0803ac9297..c53c94308a 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
 obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
-obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
+obj-$(CONFIG_HVM) += p2m-pod.o p2m-pt.o
+obj-$(CONFIG_INTEL_VMX) += p2m-ept.o
 obj-y += paging.o
 obj-y += physmap.o
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 8ef54b1faa..67c29b2162 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,4 +3,4 @@ obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-y += guest_walk_4.o
 obj-y += nested_hap.o
-obj-y += nested_ept.o
+obj-$(CONFIG_INTEL_VMX) += nested_ept.o
diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 864fcf3b0c..f95e7e5902 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -51,7 +51,7 @@ config AMD_IOMMU
 
 config INTEL_IOMMU
 	bool "Intel VT-d" if EXPERT
-	depends on X86
+	depends on X86 && INTEL_VMX
 	default y
 	help
 	  Enables I/O virtualization on platforms that implement the
-- 
2.37.2




* [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-13 16:47   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX Xenia Ragiadakou
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

Since start_svm() is AMD-V specific while start_vmx() is Intel VT-x specific,
they need to be guarded with AMD_SVM and INTEL_VMX, respectively.
Instead of adding #ifdef guards around the function calls, implement them as
static inline null-returning functions when the respective technology is not
enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/include/asm/hvm/hvm.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 43d3fc2498..353e48f4e3 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -261,8 +261,16 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#ifdef CONFIG_AMD_SVM
 extern const struct hvm_function_table *start_svm(void);
+#else
+static inline const struct hvm_function_table *start_svm(void) { return NULL; }
+#endif
+#ifdef CONFIG_INTEL_VMX
 extern const struct hvm_function_table *start_vmx(void);
+#else
+static inline const struct hvm_function_table *start_vmx(void) { return NULL; }
+#endif
 
 int hvm_domain_initialise(struct domain *d,
                           const struct xen_domctl_createdomain *config);
-- 
2.37.2




* [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-13 16:55   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 04/10] x86/vpmu: separate AMD-V and Intel VT-x init arch_vpmu_ops initializers Xenia Ragiadakou
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, George Dunlap, Roger Pau Monné, Wei Liu

The functions ept_p2m_init(), ept_p2m_uninit() and p2m_init_altp2m_ept() are
VT-x specific.
Implement them as unreachable static inline functions when !INTEL_VMX.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/mm/p2m.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/xen/arch/x86/mm/p2m.h b/xen/arch/x86/mm/p2m.h
index cc0f6766e4..ac80414688 100644
--- a/xen/arch/x86/mm/p2m.h
+++ b/xen/arch/x86/mm/p2m.h
@@ -35,9 +35,25 @@ void p2m_nestedp2m_init(struct p2m_domain *p2m);
 int p2m_init_nestedp2m(struct domain *d);
 void p2m_teardown_nestedp2m(struct domain *d);
 
+#ifdef CONFIG_INTEL_VMX
 int ept_p2m_init(struct p2m_domain *p2m);
 void ept_p2m_uninit(struct p2m_domain *p2m);
 void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
+#else
+static inline int ept_p2m_init(struct p2m_domain *p2m)
+{
+    ASSERT_UNREACHABLE();
+    return -EINVAL;
+}
+static inline void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    ASSERT_UNREACHABLE();
+}
+static inline void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
+{
+    ASSERT_UNREACHABLE();
+}
+#endif
 
 /*
  * Local variables:
-- 
2.37.2




* [RFC 04/10] x86/vpmu: separate AMD-V and Intel VT-x init arch_vpmu_ops initializers
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (2 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX Xenia Ragiadakou
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

The function core2_vpmu_init() is VT-x specific while the functions
amd_vpmu_init() and hygon_vpmu_init() are AMD-V specific, so they need to be
guarded with INTEL_VMX and AMD_SVM, respectively.
Instead of adding #ifdef guards around the function calls in common vpmu code,
implement them as static inline null-returning functions when the respective
technology is not enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/include/asm/vpmu.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
index 05e1fbfccf..1e08afb7af 100644
--- a/xen/arch/x86/include/asm/vpmu.h
+++ b/xen/arch/x86/include/asm/vpmu.h
@@ -53,9 +53,18 @@ struct arch_vpmu_ops {
 #endif
 };
 
+#ifdef CONFIG_INTEL_VMX
 const struct arch_vpmu_ops *core2_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *core2_vpmu_init(void) { return NULL; }
+#endif /* CONFIG_INTEL_VMX */
+#ifdef CONFIG_AMD_SVM
 const struct arch_vpmu_ops *amd_vpmu_init(void);
 const struct arch_vpmu_ops *hygon_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *amd_vpmu_init(void) { return NULL; }
+static inline const struct arch_vpmu_ops *hygon_vpmu_init(void) { return NULL; }
+#endif /* CONFIG_AMD_SVM */
 
 struct vpmu_struct {
     u32 flags;
-- 
2.37.2




* [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (3 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 04/10] x86/vpmu: separate AMD-V and Intel VT-x init arch_vpmu_ops initializers Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-14 16:20   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM Xenia Ragiadakou
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

The functions vmx_vmcs_enter() and vmx_vmcs_exit() are VT-x specific.
Guard their calls with INTEL_VMX.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/traps.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index cade9e12f8..77b4f772f2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -694,7 +694,7 @@ void vcpu_show_execution_state(struct vcpu *v)
 
     vcpu_pause(v); /* acceptably dangerous */
 
-#ifdef CONFIG_HVM
+#ifdef CONFIG_INTEL_VMX
     /*
      * For VMX special care is needed: Reading some of the register state will
      * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -732,7 +732,7 @@ void vcpu_show_execution_state(struct vcpu *v)
         console_unlock_recursive_irqrestore(flags);
     }
 
-#ifdef CONFIG_HVM
+#ifdef CONFIG_INTEL_VMX
     if ( cpu_has_vmx && is_hvm_vcpu(v) )
         vmx_vmcs_exit(v);
 #endif
-- 
2.37.2




* [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (4 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-14 16:24   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 07/10] x86/oprofile: guard svm specific symbols " Xenia Ragiadakou
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific
so guard their calls in common code with AMD_SVM.

Since AMD_SVM depends on HVM, it can be used alone.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/domain.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index db3ebf062d..576a410f4f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1628,7 +1628,7 @@ static void load_segments(struct vcpu *n)
         if ( !(n->arch.flags & TF_kernel_mode) )
             SWAP(gsb, gss);
 
-#ifdef CONFIG_HVM
+#ifdef CONFIG_AMD_SVM
         if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
             fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
                                        n->arch.pv.fs_base, gsb, gss);
@@ -1951,7 +1951,7 @@ static void __context_switch(void)
 
     write_ptbase(n);
 
-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+#if defined(CONFIG_PV) && defined(CONFIG_AMD_SVM)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
     if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
         svm_load_segs_prefetch();
-- 
2.37.2




* [RFC 07/10] x86/oprofile: guard svm specific symbols with AMD_SVM
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (5 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-14 16:26   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled Xenia Ragiadakou
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

The symbol svm_stgi_label is AMD-V specific so guard its usage in common code
with AMD_SVM.

Since AMD_SVM depends on HVM, it can be used alone.
Also, use #ifdef instead of #if.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/oprofile/op_model_athlon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c
index 69fd3fcc86..782fa606ee 100644
--- a/xen/arch/x86/oprofile/op_model_athlon.c
+++ b/xen/arch/x86/oprofile/op_model_athlon.c
@@ -320,7 +320,7 @@ static int cf_check athlon_check_ctrs(
 	struct vcpu *v = current;
 	unsigned int const nr_ctrs = model->num_counters;
 
-#if CONFIG_HVM
+#ifdef CONFIG_AMD_SVM
 	struct cpu_user_regs *guest_regs = guest_cpu_user_regs();
 
 	if (!guest_mode(regs) &&
-- 
2.37.2




* [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (6 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 07/10] x86/oprofile: guard svm specific symbols " Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-14 16:36   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX Xenia Ragiadakou
  2023-02-13 14:57 ` [RFC 10/10] x86/hvm: make AMD-V and Intel VT-x support configurable Xenia Ragiadakou
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian

To be able to use cpu_has_svm/vmx_* macros in common code without enclosing
them inside #ifdef guards when the respective virtualization technology is
not enabled, define them as false when not applicable.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/include/asm/hvm/svm/svm.h  | 17 ++++++++++++
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 35 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/vmx/vmx.h  | 18 +++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/xen/arch/x86/include/asm/hvm/svm/svm.h b/xen/arch/x86/include/asm/hvm/svm/svm.h
index 65e35a4f59..556584851b 100644
--- a/xen/arch/x86/include/asm/hvm/svm/svm.h
+++ b/xen/arch/x86/include/asm/hvm/svm/svm.h
@@ -80,6 +80,7 @@ extern u32 svm_feature_flags;
 #define SVM_FEATURE_SSS           19 /* NPT Supervisor Shadow Stacks */
 #define SVM_FEATURE_SPEC_CTRL     20 /* MSR_SPEC_CTRL virtualisation */
 
+#ifdef CONFIG_AMD_SVM
 #define cpu_has_svm_feature(f) (svm_feature_flags & (1u << (f)))
 #define cpu_has_svm_npt       cpu_has_svm_feature(SVM_FEATURE_NPT)
 #define cpu_has_svm_lbrv      cpu_has_svm_feature(SVM_FEATURE_LBRV)
@@ -95,6 +96,22 @@ extern u32 svm_feature_flags;
 #define cpu_has_svm_vloadsave cpu_has_svm_feature(SVM_FEATURE_VLOADSAVE)
 #define cpu_has_svm_sss       cpu_has_svm_feature(SVM_FEATURE_SSS)
 #define cpu_has_svm_spec_ctrl cpu_has_svm_feature(SVM_FEATURE_SPEC_CTRL)
+#else
+#define cpu_has_svm_npt       false
+#define cpu_has_svm_lbrv      false
+#define cpu_has_svm_svml      false
+#define cpu_has_svm_nrips     false
+#define cpu_has_svm_cleanbits false
+#define cpu_has_svm_flushbyasid false
+#define cpu_has_svm_decode    false
+#define cpu_has_svm_vgif      false
+#define cpu_has_pause_filter  false
+#define cpu_has_pause_thresh  false
+#define cpu_has_tsc_ratio     false
+#define cpu_has_svm_vloadsave false
+#define cpu_has_svm_sss       false
+#define cpu_has_svm_spec_ctrl false
+#endif /* CONFIG_AMD_SVM */
 
 #define SVM_PAUSEFILTER_INIT    4000
 #define SVM_PAUSETHRESH_INIT    1000
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 0a84e74478..03d1f7480a 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -294,6 +294,7 @@ extern u64 vmx_ept_vpid_cap;
 
 #define VMX_TSC_MULTIPLIER_MAX                  0xffffffffffffffffULL
 
+#ifdef CONFIG_INTEL_VMX
 #define cpu_has_wbinvd_exiting \
     (vmx_secondary_exec_control & SECONDARY_EXEC_WBINVD_EXITING)
 #define cpu_has_vmx_virtualize_apic_accesses \
@@ -352,6 +353,36 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_secondary_exec_control & SECONDARY_EXEC_BUS_LOCK_DETECTION)
 #define cpu_has_vmx_notify_vm_exiting \
     (vmx_secondary_exec_control & SECONDARY_EXEC_NOTIFY_VM_EXITING)
+#else
+#define cpu_has_wbinvd_exiting false
+#define cpu_has_vmx_virtualize_apic_accesses false
+#define cpu_has_vmx_tpr_shadow false
+#define cpu_has_vmx_vnmi false
+#define cpu_has_vmx_msr_bitmap false
+#define cpu_has_vmx_secondary_exec_control false
+#define cpu_has_vmx_ept false
+#define cpu_has_vmx_dt_exiting false
+#define cpu_has_vmx_vpid false
+#define cpu_has_monitor_trap_flag false
+#define cpu_has_vmx_pat false
+#define cpu_has_vmx_efer false
+#define cpu_has_vmx_unrestricted_guest false
+#define vmx_unrestricted_guest(v) false
+#define cpu_has_vmx_ple false
+#define cpu_has_vmx_apic_reg_virt false
+#define cpu_has_vmx_virtual_intr_delivery false
+#define cpu_has_vmx_virtualize_x2apic_mode false
+#define cpu_has_vmx_posted_intr_processing false
+#define cpu_has_vmx_vmcs_shadowing false
+#define cpu_has_vmx_vmfunc false
+#define cpu_has_vmx_virt_exceptions false
+#define cpu_has_vmx_pml false
+#define cpu_has_vmx_mpx false
+#define cpu_has_vmx_xsaves false
+#define cpu_has_vmx_tsc_scaling false
+#define cpu_has_vmx_bus_lock_detection false
+#define cpu_has_vmx_notify_vm_exiting false
+#endif /* CONFIG_INTEL_VMX */
 
 #define VMCS_RID_TYPE_MASK              0x80000000
 
@@ -374,8 +405,12 @@ extern u64 vmx_ept_vpid_cap;
 #define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
 
 extern u64 vmx_basic_msr;
+#ifdef CONFIG_INTEL_VMX
 #define cpu_has_vmx_ins_outs_instr_info \
     (!!(vmx_basic_msr & VMX_BASIC_INS_OUT_INFO))
+#else
+#define cpu_has_vmx_ins_outs_instr_info false
+#endif /* CONFIG_INTEL_VMX */
 
 /* Guest interrupt status */
 #define VMX_GUEST_INTR_STATUS_SUBFIELD_BITMASK  0x0FF
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
index 97d6b810ec..fe9a5796f7 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -289,6 +289,7 @@ typedef union cr_access_qual {
 
 extern uint8_t posted_intr_vector;
 
+#ifdef CONFIG_INTEL_VMX
 #define cpu_has_vmx_ept_exec_only_supported        \
     (vmx_ept_vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED)
 
@@ -301,6 +302,17 @@ extern uint8_t posted_intr_vector;
 #define cpu_has_vmx_ept_ad    (vmx_ept_vpid_cap & VMX_EPT_AD_BIT)
 #define cpu_has_vmx_ept_invept_single_context   \
     (vmx_ept_vpid_cap & VMX_EPT_INVEPT_SINGLE_CONTEXT)
+#else
+#define cpu_has_vmx_ept_exec_only_supported false
+
+#define cpu_has_vmx_ept_wl4_supported false
+#define cpu_has_vmx_ept_mt_uc false
+#define cpu_has_vmx_ept_mt_wb false
+#define cpu_has_vmx_ept_2mb false
+#define cpu_has_vmx_ept_1gb false
+#define cpu_has_vmx_ept_ad false
+#define cpu_has_vmx_ept_invept_single_context false
+#endif /* CONFIG_INTEL_VMX */
 
 #define EPT_2MB_SHIFT     16
 #define EPT_1GB_SHIFT     17
@@ -310,12 +322,18 @@ extern uint8_t posted_intr_vector;
 #define INVEPT_SINGLE_CONTEXT   1
 #define INVEPT_ALL_CONTEXT      2
 
+#ifdef CONFIG_INTEL_VMX
 #define cpu_has_vmx_vpid_invvpid_individual_addr                    \
     (vmx_ept_vpid_cap & VMX_VPID_INVVPID_INDIVIDUAL_ADDR)
 #define cpu_has_vmx_vpid_invvpid_single_context                     \
     (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT)
 #define cpu_has_vmx_vpid_invvpid_single_context_retaining_global    \
     (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
+#else
+#define cpu_has_vmx_vpid_invvpid_individual_addr false
+#define cpu_has_vmx_vpid_invvpid_single_context false
+#define cpu_has_vmx_vpid_invvpid_single_context_retaining_global false
+#endif /* CONFIG_INTEL_VMX */
 
 #define INVVPID_INDIVIDUAL_ADDR                 0
 #define INVVPID_SINGLE_CONTEXT                  1
-- 
2.37.2




* [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (7 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  2023-02-14 16:46   ` Jan Beulich
  2023-02-13 14:57 ` [RFC 10/10] x86/hvm: make AMD-V and Intel VT-x support configurable Xenia Ragiadakou
  9 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

VIO_realmode_completion is specific to vmx realmode, so guard the completion
handling code with INTEL_VMX.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/hvm/ioreq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 0bdcca1e1a..1b0c0f1b12 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -44,6 +44,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
 {
     switch ( completion )
     {
+#ifdef CONFIG_INTEL_VMX
     case VIO_realmode_completion:
     {
         struct hvm_emulate_ctxt ctxt;
@@ -54,6 +55,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
 
         break;
     }
+#endif
 
     default:
         ASSERT_UNREACHABLE();
-- 
2.37.2




* [RFC 10/10] x86/hvm: make AMD-V and Intel VT-x support configurable
  2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
                   ` (8 preceding siblings ...)
  2023-02-13 14:57 ` [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX Xenia Ragiadakou
@ 2023-02-13 14:57 ` Xenia Ragiadakou
  9 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 14:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu

Provide the user with configuration control over the CPU virtualization support
in Xen by making the AMD_SVM and INTEL_VMX options user-selectable.

To preserve the current default behavior, both options depend on HVM and
default to Y.

To prevent users from unknowingly disabling virtualization support, make the
controls user-selectable only if EXPERT is enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/x86/Kconfig | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 2a72111c23..fce40f08b1 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -118,10 +118,24 @@ config HVM
 	  If unsure, say Y.
 
 config AMD_SVM
-	def_bool y if HVM
+	bool "AMD-V" if EXPERT
+	depends on HVM
+	default y
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  AMD Virtualization Technology (AMD-V).
+	  If your system includes a processor with AMD-V support, say Y.
+	  If in doubt, say Y.
 
 config INTEL_VMX
-	def_bool y if HVM
+	bool "Intel VT-x" if EXPERT
+	depends on HVM
+	default y
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  Intel Virtualization Technology (Intel VT-x).
+	  If your system includes a processor with Intel VT-x support, say Y.
+	  If in doubt, say Y.
 
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
-- 
2.37.2




* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 14:57 ` [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options Xenia Ragiadakou
@ 2023-02-13 15:11   ` Jan Beulich
  2023-02-13 16:30     ` Xenia Ragiadakou
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-13 15:11 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -117,6 +117,12 @@ config HVM
>  
>  	  If unsure, say Y.
>  
> +config AMD_SVM
> +	def_bool y if HVM
> +
> +config INTEL_VMX
> +	def_bool y if HVM

I'm not convinced we want vendor names here - I'd prefer to go with
just SVM and VMX.

> --- a/xen/arch/x86/cpu/Makefile
> +++ b/xen/arch/x86/cpu/Makefile
> @@ -10,4 +10,6 @@ obj-y += intel.o
>  obj-y += intel_cacheinfo.o
>  obj-y += mwait-idle.o
>  obj-y += shanghai.o
> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
> +obj-y += vpmu.o
> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o

This code was moved from hvm/ to cpu/ for the specific reason of also
being used by PV guests. (Sadly the comments at the top weren't
corrected at that time.)

> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -51,7 +51,7 @@ config AMD_IOMMU
>  
>  config INTEL_IOMMU
>  	bool "Intel VT-d" if EXPERT
> -	depends on X86
> +	depends on X86 && INTEL_VMX
>  	default y
>  	help
>  	  Enables I/O virtualization on platforms that implement the

This is odd in three ways: For one PV domains (i.e. incl Dom0) also use
the IOMMU. And then earlier on there was a change of yours specifically
relaxing the connection between I/O and CPU virtualization. Plus
finally you make no similar change for AMD.

Jan



* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 15:11   ` Jan Beulich
@ 2023-02-13 16:30     ` Xenia Ragiadakou
  2023-02-13 16:41       ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 16:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant, xen-devel


On 2/13/23 17:11, Jan Beulich wrote:
> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -117,6 +117,12 @@ config HVM
>>   
>>   	  If unsure, say Y.
>>   
>> +config AMD_SVM
>> +	def_bool y if HVM
>> +
>> +config INTEL_VMX
>> +	def_bool y if HVM
> 
> I'm not convinced we want vendor names here - I'd prefer to go with
> just SVM and VMX.

Ok, personally I have no strong preference.

> 
>> --- a/xen/arch/x86/cpu/Makefile
>> +++ b/xen/arch/x86/cpu/Makefile
>> @@ -10,4 +10,6 @@ obj-y += intel.o
>>   obj-y += intel_cacheinfo.o
>>   obj-y += mwait-idle.o
>>   obj-y += shanghai.o
>> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
>> +obj-y += vpmu.o
>> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
>> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
> 
> This code was moved from hvm/ to cpu/ for the specific reason of also
> being used by PV guests. (Sadly the comments at the top weren't
> corrected at that time.)

Hmm, the init functions are prefixed with svm/vmx.
Does vpmu depend on svm/vmx support?

> 
>> --- a/xen/drivers/passthrough/Kconfig
>> +++ b/xen/drivers/passthrough/Kconfig
>> @@ -51,7 +51,7 @@ config AMD_IOMMU
>>   
>>   config INTEL_IOMMU
>>   	bool "Intel VT-d" if EXPERT
>> -	depends on X86
>> +	depends on X86 && INTEL_VMX
>>   	default y
>>   	help
>>   	  Enables I/O virtualization on platforms that implement the
> 
> This is odd in three ways: For one PV domains (i.e. incl Dom0) also use
> the IOMMU. And then earlier on there was a change of yours specifically
> relaxing the connection between I/O and CPU virtualization. Plus
> finally you make no similar change for AMD.

I am planning to relax it, as it is relevant only to posted interrupts 
(that's why there is no such dependency for AMD).

> 
> Jan

-- 
Xenia



* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 16:30     ` Xenia Ragiadakou
@ 2023-02-13 16:41       ` Jan Beulich
  2023-02-13 20:53         ` Boris Ostrovsky
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-13 16:41 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant, xen-devel, Boris Ostrovsky

On 13.02.2023 17:30, Xenia Ragiadakou wrote:
> On 2/13/23 17:11, Jan Beulich wrote:
>> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>>> --- a/xen/arch/x86/cpu/Makefile
>>> +++ b/xen/arch/x86/cpu/Makefile
>>> @@ -10,4 +10,6 @@ obj-y += intel.o
>>>   obj-y += intel_cacheinfo.o
>>>   obj-y += mwait-idle.o
>>>   obj-y += shanghai.o
>>> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
>>> +obj-y += vpmu.o
>>> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
>>> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
>>
>> This code was moved from hvm/ to cpu/ for the specific reason of also
>> being used by PV guests. (Sadly the comments at the top weren't
>> corrected at that time.)
> 
> Hmm, the init functions are prefixed with svm/vmx.
> Does vpmu depend on svm/vmx support?

There are interactions, but I don't think there's a dependency. You
may want to ask Boris (whom I have now added to Cc).

Jan



* Re: [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers
  2023-02-13 14:57 ` [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers Xenia Ragiadakou
@ 2023-02-13 16:47   ` Jan Beulich
  2023-02-13 17:18     ` Xenia Ragiadakou
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-13 16:47 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -261,8 +261,16 @@ extern struct hvm_function_table hvm_funcs;
>  extern bool_t hvm_enabled;
>  extern s8 hvm_port80_allowed;
>  
> +#ifdef CONFIG_AMD_SVM
>  extern const struct hvm_function_table *start_svm(void);
> +#else
> +static inline const struct hvm_function_table *start_svm(void) { return NULL; }
> +#endif
> +#ifdef CONFIG_INTEL_VMX
>  extern const struct hvm_function_table *start_vmx(void);
> +#else
> +static inline const struct hvm_function_table *start_vmx(void) { return NULL; }
> +#endif
>  
>  int hvm_domain_initialise(struct domain *d,
>                            const struct xen_domctl_createdomain *config);

Instead of this (which I consider harder to read), may I suggest

    if ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
        fns = start_vmx();
    else if ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )
        fns = start_svm();

in hvm_enable() instead (with DCE taking care of removing the dead
calls)?

Jan



* Re: [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX
  2023-02-13 14:57 ` [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX Xenia Ragiadakou
@ 2023-02-13 16:55   ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2023-02-13 16:55 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, George Dunlap, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> --- a/xen/arch/x86/mm/p2m.h
> +++ b/xen/arch/x86/mm/p2m.h
> @@ -35,9 +35,25 @@ void p2m_nestedp2m_init(struct p2m_domain *p2m);
>  int p2m_init_nestedp2m(struct domain *d);
>  void p2m_teardown_nestedp2m(struct domain *d);
>  
> +#ifdef CONFIG_INTEL_VMX
>  int ept_p2m_init(struct p2m_domain *p2m);
>  void ept_p2m_uninit(struct p2m_domain *p2m);

For the calls from p2m_initialise() and p2m_free_one() see my reply
to patch 2. For the calls from altp2m functions, including ...

>  void p2m_init_altp2m_ept(struct domain *d, unsigned int i);

... to this one, I think you want to be more rigorous and hide much
(most?) of altp2m altogether when !VMX. (It is perhaps high time
for more of that code to move from p2m.c to altp2m.c, and altp2m.c
then, if it doesn't already, to become dependent on VMX somewhere
in the Makefile in your series.)

Jan



* Re: [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers
  2023-02-13 16:47   ` Jan Beulich
@ 2023-02-13 17:18     ` Xenia Ragiadakou
  0 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-13 17:18 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel


On 2/13/23 18:47, Jan Beulich wrote:
> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>> --- a/xen/arch/x86/include/asm/hvm/hvm.h
>> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
>> @@ -261,8 +261,16 @@ extern struct hvm_function_table hvm_funcs;
>>   extern bool_t hvm_enabled;
>>   extern s8 hvm_port80_allowed;
>>   
>> +#ifdef CONFIG_AMD_SVM
>>   extern const struct hvm_function_table *start_svm(void);
>> +#else
>> +static inline const struct hvm_function_table *start_svm(void) { return NULL; }
>> +#endif
>> +#ifdef CONFIG_INTEL_VMX
>>   extern const struct hvm_function_table *start_vmx(void);
>> +#else
>> +static inline const struct hvm_function_table *start_vmx(void) { return NULL; }
>> +#endif
>>   
>>   int hvm_domain_initialise(struct domain *d,
>>                             const struct xen_domctl_createdomain *config);
> 
> Instead of this (which I consider harder to read), may I suggest
> 
>      if ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
>          fns = start_vmx();
>      else if ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )
>          fns = start_svm();
> 
> in hvm_enable() instead (with DCE taking care of removing the dead
> calls)?

Sure, it looks much better this way.

> 
> Jan

-- 
Xenia



* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 16:41       ` Jan Beulich
@ 2023-02-13 20:53         ` Boris Ostrovsky
  2023-02-14  7:48           ` Jan Beulich
  0 siblings, 1 reply; 28+ messages in thread
From: Boris Ostrovsky @ 2023-02-13 20:53 UTC (permalink / raw)
  To: Jan Beulich, Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Paul Durrant, xen-devel


On 2/13/23 11:41 AM, Jan Beulich wrote:
> On 13.02.2023 17:30, Xenia Ragiadakou wrote:
>> On 2/13/23 17:11, Jan Beulich wrote:
>>> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>>>> --- a/xen/arch/x86/cpu/Makefile
>>>> +++ b/xen/arch/x86/cpu/Makefile
>>>> @@ -10,4 +10,6 @@ obj-y += intel.o
>>>>    obj-y += intel_cacheinfo.o
>>>>    obj-y += mwait-idle.o
>>>>    obj-y += shanghai.o
>>>> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
>>>> +obj-y += vpmu.o
>>>> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
>>>> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
>>> This code was moved from hvm/ to cpu/ for the specific reason of also
>>> being used by PV guests. (Sadly the comments at the top weren't
>>> corrected at that time.)
>> Hmm, the init functions are prefixed with svm/vmx.
>> Does vpmu depend on svm/vmx support?
> There are interactions, but I don't think there's a dependency. You
> may want to ask Boris (whom I have now added to Cc).


MSR intercept bits need to be manipulated, which is SVM/VMX-specific.


-boris




* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-13 20:53         ` Boris Ostrovsky
@ 2023-02-14  7:48           ` Jan Beulich
  2023-02-14 18:38             ` Boris Ostrovsky
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-14  7:48 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Andrew Cooper, Roger Pau Monné,
	Xenia Ragiadakou, Wei Liu, George Dunlap, Paul Durrant,
	xen-devel

On 13.02.2023 21:53, Boris Ostrovsky wrote:
> 
> On 2/13/23 11:41 AM, Jan Beulich wrote:
>> On 13.02.2023 17:30, Xenia Ragiadakou wrote:
>>> On 2/13/23 17:11, Jan Beulich wrote:
>>>> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>>>>> --- a/xen/arch/x86/cpu/Makefile
>>>>> +++ b/xen/arch/x86/cpu/Makefile
>>>>> @@ -10,4 +10,6 @@ obj-y += intel.o
>>>>>    obj-y += intel_cacheinfo.o
>>>>>    obj-y += mwait-idle.o
>>>>>    obj-y += shanghai.o
>>>>> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
>>>>> +obj-y += vpmu.o
>>>>> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
>>>>> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
>>>> This code was moved from hvm/ to cpu/ for the specific reason of also
>>>> being used by PV guests. (Sadly the comments at the top weren't
>>>> corrected at that time.)
>>> Hmm, the init functions are prefixed with svm/vmx.
>>> Does vpmu depend on svm/vmx support?
>> There are interactions, but I don't think there's a dependency. You
>> may want to ask Boris (whom I have now added to Cc).
> 
> MSR intercept bits need to be manipulated, which is SVM/VMX-specific.

But that's "interaction", not "dependency" aiui: The intercept bits aren't
relevant for PV guests, are they?

Jan



* Re: [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX
  2023-02-13 14:57 ` [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX Xenia Ragiadakou
@ 2023-02-14 16:20   ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2023-02-14 16:20 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> The functions vmx_vmcs_enter() and vmx_vmcs_exit() are VT-x specific.
> Guard their calls with INTEL_VMX.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

With whatever the final name of the Kconfig control is going to be
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM
  2023-02-13 14:57 ` [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM Xenia Ragiadakou
@ 2023-02-14 16:24   ` Jan Beulich
  2023-02-14 17:03     ` Xenia Ragiadakou
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-14 16:24 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific
> so guard their calls in common code with AMD_SVM.
> 
> Since AMD_SVM depends on HVM, it can be used alone.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

With whatever the final name of the Kconfig control is going to be
Acked-by: Jan Beulich <jbeulich@suse.com>

Thinking about it, both here and in the earlier patch it may be worth
considering switching to IS_ENABLED() while making these adjustments.
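
For illustration, a rough sketch of the load_segments() hunk in that form
(based only on the code visible in this patch, untested):

    /* IS_ENABLED() plus DCE instead of an #ifdef around the block. */
    if ( IS_ENABLED(CONFIG_AMD_SVM) && cpu_has_svm &&
         (uregs->fs | uregs->gs) <= 3 )
        fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
                                   n->arch.pv.fs_base, gsb, gss);

This of course relies on the svm_load_segs() declaration remaining visible
when !AMD_SVM, with the compiler eliminating the dead call.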

Jan



* Re: [RFC 07/10] x86/oprofile: guard svm specific symbols with AMD_SVM
  2023-02-13 14:57 ` [RFC 07/10] x86/oprofile: guard svm specific symbols " Xenia Ragiadakou
@ 2023-02-14 16:26   ` Jan Beulich
  0 siblings, 0 replies; 28+ messages in thread
From: Jan Beulich @ 2023-02-14 16:26 UTC (permalink / raw)
  To: Xenia Ragiadakou; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> The symbol svm_stgi_label is AMD-V specific so guard its usage in common code
> with AMD_SVM.
> 
> Since AMD_SVM depends on HVM, it can be used alone.
> Also, use #ifdef instead of #if.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

With whatever the final name of the Kconfig control is going to be
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled
  2023-02-13 14:57 ` [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled Xenia Ragiadakou
@ 2023-02-14 16:36   ` Jan Beulich
  2023-02-14 16:52     ` Xenia Ragiadakou
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-14 16:36 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> To be able to use cpu_has_svm/vmx_* macros in common code without enclosing

Both in the title and here the spelling you use made me first think that
you talk about "cpu_has_svm". May I suggest you follow what we typically
do and use "cpu_has_{svm,vmx}_*" instead in such a case? However, the
title may want changing anyway:

> --- a/xen/arch/x86/include/asm/hvm/svm/svm.h
> +++ b/xen/arch/x86/include/asm/hvm/svm/svm.h
> @@ -80,6 +80,7 @@ extern u32 svm_feature_flags;
>  #define SVM_FEATURE_SSS           19 /* NPT Supervisor Shadow Stacks */
>  #define SVM_FEATURE_SPEC_CTRL     20 /* MSR_SPEC_CTRL virtualisation */
>  
> +#ifdef CONFIG_AMD_SVM
>  #define cpu_has_svm_feature(f) (svm_feature_flags & (1u << (f)))

Why don't you simply provide a 2nd form of this one macro, leaving all
others alone?
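I.e., as a sketch only (untested), something along the lines of:

    #ifdef CONFIG_AMD_SVM
    #define cpu_has_svm_feature(f) (svm_feature_flags & (1u << (f)))
    #else
    #define cpu_has_svm_feature(f) false
    #endif

with all of the individual cpu_has_svm_* macros left untouched.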

> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> @@ -294,6 +294,7 @@ extern u64 vmx_ept_vpid_cap;
>  
>  #define VMX_TSC_MULTIPLIER_MAX                  0xffffffffffffffffULL
>  
> +#ifdef CONFIG_INTEL_VMX
>  #define cpu_has_wbinvd_exiting \
>      (vmx_secondary_exec_control & SECONDARY_EXEC_WBINVD_EXITING)
>  #define cpu_has_vmx_virtualize_apic_accesses \
> @@ -352,6 +353,36 @@ extern u64 vmx_ept_vpid_cap;
>      (vmx_secondary_exec_control & SECONDARY_EXEC_BUS_LOCK_DETECTION)
>  #define cpu_has_vmx_notify_vm_exiting \
>      (vmx_secondary_exec_control & SECONDARY_EXEC_NOTIFY_VM_EXITING)
> +#else
> +#define cpu_has_wbinvd_exiting false
> +#define cpu_has_vmx_virtualize_apic_accesses false
> +#define cpu_has_vmx_tpr_shadow false
> +#define cpu_has_vmx_vnmi false
> +#define cpu_has_vmx_msr_bitmap false
> +#define cpu_has_vmx_secondary_exec_control false
> +#define cpu_has_vmx_ept false
> +#define cpu_has_vmx_dt_exiting false
> +#define cpu_has_vmx_vpid false
> +#define cpu_has_monitor_trap_flag false
> +#define cpu_has_vmx_pat false
> +#define cpu_has_vmx_efer false
> +#define cpu_has_vmx_unrestricted_guest false
> +#define vmx_unrestricted_guest(v) false
> +#define cpu_has_vmx_ple false
> +#define cpu_has_vmx_apic_reg_virt false
> +#define cpu_has_vmx_virtual_intr_delivery false
> +#define cpu_has_vmx_virtualize_x2apic_mode false
> +#define cpu_has_vmx_posted_intr_processing false
> +#define cpu_has_vmx_vmcs_shadowing false
> +#define cpu_has_vmx_vmfunc false
> +#define cpu_has_vmx_virt_exceptions false
> +#define cpu_has_vmx_pml false
> +#define cpu_has_vmx_mpx false
> +#define cpu_has_vmx_xsaves false
> +#define cpu_has_vmx_tsc_scaling false
> +#define cpu_has_vmx_bus_lock_detection false
> +#define cpu_has_vmx_notify_vm_exiting false
> +#endif /* CONFIG_INTEL_VMX */

For VMX you first of all want to separate out vmx_unrestricted_guest(v).
Maybe we can even get away without a 2nd form. Then I think it would be
much neater a change if, like we have for SVM, a couple (three I think)
helper macros were introduced, which then is all that needs providing a
2nd form for. (Both steps may want doing in separate, prereq patches.)
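To illustrate the idea (the helper names below are invented here, purely for
illustration):

    #ifdef CONFIG_INTEL_VMX
    #define vmx_secondary_exec_cap(c) (vmx_secondary_exec_control & (c))
    #define vmx_ept_vpid_cap_has(c)   (vmx_ept_vpid_cap & (c))
    #define vmx_basic_msr_has(c)      (!!(vmx_basic_msr & (c)))
    #else
    #define vmx_secondary_exec_cap(c) false
    #define vmx_ept_vpid_cap_has(c)   false
    #define vmx_basic_msr_has(c)      false
    #endif

with the individual cpu_has_vmx_* macros then expressed in terms of these, so
that only the helpers need a second (false) form.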

> @@ -374,8 +405,12 @@ extern u64 vmx_ept_vpid_cap;
>  #define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
>  
>  extern u64 vmx_basic_msr;
> +#ifdef CONFIG_INTEL_VMX
>  #define cpu_has_vmx_ins_outs_instr_info \
>      (!!(vmx_basic_msr & VMX_BASIC_INS_OUT_INFO))
> +#else
> +#define cpu_has_vmx_ins_outs_instr_info false
> +#endif /* CONFIG_INTEL_VMX */

I don't think you need an alternative here - it's used solely in VMX
code. If one wanted to this could even be moved to vmcs.c.

> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -289,6 +289,7 @@ typedef union cr_access_qual {
>  
>  extern uint8_t posted_intr_vector;
>  
> +#ifdef CONFIG_INTEL_VMX
>  #define cpu_has_vmx_ept_exec_only_supported        \
>      (vmx_ept_vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED)
>  
> @@ -301,6 +302,17 @@ extern uint8_t posted_intr_vector;
>  #define cpu_has_vmx_ept_ad    (vmx_ept_vpid_cap & VMX_EPT_AD_BIT)
>  #define cpu_has_vmx_ept_invept_single_context   \
>      (vmx_ept_vpid_cap & VMX_EPT_INVEPT_SINGLE_CONTEXT)
> +#else
> +#define cpu_has_vmx_ept_exec_only_supported false
> +
> +#define cpu_has_vmx_ept_wl4_supported false
> +#define cpu_has_vmx_ept_mt_uc false
> +#define cpu_has_vmx_ept_mt_wb false
> +#define cpu_has_vmx_ept_2mb false
> +#define cpu_has_vmx_ept_1gb false
> +#define cpu_has_vmx_ept_ad false
> +#define cpu_has_vmx_ept_invept_single_context false
> +#endif /* CONFIG_INTEL_VMX */

Same suggestion for introducing a helper macro here, which would then
also be usable ...

> @@ -310,12 +322,18 @@ extern uint8_t posted_intr_vector;
>  #define INVEPT_SINGLE_CONTEXT   1
>  #define INVEPT_ALL_CONTEXT      2
>  
> +#ifdef CONFIG_INTEL_VMX
>  #define cpu_has_vmx_vpid_invvpid_individual_addr                    \
>      (vmx_ept_vpid_cap & VMX_VPID_INVVPID_INDIVIDUAL_ADDR)
>  #define cpu_has_vmx_vpid_invvpid_single_context                     \
>      (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT)
>  #define cpu_has_vmx_vpid_invvpid_single_context_retaining_global    \
>      (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
> +#else
> +#define cpu_has_vmx_vpid_invvpid_individual_addr false
> +#define cpu_has_vmx_vpid_invvpid_single_context false
> +#define cpu_has_vmx_vpid_invvpid_single_context_retaining_global false
> +#endif /* CONFIG_INTEL_VMX */

... here.

Jan



* Re: [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX
  2023-02-13 14:57 ` [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX Xenia Ragiadakou
@ 2023-02-14 16:46   ` Jan Beulich
  2023-02-14 17:02     ` Xenia Ragiadakou
  0 siblings, 1 reply; 28+ messages in thread
From: Jan Beulich @ 2023-02-14 16:46 UTC (permalink / raw)
  To: Xenia Ragiadakou
  Cc: Paul Durrant, Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel

On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> VIO_realmode_completion is specific to vmx realmode, so guard the completion
> handling code with INTEL_VMX.

While it means slightly bigger a code change, personally I'd prefer if
VIO_realmode_completion simply didn't exist as enumerator when !VMX.
Besides ...

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -44,6 +44,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
>  {
>      switch ( completion )
>      {
> +#ifdef CONFIG_INTEL_VMX
>      case VIO_realmode_completion:
>      {
>          struct hvm_emulate_ctxt ctxt;
> @@ -54,6 +55,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
>  
>          break;
>      }
> +#endif

... this use there's exactly one further one which would need #ifdef-ing.
I think that's tolerable.
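
As to dropping the enumerator itself, that would then be something like (the
neighbouring enumerators are merely recalled from the current tree, shown for
context only):

    enum vio_completion {
        VIO_no_completion,
        VIO_mmio_completion,
        VIO_pio_completion,
    #ifdef CONFIG_INTEL_VMX
        VIO_realmode_completion,
    #endif
    };

turning any leftover use into a build error when !VMX, rather than dead code.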

Thinking further (not necessarily for you to carry out) the x86 version
is identical to Arm's when !VMX, so it would look better to me if the
entire x86 function became conditional, the Arm one was dropped, and
common/ioreq.c provided a fallback for the case where no arch one exists.
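
Very roughly, and with the guard name invented here just for illustration,
such a common fallback could look like:

    /* common/ioreq.c: used when the arch provides no handler of its own. */
    #ifndef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
    bool arch_vcpu_ioreq_completion(enum vio_completion completion)
    {
        ASSERT_UNREACHABLE();
        return true;
    }
    #endif

Whether to key this off a Kconfig symbol, a per-arch #define, or a weak symbol
would be up for discussion.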

Jan



* Re: [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled
  2023-02-14 16:36   ` Jan Beulich
@ 2023-02-14 16:52     ` Xenia Ragiadakou
  0 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-14 16:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, Jun Nakajima, Kevin Tian, xen-devel


On 2/14/23 18:36, Jan Beulich wrote:
> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>> To be able to use cpu_has_svm/vmx_* macros in common code without enclosing
> 
> Both in the title and here the spelling you use made me first think that
> you talk about "cpu_has_svm". May I suggest you follow what we typically
> do and use "cpu_has_{svm,vmx}_*" instead in such a case? However, the
> title may want changing anyway:

Ok, I will use the conventional way from now on.

> 
>> --- a/xen/arch/x86/include/asm/hvm/svm/svm.h
>> +++ b/xen/arch/x86/include/asm/hvm/svm/svm.h
>> @@ -80,6 +80,7 @@ extern u32 svm_feature_flags;
>>   #define SVM_FEATURE_SSS           19 /* NPT Supervisor Shadow Stacks */
>>   #define SVM_FEATURE_SPEC_CTRL     20 /* MSR_SPEC_CTRL virtualisation */
>>   
>> +#ifdef CONFIG_AMD_SVM
>>   #define cpu_has_svm_feature(f) (svm_feature_flags & (1u << (f)))
> 
> Why don't you simply provide a 2nd form of this one macro, leaving all
> others alone?

You are right. That way there will be fewer changes.

> 
>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
>> @@ -294,6 +294,7 @@ extern u64 vmx_ept_vpid_cap;
>>   
>>   #define VMX_TSC_MULTIPLIER_MAX                  0xffffffffffffffffULL
>>   
>> +#ifdef CONFIG_INTEL_VMX
>>   #define cpu_has_wbinvd_exiting \
>>       (vmx_secondary_exec_control & SECONDARY_EXEC_WBINVD_EXITING)
>>   #define cpu_has_vmx_virtualize_apic_accesses \
>> @@ -352,6 +353,36 @@ extern u64 vmx_ept_vpid_cap;
>>       (vmx_secondary_exec_control & SECONDARY_EXEC_BUS_LOCK_DETECTION)
>>   #define cpu_has_vmx_notify_vm_exiting \
>>       (vmx_secondary_exec_control & SECONDARY_EXEC_NOTIFY_VM_EXITING)
>> +#else
>> +#define cpu_has_wbinvd_exiting false
>> +#define cpu_has_vmx_virtualize_apic_accesses false
>> +#define cpu_has_vmx_tpr_shadow false
>> +#define cpu_has_vmx_vnmi false
>> +#define cpu_has_vmx_msr_bitmap false
>> +#define cpu_has_vmx_secondary_exec_control false
>> +#define cpu_has_vmx_ept false
>> +#define cpu_has_vmx_dt_exiting false
>> +#define cpu_has_vmx_vpid false
>> +#define cpu_has_monitor_trap_flag false
>> +#define cpu_has_vmx_pat false
>> +#define cpu_has_vmx_efer false
>> +#define cpu_has_vmx_unrestricted_guest false
>> +#define vmx_unrestricted_guest(v) false
>> +#define cpu_has_vmx_ple false
>> +#define cpu_has_vmx_apic_reg_virt false
>> +#define cpu_has_vmx_virtual_intr_delivery false
>> +#define cpu_has_vmx_virtualize_x2apic_mode false
>> +#define cpu_has_vmx_posted_intr_processing false
>> +#define cpu_has_vmx_vmcs_shadowing false
>> +#define cpu_has_vmx_vmfunc false
>> +#define cpu_has_vmx_virt_exceptions false
>> +#define cpu_has_vmx_pml false
>> +#define cpu_has_vmx_mpx false
>> +#define cpu_has_vmx_xsaves false
>> +#define cpu_has_vmx_tsc_scaling false
>> +#define cpu_has_vmx_bus_lock_detection false
>> +#define cpu_has_vmx_notify_vm_exiting false
>> +#endif /* CONFIG_INTEL_VMX */
> 
> For VMX you first of all want to separate out vmx_unrestricted_guest(v).
> Maybe we can even get away without a 2nd form. Then I think it would be
> much neater a change if, like we have for SVM, a couple (three I think)
> helper macros were introduced, which then is all that needs providing a
> 2nd form for. (Both steps may want doing in separate, prereq patches.)

Ok will do.
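
Just to make sure I read this right, something like the below (helper names
invented, untested):

#ifdef CONFIG_INTEL_VMX
#define vmx_cpu_based_exec_ctrl(ctl)  (!!(vmx_cpu_based_exec_control & (ctl)))
#define vmx_pin_based_exec_ctrl(ctl)  (!!(vmx_pin_based_exec_control & (ctl)))
#define vmx_secondary_exec_ctrl(ctl)  (!!(vmx_secondary_exec_control & (ctl)))
#else
#define vmx_cpu_based_exec_ctrl(ctl)  false
#define vmx_pin_based_exec_ctrl(ctl)  false
#define vmx_secondary_exec_ctrl(ctl)  false
#endif

with (most of) the cpu_has_vmx_* macros above then expressed in terms of
these, and vmx_unrestricted_guest(v) split out beforehand as a prereq patch.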

> 
>> @@ -374,8 +405,12 @@ extern u64 vmx_ept_vpid_cap;
>>   #define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
>>   
>>   extern u64 vmx_basic_msr;
>> +#ifdef CONFIG_INTEL_VMX
>>   #define cpu_has_vmx_ins_outs_instr_info \
>>       (!!(vmx_basic_msr & VMX_BASIC_INS_OUT_INFO))
>> +#else
>> +#define cpu_has_vmx_ins_outs_instr_info false
>> +#endif /* CONFIG_INTEL_VMX */
> 
> I don't think you need an alternative here - it's used solely in VMX
> code. If one wanted to this could even be moved to vmcs.c.

Ok. I'll take a closer look at this one.
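
If moving it to vmcs.c works out, I guess that would simply be (sketch):

/* local to vmcs.c, replacing the vmcs.h definition */
#define cpu_has_vmx_ins_outs_instr_info \
    (!!(vmx_basic_msr & VMX_BASIC_INS_OUT_INFO))

with no !INTEL_VMX alternative needed.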

> 
>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> @@ -289,6 +289,7 @@ typedef union cr_access_qual {
>>   
>>   extern uint8_t posted_intr_vector;
>>   
>> +#ifdef CONFIG_INTEL_VMX
>>   #define cpu_has_vmx_ept_exec_only_supported        \
>>       (vmx_ept_vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED)
>>   
>> @@ -301,6 +302,17 @@ extern uint8_t posted_intr_vector;
>>   #define cpu_has_vmx_ept_ad    (vmx_ept_vpid_cap & VMX_EPT_AD_BIT)
>>   #define cpu_has_vmx_ept_invept_single_context   \
>>       (vmx_ept_vpid_cap & VMX_EPT_INVEPT_SINGLE_CONTEXT)
>> +#else
>> +#define cpu_has_vmx_ept_exec_only_supported false
>> +
>> +#define cpu_has_vmx_ept_wl4_supported false
>> +#define cpu_has_vmx_ept_mt_uc false
>> +#define cpu_has_vmx_ept_mt_wb false
>> +#define cpu_has_vmx_ept_2mb false
>> +#define cpu_has_vmx_ept_1gb false
>> +#define cpu_has_vmx_ept_ad false
>> +#define cpu_has_vmx_ept_invept_single_context false
>> +#endif /* CONFIG_INTEL_VMX */
> 
> Same suggestion for introducing a helper macro here, which would then
> also be usable ...
> 
>> @@ -310,12 +322,18 @@ extern uint8_t posted_intr_vector;
>>   #define INVEPT_SINGLE_CONTEXT   1
>>   #define INVEPT_ALL_CONTEXT      2
>>   
>> +#ifdef CONFIG_INTEL_VMX
>>   #define cpu_has_vmx_vpid_invvpid_individual_addr                    \
>>       (vmx_ept_vpid_cap & VMX_VPID_INVVPID_INDIVIDUAL_ADDR)
>>   #define cpu_has_vmx_vpid_invvpid_single_context                     \
>>       (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT)
>>   #define cpu_has_vmx_vpid_invvpid_single_context_retaining_global    \
>>       (vmx_ept_vpid_cap & VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
>> +#else
>> +#define cpu_has_vmx_vpid_invvpid_individual_addr false
>> +#define cpu_has_vmx_vpid_invvpid_single_context false
>> +#define cpu_has_vmx_vpid_invvpid_single_context_retaining_global false
>> +#endif /* CONFIG_INTEL_VMX */
> 
> ... here.

Thanks.
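
I.e. a single helper along the lines of (name made up, untested):

#ifdef CONFIG_INTEL_VMX
#define vmx_ept_vpid_cap_has(cap)  (!!(vmx_ept_vpid_cap & (cap)))
#else
#define vmx_ept_vpid_cap_has(cap)  false
#endif

with both the cpu_has_vmx_ept_* and cpu_has_vmx_vpid_invvpid_* macros then
written in terms of it.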

> 
> Jan

-- 
Xenia



* Re: [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX
  2023-02-14 16:46   ` Jan Beulich
@ 2023-02-14 17:02     ` Xenia Ragiadakou
  0 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-14 17:02 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Paul Durrant, Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel


On 2/14/23 18:46, Jan Beulich wrote:
> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>> VIO_realmode_completion is specific to vmx realmode, so guard the completion
>> handling code with INTEL_VMX.
> 
> While it means slightly bigger a code change, personally I'd prefer if
> VIO_realmode_completion simply didn't exist as enumerator when !VMX.
> Besides ...
> 
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -44,6 +44,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
>>   {
>>       switch ( completion )
>>       {
>> +#ifdef CONFIG_INTEL_VMX
>>       case VIO_realmode_completion:
>>       {
>>           struct hvm_emulate_ctxt ctxt;
>> @@ -54,6 +55,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
>>   
>>           break;
>>       }
>> +#endif
> 
> ... this use there's exactly one further one which would need #ifdef-ing.
> I think that's tolerable.

Sure. I will fix it.
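
So roughly (sketch; the other enumerators quoted from memory):

enum vio_completion {
    VIO_no_completion,
    VIO_mmio_completion,
    VIO_pio_completion,
#ifdef CONFIG_INTEL_VMX
    VIO_realmode_completion,
#endif
};

with the two uses in x86 code then #ifdef-ed accordingly.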

> 
> Thinking further (not necessarily for you to carry out) the x86 version
> is identical to Arm's when !VMX, so it would look better to me if the
> entire x86 function became conditional, the Arm one was dropped, and
> common/ioreq.c provided a fallback for the case where no arch one exists.

It may look awkward to have arch_vcpu_ioreq_completion() inside VMX guards 
in common ioreq code, because the connection is not immediately obvious.
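
E.g. common/ioreq.c would presumably end up with something like (rough
sketch):

#ifndef CONFIG_INTEL_VMX
bool arch_vcpu_ioreq_completion(enum vio_completion completion)
{
    ASSERT_UNREACHABLE();
    return true;
}
#endif

and that guard is exactly the part that feels out of place in common code.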

> 
> Jan

-- 
Xenia



* Re: [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM
  2023-02-14 16:24   ` Jan Beulich
@ 2023-02-14 17:03     ` Xenia Ragiadakou
  0 siblings, 0 replies; 28+ messages in thread
From: Xenia Ragiadakou @ 2023-02-14 17:03 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monné, Wei Liu, xen-devel


On 2/14/23 18:24, Jan Beulich wrote:
> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>> The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific
>> so guard their calls in common code with AMD_SVM.
>>
>> Since AMD_SVM depends on HVM, the AMD_SVM guard alone is sufficient.
>>
>> No functional change intended.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> With whatever the final name of the Kconfig control is going to be
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Thinking about it, both here and in the earlier patch it may be worth
> considering a switch to IS_ENABLED() while making these adjustments.

Ok will do. Thanks.
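
I.e. in domain.c something like (sketch, to be combined with whatever
conditions are already around the call):

    if ( IS_ENABLED(CONFIG_AMD_SVM) )
        svm_load_segs_prefetch();

instead of an #ifdef around it, letting the compiler discard the call when
the option is off.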

> 
> Jan

-- 
Xenia



* Re: [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options
  2023-02-14  7:48           ` Jan Beulich
@ 2023-02-14 18:38             ` Boris Ostrovsky
  0 siblings, 0 replies; 28+ messages in thread
From: Boris Ostrovsky @ 2023-02-14 18:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Xenia Ragiadakou, Wei Liu, George Dunlap, Paul Durrant,
	xen-devel


On 2/14/23 2:48 AM, Jan Beulich wrote:
> On 13.02.2023 21:53, Boris Ostrovsky wrote:
>> On 2/13/23 11:41 AM, Jan Beulich wrote:
>>> On 13.02.2023 17:30, Xenia Ragiadakou wrote:
>>>> On 2/13/23 17:11, Jan Beulich wrote:
>>>>> On 13.02.2023 15:57, Xenia Ragiadakou wrote:
>>>>>> --- a/xen/arch/x86/cpu/Makefile
>>>>>> +++ b/xen/arch/x86/cpu/Makefile
>>>>>> @@ -10,4 +10,6 @@ obj-y += intel.o
>>>>>>     obj-y += intel_cacheinfo.o
>>>>>>     obj-y += mwait-idle.o
>>>>>>     obj-y += shanghai.o
>>>>>> -obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
>>>>>> +obj-y += vpmu.o
>>>>>> +obj-$(CONFIG_AMD_SVM) += vpmu_amd.o
>>>>>> +obj-$(CONFIG_INTEL_VMX) += vpmu_intel.o
>>>>> This code was moved from hvm/ to cpu/ for the specific reason of also
>>>>> being used by PV guests. (Sadly the comments at the top weren't
>>>>> corrected at that time.)
>>>> Hmm, the init functions are prefixed with svm/vmx.
>>>> Does vpmu depend on svm/vmx support?
>>> There are interactions, but I don't think there's a dependency. You
>>> may want to ask Boris (whom I have now added to Cc).
>> MSR intercept bits need to be manipulated, which is SVM/VMX-specific.
> But that's "interaction", not "dependency" aiui: The intercept bits aren't
> relevant for PV guests, are they?


Correct, they are not. All accesses to intercept bits are under is_hvm_vcpu().


So vpmu does not depend on the presence of vmx/svm support. (And the init routines shouldn't be prefixed with those.)


-boris






Thread overview: 28+ messages
2023-02-13 14:57 [RFC 00/10] x86: Make cpu virtualization support configurable Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 01/10] x86: introduce AMD-V and Intel VT-x Kconfig options Xenia Ragiadakou
2023-02-13 15:11   ` Jan Beulich
2023-02-13 16:30     ` Xenia Ragiadakou
2023-02-13 16:41       ` Jan Beulich
2023-02-13 20:53         ` Boris Ostrovsky
2023-02-14  7:48           ` Jan Beulich
2023-02-14 18:38             ` Boris Ostrovsky
2023-02-13 14:57 ` [RFC 02/10] x86/hvm: separate AMD-V and Intel VT-x hvm_function_table initializers Xenia Ragiadakou
2023-02-13 16:47   ` Jan Beulich
2023-02-13 17:18     ` Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 03/10] x86/p2m: guard vmx specific ept functions with INTEL_VMX Xenia Ragiadakou
2023-02-13 16:55   ` Jan Beulich
2023-02-13 14:57 ` [RFC 04/10] x86/vpmu: separate AMD-V and Intel VT-x init arch_vpmu_ops initializers Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 05/10] x86/traps: guard vmx specific functions with INTEL_VMX Xenia Ragiadakou
2023-02-14 16:20   ` Jan Beulich
2023-02-13 14:57 ` [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM Xenia Ragiadakou
2023-02-14 16:24   ` Jan Beulich
2023-02-14 17:03     ` Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 07/10] x86/oprofile: guard svm specific symbols " Xenia Ragiadakou
2023-02-14 16:26   ` Jan Beulich
2023-02-13 14:57 ` [RFC 08/10] x86: wire cpu_has_svm/vmx_* to false when svm/vmx not enabled Xenia Ragiadakou
2023-02-14 16:36   ` Jan Beulich
2023-02-14 16:52     ` Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 09/10] x86/ioreq: guard VIO_realmode_completion with INTEL_VMX Xenia Ragiadakou
2023-02-14 16:46   ` Jan Beulich
2023-02-14 17:02     ` Xenia Ragiadakou
2023-02-13 14:57 ` [RFC 10/10] x86/hvm: make AMD-V and Intel VT-x support configurable Xenia Ragiadakou
