kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support
@ 2021-04-30 12:15 Brijesh Singh
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 01/20] x86/sev: Define the GHCB MSR protocol for AP reset hold Brijesh Singh
                   ` (19 more replies)
  0 siblings, 20 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:15 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

This part of Secure Encrypted Paging (SEV-SNP) series focuses on the changes
required in a guest OS for SEV-SNP support.

SEV-SNP builds upon existing SEV and SEV-ES functionality while adding
new hardware-based memory protections. SEV-SNP adds strong memory integrity
protection to help prevent malicious hypervisor-based attacks like data
replay, memory re-mapping and more in order to create an isolated memory
encryption environment.
 
This series provides the basic building blocks to support booting the SEV-SNP
VMs, it does not cover all the security enhancement introduced by the SEV-SNP
such as interrupt protection.

Many of the integrity guarantees of SEV-SNP are enforced through a new
structure called the Reverse Map Table (RMP). Adding a new page to SEV-SNP
VM requires a 2-step process. First, the hypervisor assigns a page to the
guest using the new RMPUPDATE instruction. This transitions the page to
guest-invalid. Second, the guest validates the page using the new PVALIDATE
instruction. The SEV-SNP VMs can use the new "Page State Change Request NAE"
defined in the GHCB specification to ask hypervisor to add or remove page
from the RMP table.

Each page assigned to the SEV-SNP VM can either be validated or unvalidated,
as indicated by the Validated flag in the page's RMP entry. There are two
approaches that can be taken for the page validation: Pre-validation and
Lazy Validation.

Under pre-validation, the pages are validated prior to first use. And under
lazy validation, pages are validated when first accessed. An access to a
unvalidated page results in a #VC exception, at which time the exception
handler may validate the page. Lazy validation requires careful tracking of
the validated pages to avoid validating the same GPA more than once. The
recently introduced "Unaccepted" memory type can be used to communicate the
unvalidated memory ranges to the Guest OS.

At this time we only sypport the pre-validation, the OVMF guest BIOS
validates the entire RAM before the control is handed over to the guest kernel.
The early_set_memory_{encrypt,decrypt} and set_memory_{encrypt,decrypt} are
enlightened to perform the page validation or invalidation while setting or
clearing the encryption attribute from the page table.

This series does not provide support for the following SEV-SNP features yet:

* Extended Guest request
* CPUID filtering
* AP bring up using the new SEV-SNP NAE
* Lazy validation
* Interrupt security

The series is based on tip/master commit
 - 24b57391e410 (origin/master, origin/HEAD) Merge branch 'core/rcu'
 - plus, cleanup series https://marc.info/?l=kvm&m=161952223830444&w=2

Additional resources
---------------------
SEV-SNP whitepaper
https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf
 
APM 2: https://www.amd.com/system/files/TechDocs/24593.pdf
(section 15.36)

GHCB spec:
https://developer.amd.com/wp-content/resources/56421.pdf

SEV-SNP firmware specification:
https://developer.amd.com/sev/

Change since v1:
 * Integerate the SNP support in sev.{ch}.
 * Add support to query the hypervisor feature and detect whether SNP is supported.
 * Define Linux specific reason code for the SNP guest termination.
 * Extend the setup_header provide a way for hypervisor to pass secret and cpuid page.
 * Add support to create a platform device and driver to query the attestation report
   and the derive a key.
 * Multiple cleanup and fixes to address Boris's review fedback.

Brijesh Singh (20):
  x86/sev: Define the GHCB MSR protocol for AP reset hold
  x86/sev: Save the negotiated GHCB version
  x86/sev: Add support for hypervisor feature VMGEXIT
  x86/sev: Increase the GHCB protocol version
  x86/sev: Define SNP Page State Change VMGEXIT structure
  x86/sev: Define SNP guest request NAE events
  x86/sev: Define error codes for reason set 1.
  x86/mm: Add sev_snp_active() helper
  x86/sev: check SEV-SNP features support
  x86/sev: Add a helper for the PVALIDATE instruction
  x86/compressed: Add helper for validating pages in the decompression
    stage
  x86/compressed: Register GHCB memory when SEV-SNP is active
  x86/sev: Register GHCB memory when SEV-SNP is active
  x86/sev: Add helper for validating pages in early enc attribute
    changes
  x86/kernel: Make the bss.decrypted section shared in RMP table
  x86/kernel: Validate rom memory before accessing when SEV-SNP is
    active
  x86/mm: Add support to validate memory when changing C-bit
  x86/boot: Add Confidential Computing address to setup_header
  x86/sev: Register SNP guest request platform device
  virt: Add SEV-SNP guest driver

 Documentation/x86/boot.rst              |  26 ++
 arch/x86/boot/compressed/ident_map_64.c |  17 +
 arch/x86/boot/compressed/sev.c          |  81 ++++-
 arch/x86/boot/compressed/sev.h          |  25 ++
 arch/x86/boot/header.S                  |   7 +-
 arch/x86/include/asm/mem_encrypt.h      |   2 +
 arch/x86/include/asm/msr-index.h        |   2 +
 arch/x86/include/asm/sev-common.h       |  86 +++++
 arch/x86/include/asm/sev.h              |  47 ++-
 arch/x86/include/uapi/asm/bootparam.h   |   1 +
 arch/x86/include/uapi/asm/svm.h         |   8 +
 arch/x86/kernel/head64.c                |   7 +
 arch/x86/kernel/probe_roms.c            |  13 +-
 arch/x86/kernel/sev-shared.c            |  72 +++-
 arch/x86/kernel/sev.c                   | 354 +++++++++++++++++-
 arch/x86/mm/mem_encrypt.c               |  52 ++-
 arch/x86/mm/pat/set_memory.c            |  15 +
 arch/x86/platform/efi/efi.c             |   2 +
 drivers/virt/Kconfig                    |   3 +
 drivers/virt/Makefile                   |   1 +
 drivers/virt/snp-guest/Kconfig          |  10 +
 drivers/virt/snp-guest/Makefile         |   2 +
 drivers/virt/snp-guest/snp-guest.c      | 455 ++++++++++++++++++++++++
 include/linux/efi.h                     |   1 +
 include/linux/snp-guest.h               | 124 +++++++
 include/uapi/linux/snp-guest.h          |  50 +++
 26 files changed, 1446 insertions(+), 17 deletions(-)
 create mode 100644 arch/x86/boot/compressed/sev.h
 create mode 100644 drivers/virt/snp-guest/Kconfig
 create mode 100644 drivers/virt/snp-guest/Makefile
 create mode 100644 drivers/virt/snp-guest/snp-guest.c
 create mode 100644 include/linux/snp-guest.h
 create mode 100644 include/uapi/linux/snp-guest.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 01/20] x86/sev: Define the GHCB MSR protocol for AP reset hold
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
@ 2021-04-30 12:15 ` Brijesh Singh
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version Brijesh Singh
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:15 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Version 2 of the GHCB specification added the MSR protocol support for
the AP reset hold. See the GHCB specification for further details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev-common.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 629c3df243f0..9f1b66090a4c 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -45,6 +45,12 @@
 		(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
 		(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
 
+/* AP Reset Hold */
+#define GHCB_MSR_AP_RESET_HOLD_REQ		0x006
+#define GHCB_MSR_AP_RESET_HOLD_RESP		0x007
+#define GHCB_MSR_AP_RESET_HOLD_RESULT_POS	12
+#define GHCB_MSR_AP_RESET_HOLD_RESULT_MASK	0xfffffffffffff
+
 #define GHCB_MSR_TERM_REQ		0x100
 #define GHCB_MSR_TERM_REASON_SET_POS	12
 #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 01/20] x86/sev: Define the GHCB MSR protocol for AP reset hold Brijesh Singh
@ 2021-04-30 12:15 ` Brijesh Singh
  2021-05-11  9:23   ` Borislav Petkov
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT Brijesh Singh
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:15 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The SEV-ES guest calls the sev_es_negotiate_protocol() to negotiate the
GHCB protocol version before establishing the GHCB. Cache the negotiated
GHCB version so that it can be used later.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h   |  2 +-
 arch/x86/kernel/sev-shared.c | 15 ++++++++++++---
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index fa5cd05d3b5b..7ec91b1359df 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -12,7 +12,7 @@
 #include <asm/insn.h>
 #include <asm/sev-common.h>
 
-#define GHCB_PROTO_OUR		0x0001UL
+#define GHCB_PROTOCOL_MIN	1ULL
 #define GHCB_PROTOCOL_MAX	1ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 6ec8b3bfd76e..48a47540b85f 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -14,6 +14,13 @@
 #define has_cpuflag(f)	boot_cpu_has(f)
 #endif
 
+/*
+ * Since feature negotitation related variables are set early in the boot
+ * process they must reside in the .data section so as not to be zeroed
+ * out when the .bss section is later cleared.
+ */
+static u16 ghcb_version __section(".data") = 0;
+
 static bool __init sev_es_check_cpu_features(void)
 {
 	if (!has_cpuflag(X86_FEATURE_RDRAND)) {
@@ -54,10 +61,12 @@ static bool sev_es_negotiate_protocol(void)
 	if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
 		return false;
 
-	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTO_OUR ||
-	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR)
+	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTOCOL_MIN ||
+	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTOCOL_MAX)
 		return false;
 
+	ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
+
 	return true;
 }
 
@@ -101,7 +110,7 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
 	enum es_result ret;
 
 	/* Fill in protocol and format specifiers */
-	ghcb->protocol_version = GHCB_PROTOCOL_MAX;
+	ghcb->protocol_version = ghcb_version;
 	ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
 
 	ghcb_set_sw_exit_code(ghcb, exit_code);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 01/20] x86/sev: Define the GHCB MSR protocol for AP reset hold Brijesh Singh
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version Brijesh Singh
@ 2021-04-30 12:15 ` Brijesh Singh
  2021-05-11 10:01   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 04/20] x86/sev: Increase the GHCB protocol version Brijesh Singh
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:15 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Version 2 of GHCB specification introduced advertisement of a features
that are supported by the hypervisor. Define the GHCB MSR protocol and NAE
for the hypervisor feature request and query the feature during the GHCB
protocol negotitation. See the GHCB specification for more details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev-common.h | 17 +++++++++++++++++
 arch/x86/include/uapi/asm/svm.h   |  2 ++
 arch/x86/kernel/sev-shared.c      | 24 ++++++++++++++++++++++++
 3 files changed, 43 insertions(+)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 9f1b66090a4c..8142e247d8da 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -51,6 +51,22 @@
 #define GHCB_MSR_AP_RESET_HOLD_RESULT_POS	12
 #define GHCB_MSR_AP_RESET_HOLD_RESULT_MASK	0xfffffffffffff
 
+/* GHCB Hypervisor Feature Request */
+#define GHCB_MSR_HV_FEATURES_REQ	0x080
+#define GHCB_MSR_HV_FEATURES_RESP	0x081
+#define GHCB_MSR_HV_FEATURES_POS	12
+#define GHCB_MSR_HV_FEATURES_MASK	0xfffffffffffffUL
+#define GHCB_MSR_HV_FEATURES_RESP_VAL(v)	\
+	(((v) >> GHCB_MSR_HV_FEATURES_POS) & GHCB_MSR_HV_FEATURES_MASK)
+
+#define GHCB_HV_FEATURES_SNP		BIT_ULL(0)
+#define GHCB_HV_FEATURES_SNP_AP_CREATION			\
+		(BIT_ULL(1) | GHCB_HV_FEATURES_SNP)
+#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION		\
+		(BIT_ULL(2) | GHCB_HV_FEATURES_SNP_AP_CREATION)
+#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
+		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)
+
 #define GHCB_MSR_TERM_REQ		0x100
 #define GHCB_MSR_TERM_REASON_SET_POS	12
 #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
@@ -62,6 +78,7 @@
 
 #define GHCB_SEV_ES_REASON_GENERAL_REQUEST	0
 #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED	1
+#define GHCB_SEV_ES_REASON_SNP_UNSUPPORTED	2
 
 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
 
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 554f75fe013c..7fbc311e2de1 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
+#define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
 #define SVM_EXIT_ERR           -1
@@ -215,6 +216,7 @@
 	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
 	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
 	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
+	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
 
 
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 48a47540b85f..874f911837db 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -20,6 +20,7 @@
  * out when the .bss section is later cleared.
  */
 static u16 ghcb_version __section(".data") = 0;
+static u64 hv_features __section(".data") = 0;
 
 static bool __init sev_es_check_cpu_features(void)
 {
@@ -49,6 +50,26 @@ static void __noreturn sev_es_terminate(unsigned int reason)
 		asm volatile("hlt\n" : : : "memory");
 }
 
+static bool ghcb_get_hv_features(void)
+{
+	u64 val;
+
+	/* The hypervisor features are available from version 2 onward. */
+	if (ghcb_version < 2)
+		return true;
+
+	sev_es_wr_ghcb_msr(GHCB_MSR_HV_FEATURES_REQ);
+	VMGEXIT();
+
+	val = sev_es_rd_ghcb_msr();
+	if (GHCB_RESP_CODE(val) != GHCB_MSR_HV_FEATURES_RESP)
+		return false;
+
+	hv_features = GHCB_MSR_HV_FEATURES_RESP_VAL(val);
+
+	return true;
+}
+
 static bool sev_es_negotiate_protocol(void)
 {
 	u64 val;
@@ -67,6 +88,9 @@ static bool sev_es_negotiate_protocol(void)
 
 	ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
 
+	if (!ghcb_get_hv_features())
+		return false;
+
 	return true;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 04/20] x86/sev: Increase the GHCB protocol version
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (2 preceding siblings ...)
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure Brijesh Singh
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Now that the Linux guest supports version 2 of the GHCB specification,
bump the maximum supported GHCB protocol version.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 7ec91b1359df..134a7c9d91b6 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -13,7 +13,7 @@
 #include <asm/sev-common.h>
 
 #define GHCB_PROTOCOL_MIN	1ULL
-#define GHCB_PROTOCOL_MAX	1ULL
+#define GHCB_PROTOCOL_MAX	2ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
 #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (3 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 04/20] x86/sev: Increase the GHCB protocol version Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-18 10:41   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events Brijesh Singh
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

An SNP-active guest will use the page state change NAE VMGEXIT defined in
the GHCB specification to ask the hypervisor to make the guest page
private or shared in the RMP table. In addition to the private/shared,
the guest can also ask the hypervisor to split or combine multiple 4K
validated pages as a single 2M page or vice versa.

See GHCB specification section Page State Change for additional
information.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev-common.h | 46 +++++++++++++++++++++++++++++++
 arch/x86/include/uapi/asm/svm.h   |  2 ++
 2 files changed, 48 insertions(+)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 8142e247d8da..07b8612bf182 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -67,6 +67,52 @@
 #define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
 		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)
 
+/* SNP Page State Change */
+#define GHCB_MSR_PSC_REQ		0x014
+#define SNP_PAGE_STATE_PRIVATE		1
+#define SNP_PAGE_STATE_SHARED		2
+#define SNP_PAGE_STATE_PSMASH		3
+#define SNP_PAGE_STATE_UNSMASH		4
+#define GHCB_MSR_PSC_GFN_POS		12
+#define GHCB_MSR_PSC_GFN_MASK		0xffffffffffULL
+#define GHCB_MSR_PSC_OP_POS		52
+#define GHCB_MSR_PSC_OP_MASK		0xf
+#define GHCB_MSR_PSC_REQ_GFN(gfn, op) 	\
+	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
+	(((gfn) << GHCB_MSR_PSC_GFN_POS) & GHCB_MSR_PSC_GFN_MASK) | GHCB_MSR_PSC_REQ)
+
+#define GHCB_MSR_PSC_RESP		0x015
+#define GHCB_MSR_PSC_ERROR_POS		32
+#define GHCB_MSR_PSC_ERROR_MASK		0xffffffffULL
+#define GHCB_MSR_PSC_RSVD_POS		12
+#define GHCB_MSR_PSC_RSVD_MASK		0xfffffULL
+#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
+
+/* SNP Page State Change NAE event */
+#define VMGEXIT_PSC_MAX_ENTRY		253
+#define VMGEXIT_PSC_INVALID_HEADER	0x100000001
+#define VMGEXIT_PSC_INVALID_ENTRY	0x100000002
+#define VMGEXIT_PSC_FIRMWARE_ERROR(x)	((x & 0xffffffffULL) | 0x200000000)
+
+struct __packed snp_page_state_header {
+	u16 cur_entry;
+	u16 end_entry;
+	u32 reserved;
+};
+
+struct __packed snp_page_state_entry {
+	u64 cur_page:12;
+	u64 gfn:40;
+	u64 operation:4;
+	u64 pagesize:1;
+	u64 reserved:7;
+};
+
+struct __packed snp_page_state_change {
+	struct snp_page_state_header header;
+	struct snp_page_state_entry entry[VMGEXIT_PSC_MAX_ENTRY];
+};
+
 #define GHCB_MSR_TERM_REQ		0x100
 #define GHCB_MSR_TERM_REASON_SET_POS	12
 #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 7fbc311e2de1..f7bf12cad58c 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
+#define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010
 #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
@@ -216,6 +217,7 @@
 	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
 	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
 	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
+	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \
 	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (4 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-18 10:45   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1 Brijesh Singh
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Version 2 of the GHCB specification added the support for SNP guest
request NAE events. The SEV-SNP guests will use this event to request
the attestation report. See the GHCB specification for more details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/uapi/asm/svm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index f7bf12cad58c..7a45aa284530 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -109,6 +109,8 @@
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
 #define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010
+#define SVM_VMGEXIT_SNP_GUEST_REQUEST		0x80000011
+#define SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST	0x80000012
 #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
@@ -218,6 +220,8 @@
 	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
 	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
 	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \
+	{ SVM_VMGEXIT_SNP_GUEST_REQUEST,	"vmgexit_snp_guest_request" }, \
+	{ SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST,	"vmgexit_snp_extended_guest_request" }, \
 	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1.
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (5 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-18 11:05   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper Brijesh Singh
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

GHCB specification defines the reason code for reason set 0. The reason
codes defined in the set 0 do not cover all possible causes for a guest
to request termination.

The reason set 1 to 255 is reserved for the vendor-specific codes.
Use the reason set 1 for the Linux guest. Define an error codes for
reason set 1.

While at it, change the sev_es_terminate() to accept the reason set
parameter.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/sev.c    | 6 +++---
 arch/x86/include/asm/sev-common.h | 5 +++++
 arch/x86/kernel/sev-shared.c      | 6 +++---
 arch/x86/kernel/sev.c             | 4 ++--
 4 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 670e998fe930..6d9055427f37 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -122,7 +122,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 static bool early_setup_sev_es(void)
 {
 	if (!sev_es_negotiate_protocol())
-		sev_es_terminate(GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
 
 	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
 		return false;
@@ -175,7 +175,7 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	enum es_result result;
 
 	if (!boot_ghcb && !early_setup_sev_es())
-		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 
 	vc_ghcb_invalidate(boot_ghcb);
 	result = vc_init_em_ctxt(&ctxt, regs, exit_code);
@@ -202,5 +202,5 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	if (result == ES_OK)
 		vc_finish_insn(&ctxt);
 	else if (result != ES_RETRY)
-		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 }
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 07b8612bf182..733fca403ae5 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -128,4 +128,9 @@ struct __packed snp_page_state_change {
 
 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
 
+/* Linux specific reason codes (used with reason set 1) */
+#define GHCB_TERM_REGISTER		0	/* GHCB GPA registration failure */
+#define GHCB_TERM_PSC			1	/* Page State Change faiilure */
+#define GHCB_TERM_PVALIDATE		2	/* Pvalidate failure */
+
 #endif
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 874f911837db..3f9b06a04395 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -32,7 +32,7 @@ static bool __init sev_es_check_cpu_features(void)
 	return true;
 }
 
-static void __noreturn sev_es_terminate(unsigned int reason)
++static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
 {
 	u64 val = GHCB_MSR_TERM_REQ;
 
@@ -40,7 +40,7 @@ static void __noreturn sev_es_terminate(unsigned int reason)
 	 * Tell the hypervisor what went wrong - only reason-set 0 is
 	 * currently supported.
 	 */
-	val |= GHCB_SEV_TERM_REASON(0, reason);
+	val |= GHCB_SEV_TERM_REASON(set, reason);
 
 	/* Request Guest Termination from Hypvervisor */
 	sev_es_wr_ghcb_msr(val);
@@ -240,7 +240,7 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 
 fail:
 	/* Terminate the guest */
-	sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
+	sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 }
 
 static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 9578c82832aa..97be0fe666ab 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1383,7 +1383,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 		show_regs(regs);
 
 		/* Ask hypervisor to sev_es_terminate */
-		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 
 		/* If that fails and we get here - just panic */
 		panic("Returned from Terminate-Request to Hypervisor\n");
@@ -1416,7 +1416,7 @@ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
 
 	/* Do initial setup or terminate the guest */
 	if (unlikely(boot_ghcb == NULL && !sev_es_setup_ghcb()))
-		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 
 	vc_ghcb_invalidate(boot_ghcb);
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (6 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1 Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-18 18:11   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support Brijesh Singh
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The sev_snp_active() helper can be used by the guest to query whether the
SNP - Secure Nested Paging feature is active.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h | 2 ++
 arch/x86/include/asm/msr-index.h   | 2 ++
 arch/x86/mm/mem_encrypt.c          | 9 +++++++++
 3 files changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 31c4df123aa0..d99aa260d328 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -54,6 +54,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool sev_snp_active(void);
 
 #define __bss_decrypted __section(".bss..decrypted")
 
@@ -79,6 +80,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool sev_snp_active(void) { return false; }
 
 static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 211ba3375ee9..69ce50fa3565 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -481,8 +481,10 @@
 #define MSR_AMD64_SEV			0xc0010131
 #define MSR_AMD64_SEV_ENABLED_BIT	0
 #define MSR_AMD64_SEV_ES_ENABLED_BIT	1
+#define MSR_AMD64_SEV_SNP_ENABLED_BIT	2
 #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
 #define MSR_AMD64_SEV_ES_ENABLED	BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT)
+#define MSR_AMD64_SEV_SNP_ENABLED	BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
 
 #define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index f633f9e23b8f..076d993acba3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -391,6 +391,11 @@ bool noinstr sev_es_active(void)
 	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
+bool sev_snp_active(void)
+{
+	return sev_status & MSR_AMD64_SEV_SNP_ENABLED;
+}
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
@@ -463,6 +468,10 @@ static void print_mem_encrypt_feature_info(void)
 	if (sev_es_active())
 		pr_cont(" SEV-ES");
 
+	/* Secure Nested Paging */
+	if (sev_snp_active())
+		pr_cont(" SEV-SNP");
+
 	pr_cont("\n");
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (7 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-20 16:02   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction Brijesh Singh
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Version 2 of the GHCB specification added the advertisement of features
that are supported by the hypervisor. If hypervisor supports the SEV-SNP
then it must set the SEV-SNP features bit to indicate that the base
SEV-SNP is supported.

Check the SEV-SNP feature while establishing the GHCB, if failed,
terminate the guest.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/sev.c | 21 +++++++++++++++++++++
 arch/x86/kernel/sev-shared.c   | 13 ++++++++++++-
 arch/x86/kernel/sev.c          |  4 ++++
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 6d9055427f37..7badbeb6cb95 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -25,6 +25,8 @@
 
 struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
 struct ghcb *boot_ghcb;
+static u64 sev_status_val;
+static bool sev_status_checked;
 
 /*
  * Copy a version of this function here - insn-eval.c can't be used in
@@ -119,11 +121,30 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 /* Include code for early handlers */
 #include "../../kernel/sev-shared.c"
 
+static inline bool sev_snp_enabled(void)
+{
+	unsigned long low, high;
+
+	if (!sev_status_checked) {
+		asm volatile("rdmsr\n"
+			     : "=a" (low), "=d" (high)
+			     : "c" (MSR_AMD64_SEV));
+		sev_status_val = (high << 32) | low;
+		sev_status_checked = true;
+	}
+
+	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
+}
+
 static bool early_setup_sev_es(void)
 {
 	if (!sev_es_negotiate_protocol())
 		sev_es_terminate(0, GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
 
+	/* If SEV-SNP is enabled then check if the hypervisor supports the SEV-SNP features. */
+	if (sev_snp_enabled() && !sev_snp_check_hypervisor_features())
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_SNP_UNSUPPORTED);
+
 	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
 		return false;
 
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 3f9b06a04395..085d3d724bc8 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -32,7 +32,18 @@ static bool __init sev_es_check_cpu_features(void)
 	return true;
 }
 
-+static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
+static bool __init sev_snp_check_hypervisor_features(void)
+{
+	if (ghcb_version < 2)
+		return false;
+
+	if (!(hv_features & GHCB_HV_FEATURES_SNP))
+		return false;
+
+	return true;
+}
+
+static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
 {
 	u64 val = GHCB_MSR_TERM_REQ;
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 97be0fe666ab..8c8c939a1754 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -609,6 +609,10 @@ static bool __init sev_es_setup_ghcb(void)
 	if (!sev_es_negotiate_protocol())
 		return false;
 
+	/* If SNP is active, make sure that hypervisor supports the feature. */
+	if (sev_snp_active() && !sev_snp_check_hypervisor_features())
+		sev_es_terminate(0, GHCB_SEV_ES_REASON_SNP_UNSUPPORTED);
+
 	/*
 	 * Clear the boot_ghcb. The first exception comes in before the bss
 	 * section is cleared.
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (8 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 13:05   ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage Brijesh Singh
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

An SNP-active guest uses the PVALIDATE instruction to validate or
rescind the validation of a guest page’s RMP entry. Upon completion,
a return code is stored in EAX and rFLAGS bits are set based on the
return code. If the instruction completed successfully, the CF
indicates if the content of the RMP were changed or not.

See AMD APM Volume 3 for additional details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 134a7c9d91b6..48f911a229ba 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -59,6 +59,16 @@ extern void vc_no_ghcb(void);
 extern void vc_boot_ghcb(void);
 extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
 
+/* Return code of pvalidate */
+#define PVALIDATE_SUCCESS		0
+#define PVALIDATE_FAIL_INPUT		1
+#define PVALIDATE_FAIL_SIZEMISMATCH	6
+#define PVALIDATE_FAIL_NOUPDATE		255 /* Software defined (when rFlags.CF = 1) */
+
+/* RMP page size */
+#define RMP_PG_SIZE_2M			1
+#define RMP_PG_SIZE_4K			0
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 extern struct static_key_false sev_es_enable_key;
 extern void __sev_es_ist_enter(struct pt_regs *regs);
@@ -81,12 +91,29 @@ static __always_inline void sev_es_nmi_complete(void)
 		__sev_es_nmi_complete();
 }
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
+static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
+{
+	unsigned long flags;
+	int rc = 0;
+
+	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
+		     CC_SET(c)
+		     : CC_OUT(c) (flags), "=a"(rc)
+		     : "a"(vaddr), "c"(rmp_psize), "d"(validate)
+		     : "memory", "cc");
+
+	if (flags & X86_EFLAGS_CF)
+		return PVALIDATE_FAIL_NOUPDATE;
+
+	return rc;
+}
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
 static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
+static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
 #endif
 
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (9 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-20 17:52   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active Brijesh Singh
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Many of the integrity guarantees of SEV-SNP are enforced through the
Reverse Map Table (RMP). Each RMP entry contains the GPA at which a
particular page of DRAM should be mapped. The VMs can request the
hypervisor to add pages in the RMP table via the Page State Change VMGEXIT
defined in the GHCB specification. Inside each RMP entry is a Validated
flag; this flag is automatically cleared to 0 by the CPU hardware when a
new RMP entry is created for a guest. Each VM page can be either
validated or invalidated, as indicated by the Validated flag in the RMP
entry. Memory access to a private page that is not validated generates
a #VC. A VM must use PVALIDATE instruction to validate the private page
before using it.

To maintain the security guarantee of SEV-SNP guests, when transitioning
pages from private to shared, the guest must invalidate the pages before
asking the hypervisor to change the page state to shared in the RMP table.

After the pages are mapped private in the page table, the guest must issue a
page state change VMGEXIT to make the pages private in the RMP table and
validate it.

On boot, BIOS should have validated the entire system memory. During
the kernel decompression stage, the VC handler uses the
set_memory_decrypted() to make the GHCB page shared (i.e clear encryption
attribute). And while exiting from the decompression, it calls the
set_page_encrypted() to make the page private.

Add sev_snp_set_page_{private,shared}() helper that is used by the
set_memory_{decrypt,encrypt}() to change the page state in the RMP table.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/ident_map_64.c | 17 +++++++++
 arch/x86/boot/compressed/sev.c          | 50 +++++++++++++++++++++++++
 arch/x86/boot/compressed/sev.h          | 25 +++++++++++++
 3 files changed, 92 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sev.h

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index f7213d0943b8..c2a1a5311b47 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -21,6 +21,7 @@
 
 #include "error.h"
 #include "misc.h"
+#include "sev.h"
 
 /* These actually do the work of building the kernel identity maps. */
 #include <linux/pgtable.h>
@@ -278,12 +279,28 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
 	if ((set | clr) & _PAGE_ENC)
 		clflush_page(address);
 
+	/*
+	 * If the encryption attribute is being cleared, then change the page state to
+	 * shared in the RMP entry. Change of the page state must be done before the
+	 * PTE updates.
+	 */
+	if (clr & _PAGE_ENC)
+		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
+
 	/* Update PTE */
 	pte = *ptep;
 	pte = pte_set_flags(pte, set);
 	pte = pte_clear_flags(pte, clr);
 	set_pte(ptep, pte);
 
+	/*
+	 * If the encryption attribute is being set, then change the page state to
+	 * private in the RMP entry. The page state must be done after the PTE
+	 * is updated.
+	 */
+	if (set & _PAGE_ENC)
+		snp_set_page_private(pte_pfn(*ptep) << PAGE_SHIFT);
+
 	/* Flush TLB after changing encryption attribute */
 	write_cr3(top_level_pgt);
 
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 7badbeb6cb95..4f215d0c9f76 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -22,6 +22,7 @@
 #include <asm/svm.h>
 
 #include "error.h"
+#include "sev.h"
 
 struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
 struct ghcb *boot_ghcb;
@@ -136,6 +137,55 @@ static inline bool sev_snp_enabled(void)
 	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
 }
 
+static void snp_page_state_change(unsigned long paddr, int op)
+{
+	u64 val;
+
+	if (!sev_snp_enabled())
+		return;
+
+	/*
+	 * If the page is getting changed from private to shard then invalidate the page
+	 * before requesting the state change in the RMP table.
+	 */
+	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
+		goto e_pvalidate;
+
+	/* Issue VMGEXIT to change the page state in RMP table. */
+	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+	VMGEXIT();
+
+	/* Read the response of the VMGEXIT. */
+	val = sev_es_rd_ghcb_msr();
+	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
+		goto e_psc;
+
+	/*
+	 * Now that page is added in the RMP table, validate it so that it is consistent
+	 * with the RMP entry.
+	 */
+	if ((op == SNP_PAGE_STATE_PRIVATE) && pvalidate(paddr, RMP_PG_SIZE_4K, 1))
+		goto e_pvalidate;
+
+	return;
+
+e_psc:
+	sev_es_terminate(1, GHCB_TERM_PSC);
+
+e_pvalidate:
+	sev_es_terminate(1, GHCB_TERM_PVALIDATE);
+}
+
+void snp_set_page_private(unsigned long paddr)
+{
+	snp_page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+}
+
+void snp_set_page_shared(unsigned long paddr)
+{
+	snp_page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+}
+
 static bool early_setup_sev_es(void)
 {
 	if (!sev_es_negotiate_protocol())
diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
new file mode 100644
index 000000000000..a693eabc379b
--- /dev/null
+++ b/arch/x86/boot/compressed/sev.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AMD Secure Encrypted Virtualization
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.singh@amd.com>
+ */
+
+#ifndef BOOT_COMPRESSED_SEV_H
+#define BOOT_COMPRESSED_SEV_H
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
+void snp_set_page_private(unsigned long paddr);
+void snp_set_page_shared(unsigned long paddr);
+
+#else
+
+static inline void snp_set_page_private(unsigned long paddr) { }
+static inline void snp_set_page_shared(unsigned long paddr) { }
+
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+#endif /* BOOT_COMPRESSED_SEV_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (10 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-25 10:41   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 13/20] x86/sev: " Brijesh Singh
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The SEV-SNP guest is required to perform GHCB GPA registration. This is
because the hypervisor may prefer that a guest use a consistent and/or
specific GPA for the GHCB associated with a vCPU. For more information,
see the GHCB specification.

If hypervisor can not work with the guest provided GPA then terminate the
guest boot.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/boot/compressed/sev.c    |  4 ++++
 arch/x86/include/asm/sev-common.h | 12 ++++++++++++
 arch/x86/kernel/sev-shared.c      | 16 ++++++++++++++++
 3 files changed, 32 insertions(+)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 4f215d0c9f76..07b9529d7d95 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -206,6 +206,10 @@ static bool early_setup_sev_es(void)
 	/* Initialize lookup tables for the instruction decoder */
 	inat_init_tables();
 
+	/* SEV-SNP guest requires the GHCB GPA must be registered */
+	if (sev_snp_enabled())
+		snp_register_ghcb(__pa(&boot_ghcb_page));
+
 	return true;
 }
 
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 733fca403ae5..7487d4768ef0 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -88,6 +88,18 @@
 #define GHCB_MSR_PSC_RSVD_MASK		0xfffffULL
 #define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
 
+/* GHCB GPA Register */
+#define GHCB_MSR_GPA_REG_REQ		0x012
+#define GHCB_MSR_GPA_REG_VALUE_POS	12
+#define GHCB_MSR_GPA_REG_VALUE_MASK	0xfffffffffffffULL
+#define GHCB_MSR_GPA_REQ_VAL(v)		\
+		(((v) << GHCB_MSR_GPA_REG_VALUE_POS) | GHCB_MSR_GPA_REG_REQ)
+
+#define GHCB_MSR_GPA_REG_RESP		0x013
+#define GHCB_MSR_GPA_REG_RESP_VAL(v)	((v) >> GHCB_MSR_GPA_REG_VALUE_POS)
+#define GHCB_MSR_GPA_REG_ERROR		0xfffffffffffffULL
+#define GHCB_MSR_GPA_INVALID		~0ULL
+
 /* SNP Page State Change NAE event */
 #define VMGEXIT_PSC_MAX_ENTRY		253
 #define VMGEXIT_PSC_INVALID_HEADER	0x100000001
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 085d3d724bc8..140c5bc07fc2 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -81,6 +81,22 @@ static bool ghcb_get_hv_features(void)
 	return true;
 }
 
+static void snp_register_ghcb(unsigned long paddr)
+{
+	unsigned long pfn = paddr >> PAGE_SHIFT;
+	u64 val;
+
+	sev_es_wr_ghcb_msr(GHCB_MSR_GPA_REQ_VAL(pfn));
+	VMGEXIT();
+
+	val = sev_es_rd_ghcb_msr();
+
+	/* If the response GPA is not ours then abort the guest */
+	if ((GHCB_RESP_CODE(val) != GHCB_MSR_GPA_REG_RESP) ||
+	    (GHCB_MSR_GPA_REG_RESP_VAL(val) != pfn))
+		sev_es_terminate(1, GHCB_TERM_REGISTER);
+}
+
 static bool sev_es_negotiate_protocol(void)
 {
 	u64 val;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (11 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-25 11:11   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes Brijesh Singh
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The SEV-SNP guest is required to perform GHCB GPA registration. This is
because the hypervisor may prefer that a guest use a consistent and/or
specific GPA for the GHCB associated with a vCPU. For more information,
see the GHCB specification section GHCB GPA Registration.

During the boot, init_ghcb() allocates a per-cpu GHCB page. On very first
VC exception, the exception handler switch to using the per-cpu GHCB page
allocated during the init_ghcb(). The GHCB page must be registered in
the current vcpu context.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kernel/sev.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 8c8c939a1754..e6819f170ec4 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -88,6 +88,13 @@ struct sev_es_runtime_data {
 	 * is currently unsupported in SEV-ES guests.
 	 */
 	unsigned long dr7;
+
+	/*
+	 * SEV-SNP requires that the GHCB must be registered before using it.
+	 * The flag below will indicate whether the GHCB is registered, if its
+	 * not registered then sev_es_get_ghcb() will perform the registration.
+	 */
+	bool snp_ghcb_registered;
 };
 
 struct ghcb_state {
@@ -100,6 +107,9 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
+/* Defined in sev-shared.c */
+static void snp_register_ghcb(unsigned long paddr);
+
 static void __init setup_vc_stacks(int cpu)
 {
 	struct sev_es_runtime_data *data;
@@ -218,6 +228,12 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
 		data->ghcb_active = true;
 	}
 
+	/* SEV-SNP guest requires that GHCB must be registered before using it. */
+	if (sev_snp_active() && !data->snp_ghcb_registered) {
+		snp_register_ghcb(__pa(ghcb));
+		data->snp_ghcb_registered = true;
+	}
+
 	return ghcb;
 }
 
@@ -622,6 +638,10 @@ static bool __init sev_es_setup_ghcb(void)
 	/* Alright - Make the boot-ghcb public */
 	boot_ghcb = &boot_ghcb_page;
 
+	/* SEV-SNP guest requires that GHCB GPA must be registered */
+	if (sev_snp_active())
+		snp_register_ghcb(__pa(&boot_ghcb_page));
+
 	return true;
 }
 
@@ -711,6 +731,7 @@ static void __init init_ghcb(int cpu)
 
 	data->ghcb_active = false;
 	data->backup_ghcb_active = false;
+	data->snp_ghcb_registered = false;
 }
 
 void __init sev_es_init_vc_handling(void)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (12 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 13/20] x86/sev: " Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-26 10:39   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 15/20] x86/kernel: Make the bss.decrypted section shared in RMP table Brijesh Singh
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The early_set_memory_{encrypt,decrypt}() are used for changing the
page from decrypted (shared) to encrypted (private) and vice versa.
When SEV-SNP is active, the page state transition needs to go through
additional steps.

If the page is transitioned from shared to private, then perform the
following after the encryption attribute is set in the page table:

1. Issue the page state change VMGEXIT to add the page as a private
   in the RMP table.
2. Validate the page after its successfully added in the RMP table.

To maintain the security guarantees, if the page is transitioned from
private to shared, then perform the following before clearing the
encryption attribute from the page table.

1. Invalidate the page.
2. Issue the page state change VMGEXIT to make the page shared in the
   RMP table.

The early_set_memory_{encrypt,decrypt} can be called before the GHCB
is setup, use the SNP page state MSR protocol VMGEXIT defined in the GHCB
specification to request the page state change in the RMP table.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h | 12 ++++++
 arch/x86/kernel/sev.c      | 87 ++++++++++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c  | 43 ++++++++++++++++++-
 3 files changed, 140 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 48f911a229ba..62aba82acfb8 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -107,6 +107,10 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
 
 	return rc;
 }
+void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+		unsigned int npages);
+void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+		unsigned int npages);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -114,6 +118,14 @@ static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { ret
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
 static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
+static inline void __init
+early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages)
+{
+}
+static inline void __init
+early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages)
+{
+}
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index e6819f170ec4..33420f6da030 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -525,6 +525,93 @@ static u64 get_jump_table_addr(void)
 	return ret;
 }
 
+static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
+{
+	unsigned long vaddr_end;
+	int rc;
+
+	vaddr = vaddr & PAGE_MASK;
+	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+	while (vaddr < vaddr_end) {
+		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
+		if (rc) {
+			WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc);
+			goto e_fail;
+		}
+
+		vaddr = vaddr + PAGE_SIZE;
+	}
+
+	return;
+
+e_fail:
+	sev_es_terminate(1, GHCB_TERM_PVALIDATE);
+}
+
+static void __init early_snp_set_page_state(unsigned long paddr, unsigned int npages, int op)
+{
+	unsigned long paddr_end;
+	u64 val;
+
+	paddr = paddr & PAGE_MASK;
+	paddr_end = paddr + (npages << PAGE_SHIFT);
+
+	while (paddr < paddr_end) {
+		/*
+		 * Use the MSR protcol because this function can be called before the GHCB
+		 * is established.
+		 */
+		sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+		VMGEXIT();
+
+		val = sev_es_rd_ghcb_msr();
+
+		if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
+			goto e_term;
+
+		if (GHCB_MSR_PSC_RESP_VAL(val)) {
+			WARN(1, "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
+				op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared", paddr,
+				GHCB_MSR_PSC_RESP_VAL(val));
+			goto e_term;
+		}
+
+		paddr = paddr + PAGE_SIZE;
+	}
+
+	return;
+
+e_term:
+	sev_es_terminate(1, GHCB_TERM_PSC);
+}
+
+void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+					 unsigned int npages)
+{
+	if (!sev_snp_active())
+		return;
+
+	 /* Ask hypervisor to add the memory pages in RMP table as a 'private'. */
+	early_snp_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
+
+	/* Validate the memory pages after its added in the RMP table. */
+	pvalidate_pages(vaddr, npages, 1);
+}
+
+void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+					unsigned int npages)
+{
+	if (!sev_snp_active())
+		return;
+
+	/* Invalidate memory pages before making it shared in the RMP table. */
+	pvalidate_pages(vaddr, npages, 0);
+
+	 /* Ask hypervisor to make the memory pages shared in the RMP table. */
+	early_snp_set_page_state(paddr, npages, SNP_PAGE_STATE_SHARED);
+}
+
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 076d993acba3..f722518b244f 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -30,6 +30,7 @@
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
 #include <asm/cmdline.h>
+#include <asm/sev.h>
 
 #include "mm_internal.h"
 
@@ -50,6 +51,30 @@ bool sev_enabled __section(".data");
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
+/*
+ * When SNP is active, this routine changes the page state from private to
+ * shared before copying the data from the source to destination and restore
+ * after the copy. This is required because the source address is mapped as
+ * decrypted by the caller of the routine.
+ */
+static inline void __init snp_memcpy(void *dst, void *src, size_t sz, unsigned long paddr,
+				     bool dec)
+{
+	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+	if (dec) {
+		/* If the paddr needs to be accessed decrypted, make the page shared before memcpy. */
+		early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
+
+		memcpy(dst, src, sz);
+
+		/* Restore the page state after the memcpy. */
+		early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
+	} else {
+		memcpy(dst, src, sz);
+	}
+}
+
 /*
  * This routine does not change the underlying encryption setting of the
  * page(s) that map this memory. It assumes that eventually the memory is
@@ -98,8 +123,8 @@ static void __init __sme_early_enc_dec(resource_size_t paddr,
 		 * Use a temporary buffer, of cache-line multiple size, to
 		 * avoid data corruption as documented in the APM.
 		 */
-		memcpy(sme_early_buffer, src, len);
-		memcpy(dst, sme_early_buffer, len);
+		snp_memcpy(sme_early_buffer, src, len, paddr, enc);
+		snp_memcpy(dst, sme_early_buffer, len, paddr, !enc);
 
 		early_memunmap(dst, len);
 		early_memunmap(src, len);
@@ -279,9 +304,23 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	else
 		sme_early_decrypt(pa, size);
 
+	/*
+	 * If page is getting mapped decrypted in the page table, then the page state
+	 * change in the RMP table must happen before the page table updates.
+	 */
+	if (!enc)
+		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
+
 	/* Change the page encryption mask. */
 	new_pte = pfn_pte(pfn, new_prot);
 	set_pte_atomic(kpte, new_pte);
+
+	/*
+	 * If page is set encrypted in the page table, then update the RMP table to
+	 * add this page as private.
+	 */
+	if (enc)
+		early_snp_set_memory_private((unsigned long)__va(pa), pa, 1);
 }
 
 static int __init early_set_memory_enc_dec(unsigned long vaddr,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 15/20] x86/kernel: Make the bss.decrypted section shared in RMP table
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (13 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active Brijesh Singh
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The encryption attribute for the bss.decrypted region is cleared in the
initial page table build. This is because the section contains the data
that need to be shared between the guest and the hypervisor.

When SEV-SNP is active, just clearing the encryption attribute in the
page table is not enough. The page state need to updated in the RMP table.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kernel/head64.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..f4c3e632345a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -288,7 +288,14 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	if (mem_encrypt_active()) {
 		vaddr = (unsigned long)__start_bss_decrypted;
 		vaddr_end = (unsigned long)__end_bss_decrypted;
+
 		for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+			/*
+			 * When SEV-SNP is active then transition the page to shared in the RMP
+			 * table so that it is consistent with the page table attribute change.
+			 */
+			early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+
 			i = pmd_index(vaddr);
 			pmd[i] -= sme_get_me_mask();
 		}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (14 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 15/20] x86/kernel: Make the bss.decrypted section shared in RMP table Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-05-27 11:49   ` Borislav Petkov
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 17/20] x86/mm: Add support to validate memory when changing C-bit Brijesh Singh
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The probe_roms() access the memory range (0xc0000 - 0x10000) to probe
various ROMs. The memory range is not part of the E820 system RAM
range. The memory range is mapped as private (i.e encrypted) in page
table.

When SEV-SNP is active, all the private memory must be validated before
the access. The ROM range was not part of E820 map, so the guest BIOS
did not validate it. An access to invalidated memory will cause a VC
exception. The guest does not support handling not-validated VC exception
yet, so validate the ROM memory regions before it is accessed.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kernel/probe_roms.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c
index 9e1def3744f2..7638d9c8e1e8 100644
--- a/arch/x86/kernel/probe_roms.c
+++ b/arch/x86/kernel/probe_roms.c
@@ -21,6 +21,7 @@
 #include <asm/sections.h>
 #include <asm/io.h>
 #include <asm/setup_arch.h>
+#include <asm/sev.h>
 
 static struct resource system_rom_resource = {
 	.name	= "System ROM",
@@ -197,11 +198,21 @@ static int __init romchecksum(const unsigned char *rom, unsigned long length)
 
 void __init probe_roms(void)
 {
+	unsigned long start, length, upper, n;
 	const unsigned char *rom;
-	unsigned long start, length, upper;
 	unsigned char c;
 	int i;
 
+	/*
+	 * The ROM memory is not part of the E820 system RAM and is not pre-validated
+	 * by the BIOS. The kernel page table maps the ROM region as encrypted memory,
+	 * the SEV-SNP requires the encrypted memory must be validated before the
+	 * access. Validate the ROM before accessing it.
+	 */
+	n = ((system_rom_resource.end + 1) - video_rom_resource.start) >> PAGE_SHIFT;
+	early_snp_set_memory_private((unsigned long)__va(video_rom_resource.start),
+			video_rom_resource.start, n);
+
 	/* video rom */
 	upper = adapter_rom_resources[0].start;
 	for (start = video_rom_resource.start; start < upper; start += 2048) {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 17/20] x86/mm: Add support to validate memory when changing C-bit
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (15 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 18/20] x86/boot: Add Confidential Computing address to setup_header Brijesh Singh
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

The set_memory_{encrypt,decrypt}() are used for changing the pages
from decrypted (shared) to encrypted (private) and vice versa.
When SEV-SNP is active, the page state transition needs to go through
additional steps.

If the page is transitioned from shared to private, then perform the
following after the encryption attribute is set in the page table:

1. Issue the page state change VMGEXIT to add the memory region in
   the RMP table.
2. Validate the memory region after the RMP entry is added.

To maintain the security guarantees, if the page is transitioned from
private to shared, then perform the following before encryption attribute
is removed from the page table:

1. Invalidate the page.
2. Issue the page state change VMGEXIT to remove the page from RMP table.

To change the page state in the RMP table, use the Page State Change
VMGEXIT defined in the GHCB specification.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev.h   |   4 ++
 arch/x86/kernel/sev.c        | 114 +++++++++++++++++++++++++++++++++++
 arch/x86/mm/pat/set_memory.c |  15 +++++
 3 files changed, 133 insertions(+)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 62aba82acfb8..1b505061d9f7 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -111,6 +111,8 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 		unsigned int npages);
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
 		unsigned int npages);
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -126,6 +128,8 @@ static inline void __init
 early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages)
 {
 }
+static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
+static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 33420f6da030..f28fd8605e63 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -612,6 +612,120 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 	early_snp_set_page_state(paddr, npages, SNP_PAGE_STATE_SHARED);
 }
 
+static int snp_page_state_vmgexit(struct ghcb *ghcb, struct snp_page_state_change *data)
+{
+	struct snp_page_state_header *hdr;
+	int ret = 0;
+
+	hdr = &data->header;
+
+	/*
+	 * As per the GHCB specification, the hypervisor can resume the guest before
+	 * processing all the entries. The loop checks whether all the entries are
+	 * processed. If not, then keep retrying.
+	 */
+	while (hdr->cur_entry <= hdr->end_entry) {
+
+		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
+
+		ret = sev_es_ghcb_hv_call(ghcb, NULL, SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE, 0, 0);
+
+		/* Page State Change VMGEXIT can pass error code through exit_info_2. */
+		if (ret || ghcb->save.sw_exit_info_2) {
+			WARN(1, "SEV-SNP: page state change failed ret=%d exit_info_2=%llx\n",
+				ret, ghcb->save.sw_exit_info_2);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * The function uses the NAE event to batch the page state change request.
+ */
+static void snp_set_page_state(unsigned long vaddr, unsigned int npages, int op)
+{
+	struct snp_page_state_change *data;
+	struct snp_page_state_header *hdr;
+	struct snp_page_state_entry *e;
+	unsigned long vaddr_end;
+	struct ghcb_state state;
+	struct ghcb *ghcb;
+	int idx;
+
+	vaddr = vaddr & PAGE_MASK;
+	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+	ghcb = sev_es_get_ghcb(&state);
+	if (unlikely(!ghcb))
+		panic("SEV-SNP: Failed to get GHCB\n");
+
+	data = (struct snp_page_state_change *)ghcb->shared_buffer;
+	hdr = &data->header;
+
+	while (vaddr < vaddr_end) {
+		e = data->entry;
+		memset(data, 0, sizeof (*data));
+
+		for (idx = 0; idx < VMGEXIT_PSC_MAX_ENTRY; idx++, e++) {
+			unsigned long pfn;
+
+			if (is_vmalloc_addr((void *)vaddr))
+				pfn = vmalloc_to_pfn((void *)vaddr);
+			else
+				pfn = __pa(vaddr) >> PAGE_SHIFT;
+
+			e->gfn = pfn;
+			e->operation = op;
+			hdr->end_entry = idx;
+
+			/*
+			 * The GHCB specification provides the flexibility to use
+			 * either 4K or 2MB page size in the RMP table. The curent
+			 * SNP support does not keep track of the page size used
+			 * in the RMP table. To avoid the overlap request, use the
+			 * 4K page size in the RMP table.
+			 */
+			e->pagesize = RMP_PG_SIZE_4K;
+			vaddr = vaddr + PAGE_SIZE;
+
+			if (vaddr >= vaddr_end)
+				break;
+		}
+
+		/* Terminate the guest on page state change failure. */
+		if (snp_page_state_vmgexit(ghcb, data))
+			sev_es_terminate(1, GHCB_TERM_PSC);
+	}
+
+	sev_es_put_ghcb(&state);
+}
+
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+{
+	if (!sev_snp_active())
+		return;
+
+	/* Invalidate the memory before changing the page state in the RMP table. */
+	pvalidate_pages(vaddr, npages, 0);
+
+	/* Change the page state in the RMP table. */
+	snp_set_page_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
+}
+
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
+{
+	if (!sev_snp_active())
+		return;
+
+	/* Change the page state in the RMP table. */
+	snp_set_page_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+
+	/* Validate the memory after the memory is made private in the RMP table. */
+	pvalidate_pages(vaddr, npages, 1);
+}
+
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 427980617557..34cd13671d5c 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -27,6 +27,7 @@
 #include <asm/proto.h>
 #include <asm/memtype.h>
 #include <asm/set_memory.h>
+#include <asm/sev.h>
 
 #include "../mm_internal.h"
 
@@ -2001,8 +2002,22 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
 
+	/*
+	 * To maintain the security gurantees of SEV-SNP guest invalidate the memory
+	 * before clearing the encryption attribute.
+	 */
+	if (!enc)
+		snp_set_memory_shared(addr, numpages);
+
 	ret = __change_page_attr_set_clr(&cpa, 1);
 
+	/*
+	 * Now that memory is mapped encrypted in the page table, validate the memory
+	 * range before the return.
+	 */
+	if (!ret && enc)
+		snp_set_memory_private(addr, numpages);
+
 	/*
 	 * After changing the encryption attribute, we need to flush TLBs again
 	 * in case any speculative TLB caching occurred (but no need to flush
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 18/20] x86/boot: Add Confidential Computing address to setup_header
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (16 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 17/20] x86/mm: Add support to validate memory when changing C-bit Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 19/20] x86/sev: Register SNP guest request platform device Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 20/20] virt: Add SEV-SNP guest driver Brijesh Singh
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

While launching the encrypted guests, the hypervisor may need to provide
some additional information that will used during the guest boot. In the
case of AMD SEV-SNP the information includes the address of the secrets
and CPUID pages. The secrets page contains information such as a VM to
PSP communication key, and CPUID page contain PSP filtered CPUID values.

When booting under the EFI based BIOS, the EFI configuration table
contains an entry for the confidential computing blob. In order to support
booting encrypted guests on non EFI VM, the hypervisor need to pass these
additional information to the kernel with different method.

For this purpose expand the struct setup_header to hold the physical
address of the confidential computing blob location. Being zero means it
isn't passed.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 Documentation/x86/boot.rst            | 26 ++++++++++++++++++++++++++
 arch/x86/boot/header.S                |  7 ++++++-
 arch/x86/include/uapi/asm/bootparam.h |  1 +
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index fc844913dece..f7c6c3204b40 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -75,6 +75,8 @@ Protocol 2.14	BURNT BY INCORRECT COMMIT
 		DO NOT USE!!! ASSUME SAME AS 2.13.
 
 Protocol 2.15	(Kernel 5.5) Added the kernel_info and kernel_info.setup_type_max.
+
+Protocol 2.16	(Kernel 5.14) Added the confidential computing blob address
 =============	============================================================
 
 .. note::
@@ -226,6 +228,7 @@ Offset/Size	Proto		Name			Meaning
 0260/4		2.10+		init_size		Linear memory required during initialization
 0264/4		2.11+		handover_offset		Offset of handover entry point
 0268/4		2.15+		kernel_info_offset	Offset of the kernel_info
+026C/4		2.16+		cc_blob_address	        Physical address of the confidential computing blob
 ===========	========	=====================	============================================
 
 .. note::
@@ -1026,6 +1029,29 @@ Offset/size:	0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect structs.
 
+============	==================
+Field name:	cc_blob_address
+Type:		write (optional)
+Offset/size:	0x26C/4
+Protocol:	2.16+
+============	==================
+
+  This field can be set by the boot loader to tell the kernel the physical address
+  of the confidential computing blob info.
+
+  A value of 0 indicates that either the blob is not provided or use the EFI configuration
+  table to retrieve the blob location.
+
+  In the case of AMD SEV the blob look like this::
+
+  struct cc_blob_sev_info {
+        u32 magic;      /* 0x414d4445 (AMDE) */
+        u16 version;
+        u32 secrets_phys; /* pointer to secrets page */
+        u32 secrets_len;
+        u64 cpuid_phys;   /* 32-bit pointer to cpuid page */
+        u32 cpuid_len;
+  }
 
 The Image Checksum
 ==================
diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
index 6dbd7e9f74c9..b4a014a18f91 100644
--- a/arch/x86/boot/header.S
+++ b/arch/x86/boot/header.S
@@ -303,7 +303,7 @@ _start:
 	# Part 2 of the header, from the old setup.S
 
 		.ascii	"HdrS"		# header signature
-		.word	0x020f		# header version number (>= 0x0105)
+		.word	0x0210		# header version number (>= 0x0105)
 					# or else old loadlin-1.5 will fail)
 		.globl realmode_swtch
 realmode_swtch:	.word	0, 0		# default_switch, SETUPSEG
@@ -577,6 +577,11 @@ pref_address:		.quad LOAD_PHYSICAL_ADDR	# preferred load addr
 init_size:		.long INIT_SIZE		# kernel initialization size
 handover_offset:	.long 0			# Filled in by build.c
 kernel_info_offset:	.long 0			# Filled in by build.c
+cc_blob_address:	.long 0			# (Header version 0x210 or later)
+						# If nonzero, a 32-bit pointer to
+						# the confidential computing blob.
+						# The blob will contain the information
+						# used while booting the encrypted VM.
 
 # End of setup header #####################################################
 
diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
index b25d3f82c2f3..210e1a0fb4ce 100644
--- a/arch/x86/include/uapi/asm/bootparam.h
+++ b/arch/x86/include/uapi/asm/bootparam.h
@@ -102,6 +102,7 @@ struct setup_header {
 	__u32	init_size;
 	__u32	handover_offset;
 	__u32	kernel_info_offset;
+	__u32	cc_blob_address;
 } __attribute__((packed));
 
 struct sys_desc_table {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 19/20] x86/sev: Register SNP guest request platform device
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (17 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 18/20] x86/boot: Add Confidential Computing address to setup_header Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 20/20] virt: Add SEV-SNP guest driver Brijesh Singh
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

Version 2 of GHCB specification provides NAEs that can be used by the SNP
guest to communicate with the PSP without risk from a malicious hypervisor
who wishes to read, alter, drop or replay the messages sent.

The hypervisor uses the SNP_GUEST_REQUEST command interface provided by
the SEV-SNP firmware to forward the guest messages to the PSP.

In order to communicate with the PSP, the guest need to locate the secrets
page inserted by the hypervisor during the SEV-SNP guest launch. The
secrets page contains the communication keys used to send and receive the
encrypted messages between the guest and the PSP.

The secrets page is located either through the setup_data cc_blob_address
or EFI configuration table.

Create a platform device that the SNP guest driver will bind to get the
platform resources. The SNP guest driver will provide interface to get
the attestation report, key derivation etc.

See SEV-SNP and GHCB spec for more details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kernel/sev.c       | 124 ++++++++++++++++++++++++++++++++++++
 arch/x86/platform/efi/efi.c |   2 +
 include/linux/efi.h         |   1 +
 include/linux/snp-guest.h   | 124 ++++++++++++++++++++++++++++++++++++
 4 files changed, 251 insertions(+)
 create mode 100644 include/linux/snp-guest.h

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index f28fd8605e63..6fd1f82f473a 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -9,6 +9,7 @@
 
 #define pr_fmt(fmt)	"SEV-ES: " fmt
 
+#include <linux/platform_device.h>
 #include <linux/sched/debug.h>	/* For show_regs() */
 #include <linux/percpu-defs.h>
 #include <linux/mem_encrypt.h>
@@ -16,9 +17,12 @@
 #include <linux/printk.h>
 #include <linux/mm_types.h>
 #include <linux/set_memory.h>
+#include <linux/snp-guest.h>
 #include <linux/memblock.h>
 #include <linux/kernel.h>
+#include <linux/efi.h>
 #include <linux/mm.h>
+#include <linux/io.h>
 
 #include <asm/cpu_entry_area.h>
 #include <asm/stacktrace.h>
@@ -31,6 +35,7 @@
 #include <asm/svm.h>
 #include <asm/smp.h>
 #include <asm/cpu.h>
+#include <asm/setup.h>		/* For struct boot_params */
 
 #define DR7_RESET_VALUE        0x400
 
@@ -101,6 +106,21 @@ struct ghcb_state {
 	struct ghcb *ghcb;
 };
 
+/* Confidential computing blob information */
+#define CC_BLOB_SEV_HDR_MAGIC	0x414d4445
+struct cc_blob_sev_info {
+	u32 magic;
+	u16 version;
+	u64 secrets_phys;
+	u32 secrets_len;
+	u64 cpuid_phys;
+	u32 cpuid_len;
+};
+
+#ifdef CONFIG_EFI
+extern unsigned long cc_blob_phys;
+#endif
+
 static DEFINE_PER_CPU(struct sev_es_runtime_data*, runtime_data);
 DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 
@@ -1685,3 +1705,107 @@ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
 	while (true)
 		halt();
 }
+
+static struct resource guest_req_res[0];
+static struct platform_device guest_req_device = {
+	.name		= "snp-guest",
+	.id		= -1,
+	.resource	= guest_req_res,
+	.num_resources	= 1,
+};
+
+static int get_snp_secrets_resource(struct resource *res)
+{
+	struct setup_header *hdr = &boot_params.hdr;
+	struct cc_blob_sev_info *info;
+	unsigned long paddr;
+	int ret = -ENODEV;
+
+	/*
+	 * The secret page contains the VM encryption key used for encrypting the
+	 * messages between the guest and the PSP. The secrets page location is
+	 * available either through the setup_data or EFI configuration table.
+	 */
+	if (hdr->cc_blob_address) {
+		paddr = hdr->cc_blob_address;
+	} else if (efi_enabled(EFI_CONFIG_TABLES)) {
+#ifdef CONFIG_EFI
+		paddr = cc_blob_phys;
+#else
+		return -ENODEV;
+#endif
+	} else {
+		return -ENODEV;
+	}
+
+	info = memremap(paddr, sizeof(*info), MEMREMAP_WB);
+	if (!info)
+		return -ENOMEM;
+
+	if (info->magic == CC_BLOB_SEV_HDR_MAGIC && info->secrets_phys && info->secrets_len) {
+		res->start = info->secrets_phys;
+		res->end = info->secrets_phys + info->secrets_len;
+		res->flags = IORESOURCE_MEM;
+		ret = 0;
+	}
+
+	memunmap(info);
+	return ret;
+}
+
+static int __init add_snp_guest_request(void)
+{
+	if (!sev_snp_active())
+		return -ENODEV;
+
+	if (get_snp_secrets_resource(&guest_req_res[0]))
+		return -ENODEV;
+
+	platform_device_register(&guest_req_device);
+	dev_info(&guest_req_device.dev, "registered [secret 0x%016llx - 0x%016llx]\n",
+		guest_req_res[0].start, guest_req_res[0].end);
+
+	return 0;
+}
+device_initcall(add_snp_guest_request);
+
+unsigned long snp_issue_guest_request(int type, struct snp_guest_request_data *input)
+{
+	struct ghcb_state state;
+	struct ghcb *ghcb;
+	unsigned long id;
+	int ret;
+
+	if (type == SNP_GUEST_REQUEST)
+		id = SVM_VMGEXIT_SNP_GUEST_REQUEST;
+	else if (type == SNP_EXTENDED_GUEST_REQUEST)
+		id = SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST;
+	else
+		return -EINVAL;
+
+	if (!sev_snp_active())
+		return -ENODEV;
+
+	ghcb = sev_es_get_ghcb(&state);
+	if (!ghcb)
+		return -ENODEV;
+
+	vc_ghcb_invalidate(ghcb);
+	ghcb_set_rax(ghcb, input->data_gpa);
+	ghcb_set_rbx(ghcb, input->data_npages);
+
+	ret = sev_es_ghcb_hv_call(ghcb, NULL, id, input->req_gpa, input->resp_gpa);
+
+	if (ret)
+		goto e_put;
+
+	if (ghcb->save.sw_exit_info_2) {
+		ret = ghcb->save.sw_exit_info_2;
+		goto e_put;
+	}
+
+e_put:
+	sev_es_put_ghcb(&state);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(snp_issue_guest_request);
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 8a26e705cb06..2cca9ee6e1d4 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -57,6 +57,7 @@ static unsigned long efi_systab_phys __initdata;
 static unsigned long prop_phys = EFI_INVALID_TABLE_ADDR;
 static unsigned long uga_phys = EFI_INVALID_TABLE_ADDR;
 static unsigned long efi_runtime, efi_nr_tables;
+unsigned long cc_blob_phys;
 
 unsigned long efi_fw_vendor, efi_config_table;
 
@@ -66,6 +67,7 @@ static const efi_config_table_type_t arch_tables[] __initconst = {
 #ifdef CONFIG_X86_UV
 	{UV_SYSTEM_TABLE_GUID,		&uv_systab_phys,	"UVsystab"	},
 #endif
+	{EFI_CC_BLOB_GUID,		&cc_blob_phys,		"CC blob"	},
 	{},
 };
 
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 6b5d36babfcc..75aeb2a56888 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -344,6 +344,7 @@ void efi_native_runtime_setup(void);
 #define EFI_CERT_SHA256_GUID			EFI_GUID(0xc1c41626, 0x504c, 0x4092, 0xac, 0xa9, 0x41, 0xf9, 0x36, 0x93, 0x43, 0x28)
 #define EFI_CERT_X509_GUID			EFI_GUID(0xa5c059a1, 0x94e4, 0x4aa7, 0x87, 0xb5, 0xab, 0x15, 0x5c, 0x2b, 0xf0, 0x72)
 #define EFI_CERT_X509_SHA256_GUID		EFI_GUID(0x3bd2a492, 0x96c0, 0x4079, 0xb4, 0x20, 0xfc, 0xf9, 0x8e, 0xf1, 0x03, 0xed)
+#define EFI_CC_BLOB_GUID			EFI_GUID(0x067b1f5f, 0xcf26, 0x44c5, 0x85, 0x54, 0x93, 0xd7, 0x77, 0x91, 0x2d, 0x42)
 
 /*
  * This GUID is used to pass to the kernel proper the struct screen_info
diff --git a/include/linux/snp-guest.h b/include/linux/snp-guest.h
new file mode 100644
index 000000000000..32f70605b52c
--- /dev/null
+++ b/include/linux/snp-guest.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * AMD Secure Encrypted Virtualization (SEV) driver interface
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.singh@amd.com>
+ *
+ * SEV API spec is available at https://developer.amd.com/sev
+ */
+
+#ifndef __LINUX_SNP_GUEST_H_
+#define __LINUX_SNP_GUEST_H_
+
+#include <linux/types.h>
+
+#define MAX_AUTHTAG_LEN		32
+#define VMPCK_KEY_LEN		32
+
+/*
+ * The secrets page contains 96-bytes of reserved field that can be used by
+ * the guest OS. The guest OS uses the area to save the message sequence
+ * number for each VMPL level.
+ */
+struct secrets_guest_priv {
+	u64 msg_seqno_0;
+	u64 msg_seqno_1;
+	u64 msg_seqno_2;
+	u64 msg_seqno_3;
+	u8 rsvd[64];
+} __packed;
+
+/* See the SNP spec secrets page layout section for the structure */
+struct snp_data_secrets_layout {
+	u32 version;
+	u32 imiEn:1;
+	u32 rsvd1:31;
+	u32 fms;
+	u32 rsvd2;
+	u8 gosvw[16];
+	u8 vmpck0[VMPCK_KEY_LEN];
+	u8 vmpck1[VMPCK_KEY_LEN];
+	u8 vmpck2[VMPCK_KEY_LEN];
+	u8 vmpck3[VMPCK_KEY_LEN];
+	struct secrets_guest_priv guest_priv;
+	u8 rsvd3[3840];
+} __packed;
+
+/* See SNP spec SNP_GUEST_REQUEST section for the structure */
+enum msg_type {
+	SNP_MSG_TYPE_INVALID = 0,
+	SNP_MSG_CPUID_REQ,
+	SNP_MSG_CPUID_RSP,
+	SNP_MSG_KEY_REQ,
+	SNP_MSG_KEY_RSP,
+	SNP_MSG_REPORT_REQ,
+	SNP_MSG_REPORT_RSP,
+	SNP_MSG_EXPORT_REQ,
+	SNP_MSG_EXPORT_RSP,
+	SNP_MSG_IMPORT_REQ,
+	SNP_MSG_IMPORT_RSP,
+	SNP_MSG_ABSORB_REQ,
+	SNP_MSG_ABSORB_RSP,
+	SNP_MSG_VMRK_REQ,
+	SNP_MSG_VMRK_RSP,
+
+	SNP_MSG_TYPE_MAX
+};
+
+enum aead_algo {
+	SNP_AEAD_INVALID,
+	SNP_AEAD_AES_256_GCM,
+};
+
+struct snp_guest_msg_hdr {
+	u8 authtag[MAX_AUTHTAG_LEN];
+	u64 msg_seqno;
+	u8 rsvd1[8];
+	u8 algo;
+	u8 hdr_version;
+	u16 hdr_sz;
+	u8 msg_type;
+	u8 msg_version;
+	u16 msg_sz;
+	u32 rsvd2;
+	u8 msg_vmpck;
+	u8 rsvd3[35];
+} __packed;
+
+struct snp_guest_msg {
+	struct snp_guest_msg_hdr hdr;
+	u8 payload[4000];
+} __packed;
+
+struct snp_msg_report_req {
+	u8 data[64];
+	u32 vmpl;
+	u8 rsvd[28];
+} __packed;
+
+enum vmgexit_type {
+	SNP_GUEST_REQUEST,
+	SNP_EXTENDED_GUEST_REQUEST,
+
+	GUEST_REQUEST_MAX
+};
+
+struct snp_guest_request_data {
+	unsigned long req_gpa;
+	unsigned long resp_gpa;
+	unsigned long data_gpa;
+	unsigned int data_npages;
+};
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+unsigned long snp_issue_guest_request(int vmgexit_type, struct snp_guest_request_data *input);
+#else
+
+static inline unsigned long snp_issue_guest_request(int type, struct snp_guest_request_data *input)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+#endif /* __LINUX_SNP_GUEST_H__ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* [PATCH Part1 RFC v2 20/20] virt: Add SEV-SNP guest driver
  2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
                   ` (18 preceding siblings ...)
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 19/20] x86/sev: Register SNP guest request platform device Brijesh Singh
@ 2021-04-30 12:16 ` Brijesh Singh
  19 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 12:16 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: tglx, bp, jroedel, thomas.lendacky, pbonzini, mingo, dave.hansen,
	rientjes, seanjc, peterz, hpa, tony.luck, Brijesh Singh

SEV-SNP specification provides the guest a mechanisum to communicate with
the PSP without risk from a malicious hypervisor who wishes to read, alter,
drop or replay the messages sent. The driver uses snp_issue_guest_request()
to issue GHCB SNP_GUEST_REQUEST NAE event. This command constructs a
trusted channel between the guest and the PSP firmware.

The userspace can use the following ioctls provided by the driver:

1. Request an attestation report that can be used to assume the identity
   and security configuration of the guest.
2. Ask the firmware to provide a key derived from a root key.

See SEV-SNP spec section Guest Messages for more details.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 drivers/virt/Kconfig               |   3 +
 drivers/virt/Makefile              |   1 +
 drivers/virt/snp-guest/Kconfig     |  10 +
 drivers/virt/snp-guest/Makefile    |   2 +
 drivers/virt/snp-guest/snp-guest.c | 455 +++++++++++++++++++++++++++++
 include/uapi/linux/snp-guest.h     |  50 ++++
 6 files changed, 521 insertions(+)
 create mode 100644 drivers/virt/snp-guest/Kconfig
 create mode 100644 drivers/virt/snp-guest/Makefile
 create mode 100644 drivers/virt/snp-guest/snp-guest.c
 create mode 100644 include/uapi/linux/snp-guest.h

diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
index 8061e8ef449f..4dec783499ed 100644
--- a/drivers/virt/Kconfig
+++ b/drivers/virt/Kconfig
@@ -36,4 +36,7 @@ source "drivers/virt/vboxguest/Kconfig"
 source "drivers/virt/nitro_enclaves/Kconfig"
 
 source "drivers/virt/acrn/Kconfig"
+
+source "drivers/virt/snp-guest/Kconfig"
+
 endif
diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
index 3e272ea60cd9..8accff502305 100644
--- a/drivers/virt/Makefile
+++ b/drivers/virt/Makefile
@@ -8,3 +8,4 @@ obj-y				+= vboxguest/
 
 obj-$(CONFIG_NITRO_ENCLAVES)	+= nitro_enclaves/
 obj-$(CONFIG_ACRN_HSM)		+= acrn/
+obj-$(CONFIG_SNP_GUEST)		+= snp-guest/
diff --git a/drivers/virt/snp-guest/Kconfig b/drivers/virt/snp-guest/Kconfig
new file mode 100644
index 000000000000..321a9ff7f869
--- /dev/null
+++ b/drivers/virt/snp-guest/Kconfig
@@ -0,0 +1,10 @@
+config SNP_GUEST
+	tristate "AMD SEV-SNP Guest request driver"
+	default y
+	depends on AMD_MEM_ENCRYPT
+	help
+	  Provides AMD SNP guest request driver. The driver can be used by the
+	  guest to communicate with the hypervisor to request the attestation report
+	  and more.
+
+	  If you choose 'M' here, this module will be called snp-guest.
diff --git a/drivers/virt/snp-guest/Makefile b/drivers/virt/snp-guest/Makefile
new file mode 100644
index 000000000000..f0866b9590aa
--- /dev/null
+++ b/drivers/virt/snp-guest/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_SNP_GUEST) += snp-guest.o
diff --git a/drivers/virt/snp-guest/snp-guest.c b/drivers/virt/snp-guest/snp-guest.c
new file mode 100644
index 000000000000..7798705f822a
--- /dev/null
+++ b/drivers/virt/snp-guest/snp-guest.c
@@ -0,0 +1,455 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Secure Encrypted Virtualization Nested Paging (SEV-SNP) guest request interface
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.singh@amd.com>
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/io.h>
+#include <linux/snp-guest.h>
+#include <linux/platform_device.h>
+#include <linux/miscdevice.h>
+#include <linux/set_memory.h>
+#include <linux/fs.h>
+#include <crypto/aead.h>
+#include <linux/scatterlist.h>
+#include <uapi/linux/snp-guest.h>
+
+#define DEVICE_NAME	"snp-guest"
+#define AAD_LEN		48
+#define MSG_HDR_VER	1
+
+struct snp_guest_crypto {
+	struct crypto_aead *tfm;
+	uint8_t *iv, *authtag;
+	int iv_len, a_len;
+};
+
+struct snp_guest_dev {
+	struct device *dev;
+	struct miscdevice misc;
+
+	void __iomem *base;
+	struct snp_data_secrets_layout *secrets;
+	struct snp_guest_crypto *crypto;
+	struct snp_guest_msg *request, *response;
+};
+
+static DEFINE_MUTEX(snp_cmd_mutex);
+
+static inline struct snp_guest_dev *to_snp_dev(struct file *file)
+{
+	struct miscdevice *dev = file->private_data;
+
+	return container_of(dev, struct snp_guest_dev, misc);
+}
+
+static void free_shared_pages(void *buf, size_t sz)
+{
+	unsigned npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+	/* If fail to restore the encryption mask then leak it. */
+	if (set_memory_encrypted((unsigned long)buf, npages))
+		return;
+
+	__free_pages(virt_to_page(buf), get_order(sz));
+}
+
+static void *alloc_shared_pages(size_t sz)
+{
+	unsigned npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+	struct page *page;
+	int ret;
+
+	page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(sz));
+	if (IS_ERR(page))
+		return NULL;
+
+	ret = set_memory_decrypted((unsigned long)page_address(page), npages);
+	if (ret) {
+		__free_pages(page, get_order(sz));
+		return NULL;
+	}
+
+	return page_address(page);
+}
+
+static struct snp_guest_crypto *init_crypto(struct snp_guest_dev *snp_dev)
+{
+	struct snp_data_secrets_layout *secrets = snp_dev->secrets;
+	struct snp_guest_crypto *crypto;
+
+	crypto = kzalloc(sizeof(*crypto), GFP_KERNEL_ACCOUNT);
+	if (!crypto)
+		return NULL;
+
+	crypto->tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+	if (IS_ERR(crypto->tfm))
+		goto e_free;
+
+	if (crypto_aead_setkey(crypto->tfm, secrets->vmpck0, 32))
+		goto e_free_crypto;
+
+	crypto->iv_len = crypto_aead_ivsize(crypto->tfm);
+	if (crypto->iv_len < 12) {
+		dev_err(snp_dev->dev, "IV length is less than 12.\n");
+		goto e_free_crypto;
+	}
+
+	crypto->iv = kmalloc(crypto->iv_len, GFP_KERNEL_ACCOUNT);
+	if (!crypto->iv)
+		goto e_free_crypto;
+
+	if (crypto_aead_authsize(crypto->tfm) > MAX_AUTHTAG_LEN) {
+		if (crypto_aead_setauthsize(crypto->tfm, MAX_AUTHTAG_LEN)) {
+			dev_err(snp_dev->dev, "failed to set authsize to %d\n", MAX_AUTHTAG_LEN);
+			goto e_free_crypto;
+		}
+	}
+
+	crypto->a_len = crypto_aead_authsize(crypto->tfm);
+	crypto->authtag = kmalloc(crypto->a_len, GFP_KERNEL_ACCOUNT);
+	if (!crypto->authtag)
+		goto e_free_crypto;
+
+	return crypto;
+
+e_free_crypto:
+	crypto_free_aead(crypto->tfm);
+e_free:
+	kfree(crypto->iv);
+	kfree(crypto->authtag);
+	kfree(crypto);
+
+	return NULL;
+}
+
+static void deinit_crypto(struct snp_guest_crypto *crypto)
+{
+	crypto_free_aead(crypto->tfm);
+	kfree(crypto->iv);
+	kfree(crypto->authtag);
+	kfree(crypto);
+}
+
+static int enc_dec_message(struct snp_guest_crypto *crypto, struct snp_guest_msg *msg,
+			   uint8_t *src_buf, uint8_t *dst_buf, size_t len, bool enc)
+{
+	struct snp_guest_msg_hdr *hdr = &msg->hdr;
+	struct scatterlist src[3], dst[3];
+	DECLARE_CRYPTO_WAIT(wait);
+	struct aead_request *req;
+	int ret;
+
+	req = aead_request_alloc(crypto->tfm, GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	/*
+	 * AEAD memory operations:
+	 * +------ AAD -------+------- DATA -----+---- AUTHTAG----+
+	 * |  msg header      |  plaintext       |  hdr->authtag  |
+	 * | bytes 30h - 5Fh  |    or            |                |
+	 * |                  |   cipher         |                |
+	 * +------------------+------------------+----------------+
+	 */
+	sg_init_table(src, 3);
+	sg_set_buf(&src[0], &hdr->algo, AAD_LEN);
+	sg_set_buf(&src[1], src_buf, hdr->msg_sz);
+	sg_set_buf(&src[2], hdr->authtag, crypto->a_len);
+
+	sg_init_table(dst, 3);
+	sg_set_buf(&dst[0], &hdr->algo, AAD_LEN);
+	sg_set_buf(&dst[1], dst_buf, hdr->msg_sz);
+	sg_set_buf(&dst[2], hdr->authtag, crypto->a_len);
+
+	aead_request_set_ad(req, AAD_LEN);
+	aead_request_set_tfm(req, crypto->tfm);
+	aead_request_set_callback(req, 0, crypto_req_done, &wait);
+
+	aead_request_set_crypt(req, src, dst, len, crypto->iv);
+	ret = crypto_wait_req(enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req), &wait);
+
+	aead_request_free(req);
+	return ret;
+}
+
+static int encrypt_payload(struct snp_guest_dev *snp_dev, struct snp_guest_msg *msg,
+			   void *plaintext, size_t len)
+{
+	struct snp_guest_crypto *crypto = snp_dev->crypto;
+	struct snp_guest_msg_hdr *hdr = &msg->hdr;
+
+	memset(crypto->iv, 0, crypto->iv_len);
+	memcpy(crypto->iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
+
+	return enc_dec_message(crypto, msg, plaintext, msg->payload, len, true);
+}
+
+static int decrypt_payload(struct snp_guest_dev *snp_dev, struct snp_guest_msg *msg,
+			   void *plaintext, size_t len)
+{
+	struct snp_guest_crypto *crypto = snp_dev->crypto;
+	struct snp_guest_msg_hdr *hdr = &msg->hdr;
+
+	/* Build IV with response buffer sequence number */
+	memset(crypto->iv, 0, crypto->iv_len);
+	memcpy(crypto->iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
+
+	return enc_dec_message(crypto, msg, msg->payload, plaintext, len, false);
+}
+
+static int __handle_guest_request(struct snp_guest_dev *snp_dev, int msg_type,
+				 struct snp_user_guest_request *input, uint8_t *req_buf,
+				 size_t req_sz, uint8_t *resp_buf, size_t resp_sz, size_t *msg_sz)
+{
+	struct secrets_guest_priv *priv = &snp_dev->secrets->guest_priv;
+	struct snp_guest_msg *response = snp_dev->response;
+	struct snp_guest_msg_hdr *resp_hdr = &response->hdr;
+	struct snp_guest_msg *request = snp_dev->request;
+	struct snp_guest_msg_hdr *req_hdr = &request->hdr;
+	struct snp_guest_crypto *crypto = snp_dev->crypto;
+	struct snp_guest_request_data data;
+	int ret;
+
+	/* The sequence number must begin with non-zero value */
+	if (!priv->msg_seqno_0)
+		priv->msg_seqno_0 = 1;
+
+	/* Populate the request header */
+	memset(req_hdr, 0, sizeof(*req_hdr));
+	req_hdr->algo = SNP_AEAD_AES_256_GCM;
+	req_hdr->hdr_version = MSG_HDR_VER;
+	req_hdr->hdr_sz = sizeof(*req_hdr);
+	req_hdr->msg_type = msg_type;
+	req_hdr->msg_version = input->msg_version;
+	req_hdr->msg_seqno = priv->msg_seqno_0;
+	req_hdr->msg_vmpck = 0;
+	req_hdr->msg_sz = req_sz;
+
+	dev_dbg(snp_dev->dev, "request [msg_seqno %lld msg_type %d msg_version %d msg_sz %d]\n",
+		req_hdr->msg_seqno, req_hdr->msg_type, req_hdr->msg_version, req_hdr->msg_sz);
+
+	/* Encrypt the request message buffer */
+	ret = encrypt_payload(snp_dev, request, req_buf, req_sz);
+	if (ret)
+		return ret;
+
+	/* Call firmware to process the request */
+	data.req_gpa = __pa(request);
+	data.resp_gpa = __pa(response);
+	ret = snp_issue_guest_request(SNP_GUEST_REQUEST, &data);
+	input->fw_err = ret;
+	if (ret)
+		return ret;
+
+	dev_dbg(snp_dev->dev, "response [msg_seqno %lld msg_type %d msg_version %d msg_sz %d]\n",
+		resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version, resp_hdr->msg_sz);
+
+	/* Verify that the sequence counter is incremented by 1 */
+	if (resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1))
+		return -EBADMSG;
+
+	/* Save the message counter for the next request */
+	priv->msg_seqno_0 = resp_hdr->msg_seqno + 1;
+
+	/* Verify response message type and version */
+	if ((resp_hdr->msg_type != (req_hdr->msg_type + 1)) ||
+	    (resp_hdr->msg_version != req_hdr->msg_version))
+		return -EBADMSG;
+
+	/* If the message size is greather than our buffer length then return an error. */
+	if (unlikely((resp_hdr->msg_sz + crypto->a_len) > resp_sz))
+		return -EBADMSG;
+
+	/* Decrypt the payload */
+	ret = decrypt_payload(snp_dev, response, resp_buf, resp_hdr->msg_sz + crypto->a_len);
+	if (ret)
+		return ret;
+
+	*msg_sz = resp_hdr->msg_sz;
+	return 0;
+}
+
+static int handle_guest_request(struct snp_guest_dev *snp_dev, int msg_type,
+				struct snp_user_guest_request *input, void *req_buf,
+				size_t req_len, void __user *resp_buf, size_t resp_len)
+{
+	struct snp_guest_crypto *crypto = snp_dev->crypto;
+	struct page *page;
+	size_t msg_len;
+	int ret;
+
+	/* Allocate the buffer to hold response */
+	resp_len += crypto->a_len;
+	page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(resp_len));
+	if (!page)
+		return -ENOMEM;
+
+	ret = __handle_guest_request(snp_dev, msg_type, input, req_buf, req_len,
+			page_address(page), resp_len, &msg_len);
+	if (ret)
+		goto e_free;
+
+	if (copy_to_user(resp_buf, page_address(page), msg_len))
+		ret = -EFAULT;
+
+e_free:
+	__free_pages(page, get_order(resp_len));
+
+	return ret;
+}
+
+static int get_report(struct snp_guest_dev *snp_dev, struct snp_user_guest_request *input)
+{
+	struct snp_user_report __user *report = (struct snp_user_report *)input->data;
+	struct snp_user_report_req req;
+
+	if (copy_from_user(&req, &report->req, sizeof(req)))
+		return -EFAULT;
+
+	return handle_guest_request(snp_dev, SNP_MSG_REPORT_REQ, input, &req.user_data,
+			sizeof(req.user_data), report->response, sizeof(report->response));
+}
+
+static int derive_key(struct snp_guest_dev *snp_dev, struct snp_user_guest_request *input)
+{
+	struct snp_user_derive_key __user *key = (struct snp_user_derive_key *)input->data;
+	struct snp_user_derive_key_req req;
+
+	if (copy_from_user(&req, &key->req, sizeof(req)))
+		return -EFAULT;
+
+	return handle_guest_request(snp_dev, SNP_MSG_KEY_REQ, input, &req, sizeof(req),
+			key->response, sizeof(key->response));
+}
+
+static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long arg)
+{
+	struct snp_guest_dev *snp_dev = to_snp_dev(file);
+	struct snp_user_guest_request input;
+	void __user *argp = (void __user *)arg;
+	int ret = -ENOTTY;
+
+	if (copy_from_user(&input, argp, sizeof(input)))
+		return -EFAULT;
+
+	mutex_lock(&snp_cmd_mutex);
+	switch(ioctl) {
+	case SNP_GET_REPORT: {
+		ret = get_report(snp_dev, &input);
+		break;
+	}
+	case SNP_DERIVE_KEY: {
+		ret = derive_key(snp_dev, &input);
+		break;
+	}
+	default: break;
+	}
+
+	mutex_unlock(&snp_cmd_mutex);
+
+	if (copy_to_user(argp, &input, sizeof(input)))
+		return -EFAULT;
+
+	return ret;
+}
+
+static const struct file_operations snp_guest_fops = {
+	.owner	= THIS_MODULE,
+	.unlocked_ioctl = snp_guest_ioctl,
+};
+
+static int __init snp_guest_probe(struct platform_device *pdev)
+{
+	struct snp_guest_dev *snp_dev;
+	struct device *dev = &pdev->dev;
+	struct miscdevice *misc;
+	struct resource *res;
+	int ret;
+
+	snp_dev = devm_kzalloc(&pdev->dev, sizeof(struct snp_guest_dev), GFP_KERNEL);
+	if (!snp_dev)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, snp_dev);
+	snp_dev->dev = dev;
+
+	res = platform_get_mem_or_io(pdev, 0);
+	if (IS_ERR(res))
+		return PTR_ERR(res);
+
+	if (unlikely(resource_type(res) != IORESOURCE_MEM))
+		return -EINVAL;
+
+	snp_dev->base = memremap(res->start, resource_size(res), MEMREMAP_WB);
+	if (IS_ERR(snp_dev->base))
+		return PTR_ERR(snp_dev->base);
+
+	snp_dev->secrets = (struct snp_data_secrets_layout *)snp_dev->base;
+
+	snp_dev->crypto = init_crypto(snp_dev);
+	if (!snp_dev->crypto) {
+		ret = -EINVAL;
+		goto e_unmap;
+	}
+
+	/* Allocate the shared page used for the request and response message. */
+	snp_dev->request = alloc_shared_pages(sizeof(struct snp_guest_msg));
+	if (IS_ERR(snp_dev->request)) {
+		ret = PTR_ERR(snp_dev->request);
+		goto e_unmap;
+	}
+
+	snp_dev->response = alloc_shared_pages(sizeof(struct snp_guest_msg));
+	if (IS_ERR(snp_dev->response)) {
+		ret = PTR_ERR(snp_dev->response);
+		goto e_free_req;
+	}
+
+	misc = &snp_dev->misc;
+	misc->minor = MISC_DYNAMIC_MINOR;
+	misc->name = DEVICE_NAME;
+	misc->fops = &snp_guest_fops;
+
+	return misc_register(misc);
+
+e_free_req:
+	free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
+e_unmap:
+	memunmap(snp_dev->base);
+	return ret;
+}
+
+static int __exit snp_guest_remove(struct platform_device *pdev)
+{
+	struct snp_guest_dev *snp_dev = platform_get_drvdata(pdev);
+
+	free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
+	free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
+	deinit_crypto(snp_dev->crypto);
+	memunmap(snp_dev->base);
+	misc_deregister(&snp_dev->misc);
+
+	return 0;
+}
+
+static struct platform_driver snp_guest_driver = {
+	.remove		= __exit_p(snp_guest_remove),
+	.driver		= {
+		.name = "snp-guest",
+	},
+};
+
+module_platform_driver_probe(snp_guest_driver, snp_guest_probe);
+
+MODULE_AUTHOR("Brijesh Singh <brijesh.singh@amd.com>");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("1.0.0");
+MODULE_DESCRIPTION("AMD SNP Guest Driver");
diff --git a/include/uapi/linux/snp-guest.h b/include/uapi/linux/snp-guest.h
new file mode 100644
index 000000000000..61ff22092a22
--- /dev/null
+++ b/include/uapi/linux/snp-guest.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Userspace interface for AMD Secure Encrypted Virtualization Nested Paging (SEV-SNP)
+ * guest command request.
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.singh@amd.com>
+ *
+ * SEV-SNP API specification is available at: https://developer.amd.com/sev/
+ */
+
+#ifndef __UAPI_LINUX_SNP_GUEST_H_
+#define __UAPI_LINUX_SNP_GUEST_H_
+
+#include <linux/types.h>
+
+struct snp_user_report_req {
+	__u8 user_data[64];
+};
+
+struct snp_user_report {
+	struct snp_user_report_req req;
+	__u8 response[4000];	/* see SEV-SNP spec for the response format */
+};
+
+struct snp_user_derive_key_req {
+	__u8 root_key_select;
+	__u64 guest_field_select;
+	__u32 vmpl;
+	__u32 guest_svn;
+	__u64 tcb_version;
+};
+
+struct snp_user_derive_key {
+	struct snp_user_derive_key_req req;
+	__u8 response[64];	/* see SEV-SNP spec for the response format */
+};
+
+struct snp_user_guest_request {
+	__u8 msg_version;	/* Message version number (must be non-zero) */
+	__u64 data;
+	__u32 fw_err;		/* firmware error code on failure (see psp-sev.h) */
+};
+
+#define SNP_GUEST_REQ_IOC_TYPE	'S'
+#define SNP_GET_REPORT _IOWR(SNP_GUEST_REQ_IOC_TYPE, 0x0, struct snp_user_guest_request)
+#define SNP_DERIVE_KEY _IOWR(SNP_GUEST_REQ_IOC_TYPE, 0x1, struct snp_user_guest_request)
+
+#endif /* __UAPI_LINUX_SNP_GUEST_H_ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction Brijesh Singh
@ 2021-04-30 13:05   ` Brijesh Singh
  2021-05-20 17:32     ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-04-30 13:05 UTC (permalink / raw)
  To: x86, linux-kernel, kvm
  Cc: brijesh.singh, tglx, bp, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck


On 4/30/21 7:16 AM, Brijesh Singh wrote:
> An SNP-active guest uses the PVALIDATE instruction to validate or
> rescind the validation of a guest page’s RMP entry. Upon completion,
> a return code is stored in EAX and rFLAGS bits are set based on the
> return code. If the instruction completed successfully, the CF
> indicates if the content of the RMP were changed or not.
>
> See AMD APM Volume 3 for additional details.
>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev.h | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 134a7c9d91b6..48f911a229ba 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -59,6 +59,16 @@ extern void vc_no_ghcb(void);
>  extern void vc_boot_ghcb(void);
>  extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>  
> +/* Return code of pvalidate */
> +#define PVALIDATE_SUCCESS		0
> +#define PVALIDATE_FAIL_INPUT		1
> +#define PVALIDATE_FAIL_SIZEMISMATCH	6
> +#define PVALIDATE_FAIL_NOUPDATE		255 /* Software defined (when rFlags.CF = 1) */
> +
> +/* RMP page size */
> +#define RMP_PG_SIZE_2M			1
> +#define RMP_PG_SIZE_4K			0
> +
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  extern struct static_key_false sev_es_enable_key;
>  extern void __sev_es_ist_enter(struct pt_regs *regs);
> @@ -81,12 +91,29 @@ static __always_inline void sev_es_nmi_complete(void)
>  		__sev_es_nmi_complete();
>  }
>  extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
> +static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
> +{
> +	unsigned long flags;
> +	int rc = 0;
> +
> +	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
> +		     CC_SET(c)
> +		     : CC_OUT(c) (flags), "=a"(rc)
> +		     : "a"(vaddr), "c"(rmp_psize), "d"(validate)
> +		     : "memory", "cc");
> +
> +	if (flags & X86_EFLAGS_CF)
> +		return PVALIDATE_FAIL_NOUPDATE;
> +
> +	return rc;
> +}


While generating the patches for part1, I accidentally picked the wrong
version of this patch.

The pvalidate() looks like this

static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool
validate)
{
    bool no_rmpupdate;
    int rc;

    asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
             CC_SET(c)
             : CC_OUT(c) (no_rmpupdate), "=a"(rc)
             : "a"(vaddr), "c"(rmp_psize), "d"(validate)
             : "memory", "cc");

    if (no_rmpupdate)
        return PVALIDATE_FAIL_NOUPDATE;

    return rc;
}

https://github.com/AMDESE/linux/commit/581316923efb4e4833722962b02a0c892aed9505#diff-a9a713d4f58a64b6640506f689940cb077dcb0a3705da0c024145c0c857d6c38


>  #else
>  static inline void sev_es_ist_enter(struct pt_regs *regs) { }
>  static inline void sev_es_ist_exit(void) { }
>  static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
>  static inline void sev_es_nmi_complete(void) { }
>  static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
> +static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
>  #endif
>  
>  #endif

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version Brijesh Singh
@ 2021-05-11  9:23   ` Borislav Petkov
  2021-05-11 18:29     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-11  9:23 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:15:58AM -0500, Brijesh Singh wrote:
> The SEV-ES guest calls the sev_es_negotiate_protocol() to negotiate the
> GHCB protocol version before establishing the GHCB. Cache the negotiated
> GHCB version so that it can be used later.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev.h   |  2 +-
>  arch/x86/kernel/sev-shared.c | 15 ++++++++++++---
>  2 files changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index fa5cd05d3b5b..7ec91b1359df 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -12,7 +12,7 @@
>  #include <asm/insn.h>
>  #include <asm/sev-common.h>
>  
> -#define GHCB_PROTO_OUR		0x0001UL
> +#define GHCB_PROTOCOL_MIN	1ULL
>  #define GHCB_PROTOCOL_MAX	1ULL
>  #define GHCB_DEFAULT_USAGE	0ULL
>  
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 6ec8b3bfd76e..48a47540b85f 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -14,6 +14,13 @@
>  #define has_cpuflag(f)	boot_cpu_has(f)
>  #endif
>  
> +/*
> + * Since feature negotitation related variables are set early in the boot
> + * process they must reside in the .data section so as not to be zeroed
> + * out when the .bss section is later cleared.

  *
  * GHCB protocol version negotiated with the hypervisor.
  */

> +static u16 ghcb_version __section(".data") = 0;

Did you not see this when running checkpatch.pl on your patch?

ERROR: do not initialise statics to 0
#141: FILE: arch/x86/kernel/sev-shared.c:22:
+static u16 ghcb_version __section(".data") = 0;

>  static bool __init sev_es_check_cpu_features(void)
>  {
>  	if (!has_cpuflag(X86_FEATURE_RDRAND)) {
> @@ -54,10 +61,12 @@ static bool sev_es_negotiate_protocol(void)
>  	if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
>  		return false;
>  
> -	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTO_OUR ||
> -	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR)
> +	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTOCOL_MIN ||
> +	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTOCOL_MAX)
>  		return false;
>  
> +	ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);

How is that even supposed to work? GHCB_PROTOCOL_MAX is 1 so
ghcb_version will be always 1 when you do min_t() on values one of which
is 1.

Maybe I'm missing something...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT
  2021-04-30 12:15 ` [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT Brijesh Singh
@ 2021-05-11 10:01   ` Borislav Petkov
  2021-05-11 18:53     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-11 10:01 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:15:59AM -0500, Brijesh Singh wrote:
> Version 2 of GHCB specification introduced advertisement of a features
> that are supported by the hypervisor. Define the GHCB MSR protocol and NAE
> for the hypervisor feature request and query the feature during the GHCB
> protocol negotitation. See the GHCB specification for more details.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev-common.h | 17 +++++++++++++++++
>  arch/x86/include/uapi/asm/svm.h   |  2 ++
>  arch/x86/kernel/sev-shared.c      | 24 ++++++++++++++++++++++++
>  3 files changed, 43 insertions(+)
> 
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 9f1b66090a4c..8142e247d8da 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -51,6 +51,22 @@
>  #define GHCB_MSR_AP_RESET_HOLD_RESULT_POS	12
>  #define GHCB_MSR_AP_RESET_HOLD_RESULT_MASK	0xfffffffffffff
>  
> +/* GHCB Hypervisor Feature Request */
> +#define GHCB_MSR_HV_FEATURES_REQ	0x080
> +#define GHCB_MSR_HV_FEATURES_RESP	0x081
> +#define GHCB_MSR_HV_FEATURES_POS	12
> +#define GHCB_MSR_HV_FEATURES_MASK	0xfffffffffffffUL
> +#define GHCB_MSR_HV_FEATURES_RESP_VAL(v)	\
> +	(((v) >> GHCB_MSR_HV_FEATURES_POS) & GHCB_MSR_HV_FEATURES_MASK)
> +
> +#define GHCB_HV_FEATURES_SNP		BIT_ULL(0)
> +#define GHCB_HV_FEATURES_SNP_AP_CREATION			\
> +		(BIT_ULL(1) | GHCB_HV_FEATURES_SNP)
> +#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION		\
> +		(BIT_ULL(2) | GHCB_HV_FEATURES_SNP_AP_CREATION)
> +#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
> +		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)

Please add those in the patches which use them - not in bulk here.

And GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER is a mouthfull and
looks like BIOS code to me. But this is still the kernel, remember? :-)

So let's do

GHCB_MSR_HV_FT_*

GHCB_SNP_AP_CREATION
GHCB_SNP_RESTRICTED_INJ
GHCB_SNP_RESTRICTED_INJ_TMR

and so on so that we can all keep our sanity when reading that code.

> +
>  #define GHCB_MSR_TERM_REQ		0x100
>  #define GHCB_MSR_TERM_REASON_SET_POS	12
>  #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
> @@ -62,6 +78,7 @@
>  
>  #define GHCB_SEV_ES_REASON_GENERAL_REQUEST	0
>  #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED	1
> +#define GHCB_SEV_ES_REASON_SNP_UNSUPPORTED	2

I remember asking for those to get shortened too

GHCB_SEV_ES_GEN_REQ
GHCB_SEV_ES_PROT_UNSUPPORTED
GHCB_SEV_ES_SNP_UNSUPPORTED

Perhaps in a prepatch?

>  #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
>  
> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
> index 554f75fe013c..7fbc311e2de1 100644
> --- a/arch/x86/include/uapi/asm/svm.h
> +++ b/arch/x86/include/uapi/asm/svm.h
> @@ -108,6 +108,7 @@
>  #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
> +#define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd

SVM_VMGEXIT_HV_FT

you get the idea.

>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>  
>  #define SVM_EXIT_ERR           -1
> @@ -215,6 +216,7 @@
>  	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
> +	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
>  	{ SVM_EXIT_ERR,         "invalid_guest_state" }
>  
>  
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 48a47540b85f..874f911837db 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -20,6 +20,7 @@
>   * out when the .bss section is later cleared.
>   */
>  static u16 ghcb_version __section(".data") = 0;
> +static u64 hv_features __section(".data") = 0;

ERROR: do not initialise statics to 0
#181: FILE: arch/x86/kernel/sev-shared.c:23:
+static u64 hv_features __section(".data") = 0;

>  static bool __init sev_es_check_cpu_features(void)
>  {
> @@ -49,6 +50,26 @@ static void __noreturn sev_es_terminate(unsigned int reason)
>  		asm volatile("hlt\n" : : : "memory");
>  }
>  
> +static bool ghcb_get_hv_features(void)

Used only once here - no need for the ghcb_ prefix.

> +{
> +	u64 val;
> +
> +	/* The hypervisor features are available from version 2 onward. */
> +	if (ghcb_version < 2)
> +		return true;

		return false;
no?

Also, this should be done differently:

	if (ghcb_version >= 2)
		get_hv_features();

at the call site.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-05-11  9:23   ` Borislav Petkov
@ 2021-05-11 18:29     ` Brijesh Singh
  2021-05-11 18:41       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-11 18:29 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/11/21 4:23 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:15:58AM -0500, Brijesh Singh wrote:
>> The SEV-ES guest calls the sev_es_negotiate_protocol() to negotiate the
>> GHCB protocol version before establishing the GHCB. Cache the negotiated
>> GHCB version so that it can be used later.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/asm/sev.h   |  2 +-
>>  arch/x86/kernel/sev-shared.c | 15 ++++++++++++---
>>  2 files changed, 13 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
>> index fa5cd05d3b5b..7ec91b1359df 100644
>> --- a/arch/x86/include/asm/sev.h
>> +++ b/arch/x86/include/asm/sev.h
>> @@ -12,7 +12,7 @@
>>  #include <asm/insn.h>
>>  #include <asm/sev-common.h>
>>  
>> -#define GHCB_PROTO_OUR		0x0001UL
>> +#define GHCB_PROTOCOL_MIN	1ULL
>>  #define GHCB_PROTOCOL_MAX	1ULL
>>  #define GHCB_DEFAULT_USAGE	0ULL
>>  
>> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
>> index 6ec8b3bfd76e..48a47540b85f 100644
>> --- a/arch/x86/kernel/sev-shared.c
>> +++ b/arch/x86/kernel/sev-shared.c
>> @@ -14,6 +14,13 @@
>>  #define has_cpuflag(f)	boot_cpu_has(f)
>>  #endif
>>  
>> +/*
>> + * Since feature negotitation related variables are set early in the boot
>> + * process they must reside in the .data section so as not to be zeroed
>> + * out when the .bss section is later cleared.
>   *
>   * GHCB protocol version negotiated with the hypervisor.
>   */
>
>> +static u16 ghcb_version __section(".data") = 0;
> Did you not see this when running checkpatch.pl on your patch?
>
> ERROR: do not initialise statics to 0
> #141: FILE: arch/x86/kernel/sev-shared.c:22:
> +static u16 ghcb_version __section(".data") = 0;
>
I ignored it, because I thought I still need to initialize the value to
zero because it does not go into .bss section which gets initialized to
zero by kernel on boot.

I guess I was wrong, maybe compiler or kernel build ensures that
ghcb_version variable to be init'ed to zero because its marked static ?


>>  static bool __init sev_es_check_cpu_features(void)
>>  {
>>  	if (!has_cpuflag(X86_FEATURE_RDRAND)) {
>> @@ -54,10 +61,12 @@ static bool sev_es_negotiate_protocol(void)
>>  	if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
>>  		return false;
>>  
>> -	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTO_OUR ||
>> -	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR)
>> +	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTOCOL_MIN ||
>> +	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTOCOL_MAX)
>>  		return false;
>>  
>> +	ghcb_version = min_t(size_t, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
> How is that even supposed to work? GHCB_PROTOCOL_MAX is 1 so
> ghcb_version will be always 1 when you do min_t() on values one of which
> is 1.
>
> Maybe I'm missing something...

In patch #4, you will see that I increase the GHCB max protocol version
to 2, and then min_t() will choose the lower value. I didn't combine
version bump and caching into same patch because I felt I will be asked
to break them into two logical patches.


>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-05-11 18:29     ` Brijesh Singh
@ 2021-05-11 18:41       ` Borislav Petkov
  2021-05-12 14:03         ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-11 18:41 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Tue, May 11, 2021 at 01:29:00PM -0500, Brijesh Singh wrote:
> I ignored it, because I thought I still need to initialize the value to
> zero because it does not go into .bss section which gets initialized to
> zero by kernel on boot.
> 
> I guess I was wrong, maybe compiler or kernel build ensures that
> ghcb_version variable to be init'ed to zero because its marked static ?

Yes.

If in doubt, always look at the generated asm:

make arch/x86/kernel/sev.s

You can see the .zero 2 gas directive there, where the variable is
defined.

> In patch #4, you will see that I increase the GHCB max protocol version
> to 2, and then min_t() will choose the lower value. I didn't combine
> version bump and caching into same patch because I felt I will be asked
> to break them into two logical patches.

Hmm, what would be the reasoning to keep the version bump in a separate
patch?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT
  2021-05-11 10:01   ` Borislav Petkov
@ 2021-05-11 18:53     ` Brijesh Singh
  2021-05-17 14:40       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-11 18:53 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/11/21 5:01 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:15:59AM -0500, Brijesh Singh wrote:
>> Version 2 of GHCB specification introduced advertisement of a features
>> that are supported by the hypervisor. Define the GHCB MSR protocol and NAE
>> for the hypervisor feature request and query the feature during the GHCB
>> protocol negotitation. See the GHCB specification for more details.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/asm/sev-common.h | 17 +++++++++++++++++
>>  arch/x86/include/uapi/asm/svm.h   |  2 ++
>>  arch/x86/kernel/sev-shared.c      | 24 ++++++++++++++++++++++++
>>  3 files changed, 43 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
>> index 9f1b66090a4c..8142e247d8da 100644
>> --- a/arch/x86/include/asm/sev-common.h
>> +++ b/arch/x86/include/asm/sev-common.h
>> @@ -51,6 +51,22 @@
>>  #define GHCB_MSR_AP_RESET_HOLD_RESULT_POS	12
>>  #define GHCB_MSR_AP_RESET_HOLD_RESULT_MASK	0xfffffffffffff
>>  
>> +/* GHCB Hypervisor Feature Request */
>> +#define GHCB_MSR_HV_FEATURES_REQ	0x080
>> +#define GHCB_MSR_HV_FEATURES_RESP	0x081
>> +#define GHCB_MSR_HV_FEATURES_POS	12
>> +#define GHCB_MSR_HV_FEATURES_MASK	0xfffffffffffffUL
>> +#define GHCB_MSR_HV_FEATURES_RESP_VAL(v)	\
>> +	(((v) >> GHCB_MSR_HV_FEATURES_POS) & GHCB_MSR_HV_FEATURES_MASK)
>> +
>> +#define GHCB_HV_FEATURES_SNP		BIT_ULL(0)
>> +#define GHCB_HV_FEATURES_SNP_AP_CREATION			\
>> +		(BIT_ULL(1) | GHCB_HV_FEATURES_SNP)
>> +#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION		\
>> +		(BIT_ULL(2) | GHCB_HV_FEATURES_SNP_AP_CREATION)
>> +#define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
>> +		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)
> Please add those in the patches which use them - not in bulk here.
>
> And GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER is a mouthfull and
> looks like BIOS code to me. But this is still the kernel, remember? :-)
>
> So let's do
>
> GHCB_MSR_HV_FT_*
>
> GHCB_SNP_AP_CREATION
> GHCB_SNP_RESTRICTED_INJ
> GHCB_SNP_RESTRICTED_INJ_TMR
>
> and so on so that we can all keep our sanity when reading that code.

I am fine with the reduced name, I just hope that "TMR" does not create
confusion with "Trusted Memory Region" documented in  SEV-ES firmware
specification. Since I am working on both guest and OVMF patches
simultaneously so its possible that I just worked on this code after
OVMF and used the same mouthful name ;)  I apologies for those nits.


>> +
>>  #define GHCB_MSR_TERM_REQ		0x100
>>  #define GHCB_MSR_TERM_REASON_SET_POS	12
>>  #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
>> @@ -62,6 +78,7 @@
>>  
>>  #define GHCB_SEV_ES_REASON_GENERAL_REQUEST	0
>>  #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED	1
>> +#define GHCB_SEV_ES_REASON_SNP_UNSUPPORTED	2
> I remember asking for those to get shortened too
>
> GHCB_SEV_ES_GEN_REQ
> GHCB_SEV_ES_PROT_UNSUPPORTED
> GHCB_SEV_ES_SNP_UNSUPPORTED
>
> Perhaps in a prepatch?

Sure, I will send prepatch.


>>  #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
>>  
>> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
>> index 554f75fe013c..7fbc311e2de1 100644
>> --- a/arch/x86/include/uapi/asm/svm.h
>> +++ b/arch/x86/include/uapi/asm/svm.h
>> @@ -108,6 +108,7 @@
>>  #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
>>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
>> +#define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
> SVM_VMGEXIT_HV_FT
>
> you get the idea.
>
>>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>>  
>>  #define SVM_EXIT_ERR           -1
>> @@ -215,6 +216,7 @@
>>  	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
>>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
>> +	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
>>  	{ SVM_EXIT_ERR,         "invalid_guest_state" }
>>  
>>  
>> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
>> index 48a47540b85f..874f911837db 100644
>> --- a/arch/x86/kernel/sev-shared.c
>> +++ b/arch/x86/kernel/sev-shared.c
>> @@ -20,6 +20,7 @@
>>   * out when the .bss section is later cleared.
>>   */
>>  static u16 ghcb_version __section(".data") = 0;
>> +static u64 hv_features __section(".data") = 0;
> ERROR: do not initialise statics to 0
> #181: FILE: arch/x86/kernel/sev-shared.c:23:
> +static u64 hv_features __section(".data") = 0;
>
>>  static bool __init sev_es_check_cpu_features(void)
>>  {
>> @@ -49,6 +50,26 @@ static void __noreturn sev_es_terminate(unsigned int reason)
>>  		asm volatile("hlt\n" : : : "memory");
>>  }
>>  
>> +static bool ghcb_get_hv_features(void)
> Used only once here - no need for the ghcb_ prefix.
Noted.
>
>> +{
>> +	u64 val;
>> +
>> +	/* The hypervisor features are available from version 2 onward. */
>> +	if (ghcb_version < 2)
>> +		return true;
> 		return false;
> no?
>
> Also, this should be done differently:
>
> 	if (ghcb_version >= 2)
> 		get_hv_features();
>
> at the call site.

I can go with the change in the call site itself.


>
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-05-11 18:41       ` Borislav Petkov
@ 2021-05-12 14:03         ` Brijesh Singh
  2021-05-12 14:31           ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-12 14:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/11/21 1:41 PM, Borislav Petkov wrote:
> On Tue, May 11, 2021 at 01:29:00PM -0500, Brijesh Singh wrote:
>> I ignored it, because I thought I still need to initialize the value to
>> zero because it does not go into .bss section which gets initialized to
>> zero by kernel on boot.
>>
>> I guess I was wrong, maybe compiler or kernel build ensures that
>> ghcb_version variable to be init'ed to zero because its marked static ?
> Yes.
>
> If in doubt, always look at the generated asm:
>
> make arch/x86/kernel/sev.s
>
> You can see the .zero 2 gas directive there, where the variable is
> defined.
>
>> In patch #4, you will see that I increase the GHCB max protocol version
>> to 2, and then min_t() will choose the lower value. I didn't combine
>> version bump and caching into same patch because I felt I will be asked
>> to break them into two logical patches.
> Hmm, what would be the reasoning to keep the version bump in a separate
> patch?

Version 2 of the spec adds bunch of NAEs, and several of them are
optional except the hyervisor features NAE. IMO, a guest should bump the
GHCB version only after it has implemented all the required NAEs from
the version 2. It may help during the git bisect of the guest kernel --
mainly when the hypervisor supports the higher version.



^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-05-12 14:03         ` Brijesh Singh
@ 2021-05-12 14:31           ` Borislav Petkov
  2021-05-12 15:03             ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-12 14:31 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Wed, May 12, 2021 at 09:03:41AM -0500, Brijesh Singh wrote:
> Version 2 of the spec adds bunch of NAEs, and several of them are
> optional except the hyervisor features NAE. IMO, a guest should bump the
> GHCB version only after it has implemented all the required NAEs from
> the version 2. It may help during the git bisect of the guest kernel --
> mainly when the hypervisor supports the higher version.

Aha, so AFAICT, the version bump should happen in patch 3 which adds
that HV features NAE - not in a separate one after that. The logic
being, with the patch which adds the functionality needed, you mark the
guest as supporting v2.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version
  2021-05-12 14:31           ` Borislav Petkov
@ 2021-05-12 15:03             ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-12 15:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/12/21 9:31 AM, Borislav Petkov wrote:
> On Wed, May 12, 2021 at 09:03:41AM -0500, Brijesh Singh wrote:
>> Version 2 of the spec adds bunch of NAEs, and several of them are
>> optional except the hyervisor features NAE. IMO, a guest should bump the
>> GHCB version only after it has implemented all the required NAEs from
>> the version 2. It may help during the git bisect of the guest kernel --
>> mainly when the hypervisor supports the higher version.
> Aha, so AFAICT, the version bump should happen in patch 3 which adds
> that HV features NAE - not in a separate one after that. The logic
> being, with the patch which adds the functionality needed, you mark the
> guest as supporting v2.

Sure, I will squash the patch 3 and 4 and will do the same for the
hypervisor series. thanks

-Brijesh


^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT
  2021-05-11 18:53     ` Brijesh Singh
@ 2021-05-17 14:40       ` Borislav Petkov
  0 siblings, 0 replies; 62+ messages in thread
From: Borislav Petkov @ 2021-05-17 14:40 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Tue, May 11, 2021 at 01:53:53PM -0500, Brijesh Singh wrote:
> I am fine with the reduced name, I just hope that "TMR" does not create
> confusion with "Trusted Memory Region" documented in  SEV-ES firmware
> specification. Since I am working on both guest and OVMF patches
> simultaneously so its possible that I just worked on this code after
> OVMF and used the same mouthful name ;)

I'm not surprised :-)

But sure, call this

GHCB_SNP_RESTRICTED_INJ_TIMER

Still short enough.

>   I apologies for those nits.

Oh, it's not a nit - it pays off later when chasing bugs and one is
trying to swap in the whole situation back into her/his L1. :-)

> Sure, I will send prepatch.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure Brijesh Singh
@ 2021-05-18 10:41   ` Borislav Petkov
  2021-05-18 15:06     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-18 10:41 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:01AM -0500, Brijesh Singh wrote:
> An SNP-active guest will use the page state change NAE VMGEXIT defined in
> the GHCB specification to ask the hypervisor to make the guest page
> private or shared in the RMP table. In addition to the private/shared,
> the guest can also ask the hypervisor to split or combine multiple 4K
> validated pages as a single 2M page or vice versa.
> 
> See GHCB specification section Page State Change for additional
> information.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/sev-common.h | 46 +++++++++++++++++++++++++++++++
>  arch/x86/include/uapi/asm/svm.h   |  2 ++
>  2 files changed, 48 insertions(+)
> 
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 8142e247d8da..07b8612bf182 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -67,6 +67,52 @@
>  #define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
>  		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)
>  
> +/* SNP Page State Change */
> +#define GHCB_MSR_PSC_REQ		0x014
> +#define SNP_PAGE_STATE_PRIVATE		1
> +#define SNP_PAGE_STATE_SHARED		2
> +#define SNP_PAGE_STATE_PSMASH		3
> +#define SNP_PAGE_STATE_UNSMASH		4
> +#define GHCB_MSR_PSC_GFN_POS		12
> +#define GHCB_MSR_PSC_GFN_MASK		0xffffffffffULL

Why don't you use GENMASK_ULL() as I suggested? All those "f"s are
harder to count than seeing the start and end of a mask.

> +#define GHCB_MSR_PSC_OP_POS		52

Also, having defines for 12 and 52 doesn't make it more readable,
frankly.

When I see GENMASK_ULL(51, 12) it is kinda clear what it is. With the
defines, I have to go chase every define and replace it with the number
mentally.

> +#define GHCB_MSR_PSC_OP_MASK		0xf
> +#define GHCB_MSR_PSC_REQ_GFN(gfn, op) 	\
> +	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
> +	(((gfn) << GHCB_MSR_PSC_GFN_POS) & GHCB_MSR_PSC_GFN_MASK) | GHCB_MSR_PSC_REQ)
> +
> +#define GHCB_MSR_PSC_RESP		0x015
> +#define GHCB_MSR_PSC_ERROR_POS		32
> +#define GHCB_MSR_PSC_ERROR_MASK		0xffffffffULL
> +#define GHCB_MSR_PSC_RSVD_POS		12
> +#define GHCB_MSR_PSC_RSVD_MASK		0xfffffULL
> +#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
> +
> +/* SNP Page State Change NAE event */
> +#define VMGEXIT_PSC_MAX_ENTRY		253
> +#define VMGEXIT_PSC_INVALID_HEADER	0x100000001
> +#define VMGEXIT_PSC_INVALID_ENTRY	0x100000002
> +#define VMGEXIT_PSC_FIRMWARE_ERROR(x)	((x & 0xffffffffULL) | 0x200000000)
> +
> +struct __packed snp_page_state_header {
> +	u16 cur_entry;
> +	u16 end_entry;
> +	u32 reserved;
> +};
> +
> +struct __packed snp_page_state_entry {
> +	u64 cur_page:12;
> +	u64 gfn:40;
> +	u64 operation:4;
> +	u64 pagesize:1;
> +	u64 reserved:7;

Please write it like this:

	u64 cur_page 	: 12,
	    gfn		: 40,
	    ...

to show that all those fields are part of the same u64.

struct perf_event_attr in include/uapi/linux/perf_event.h is a good
example.

> +};
> +
> +struct __packed snp_page_state_change {
> +	struct snp_page_state_header header;
> +	struct snp_page_state_entry entry[VMGEXIT_PSC_MAX_ENTRY];
> +};
> +
>  #define GHCB_MSR_TERM_REQ		0x100
>  #define GHCB_MSR_TERM_REASON_SET_POS	12
>  #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
> index 7fbc311e2de1..f7bf12cad58c 100644
> --- a/arch/x86/include/uapi/asm/svm.h
> +++ b/arch/x86/include/uapi/asm/svm.h
> @@ -108,6 +108,7 @@
>  #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
> +#define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010

SVM_VMGEXIT_SNP_PSC

>  #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>  
> @@ -216,6 +217,7 @@
>  	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
> +	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \

Ditto.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events Brijesh Singh
@ 2021-05-18 10:45   ` Borislav Petkov
  2021-05-18 13:42     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-18 10:45 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:02AM -0500, Brijesh Singh wrote:
> Version 2 of the GHCB specification added the support for SNP guest
> request NAE events. The SEV-SNP guests will use this event to request
> the attestation report. See the GHCB specification for more details.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/uapi/asm/svm.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
> index f7bf12cad58c..7a45aa284530 100644
> --- a/arch/x86/include/uapi/asm/svm.h
> +++ b/arch/x86/include/uapi/asm/svm.h
> @@ -109,6 +109,8 @@
>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
>  #define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010
> +#define SVM_VMGEXIT_SNP_GUEST_REQUEST		0x80000011
> +#define SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST	0x80000012

Why does this need the "VMGEXIT" *and* "SNP" prefixes?

Why not simply:

SVM_VMGEXIT_GUEST_REQ
SVM_VMGEXIT_EXT_GUEST_REQ

like the rest of the VMGEXIT defines in there?


>  #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>  
> @@ -218,6 +220,8 @@
>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
>  	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \
> +	{ SVM_VMGEXIT_SNP_GUEST_REQUEST,	"vmgexit_snp_guest_request" }, \
> +	{ SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST,	"vmgexit_snp_extended_guest_request" }, \
>  	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
>  	{ SVM_EXIT_ERR,         "invalid_guest_state" }

Ditto.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1.
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1 Brijesh Singh
@ 2021-05-18 11:05   ` Borislav Petkov
  0 siblings, 0 replies; 62+ messages in thread
From: Borislav Petkov @ 2021-05-18 11:05 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:03AM -0500, Brijesh Singh wrote:

> Subject: Re: [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1.

That patch title needs to be more generic, perhaps

"...: Define the Linux-specific guest termination reasons"

> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 07b8612bf182..733fca403ae5 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -128,4 +128,9 @@ struct __packed snp_page_state_change {
>  
>  #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
>  
> +/* Linux specific reason codes (used with reason set 1) */
> +#define GHCB_TERM_REGISTER		0	/* GHCB GPA registration failure */
> +#define GHCB_TERM_PSC			1	/* Page State Change faiilure */

"failure"

> +#define GHCB_TERM_PVALIDATE		2	/* Pvalidate failure */
> +
>  #endif
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 874f911837db..3f9b06a04395 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -32,7 +32,7 @@ static bool __init sev_es_check_cpu_features(void)
>  	return true;
>  }
>  
> -static void __noreturn sev_es_terminate(unsigned int reason)
> ++static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
   ^

Do you see the '+' above?

In file included from arch/x86/kernel/sev.c:462:
arch/x86/kernel/sev-shared.c:37:1: error: expected identifier or ‘(’ before ‘+’ token
   37 | +static void __noreturn sev_es_terminate(unsigned int set, unsigned int reason)
      | ^
arch/x86/kernel/sev-shared.c: In function ‘do_vc_no_ghcb’:
arch/x86/kernel/sev-shared.c:245:2: error: implicit declaration of function ‘sev_es_terminate’; did you mean ‘seq_buf_terminate’? [-Werror=implicit-function-declaration]
  245 |  sev_es_terminate(0, GHCB_SEV_ES_REASON_GENERAL_REQUEST);
      |  ^~~~~~~~~~~~~~~~
      |  seq_buf_terminate
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:272: arch/x86/kernel/sev.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/Makefile.build:515: arch/x86/kernel] Error 2
make: *** [Makefile:1839: arch/x86] Error 2
make: *** Waiting for unfinished jobs....

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events
  2021-05-18 10:45   ` Borislav Petkov
@ 2021-05-18 13:42     ` Brijesh Singh
  2021-05-18 13:54       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-18 13:42 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/18/21 5:45 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:02AM -0500, Brijesh Singh wrote:
>> Version 2 of the GHCB specification added the support for SNP guest
>> request NAE events. The SEV-SNP guests will use this event to request
>> the attestation report. See the GHCB specification for more details.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/uapi/asm/svm.h | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
>> index f7bf12cad58c..7a45aa284530 100644
>> --- a/arch/x86/include/uapi/asm/svm.h
>> +++ b/arch/x86/include/uapi/asm/svm.h
>> @@ -109,6 +109,8 @@
>>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
>>  #define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010
>> +#define SVM_VMGEXIT_SNP_GUEST_REQUEST		0x80000011
>> +#define SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST	0x80000012
> Why does this need the "VMGEXIT" *and* "SNP" prefixes?
>
> Why not simply:
>
> SVM_VMGEXIT_GUEST_REQ
> SVM_VMGEXIT_EXT_GUEST_REQ
>
> like the rest of the VMGEXIT defines in there?

This VMGEXIT is optional and is available only when the SNP feature is
advertised through HV_FEATURE VMGEXIT. The GHCB specification spells it
with the "SNP" prefix" to distinguish it from others. The other
"VMGEXIT's" defined in this file are available for both the SNP and ES
guests, so we don't need any prefixes.


>
>>  #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
>>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>>  
>> @@ -218,6 +220,8 @@
>>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
>>  	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \
>> +	{ SVM_VMGEXIT_SNP_GUEST_REQUEST,	"vmgexit_snp_guest_request" }, \
>> +	{ SVM_VMGEXIT_SNP_EXT_GUEST_REQUEST,	"vmgexit_snp_extended_guest_request" }, \
>>  	{ SVM_VMGEXIT_HYPERVISOR_FEATURES,	"vmgexit_hypervisor_feature" }, \
>>  	{ SVM_EXIT_ERR,         "invalid_guest_state" }
> Ditto.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events
  2021-05-18 13:42     ` Brijesh Singh
@ 2021-05-18 13:54       ` Borislav Petkov
  2021-05-18 14:13         ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-18 13:54 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Tue, May 18, 2021 at 08:42:44AM -0500, Brijesh Singh wrote:
> This VMGEXIT is optional and is available only when the SNP feature is
> advertised through HV_FEATURE VMGEXIT. The GHCB specification spells it
> with the "SNP" prefix" to distinguish it from others. The other
> "VMGEXIT's" defined in this file are available for both the SNP and ES
> guests, so we don't need any prefixes.

Sure but are there any other VMGEXIT guest requests besides those two?
If not, then they're unique so we can just as well drop the SNP prefix.
Bottom line is, I'd like the code to be short and readable at a glance.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events
  2021-05-18 13:54       ` Borislav Petkov
@ 2021-05-18 14:13         ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-18 14:13 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/18/21 8:54 AM, Borislav Petkov wrote:
> On Tue, May 18, 2021 at 08:42:44AM -0500, Brijesh Singh wrote:
>> This VMGEXIT is optional and is available only when the SNP feature is
>> advertised through HV_FEATURE VMGEXIT. The GHCB specification spells it
>> with the "SNP" prefix" to distinguish it from others. The other
>> "VMGEXIT's" defined in this file are available for both the SNP and ES
>> guests, so we don't need any prefixes.
> Sure but are there any other VMGEXIT guest requests besides those two?
> If not, then they're unique so we can just as well drop the SNP prefix.
> Bottom line is, I'd like the code to be short and readable at a glance.

There are total 7 SNP specific VMGEXIT. I can drop the "SNP" prefix if
that is preferred.


> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure
  2021-05-18 10:41   ` Borislav Petkov
@ 2021-05-18 15:06     ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-18 15:06 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/18/21 5:41 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:01AM -0500, Brijesh Singh wrote:
>> An SNP-active guest will use the page state change NAE VMGEXIT defined in
>> the GHCB specification to ask the hypervisor to make the guest page
>> private or shared in the RMP table. In addition to the private/shared,
>> the guest can also ask the hypervisor to split or combine multiple 4K
>> validated pages as a single 2M page or vice versa.
>>
>> See GHCB specification section Page State Change for additional
>> information.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/asm/sev-common.h | 46 +++++++++++++++++++++++++++++++
>>  arch/x86/include/uapi/asm/svm.h   |  2 ++
>>  2 files changed, 48 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
>> index 8142e247d8da..07b8612bf182 100644
>> --- a/arch/x86/include/asm/sev-common.h
>> +++ b/arch/x86/include/asm/sev-common.h
>> @@ -67,6 +67,52 @@
>>  #define GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION_TIMER		\
>>  		(BIT_ULL(3) | GHCB_HV_FEATURES_SNP_RESTRICTED_INJECTION)
>>  
>> +/* SNP Page State Change */
>> +#define GHCB_MSR_PSC_REQ		0x014
>> +#define SNP_PAGE_STATE_PRIVATE		1
>> +#define SNP_PAGE_STATE_SHARED		2
>> +#define SNP_PAGE_STATE_PSMASH		3
>> +#define SNP_PAGE_STATE_UNSMASH		4
>> +#define GHCB_MSR_PSC_GFN_POS		12
>> +#define GHCB_MSR_PSC_GFN_MASK		0xffffffffffULL
> Why don't you use GENMASK_ULL() as I suggested? All those "f"s are
> harder to count than seeing the start and end of a mask.
>
>> +#define GHCB_MSR_PSC_OP_POS		52
> Also, having defines for 12 and 52 doesn't make it more readable,
> frankly.
>
> When I see GENMASK_ULL(51, 12) it is kinda clear what it is. With the
> defines, I have to go chase every define and replace it with the number
> mentally.

I was still adhering to the previous convention in the file to define
the values. I will submit prepatch to change them to also use GENMASK so
that its consistent with SNP updates I add in this series.


>> +#define GHCB_MSR_PSC_OP_MASK		0xf
>> +#define GHCB_MSR_PSC_REQ_GFN(gfn, op) 	\
>> +	(((unsigned long)((op) & GHCB_MSR_PSC_OP_MASK) << GHCB_MSR_PSC_OP_POS) | \
>> +	(((gfn) << GHCB_MSR_PSC_GFN_POS) & GHCB_MSR_PSC_GFN_MASK) | GHCB_MSR_PSC_REQ)
>> +
>> +#define GHCB_MSR_PSC_RESP		0x015
>> +#define GHCB_MSR_PSC_ERROR_POS		32
>> +#define GHCB_MSR_PSC_ERROR_MASK		0xffffffffULL
>> +#define GHCB_MSR_PSC_RSVD_POS		12
>> +#define GHCB_MSR_PSC_RSVD_MASK		0xfffffULL
>> +#define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
>> +
>> +/* SNP Page State Change NAE event */
>> +#define VMGEXIT_PSC_MAX_ENTRY		253
>> +#define VMGEXIT_PSC_INVALID_HEADER	0x100000001
>> +#define VMGEXIT_PSC_INVALID_ENTRY	0x100000002
>> +#define VMGEXIT_PSC_FIRMWARE_ERROR(x)	((x & 0xffffffffULL) | 0x200000000)
>> +
>> +struct __packed snp_page_state_header {
>> +	u16 cur_entry;
>> +	u16 end_entry;
>> +	u32 reserved;
>> +};
>> +
>> +struct __packed snp_page_state_entry {
>> +	u64 cur_page:12;
>> +	u64 gfn:40;
>> +	u64 operation:4;
>> +	u64 pagesize:1;
>> +	u64 reserved:7;
> Please write it like this:
>
> 	u64 cur_page 	: 12,
> 	    gfn		: 40,
> 	    ...
>
> to show that all those fields are part of the same u64.
>
> struct perf_event_attr in include/uapi/linux/perf_event.h is a good
> example.
>
Ah, thanks for pointing. I will switch to using it.


>> +};
>> +
>> +struct __packed snp_page_state_change {
>> +	struct snp_page_state_header header;
>> +	struct snp_page_state_entry entry[VMGEXIT_PSC_MAX_ENTRY];
>> +};
>> +
>>  #define GHCB_MSR_TERM_REQ		0x100
>>  #define GHCB_MSR_TERM_REASON_SET_POS	12
>>  #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
>> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
>> index 7fbc311e2de1..f7bf12cad58c 100644
>> --- a/arch/x86/include/uapi/asm/svm.h
>> +++ b/arch/x86/include/uapi/asm/svm.h
>> @@ -108,6 +108,7 @@
>>  #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
>>  #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
>>  #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
>> +#define SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE	0x80000010
> SVM_VMGEXIT_SNP_PSC

Noted.


>
>>  #define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
>>  #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
>>  
>> @@ -216,6 +217,7 @@
>>  	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
>>  	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
>>  	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
>> +	{ SVM_VMGEXIT_SNP_PAGE_STATE_CHANGE,	"vmgexit_page_state_change" }, \
> Ditto.

These strings are printed in the trace so I thought giving them a full
name makes sense


>
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper Brijesh Singh
@ 2021-05-18 18:11   ` Borislav Petkov
  2021-05-19 17:28     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-18 18:11 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:04AM -0500, Brijesh Singh wrote:
> The sev_snp_active() helper can be used by the guest to query whether the
> SNP - Secure Nested Paging feature is active.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h | 2 ++
>  arch/x86/include/asm/msr-index.h   | 2 ++
>  arch/x86/mm/mem_encrypt.c          | 9 +++++++++
>  3 files changed, 13 insertions(+)
> 
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 31c4df123aa0..d99aa260d328 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -54,6 +54,7 @@ void __init sev_es_init_vc_handling(void);
>  bool sme_active(void);
>  bool sev_active(void);
>  bool sev_es_active(void);
> +bool sev_snp_active(void);
>  
>  #define __bss_decrypted __section(".bss..decrypted")
>  
> @@ -79,6 +80,7 @@ static inline void sev_es_init_vc_handling(void) { }
>  static inline bool sme_active(void) { return false; }
>  static inline bool sev_active(void) { return false; }
>  static inline bool sev_es_active(void) { return false; }
> +static inline bool sev_snp_active(void) { return false; }

Uff, yet another sev-something helper. So I already had this idea:

https://lore.kernel.org/kvm/20210421144402.GB5004@zn.tnic/

How about you add the sev_feature_enabled() thing

which will return a boolean value depending on which SEV feature has
been queried and instead of having yet another helper, do

	if (sev_feature_enabled(SEV_SNP))

or so?

I.e., just add the facility and the SNP bit - we will convert the rest
in time.

So that we can redesign this cleanly...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper
  2021-05-18 18:11   ` Borislav Petkov
@ 2021-05-19 17:28     ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-19 17:28 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/18/21 1:11 PM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:04AM -0500, Brijesh Singh wrote:
>> The sev_snp_active() helper can be used by the guest to query whether the
>> SNP - Secure Nested Paging feature is active.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/include/asm/mem_encrypt.h | 2 ++
>>  arch/x86/include/asm/msr-index.h   | 2 ++
>>  arch/x86/mm/mem_encrypt.c          | 9 +++++++++
>>  3 files changed, 13 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
>> index 31c4df123aa0..d99aa260d328 100644
>> --- a/arch/x86/include/asm/mem_encrypt.h
>> +++ b/arch/x86/include/asm/mem_encrypt.h
>> @@ -54,6 +54,7 @@ void __init sev_es_init_vc_handling(void);
>>  bool sme_active(void);
>>  bool sev_active(void);
>>  bool sev_es_active(void);
>> +bool sev_snp_active(void);
>>  
>>  #define __bss_decrypted __section(".bss..decrypted")
>>  
>> @@ -79,6 +80,7 @@ static inline void sev_es_init_vc_handling(void) { }
>>  static inline bool sme_active(void) { return false; }
>>  static inline bool sev_active(void) { return false; }
>>  static inline bool sev_es_active(void) { return false; }
>> +static inline bool sev_snp_active(void) { return false; }
> Uff, yet another sev-something helper. So I already had this idea:
>
> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fkvm%2F20210421144402.GB5004%40zn.tnic%2F&amp;data=04%7C01%7Cbrijesh.singh%40amd.com%7C363870693b07482681da08d91a284ce4%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637569582675957160%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=Ebk6MT2jKDfyPwwzYb3D5%2BGopUU3VWudgeAUxcsc74c%3D&amp;reserved=0
>
> How about you add the sev_feature_enabled() thing
>
> which will return a boolean value depending on which SEV feature has
> been queried and instead of having yet another helper, do
>
> 	if (sev_feature_enabled(SEV_SNP))
>
> or so?

Sure, I will introduce it in next rev.


> I.e., just add the facility and the SNP bit - we will convert the rest
> in time.
>
> So that we can redesign this cleanly...
>
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support Brijesh Singh
@ 2021-05-20 16:02   ` Borislav Petkov
  2021-05-20 17:40     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-20 16:02 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:05AM -0500, Brijesh Singh wrote:
> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index 6d9055427f37..7badbeb6cb95 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -25,6 +25,8 @@
>  
>  struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
>  struct ghcb *boot_ghcb;
> +static u64 sev_status_val;

msr_sev_status should be more descriptive.

> +static bool sev_status_checked;

You don't need this one - you can simply do

	if (!msr_sev_status)
		read the MSR.

>  /*
>   * Copy a version of this function here - insn-eval.c can't be used in
> @@ -119,11 +121,30 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
>  /* Include code for early handlers */
>  #include "../../kernel/sev-shared.c"
>  
> +static inline bool sev_snp_enabled(void)
> +{
> +	unsigned long low, high;
> +
> +	if (!sev_status_checked) {
> +		asm volatile("rdmsr\n"
> +			     : "=a" (low), "=d" (high)
> +			     : "c" (MSR_AMD64_SEV));
> +		sev_status_val = (high << 32) | low;
> +		sev_status_checked = true;
> +	}
> +
> +	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;

	return msr_sev_status & MSR_AMD64_SEV_SNP_ENABLED;

is enough.

> +}
> +
>  static bool early_setup_sev_es(void)
>  {
>  	if (!sev_es_negotiate_protocol())
>  		sev_es_terminate(0, GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
>  
> +	/* If SEV-SNP is enabled then check if the hypervisor supports the SEV-SNP features. */

80 cols like the rest of this file pls.

> +	if (sev_snp_enabled() && !sev_snp_check_hypervisor_features())
> +		sev_es_terminate(0, GHCB_SEV_ES_REASON_SNP_UNSUPPORTED);
> +
>  	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
>  		return false;

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-04-30 13:05   ` Brijesh Singh
@ 2021-05-20 17:32     ` Borislav Petkov
  2021-05-20 17:44       ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-20 17:32 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 08:05:36AM -0500, Brijesh Singh wrote:
> While generating the patches for part1, I accidentally picked the wrong
> version of this patch.

Adding the right one...

> Author: Brijesh Singh <brijesh.singh@amd.com>
> Date:   Thu Apr 29 16:45:36 2021 -0500
> 
>     x86/sev: Add a helper for the PVALIDATE instruction
>     
>     An SNP-active guest uses the PVALIDATE instruction to validate or
>     rescind the validation of a guest page’s RMP entry. Upon completion,
>     a return code is stored in EAX and rFLAGS bits are set based on the
>     return code. If the instruction completed successfully, the CF
>     indicates if the content of the RMP were changed or not.
> 
>     See AMD APM Volume 3 for additional details.
> 
>     Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> 
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 134a7c9d91b6..be67d9c70267 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -59,6 +59,16 @@ extern void vc_no_ghcb(void);
>  extern void vc_boot_ghcb(void);
>  extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>  
> +/* Return code of pvalidate */
> +#define PVALIDATE_SUCCESS		0
> +#define PVALIDATE_FAIL_INPUT		1
> +#define PVALIDATE_FAIL_SIZEMISMATCH	6

Those are unused. Remove them pls.

> +#define PVALIDATE_FAIL_NOUPDATE		255 /* Software defined (when rFlags.CF = 1) */

Put the comment above the define pls.

> +
> +/* RMP page size */
> +#define RMP_PG_SIZE_2M			1
> +#define RMP_PG_SIZE_4K			0

Add those when you need them - I see

[PATCH Part2 RFC v2 06/37] x86/sev: Add RMP entry lookup helpers

is moving them to some generic header. No need to add them to this patch
here.

>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  extern struct static_key_false sev_es_enable_key;
>  extern void __sev_es_ist_enter(struct pt_regs *regs);
> @@ -81,12 +91,29 @@ static __always_inline void sev_es_nmi_complete(void)
>  		__sev_es_nmi_complete();
>  }
>  extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
> +static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
> +{
> +	bool no_rmpupdate;
> +	int rc;

Adding this for the mail archives when we find this mail again in the
future so that I don't have to do binutils git archeology again:

Enablement for the "pvalidate" mnemonic is in binutils commit
646cc3e0109e ("Add AMD znver3 processor support"). :-)

Please put over the opcode bytes line:

	/* "pvalidate" mnemonic support in binutils 2.36 and newer */

> +
> +	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
> +		     CC_SET(c)
> +		     : CC_OUT(c) (no_rmpupdate), "=a"(rc)
> +		     : "a"(vaddr), "c"(rmp_psize), "d"(validate)
> +		     : "memory", "cc");
> +
> +	if (no_rmpupdate)
> +		return PVALIDATE_FAIL_NOUPDATE;
> +
> +	return rc;
> +}

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support
  2021-05-20 16:02   ` Borislav Petkov
@ 2021-05-20 17:40     ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-20 17:40 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/20/21 11:02 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:05AM -0500, Brijesh Singh wrote:
>> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
>> index 6d9055427f37..7badbeb6cb95 100644
>> --- a/arch/x86/boot/compressed/sev.c
>> +++ b/arch/x86/boot/compressed/sev.c
>> @@ -25,6 +25,8 @@
>>  
>>  struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
>>  struct ghcb *boot_ghcb;
>> +static u64 sev_status_val;
> msr_sev_status should be more descriptive.
Noted.
>
>> +static bool sev_status_checked;
> You don't need this one - you can simply do
>
> 	if (!msr_sev_status)
> 		read the MSR.

Agreed.


>
>>  /*
>>   * Copy a version of this function here - insn-eval.c can't be used in
>> @@ -119,11 +121,30 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
>>  /* Include code for early handlers */
>>  #include "../../kernel/sev-shared.c"
>>  
>> +static inline bool sev_snp_enabled(void)
>> +{
>> +	unsigned long low, high;
>> +
>> +	if (!sev_status_checked) {
>> +		asm volatile("rdmsr\n"
>> +			     : "=a" (low), "=d" (high)
>> +			     : "c" (MSR_AMD64_SEV));
>> +		sev_status_val = (high << 32) | low;
>> +		sev_status_checked = true;
>> +	}
>> +
>> +	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
> 	return msr_sev_status & MSR_AMD64_SEV_SNP_ENABLED;
>
> is enough.

Noted.


>> +}
>> +
>>  static bool early_setup_sev_es(void)
>>  {
>>  	if (!sev_es_negotiate_protocol())
>>  		sev_es_terminate(0, GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
>>  
>> +	/* If SEV-SNP is enabled then check if the hypervisor supports the SEV-SNP features. */
> 80 cols like the rest of this file pls.

Noted.


>
>> +	if (sev_snp_enabled() && !sev_snp_check_hypervisor_features())
>> +		sev_es_terminate(0, GHCB_SEV_ES_REASON_SNP_UNSUPPORTED);
>> +
>>  	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
>>  		return false;
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-05-20 17:32     ` Borislav Petkov
@ 2021-05-20 17:44       ` Brijesh Singh
  2021-05-20 17:51         ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-20 17:44 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/20/21 12:32 PM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 08:05:36AM -0500, Brijesh Singh wrote:
>> While generating the patches for part1, I accidentally picked the wrong
>> version of this patch.
> Adding the right one...
>
>> Author: Brijesh Singh <brijesh.singh@amd.com>
>> Date:   Thu Apr 29 16:45:36 2021 -0500
>>
>>     x86/sev: Add a helper for the PVALIDATE instruction
>>     
>>     An SNP-active guest uses the PVALIDATE instruction to validate or
>>     rescind the validation of a guest page’s RMP entry. Upon completion,
>>     a return code is stored in EAX and rFLAGS bits are set based on the
>>     return code. If the instruction completed successfully, the CF
>>     indicates if the content of the RMP were changed or not.
>>
>>     See AMD APM Volume 3 for additional details.
>>
>>     Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>>
>> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
>> index 134a7c9d91b6..be67d9c70267 100644
>> --- a/arch/x86/include/asm/sev.h
>> +++ b/arch/x86/include/asm/sev.h
>> @@ -59,6 +59,16 @@ extern void vc_no_ghcb(void);
>>  extern void vc_boot_ghcb(void);
>>  extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>>  
>> +/* Return code of pvalidate */
>> +#define PVALIDATE_SUCCESS		0
>> +#define PVALIDATE_FAIL_INPUT		1
>> +#define PVALIDATE_FAIL_SIZEMISMATCH	6
> Those are unused. Remove them pls.

Hmm, I use the SIZEMISMATCH later in the patches. Since I was
introducing the pvalidate in separate patch so decided to define all the
return code.


>> +#define PVALIDATE_FAIL_NOUPDATE		255 /* Software defined (when rFlags.CF = 1) */
> Put the comment above the define pls.

Noted.


>
>> +
>> +/* RMP page size */
>> +#define RMP_PG_SIZE_2M			1
>> +#define RMP_PG_SIZE_4K			0
> Add those when you need them - I see
>
> [PATCH Part2 RFC v2 06/37] x86/sev: Add RMP entry lookup helpers
>
> is moving them to some generic header. No need to add them to this patch
> here.

Noted.


>>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>>  extern struct static_key_false sev_es_enable_key;
>>  extern void __sev_es_ist_enter(struct pt_regs *regs);
>> @@ -81,12 +91,29 @@ static __always_inline void sev_es_nmi_complete(void)
>>  		__sev_es_nmi_complete();
>>  }
>>  extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
>> +static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
>> +{
>> +	bool no_rmpupdate;
>> +	int rc;
> Adding this for the mail archives when we find this mail again in the
> future so that I don't have to do binutils git archeology again:
>
> Enablement for the "pvalidate" mnemonic is in binutils commit
> 646cc3e0109e ("Add AMD znver3 processor support"). :-)
>
> Please put over the opcode bytes line:

Noted.


> 	/* "pvalidate" mnemonic support in binutils 2.36 and newer */
>
>> +
>> +	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
>> +		     CC_SET(c)
>> +		     : CC_OUT(c) (no_rmpupdate), "=a"(rc)
>> +		     : "a"(vaddr), "c"(rmp_psize), "d"(validate)
>> +		     : "memory", "cc");
>> +
>> +	if (no_rmpupdate)
>> +		return PVALIDATE_FAIL_NOUPDATE;
>> +
>> +	return rc;
>> +}
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-05-20 17:44       ` Brijesh Singh
@ 2021-05-20 17:51         ` Borislav Petkov
  2021-05-20 17:57           ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-20 17:51 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Thu, May 20, 2021 at 12:44:50PM -0500, Brijesh Singh wrote:
> Hmm, I use the SIZEMISMATCH later in the patches.

You do, where? Maybe I don't see it.

> Since I was introducing the pvalidate in separate patch so decided to
> define all the return code.

You can define them in a comment so that it is clear what PVALIDATE
returns but not as defines when they're unused.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage Brijesh Singh
@ 2021-05-20 17:52   ` Borislav Petkov
  2021-05-20 18:05     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-20 17:52 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:07AM -0500, Brijesh Singh wrote:
> @@ -278,12 +279,28 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
>  	if ((set | clr) & _PAGE_ENC)
>  		clflush_page(address);
>  
> +	/*
> +	 * If the encryption attribute is being cleared, then change the page state to
> +	 * shared in the RMP entry. Change of the page state must be done before the
> +	 * PTE updates.
> +	 */
> +	if (clr & _PAGE_ENC)
> +		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);

From the last review:

The statement above already looks at clr - just merge the two together.

> @@ -136,6 +137,55 @@ static inline bool sev_snp_enabled(void)
>  	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
>  }
>  
> +static void snp_page_state_change(unsigned long paddr, int op)

From the last review:

no need for too many prefixes on static functions - just call this one
__change_page_state() or so, so that the below one can be called...

> +{
> +	u64 val;
> +
> +	if (!sev_snp_enabled())
> +		return;
> +
> +	/*
> +	 * If the page is getting changed from private to shard then invalidate the page

							shared

And you can write this a lot shorter

	* If private -> shared, ...

> +	 * before requesting the state change in the RMP table.
> +	 */
> +	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
> +		goto e_pvalidate;
> +
> +	/* Issue VMGEXIT to change the page state in RMP table. */
> +	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
> +	VMGEXIT();
> +
> +	/* Read the response of the VMGEXIT. */
> +	val = sev_es_rd_ghcb_msr();
> +	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
> +		goto e_psc;

That label is used only once - just do the termination here directly and
remove it.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction
  2021-05-20 17:51         ` Borislav Petkov
@ 2021-05-20 17:57           ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-20 17:57 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/20/21 12:51 PM, Borislav Petkov wrote:
> On Thu, May 20, 2021 at 12:44:50PM -0500, Brijesh Singh wrote:
>> Hmm, I use the SIZEMISMATCH later in the patches.
> You do, where? Maybe I don't see it.

Sorry, my bad. Currently all the pages are pre-validated, and guest
kernel uses 4K in the page state change VMGEXIT. So, we should *not* see
the SIZEMISMATCH. This will happen when we move to lazy validation. I
mixed up with OVMF in which I validate pages as a large. I will drop the
macro.


>
>> Since I was introducing the pvalidate in separate patch so decided to
>> define all the return code.
> You can define them in a comment so that it is clear what PVALIDATE
> returns but not as defines when they're unused.

Got it.


>
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage
  2021-05-20 17:52   ` Borislav Petkov
@ 2021-05-20 18:05     ` Brijesh Singh
  2021-05-25 10:18       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-20 18:05 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/20/21 12:52 PM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:07AM -0500, Brijesh Singh wrote:
>> @@ -278,12 +279,28 @@ static int set_clr_page_flags(struct x86_mapping_info *info,
>>  	if ((set | clr) & _PAGE_ENC)
>>  		clflush_page(address);
>>  
>> +	/*
>> +	 * If the encryption attribute is being cleared, then change the page state to
>> +	 * shared in the RMP entry. Change of the page state must be done before the
>> +	 * PTE updates.
>> +	 */
>> +	if (clr & _PAGE_ENC)
>> +		snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
> From the last review:
>
> The statement above already looks at clr - just merge the two together.

Maybe I am missing something, the statement above was executed for
either set or clr but the page shared need to happen only for clr. So,
from code readability point I kept it outside of that if().

Otherwise we may have to do something like.

...

if ((set | clr) & _PAGE_EN) {

   if (clr)

    snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);

  }

I am okay with above is the preferred approach.

>
>> @@ -136,6 +137,55 @@ static inline bool sev_snp_enabled(void)
>>  	return sev_status_val & MSR_AMD64_SEV_SNP_ENABLED ? true : false;
>>  }
>>  
>> +static void snp_page_state_change(unsigned long paddr, int op)
> From the last review:
>
> no need for too many prefixes on static functions - just call this one
> __change_page_state() or so, so that the below one can be called...

I guess I still kept the "snp" prefix because vmgexit was named that
way. Based on your feedback, I am droping the "SNP" prefix from the
VMGEXIT and will update it as well.


>> +{
>> +	u64 val;
>> +
>> +	if (!sev_snp_enabled())
>> +		return;
>> +
>> +	/*
>> +	 * If the page is getting changed from private to shard then invalidate the page
> 							shared
>
> And you can write this a lot shorter
>
> 	* If private -> shared, ...
>
>> +	 * before requesting the state change in the RMP table.
>> +	 */
>> +	if ((op == SNP_PAGE_STATE_SHARED) && pvalidate(paddr, RMP_PG_SIZE_4K, 0))
>> +		goto e_pvalidate;
>> +
>> +	/* Issue VMGEXIT to change the page state in RMP table. */
>> +	sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
>> +	VMGEXIT();
>> +
>> +	/* Read the response of the VMGEXIT. */
>> +	val = sev_es_rd_ghcb_msr();
>> +	if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
>> +		goto e_psc;
> That label is used only once - just do the termination here directly and
> remove it.

Noted.


>
> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage
  2021-05-20 18:05     ` Brijesh Singh
@ 2021-05-25 10:18       ` Borislav Petkov
  0 siblings, 0 replies; 62+ messages in thread
From: Borislav Petkov @ 2021-05-25 10:18 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Thu, May 20, 2021 at 01:05:15PM -0500, Brijesh Singh wrote:
> Maybe I am missing something, the statement above was executed for
> either set or clr but the page shared need to happen only for clr. So,
> from code readability point I kept it outside of that if().
> 
> Otherwise we may have to do something like.
> 
> ...
> 
> if ((set | clr) & _PAGE_EN) {
> 
>    if (clr)
> 
>     snp_set_page_shared(pte_pfn(*ptep) << PAGE_SHIFT);
> 
>   }
> 
> I am okay with above is the preferred approach.

Yes pls, because it keeps the _PAGE_ENC handling together in one
statement.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active Brijesh Singh
@ 2021-05-25 10:41   ` Borislav Petkov
  0 siblings, 0 replies; 62+ messages in thread
From: Borislav Petkov @ 2021-05-25 10:41 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:08AM -0500, Brijesh Singh wrote:
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 733fca403ae5..7487d4768ef0 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -88,6 +88,18 @@
>  #define GHCB_MSR_PSC_RSVD_MASK		0xfffffULL
>  #define GHCB_MSR_PSC_RESP_VAL(val)	((val) >> GHCB_MSR_PSC_ERROR_POS)
>  
> +/* GHCB GPA Register */
> +#define GHCB_MSR_GPA_REG_REQ		0x012
> +#define GHCB_MSR_GPA_REG_VALUE_POS	12
> +#define GHCB_MSR_GPA_REG_VALUE_MASK	0xfffffffffffffULL

GENMASK_ULL

> +#define GHCB_MSR_GPA_REQ_VAL(v)		\
> +		(((v) << GHCB_MSR_GPA_REG_VALUE_POS) | GHCB_MSR_GPA_REG_REQ)
> +
> +#define GHCB_MSR_GPA_REG_RESP		0x013
> +#define GHCB_MSR_GPA_REG_RESP_VAL(v)	((v) >> GHCB_MSR_GPA_REG_VALUE_POS)
> +#define GHCB_MSR_GPA_REG_ERROR		0xfffffffffffffULL
> +#define GHCB_MSR_GPA_INVALID		~0ULL

Ditto.

> +
>  /* SNP Page State Change NAE event */
>  #define VMGEXIT_PSC_MAX_ENTRY		253
>  #define VMGEXIT_PSC_INVALID_HEADER	0x100000001
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 085d3d724bc8..140c5bc07fc2 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -81,6 +81,22 @@ static bool ghcb_get_hv_features(void)
>  	return true;
>  }
>  
> +static void snp_register_ghcb(unsigned long paddr)
> +{
> +	unsigned long pfn = paddr >> PAGE_SHIFT;
> +	u64 val;
> +
> +	sev_es_wr_ghcb_msr(GHCB_MSR_GPA_REQ_VAL(pfn));
> +	VMGEXIT();
> +
> +	val = sev_es_rd_ghcb_msr();
> +
> +	/* If the response GPA is not ours then abort the guest */
> +	if ((GHCB_RESP_CODE(val) != GHCB_MSR_GPA_REG_RESP) ||
> +	    (GHCB_MSR_GPA_REG_RESP_VAL(val) != pfn))
> +		sev_es_terminate(1, GHCB_TERM_REGISTER);

Nice, special termination reasons which say why the guest terminates,
cool!

:-)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 13/20] x86/sev: " Brijesh Singh
@ 2021-05-25 11:11   ` Borislav Petkov
  2021-05-25 14:28     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-25 11:11 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:09AM -0500, Brijesh Singh wrote:
> The SEV-SNP guest is required to perform GHCB GPA registration. This is
> because the hypervisor may prefer that a guest use a consistent and/or
> specific GPA for the GHCB associated with a vCPU. For more information,
> see the GHCB specification section GHCB GPA Registration.
> 
> During the boot, init_ghcb() allocates a per-cpu GHCB page. On very first
> VC exception, the exception handler switch to using the per-cpu GHCB page
> allocated during the init_ghcb(). The GHCB page must be registered in
> the current vcpu context.
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kernel/sev.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 8c8c939a1754..e6819f170ec4 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -88,6 +88,13 @@ struct sev_es_runtime_data {
>  	 * is currently unsupported in SEV-ES guests.
>  	 */
>  	unsigned long dr7;
> +
> +	/*
> +	 * SEV-SNP requires that the GHCB must be registered before using it.
> +	 * The flag below will indicate whether the GHCB is registered, if its
> +	 * not registered then sev_es_get_ghcb() will perform the registration.
> +	 */
> +	bool snp_ghcb_registered;
>  };
>  
>  struct ghcb_state {
> @@ -100,6 +107,9 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
>  /* Needed in vc_early_forward_exception */
>  void do_early_exception(struct pt_regs *regs, int trapnr);
>  
> +/* Defined in sev-shared.c */
> +static void snp_register_ghcb(unsigned long paddr);

Can we get rid of those forward declarations pls? Due to sev-shared.c
this file is starting to spawn those and that's ugly.

Either through a code reorg or even defining a sev-internal.h header
which contains all those so that they don't pollute the code?

Thx.

> +
>  static void __init setup_vc_stacks(int cpu)
>  {
>  	struct sev_es_runtime_data *data;
> @@ -218,6 +228,12 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
>  		data->ghcb_active = true;
>  	}
>  
> +	/* SEV-SNP guest requires that GHCB must be registered before using it. */
> +	if (sev_snp_active() && !data->snp_ghcb_registered) {
> +		snp_register_ghcb(__pa(ghcb));
> +		data->snp_ghcb_registered = true;
> +	}

More missed review from last time:

"This needs to be set to true in the function itself, in the success
case."

Can you please be more careful and go through all review comments so
that I don't have to do the same work twice?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-05-25 11:11   ` Borislav Petkov
@ 2021-05-25 14:28     ` Brijesh Singh
  2021-05-25 14:35       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-25 14:28 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/25/21 6:11 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:09AM -0500, Brijesh Singh wrote:
>> The SEV-SNP guest is required to perform GHCB GPA registration. This is
>> because the hypervisor may prefer that a guest use a consistent and/or
>> specific GPA for the GHCB associated with a vCPU. For more information,
>> see the GHCB specification section GHCB GPA Registration.
>>
>> During the boot, init_ghcb() allocates a per-cpu GHCB page. On very first
>> VC exception, the exception handler switch to using the per-cpu GHCB page
>> allocated during the init_ghcb(). The GHCB page must be registered in
>> the current vcpu context.
>>
>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> ---
>>  arch/x86/kernel/sev.c | 21 +++++++++++++++++++++
>>  1 file changed, 21 insertions(+)
>>
>> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
>> index 8c8c939a1754..e6819f170ec4 100644
>> --- a/arch/x86/kernel/sev.c
>> +++ b/arch/x86/kernel/sev.c
>> @@ -88,6 +88,13 @@ struct sev_es_runtime_data {
>>  	 * is currently unsupported in SEV-ES guests.
>>  	 */
>>  	unsigned long dr7;
>> +
>> +	/*
>> +	 * SEV-SNP requires that the GHCB must be registered before using it.
>> +	 * The flag below will indicate whether the GHCB is registered, if its
>> +	 * not registered then sev_es_get_ghcb() will perform the registration.
>> +	 */
>> +	bool snp_ghcb_registered;
>>  };
>>  
>>  struct ghcb_state {
>> @@ -100,6 +107,9 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
>>  /* Needed in vc_early_forward_exception */
>>  void do_early_exception(struct pt_regs *regs, int trapnr);
>>  
>> +/* Defined in sev-shared.c */
>> +static void snp_register_ghcb(unsigned long paddr);
> Can we get rid of those forward declarations pls? Due to sev-shared.c
> this file is starting to spawn those and that's ugly.
>
> Either through a code reorg or even defining a sev-internal.h header
> which contains all those so that they don't pollute the code?

Okay, I will see what I can do to avoid the forward declarations.


> Thx.
>
>> +
>>  static void __init setup_vc_stacks(int cpu)
>>  {
>>  	struct sev_es_runtime_data *data;
>> @@ -218,6 +228,12 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
>>  		data->ghcb_active = true;
>>  	}
>>  
>> +	/* SEV-SNP guest requires that GHCB must be registered before using it. */
>> +	if (sev_snp_active() && !data->snp_ghcb_registered) {
>> +		snp_register_ghcb(__pa(ghcb));
>> +		data->snp_ghcb_registered = true;
>> +	}
> More missed review from last time:
>
> "This needs to be set to true in the function itself, in the success
> case."
>
> Can you please be more careful and go through all review comments so
> that I don't have to do the same work twice?

I am not ignoring your valuable feedback; I am sorry if I came across
like that.

In this particular case, the snp_register_ghcb() is shared between the
decompress and main kernel. The variable data->snp_ghcb_registered is
not visible in the decompressed path, so I choose not to set it to true
inside the function itself.


> Thx.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-05-25 14:28     ` Brijesh Singh
@ 2021-05-25 14:35       ` Borislav Petkov
  2021-05-25 14:47         ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-25 14:35 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Tue, May 25, 2021 at 09:28:14AM -0500, Brijesh Singh wrote:
> I am not ignoring your valuable feedback; I am sorry if I came across
> like that.

Sure, no worries, but you have to say something when you don't agree
with the feedback. How am I supposed to know?

> In this particular case, the snp_register_ghcb() is shared between the
> decompress and main kernel. The variable data->snp_ghcb_registered is
> not visible in the decompressed path,

Why is it not visible?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-05-25 14:35       ` Borislav Petkov
@ 2021-05-25 14:47         ` Brijesh Singh
  2021-05-26  9:57           ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-25 14:47 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/25/21 9:35 AM, Borislav Petkov wrote:
>> In this particular case, the snp_register_ghcb() is shared between the
>> decompress and main kernel. The variable data->snp_ghcb_registered is
>> not visible in the decompressed path,
> Why is it not visible?

Maybe I should have said, its not applicable in the decompressed path.

The snp_ghcb_register is defined in the per-CPU structure, and used in
the per-CPU #VC handler. We don't have the per-CPU #VC handler in the
decompression code path.

Please see the arch/x86/kernel/sev.c

/* #VC handler runtime per-CPU data */

struct sev_es_runtime_data {

 ...

 ...

 bool snp_ghcb_registered;

}



^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-05-25 14:47         ` Brijesh Singh
@ 2021-05-26  9:57           ` Borislav Petkov
  2021-05-26 13:23             ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-26  9:57 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Tue, May 25, 2021 at 09:47:24AM -0500, Brijesh Singh wrote:
> Maybe I should have said, its not applicable in the decompressed path.

Aha, ok. How's that, ontop of yours:

---
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 07b9529d7d95..c9dd98b9dcdf 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -208,7 +208,7 @@ static bool early_setup_sev_es(void)
 
 	/* SEV-SNP guest requires the GHCB GPA must be registered */
 	if (sev_snp_enabled())
-		snp_register_ghcb(__pa(&boot_ghcb_page));
+		snp_register_ghcb_early(__pa(&boot_ghcb_page));
 
 	return true;
 }
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 37a23c524f8c..7200f44d6b6b 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -81,7 +81,7 @@ static bool ghcb_get_hv_features(void)
 	return true;
 }
 
-static void snp_register_ghcb(unsigned long paddr)
+static void snp_register_ghcb_early(unsigned long paddr)
 {
 	unsigned long pfn = paddr >> PAGE_SHIFT;
 	u64 val;
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 5544557d9fb6..144c20479cae 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -108,7 +108,18 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
 /* Defined in sev-shared.c */
-static void snp_register_ghcb(unsigned long paddr);
+static void snp_register_ghcb_early(unsigned long paddr);
+
+static void snp_register_ghcb(struct sev_es_runtime_data *data,
+			      unsigned long paddr)
+{
+	if (data->snp_ghcb_registered)
+		return;
+
+	snp_register_ghcb_early(paddr);
+
+	data->snp_ghcb_registered = true;
+}
 
 static void __init setup_vc_stacks(int cpu)
 {
@@ -239,10 +250,8 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
 	}
 
 	/* SEV-SNP guest requires that GHCB must be registered before using it. */
-	if (sev_snp_active() && !data->snp_ghcb_registered) {
-		snp_register_ghcb(__pa(ghcb));
-		data->snp_ghcb_registered = true;
-	}
+	if (sev_snp_active())
+		snp_register_ghcb(data, __pa(ghcb));
 
 	return ghcb;
 }
@@ -681,7 +690,7 @@ static bool __init sev_es_setup_ghcb(void)
 
 	/* SEV-SNP guest requires that GHCB GPA must be registered */
 	if (sev_snp_active())
-		snp_register_ghcb(__pa(&boot_ghcb_page));
+		snp_register_ghcb_early(__pa(&boot_ghcb_page));
 
 	return true;
 }

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes Brijesh Singh
@ 2021-05-26 10:39   ` Borislav Petkov
  2021-05-26 13:34     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-26 10:39 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:10AM -0500, Brijesh Singh wrote:
> +static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
> +{
> +	unsigned long vaddr_end;
> +	int rc;
> +
> +	vaddr = vaddr & PAGE_MASK;
> +	vaddr_end = vaddr + (npages << PAGE_SHIFT);
> +
> +	while (vaddr < vaddr_end) {
> +		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
> +		if (rc) {
> +			WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc);

WARN does return the condition it warned on, look at its definition.

IOW, you can do (and btw e_fail is not needed either):

                rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
                if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
			sev_es_terminate(1, GHCB_TERM_PVALIDATE);

Ditto for the other WARN.

But looking at that function more, you probably could reorganize it a
bit and do this instead:

	...

        while (vaddr < vaddr_end) {
                if (!pvalidate(vaddr, RMP_PG_SIZE_4K, validate)) {
                        vaddr = vaddr + PAGE_SIZE;
                        continue;
                }

                WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc);
                sev_es_terminate(1, GHCB_TERM_PVALIDATE);
        }
}

and have the failure case at the end and no labels or return statements
for less clutter.

> +			goto e_fail;
> +		}
> +
> +		vaddr = vaddr + PAGE_SIZE;
> +	}
> +
> +	return;
> +
> +e_fail:
> +	sev_es_terminate(1, GHCB_TERM_PVALIDATE);
> +}
> +
> +static void __init early_snp_set_page_state(unsigned long paddr, unsigned int npages, int op)
> +{
> +	unsigned long paddr_end;
> +	u64 val;
> +
> +	paddr = paddr & PAGE_MASK;
> +	paddr_end = paddr + (npages << PAGE_SHIFT);
> +
> +	while (paddr < paddr_end) {
> +		/*
> +		 * Use the MSR protcol because this function can be called before the GHCB

WARNING: 'protcol' may be misspelled - perhaps 'protocol'?
#209: FILE: arch/x86/kernel/sev.c:562:
+                * Use the MSR protcol because this function can be called before the GHCB


I think I've said this before:

Please integrate scripts/checkpatch.pl into your patch creation
workflow. Some of the warnings/errors *actually* make sense.

> +		 * is established.
> +		 */
> +		sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
> +		VMGEXIT();
> +
> +		val = sev_es_rd_ghcb_msr();
> +
> +		if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)

Does that one need a warning too or am I being too paranoid?

> +			goto e_term;
> +
> +		if (GHCB_MSR_PSC_RESP_VAL(val)) {
> +			WARN(1, "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
> +				op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared", paddr,
> +				GHCB_MSR_PSC_RESP_VAL(val));
> +			goto e_term;

                if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
                         "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
                         op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared", 
                         paddr, GHCB_MSR_PSC_RESP_VAL(val)))
                        goto e_term;


> +		}
> +
> +		paddr = paddr + PAGE_SIZE;
> +	}
> +
> +	return;
> +
> +e_term:
> +	sev_es_terminate(1, GHCB_TERM_PSC);
> +}
> +
> +void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
> +					 unsigned int npages)
> +{
> +	if (!sev_snp_active())
> +		return;
> +
> +	 /* Ask hypervisor to add the memory pages in RMP table as a 'private'. */
> +	early_snp_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
> +
> +	/* Validate the memory pages after its added in the RMP table. */

				     after they've been added...

> +	pvalidate_pages(vaddr, npages, 1);
> +}
> +
> +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
> +					unsigned int npages)
> +{
> +	if (!sev_snp_active())
> +		return;
> +
> +	/* Invalidate memory pages before making it shared in the RMP table. */

Ditto.

> +	pvalidate_pages(vaddr, npages, 0);
> +
> +	 /* Ask hypervisor to make the memory pages shared in the RMP table. */
> +	early_snp_set_page_state(paddr, npages, SNP_PAGE_STATE_SHARED);
> +}
> +
>  int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
>  {
>  	u16 startup_cs, startup_ip;
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 076d993acba3..f722518b244f 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -30,6 +30,7 @@
>  #include <asm/processor-flags.h>
>  #include <asm/msr.h>
>  #include <asm/cmdline.h>
> +#include <asm/sev.h>
>  
>  #include "mm_internal.h"
>  
> @@ -50,6 +51,30 @@ bool sev_enabled __section(".data");
>  /* Buffer used for early in-place encryption by BSP, no locking needed */
>  static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
>  
> +/*
> + * When SNP is active, this routine changes the page state from private to

s/this routine changes/change/

> + * shared before copying the data from the source to destination and restore
> + * after the copy. This is required because the source address is mapped as
> + * decrypted by the caller of the routine.
> + */
> +static inline void __init snp_memcpy(void *dst, void *src, size_t sz, unsigned long paddr,
> +				     bool dec)

					 decrypt

> +{
> +	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> +
> +	if (dec) {
> +		/* If the paddr needs to be accessed decrypted, make the page shared before memcpy. */
> +		early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
> +
> +		memcpy(dst, src, sz);
> +
> +		/* Restore the page state after the memcpy. */
> +		early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
> +	} else {
> +		memcpy(dst, src, sz);
> +	}

Hmm, this function needs reorg. How about this:

/*
 * When SNP is active, change the page state from private to shared before
 * copying the data from the source to destination and restore after the copy.
 * This is required because the source address is mapped as decrypted by the
 * caller of the routine.
 */
static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
                                     unsigned long paddr, bool decrypt)
{
        unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;

        if (!sev_snp_active() || !decrypt) {
                memcpy(dst, src, sz);
                return;
	}

        /*
         * If the paddr needs to be accessed decrypted, mark the page
         * shared in the RMP table before copying it.
         */
        early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);

        memcpy(dst, src, sz);

        /* Restore the page state after the memcpy. */
        early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
}


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 13/20] x86/sev: Register GHCB memory when SEV-SNP is active
  2021-05-26  9:57           ` Borislav Petkov
@ 2021-05-26 13:23             ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-26 13:23 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/26/21 4:57 AM, Borislav Petkov wrote:
> On Tue, May 25, 2021 at 09:47:24AM -0500, Brijesh Singh wrote:
>> Maybe I should have said, its not applicable in the decompressed path.
> Aha, ok. How's that, ontop of yours:

Sure, its fine with me. I will apply this change.

-Birjesh


> ---
> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index 07b9529d7d95..c9dd98b9dcdf 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -208,7 +208,7 @@ static bool early_setup_sev_es(void)
>  
>  	/* SEV-SNP guest requires the GHCB GPA must be registered */
>  	if (sev_snp_enabled())
> -		snp_register_ghcb(__pa(&boot_ghcb_page));
> +		snp_register_ghcb_early(__pa(&boot_ghcb_page));
>  
>  	return true;
>  }
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index 37a23c524f8c..7200f44d6b6b 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -81,7 +81,7 @@ static bool ghcb_get_hv_features(void)
>  	return true;
>  }
>  
> -static void snp_register_ghcb(unsigned long paddr)
> +static void snp_register_ghcb_early(unsigned long paddr)
>  {
>  	unsigned long pfn = paddr >> PAGE_SHIFT;
>  	u64 val;
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 5544557d9fb6..144c20479cae 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -108,7 +108,18 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
>  void do_early_exception(struct pt_regs *regs, int trapnr);
>  
>  /* Defined in sev-shared.c */
> -static void snp_register_ghcb(unsigned long paddr);
> +static void snp_register_ghcb_early(unsigned long paddr);
> +
> +static void snp_register_ghcb(struct sev_es_runtime_data *data,
> +			      unsigned long paddr)
> +{
> +	if (data->snp_ghcb_registered)
> +		return;
> +
> +	snp_register_ghcb_early(paddr);
> +
> +	data->snp_ghcb_registered = true;
> +}
>  
>  static void __init setup_vc_stacks(int cpu)
>  {
> @@ -239,10 +250,8 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
>  	}
>  
>  	/* SEV-SNP guest requires that GHCB must be registered before using it. */
> -	if (sev_snp_active() && !data->snp_ghcb_registered) {
> -		snp_register_ghcb(__pa(ghcb));
> -		data->snp_ghcb_registered = true;
> -	}
> +	if (sev_snp_active())
> +		snp_register_ghcb(data, __pa(ghcb));
>  
>  	return ghcb;
>  }
> @@ -681,7 +690,7 @@ static bool __init sev_es_setup_ghcb(void)
>  
>  	/* SEV-SNP guest requires that GHCB GPA must be registered */
>  	if (sev_snp_active())
> -		snp_register_ghcb(__pa(&boot_ghcb_page));
> +		snp_register_ghcb_early(__pa(&boot_ghcb_page));
>  
>  	return true;
>  }
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes
  2021-05-26 10:39   ` Borislav Petkov
@ 2021-05-26 13:34     ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-26 13:34 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/26/21 5:39 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:10AM -0500, Brijesh Singh wrote:
>> +static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
>> +{
>> +	unsigned long vaddr_end;
>> +	int rc;
>> +
>> +	vaddr = vaddr & PAGE_MASK;
>> +	vaddr_end = vaddr + (npages << PAGE_SHIFT);
>> +
>> +	while (vaddr < vaddr_end) {
>> +		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
>> +		if (rc) {
>> +			WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc);
> WARN does return the condition it warned on, look at its definition.
>
> IOW, you can do (and btw e_fail is not needed either):
>
>                 rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
>                 if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
> 			sev_es_terminate(1, GHCB_TERM_PVALIDATE);
>
> Ditto for the other WARN.

Okay, I agree with comment, I will go through each code block and use
the return value from the WARN().

..
>> +	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>> +
>> +	if (dec) {
>> +		/* If the paddr needs to be accessed decrypted, make the page shared before memcpy. */
>> +		early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
>> +
>> +		memcpy(dst, src, sz);
>> +
>> +		/* Restore the page state after the memcpy. */
>> +		early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
>> +	} else {
>> +		memcpy(dst, src, sz);
>> +	}
> Hmm, this function needs reorg. How about this:

I think with your edits its looking similar to what I had in v1. In v1
early_snp_set_memory_shared() was not doing sev_snp_active()  check
whereas it does now.I didn't check the sev_snp_active() in this function
but I agree that it can be reorg. I am good with your changes. thanks


>
> /*
>  * When SNP is active, change the page state from private to shared before
>  * copying the data from the source to destination and restore after the copy.
>  * This is required because the source address is mapped as decrypted by the
>  * caller of the routine.
>  */
> static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
>                                      unsigned long paddr, bool decrypt)
> {
>         unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>
>         if (!sev_snp_active() || !decrypt) {
>                 memcpy(dst, src, sz);
>                 return;
> 	}
>
>         /*
>          * If the paddr needs to be accessed decrypted, mark the page
>          * shared in the RMP table before copying it.
>          */
>         early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
>
>         memcpy(dst, src, sz);
>
>         /* Restore the page state after the memcpy. */
>         early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
> }
>
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active
  2021-04-30 12:16 ` [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active Brijesh Singh
@ 2021-05-27 11:49   ` Borislav Petkov
  2021-05-27 12:12     ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-27 11:49 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Fri, Apr 30, 2021 at 07:16:12AM -0500, Brijesh Singh wrote:
> +	/*
> +	 * The ROM memory is not part of the E820 system RAM and is not pre-validated
> +	 * by the BIOS. The kernel page table maps the ROM region as encrypted memory,
> +	 * the SEV-SNP requires the encrypted memory must be validated before the
> +	 * access. Validate the ROM before accessing it.
> +	 */
> +	n = ((system_rom_resource.end + 1) - video_rom_resource.start) >> PAGE_SHIFT;
> +	early_snp_set_memory_private((unsigned long)__va(video_rom_resource.start),
> +			video_rom_resource.start, n);

From last review:

I don't like this sprinkling of SNP-special stuff that needs to be done,
around the tree. Instead, pls define a function called

        snp_prep_memory(unsigned long pa, unsigned int num_pages, enum operation);

or so which does all the manipulation needed and the callsites only
simply unconditionally call that function so that all detail is
extracted and optimized away when not config-enabled.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active
  2021-05-27 11:49   ` Borislav Petkov
@ 2021-05-27 12:12     ` Brijesh Singh
  2021-05-27 12:23       ` Borislav Petkov
  0 siblings, 1 reply; 62+ messages in thread
From: Brijesh Singh @ 2021-05-27 12:12 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/27/21 6:49 AM, Borislav Petkov wrote:
> On Fri, Apr 30, 2021 at 07:16:12AM -0500, Brijesh Singh wrote:
>> +	/*
>> +	 * The ROM memory is not part of the E820 system RAM and is not pre-validated
>> +	 * by the BIOS. The kernel page table maps the ROM region as encrypted memory,
>> +	 * the SEV-SNP requires the encrypted memory must be validated before the
>> +	 * access. Validate the ROM before accessing it.
>> +	 */
>> +	n = ((system_rom_resource.end + 1) - video_rom_resource.start) >> PAGE_SHIFT;
>> +	early_snp_set_memory_private((unsigned long)__va(video_rom_resource.start),
>> +			video_rom_resource.start, n);
> From last review:
>
> I don't like this sprinkling of SNP-special stuff that needs to be done,
> around the tree. Instead, pls define a function called
>
>         snp_prep_memory(unsigned long pa, unsigned int num_pages, enum operation);

In the previous patch we were doing:

if (sev_snp_active()) {

   early_set_memory_private(....)

}

Based on your feedback on other patches, I moved the sev_snp_active()
check inside the function. The callsites can now make unconditional call
to change the page state. After implementing that feedback, I don't see
a strong reason for yet another helper unless I am missing something:

snp_prep_memory(pa, n, SNP_PAGE_PRIVATE) == snp_set_memory_private(pa, n)

snp_prep_memory(pa, n, SNP_PAGE_SHARED) == snp_set_memory_shared(pa, n)

Let me know if you still think that snp_prep_memory() helper is required.

-Brijesh

> or so which does all the manipulation needed and the callsites only
> simply unconditionally call that function so that all detail is
> extracted and optimized away when not config-enabled.
>

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active
  2021-05-27 12:12     ` Brijesh Singh
@ 2021-05-27 12:23       ` Borislav Petkov
  2021-05-27 12:56         ` Brijesh Singh
  0 siblings, 1 reply; 62+ messages in thread
From: Borislav Petkov @ 2021-05-27 12:23 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: x86, linux-kernel, kvm, tglx, jroedel, thomas.lendacky, pbonzini,
	mingo, dave.hansen, rientjes, seanjc, peterz, hpa, tony.luck

On Thu, May 27, 2021 at 07:12:00AM -0500, Brijesh Singh wrote:
> Let me know if you still think that snp_prep_memory() helper is required.

Yes, I still do think that because you can put the comment and all
the manupulation of parameters in there and have a single oneliner in
probe_roms.c:

	snp_prep_memory(...);

and have the details in sev.c where they belong.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 62+ messages in thread

* Re: [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active
  2021-05-27 12:23       ` Borislav Petkov
@ 2021-05-27 12:56         ` Brijesh Singh
  0 siblings, 0 replies; 62+ messages in thread
From: Brijesh Singh @ 2021-05-27 12:56 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, x86, linux-kernel, kvm, tglx, jroedel,
	thomas.lendacky, pbonzini, mingo, dave.hansen, rientjes, seanjc,
	peterz, hpa, tony.luck


On 5/27/21 7:23 AM, Borislav Petkov wrote:
> On Thu, May 27, 2021 at 07:12:00AM -0500, Brijesh Singh wrote:
>> Let me know if you still think that snp_prep_memory() helper is required.
> Yes, I still do think that because you can put the comment and all
> the manupulation of parameters in there and have a single oneliner in
> probe_roms.c:
>
> 	snp_prep_memory(...);
>
> and have the details in sev.c where they belong.

Okay, I will add the helper.


^ permalink raw reply	[flat|nested] 62+ messages in thread

end of thread, other threads:[~2021-05-27 12:56 UTC | newest]

Thread overview: 62+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-30 12:15 [PATCH Part1 RFC v2 00/20] Add AMD Secure Nested Paging (SEV-SNP) Guest Support Brijesh Singh
2021-04-30 12:15 ` [PATCH Part1 RFC v2 01/20] x86/sev: Define the GHCB MSR protocol for AP reset hold Brijesh Singh
2021-04-30 12:15 ` [PATCH Part1 RFC v2 02/20] x86/sev: Save the negotiated GHCB version Brijesh Singh
2021-05-11  9:23   ` Borislav Petkov
2021-05-11 18:29     ` Brijesh Singh
2021-05-11 18:41       ` Borislav Petkov
2021-05-12 14:03         ` Brijesh Singh
2021-05-12 14:31           ` Borislav Petkov
2021-05-12 15:03             ` Brijesh Singh
2021-04-30 12:15 ` [PATCH Part1 RFC v2 03/20] x86/sev: Add support for hypervisor feature VMGEXIT Brijesh Singh
2021-05-11 10:01   ` Borislav Petkov
2021-05-11 18:53     ` Brijesh Singh
2021-05-17 14:40       ` Borislav Petkov
2021-04-30 12:16 ` [PATCH Part1 RFC v2 04/20] x86/sev: Increase the GHCB protocol version Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 05/20] x86/sev: Define SNP Page State Change VMGEXIT structure Brijesh Singh
2021-05-18 10:41   ` Borislav Petkov
2021-05-18 15:06     ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 06/20] x86/sev: Define SNP guest request NAE events Brijesh Singh
2021-05-18 10:45   ` Borislav Petkov
2021-05-18 13:42     ` Brijesh Singh
2021-05-18 13:54       ` Borislav Petkov
2021-05-18 14:13         ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 07/20] x86/sev: Define error codes for reason set 1 Brijesh Singh
2021-05-18 11:05   ` Borislav Petkov
2021-04-30 12:16 ` [PATCH Part1 RFC v2 08/20] x86/mm: Add sev_snp_active() helper Brijesh Singh
2021-05-18 18:11   ` Borislav Petkov
2021-05-19 17:28     ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 09/20] x86/sev: check SEV-SNP features support Brijesh Singh
2021-05-20 16:02   ` Borislav Petkov
2021-05-20 17:40     ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 10/20] x86/sev: Add a helper for the PVALIDATE instruction Brijesh Singh
2021-04-30 13:05   ` Brijesh Singh
2021-05-20 17:32     ` Borislav Petkov
2021-05-20 17:44       ` Brijesh Singh
2021-05-20 17:51         ` Borislav Petkov
2021-05-20 17:57           ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 11/20] x86/compressed: Add helper for validating pages in the decompression stage Brijesh Singh
2021-05-20 17:52   ` Borislav Petkov
2021-05-20 18:05     ` Brijesh Singh
2021-05-25 10:18       ` Borislav Petkov
2021-04-30 12:16 ` [PATCH Part1 RFC v2 12/20] x86/compressed: Register GHCB memory when SEV-SNP is active Brijesh Singh
2021-05-25 10:41   ` Borislav Petkov
2021-04-30 12:16 ` [PATCH Part1 RFC v2 13/20] x86/sev: " Brijesh Singh
2021-05-25 11:11   ` Borislav Petkov
2021-05-25 14:28     ` Brijesh Singh
2021-05-25 14:35       ` Borislav Petkov
2021-05-25 14:47         ` Brijesh Singh
2021-05-26  9:57           ` Borislav Petkov
2021-05-26 13:23             ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 14/20] x86/sev: Add helper for validating pages in early enc attribute changes Brijesh Singh
2021-05-26 10:39   ` Borislav Petkov
2021-05-26 13:34     ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 15/20] x86/kernel: Make the bss.decrypted section shared in RMP table Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 16/20] x86/kernel: Validate rom memory before accessing when SEV-SNP is active Brijesh Singh
2021-05-27 11:49   ` Borislav Petkov
2021-05-27 12:12     ` Brijesh Singh
2021-05-27 12:23       ` Borislav Petkov
2021-05-27 12:56         ` Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 17/20] x86/mm: Add support to validate memory when changing C-bit Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 18/20] x86/boot: Add Confidential Computing address to setup_header Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 19/20] x86/sev: Register SNP guest request platform device Brijesh Singh
2021-04-30 12:16 ` [PATCH Part1 RFC v2 20/20] virt: Add SEV-SNP guest driver Brijesh Singh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).