* [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command
@ 2017-11-01 21:17 Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 31/38] KVM: SVM: Add support for SEV GUEST_STATUS command Brijesh Singh
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The command is used for finalizing the SEV guest launch process.
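For context, the `svm_mem_enc_op()` dispatch that this and the following patches extend can be modeled in user space. The command id value and handler below are illustrative stand-ins for the sketch, not the real uapi definitions:

```c
#include <assert.h>
#include <errno.h>

/*
 * User-space model of the svm_mem_enc_op() dispatch this series builds
 * up: each KVM_SEV_* command id is routed to a handler and unknown ids
 * fail with -EINVAL. The id value and handler are stand-ins.
 */
enum { SKETCH_SEV_LAUNCH_FINISH = 7 };	/* value chosen for the sketch only */

static int sketch_launch_finish(void)
{
	/* the real handler issues SEV_CMD_LAUNCH_FINISH to the firmware */
	return 0;
}

static int sketch_mem_enc_op(int id)
{
	switch (id) {
	case SKETCH_SEV_LAUNCH_FINISH:
		return sketch_launch_finish();
	default:
		return -EINVAL;
	}
}
```

Each later patch in the series adds one more `case` to this switch, which is why the hunks below all touch the same function.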

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kvm/svm.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c9be9dde7b85..0ed823806bbc 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5982,6 +5982,26 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct sev_data_launch_finish *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_FINISH, data, &argp->error);
+
+	kfree(data);
+	return ret;
+}
+
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6008,6 +6028,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_LAUNCH_MEASURE:
 		r = sev_launch_measure(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_LAUNCH_FINISH:
+		r = sev_launch_finish(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [Part2 PATCH v7 31/38] KVM: SVM: Add support for SEV GUEST_STATUS command
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command Brijesh Singh
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The command is used for querying the SEV guest status.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kvm/svm.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0ed823806bbc..65ef2a72b8e8 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6002,6 +6002,36 @@ static int sev_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct kvm_sev_guest_status params;
+	struct sev_data_guest_status *data;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_GUEST_STATUS, data, &argp->error);
+	if (ret)
+		goto e_free;
+
+	params.policy = data->policy;
+	params.state = data->state;
+	params.handle = data->handle;
+
+	if (copy_to_user((void __user *)(uintptr_t)argp->data, &params, sizeof(params)))
+		ret = -EFAULT;
+e_free:
+	kfree(data);
+	return ret;
+}
+
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6031,6 +6061,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_LAUNCH_FINISH:
 		r = sev_launch_finish(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_GUEST_STATUS:
+		r = sev_guest_status(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.9.5


* [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 31/38] KVM: SVM: Add support for SEV GUEST_STATUS command Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-06 11:24   ` Borislav Petkov
  2017-11-01 21:17 ` [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command Brijesh Singh
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The command is used for decrypting a guest memory region for debug
purposes.
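The decrypt helper added below widens every request to the 16-byte granularity the SEV firmware operates on. A minimal user-space sketch of that bookkeeping, where `rdown()`/`rup()` stand in for the kernel's `round_down()`/`round_up()` macros:

```c
#include <assert.h>

/*
 * Sketch of the 16-byte widening __sev_dbg_decrypt() performs before
 * issuing SEV_CMD_DBG_DECRYPT: remember the sub-16-byte offset, align
 * the source down, and round the length up so the firmware request
 * still covers every byte the caller asked for.
 */
static unsigned long rdown(unsigned long x, unsigned long a)
{
	return x & ~(a - 1);
}

static int rup(int x, int a)
{
	return (x + a - 1) & ~(a - 1);
}

static void widen_dbg_request(unsigned long *src_paddr, int *sz)
{
	int offset = *src_paddr & 15;	/* must be taken before aligning down */

	*src_paddr = rdown(*src_paddr, 16);
	*sz = rup(*sz + offset, 16);
}
```

Note the offset has to be captured before the source address is aligned down, otherwise it is always zero.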

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 157 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 65ef2a72b8e8..adf4d80caee4 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6032,6 +6032,160 @@ static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int __sev_issue_dbg_cmd(struct kvm *kvm, unsigned long src,
+			       unsigned long dst, int size,
+			       int *error, bool enc)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct sev_data_dbg *data;
+	int ret;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->handle = sev->handle;
+	data->dst_addr = dst;
+	data->src_addr = src;
+	data->len = size;
+
+	ret = sev_issue_cmd(kvm,
+			    enc ? SEV_CMD_DBG_ENCRYPT : SEV_CMD_DBG_DECRYPT,
+			    data, error);
+	kfree(data);
+	return ret;
+}
+
+static int __sev_dbg_decrypt(struct kvm *kvm, unsigned long src_paddr,
+			     unsigned long dst_paddr, int sz, int *err)
+{
+	int offset;
+
+	/*
+	 * It's safe to read more than we are asked; the caller should ensure
+	 * that the destination has enough space.
+	 */
+	offset = src_paddr & 15;
+	src_paddr = round_down(src_paddr, 16);
+	sz = round_up(sz + offset, 16);
+
+	return __sev_issue_dbg_cmd(kvm, src_paddr, dst_paddr, sz, err, false);
+}
+
+static int sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr,
+				unsigned long __user dst_uaddr,
+				unsigned long dst_paddr,
+				int size, int *err)
+{
+	struct page *tpage = NULL;
+	int ret, offset;
+
+	/* If the inputs are not 16-byte aligned then use an intermediate buffer */
+	if (!IS_ALIGNED(dst_paddr, 16) ||
+	    !IS_ALIGNED(paddr,     16) ||
+	    !IS_ALIGNED(size,      16)) {
+		tpage = (void *)alloc_page(GFP_KERNEL);
+		if (!tpage)
+			return -ENOMEM;
+
+		dst_paddr = __sme_page_pa(tpage);
+	}
+
+	ret = __sev_dbg_decrypt(kvm, paddr, dst_paddr, size, err);
+	if (ret)
+		goto e_free;
+
+	if (tpage) {
+		offset = paddr & 15;
+		if (copy_to_user((void __user *)(uintptr_t)dst_uaddr,
+				 page_address(tpage) + offset, size))
+			ret = -EFAULT;
+	}
+
+e_free:
+	if (tpage)
+		__free_page(tpage);
+
+	return ret;
+}
+
+static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
+{
+	unsigned long vaddr, vaddr_end, next_vaddr;
+	unsigned long dst_vaddr, dst_vaddr_end;
+	struct page **src_p, **dst_p;
+	struct kvm_sev_dbg debug;
+	unsigned long n;
+	int ret, size;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&debug, (void __user *)(uintptr_t)argp->data, sizeof(debug)))
+		return -EFAULT;
+
+	vaddr = debug.src_uaddr;
+	size = debug.len;
+	vaddr_end = vaddr + size;
+	dst_vaddr = debug.dst_uaddr;
+	dst_vaddr_end = dst_vaddr + size;
+
+	for (; vaddr < vaddr_end; vaddr = next_vaddr) {
+		int len, s_off, d_off;
+
+		/* lock userspace source and destination page */
+		src_p = sev_pin_memory(kvm, vaddr & PAGE_MASK, PAGE_SIZE, &n, 0);
+		if (!src_p)
+			return -EFAULT;
+
+		dst_p = sev_pin_memory(kvm, dst_vaddr & PAGE_MASK, PAGE_SIZE, &n, 1);
+		if (!dst_p) {
+			sev_unpin_memory(kvm, src_p, n);
+			return -EFAULT;
+		}
+
+		/*
+		 * The DBG_{DE,EN}CRYPT commands will perform {dec,en}cryption of the
+	 * memory content (i.e. it will write the same memory region with C=1).
+		 * It's possible that the cache may contain the data with C=0, i.e.,
+		 * unencrypted so invalidate it first.
+		 */
+		sev_clflush_pages(src_p, 1);
+		sev_clflush_pages(dst_p, 1);
+
+		/*
+		 * Since user buffer may not be page aligned, calculate the
+		 * offset within the page.
+		 */
+		s_off = vaddr & ~PAGE_MASK;
+		d_off = dst_vaddr & ~PAGE_MASK;
+		len = min_t(size_t, (PAGE_SIZE - s_off), size);
+
+		ret = sev_dbg_decrypt_user(kvm,
+					   __sme_page_pa(src_p[0]) + s_off,
+					   dst_vaddr,
+					   __sme_page_pa(dst_p[0]) + d_off,
+					   len, &argp->error);
+
+		sev_unpin_memory(kvm, src_p, 1);
+		sev_unpin_memory(kvm, dst_p, 1);
+
+		if (ret)
+			goto err;
+
+		next_vaddr = vaddr + len;
+		dst_vaddr = dst_vaddr + len;
+		size -= len;
+	}
+err:
+	return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	return sev_dbg_crypt(kvm, argp, true);
+}
+
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6064,6 +6218,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_GUEST_STATUS:
 		r = sev_guest_status(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_DBG_DECRYPT:
+		r = sev_dbg_decrypt(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.9.5


* [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 31/38] KVM: SVM: Add support for SEV GUEST_STATUS command Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-06 11:31   ` Borislav Petkov
  2017-11-01 21:17 ` [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command Brijesh Singh
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The command copies plaintext into the guest memory and encrypts it using
the VM encryption key. The command will be used for debug purposes
(e.g., setting breakpoints through gdbserver).
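Since user buffers need not be page aligned, the shared `sev_dbg_crypt()` loop splits each request at page boundaries and pins, flushes, and issues one command per chunk. A small sketch of that chunking, assuming a 4 KiB page size:

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096UL	/* assumed page size for the sketch */

/*
 * Sketch of how sev_dbg_crypt() walks a user buffer: each iteration
 * handles at most the remainder of the current source page, so a buffer
 * crossing page boundaries is split into per-page chunks. Returns the
 * number of chunks (i.e. pin/flush/command cycles).
 */
static int count_dbg_chunks(unsigned long vaddr, size_t size)
{
	int chunks = 0;

	while (size) {
		unsigned long s_off = vaddr & (SKETCH_PAGE_SIZE - 1);
		size_t len = SKETCH_PAGE_SIZE - s_off;

		if (len > size)
			len = size;
		vaddr += len;
		size -= len;
		chunks++;
	}
	return chunks;
}
```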

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 98 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index adf4d80caee4..35840979627f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6109,6 +6109,83 @@ static int sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr,
 	return ret;
 }
 
+static int __sev_dbg_encrypt_user(struct kvm *kvm, unsigned long paddr,
+				  unsigned long __user vaddr,
+				  unsigned long dst_paddr,
+				  unsigned long __user dst_vaddr,
+				  int size, int *error)
+{
+	struct page *src_tpage = NULL;
+	struct page *dst_tpage = NULL;
+	int ret, len = size;
+
+	/* If source buffer is not aligned then use an intermediate buffer */
+	if (!IS_ALIGNED(vaddr, 16)) {
+		src_tpage = alloc_page(GFP_KERNEL);
+		if (!src_tpage)
+			return -ENOMEM;
+
+		if (copy_from_user(page_address(src_tpage),
+				(void __user *)(uintptr_t)vaddr, size)) {
+			__free_page(src_tpage);
+			return -EFAULT;
+		}
+
+		paddr = __sme_page_pa(src_tpage);
+	}
+
+	/*
+	 *  If destination buffer or length is not aligned then do read-modify-write:
+	 *   - decrypt destination in an intermediate buffer
+	 *   - copy the source buffer in an intermediate buffer
+	 *   - use the intermediate buffer as source buffer
+	 */
+	if (!IS_ALIGNED(dst_vaddr, 16) || !IS_ALIGNED(size, 16)) {
+		int dst_offset;
+
+		dst_tpage = alloc_page(GFP_KERNEL);
+		if (!dst_tpage) {
+			ret = -ENOMEM;
+			goto e_free;
+		}
+
+		ret = __sev_dbg_decrypt(kvm, dst_paddr,
+					__sme_page_pa(dst_tpage), size, error);
+		if (ret)
+			goto e_free;
+
+		/*
+		 *  If source is kernel buffer then use memcpy() otherwise
+		 *  copy_from_user().
+		 */
+		dst_offset = dst_paddr & 15;
+
+		if (src_tpage)
+			memcpy(page_address(dst_tpage) + dst_offset,
+			       page_address(src_tpage), size);
+		else {
+			if (copy_from_user(page_address(dst_tpage) + dst_offset,
+					   (void __user *)(uintptr_t)vaddr, size)) {
+				ret = -EFAULT;
+				goto e_free;
+			}
+		}
+
+		paddr = __sme_page_pa(dst_tpage);
+		dst_paddr = round_down(dst_paddr, 16);
+		len = round_up(size, 16);
+	}
+
+	ret = __sev_issue_dbg_cmd(kvm, paddr, dst_paddr, len, error, true);
+
+e_free:
+	if (src_tpage)
+		__free_page(src_tpage);
+	if (dst_tpage)
+		__free_page(dst_tpage);
+	return ret;
+}
+
 static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
 {
 	unsigned long vaddr, vaddr_end, next_vaddr;
@@ -6161,11 +6238,19 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
 		d_off = dst_vaddr & ~PAGE_MASK;
 		len = min_t(size_t, (PAGE_SIZE - s_off), size);
 
-		ret = sev_dbg_decrypt_user(kvm,
-					   __sme_page_pa(src_p[0]) + s_off,
-					   dst_vaddr,
-					   __sme_page_pa(dst_p[0]) + d_off,
-					   len, &argp->error);
+		if (dec)
+			ret = sev_dbg_decrypt_user(kvm,
+						   __sme_page_pa(src_p[0]) + s_off,
+						   dst_vaddr,
+						   __sme_page_pa(dst_p[0]) + d_off,
+						   len, &argp->error);
+		else
+			ret = __sev_dbg_encrypt_user(kvm,
+						     __sme_page_pa(src_p[0]) + s_off,
+						     vaddr,
+						     __sme_page_pa(dst_p[0]) + d_off,
+						     dst_vaddr,
+						     len, &argp->error);
 
 		sev_unpin_memory(kvm, src_p, 1);
 		sev_unpin_memory(kvm, dst_p, 1);
@@ -6186,6 +6271,11 @@ static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return sev_dbg_crypt(kvm, argp, true);
 }
 
+static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	return sev_dbg_crypt(kvm, argp, false);
+}
+
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6221,6 +6311,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_DBG_DECRYPT:
 		r = sev_dbg_decrypt(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_DBG_ENCRYPT:
+		r = sev_dbg_encrypt(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.9.5


* [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
                   ` (2 preceding siblings ...)
  2017-11-01 21:17 ` [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-06 14:36   ` Borislav Petkov
  2017-11-01 21:17 ` [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active Brijesh Singh
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The command is used for injecting a secret into the guest memory region.
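The handler refuses to inject the secret unless the pinned pages are physically contiguous. A user-space sketch of that check, modeled on the series' `get_num_contig_pages()` but over a plain array of page-frame numbers:

```c
#include <assert.h>

/*
 * Sketch of the contiguity check sev_launch_secret() relies on:
 * starting at index 0, count how many of the pinned pages have
 * consecutive page-frame numbers. The secret can only be injected if
 * the count equals the number of pinned pages.
 */
static unsigned long num_contig_pfns(const unsigned long *pfns,
				     unsigned long npages)
{
	unsigned long i, n = 1;

	if (!npages)
		return 0;
	for (i = 1; i < npages; i++) {
		if (pfns[i] != pfns[i - 1] + 1)
			break;
		n++;
	}
	return n;
}
```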

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 35840979627f..749d2f9898d1 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6276,6 +6276,71 @@ static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return sev_dbg_crypt(kvm, argp, false);
 }
 
+static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct sev_data_launch_secret *data;
+	struct kvm_sev_launch_secret params;
+	struct page **pages;
+	void *blob, *hdr;
+	unsigned long n;
+	int ret;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
+		return -EFAULT;
+
+	pages = sev_pin_memory(kvm, params.guest_uaddr, params.guest_len, &n, 1);
+	if (!pages)
+		return -ENOMEM;
+
+	/*
+	 * The secret must be copied into a contiguous memory region; let's verify
+	 * that the userspace memory pages are contiguous before we issue the command.
+	 */
+	if (get_num_contig_pages(0, pages, n) != n) {
+		ret = -EINVAL;
+		goto e_unpin_memory;
+	}
+
+	ret = -ENOMEM;
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto e_unpin_memory;
+
+	blob = psp_copy_user_blob(params.trans_uaddr, params.trans_len);
+	if (IS_ERR(blob)) {
+		ret = PTR_ERR(blob);
+		goto e_free;
+	}
+
+	data->trans_address = __psp_pa(blob);
+	data->trans_len = params.trans_len;
+
+	hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
+	if (IS_ERR(hdr)) {
+		ret = PTR_ERR(hdr);
+		goto e_free_blob;
+	}
+	data->hdr_address = __psp_pa(hdr);
+	data->hdr_len = params.hdr_len;
+
+	data->handle = sev->handle;
+	ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_SECRET, data, &argp->error);
+
+	kfree(hdr);
+
+e_free_blob:
+	kfree(blob);
+e_free:
+	kfree(data);
+e_unpin_memory:
+	sev_unpin_memory(kvm, pages, n);
+	return ret;
+}
+
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6314,6 +6379,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_DBG_ENCRYPT:
 		r = sev_dbg_encrypt(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_LAUNCH_SECRET:
+		r = sev_launch_secret(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.9.5


* [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
                   ` (3 preceding siblings ...)
  2017-11-01 21:17 ` [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-06 14:43   ` Borislav Petkov
  2017-11-01 21:17 ` [Part2 PATCH v7 36/38] KVM: SVM: Clear C-bit from the page fault address Brijesh Singh
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

The SEV memory encryption engine uses a tweak such that two identical
plaintext pages at different locations will have different ciphertexts.
So swapping or moving the ciphertext of two pages will not result in
the plaintext being swapped. Relocating (or migrating) the physical
backing pages of a SEV guest will require some additional steps. The
current SEV key management spec does not provide commands to swap or
migrate (move) ciphertext pages. For now, we pin the guest memory
registered through the KVM_MEMORY_ENCRYPT_REGISTER_REGION ioctl.
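Registered ranges are tracked on a per-VM list, and an unregister request must match an existing (uaddr, size) pair exactly. A sketch of that lookup, with a fixed array standing in for the kernel's linked list used by `find_enc_region()`:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the region bookkeeping this patch adds: each registered
 * range is remembered, and unregistering requires an exact match on
 * both the start address and the size. A non-match means the caller's
 * request would fail (-EINVAL in the kernel code).
 */
struct enc_region_sketch {
	unsigned long uaddr;
	unsigned long size;
};

static const struct enc_region_sketch *
find_region(const struct enc_region_sketch *regions, int n,
	    unsigned long uaddr, unsigned long size)
{
	int i;

	for (i = 0; i < n; i++)
		if (regions[i].uaddr == uaddr && regions[i].size == size)
			return &regions[i];
	return NULL;
}
```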

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/svm.c              | 112 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 924ce807c76c..0458f494f5e4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -753,6 +753,7 @@ struct kvm_sev_info {
 	unsigned int handle;	/* SEV firmware handle */
 	int fd;			/* SEV device fd */
 	unsigned long pages_locked; /* Number of pages locked */
+	struct list_head regions_list;  /* List of registered regions */
 };
 
 struct kvm_arch {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 749d2f9898d1..6df96d4c164f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -334,6 +334,14 @@ static unsigned int max_sev_asid;
 static unsigned long *sev_asid_bitmap;
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
 
+struct enc_region {
+	struct list_head list;
+	unsigned long npages;
+	struct page **pages;
+	unsigned long uaddr;
+	unsigned long size;
+};
+
 static inline bool svm_sev_enabled(void)
 {
 	return max_sev_asid;
@@ -1627,13 +1635,42 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
+static void __unregister_enc_region(struct kvm *kvm,
+				    struct enc_region *region)
+{
+	/*
+	 * The guest may change the memory encryption attribute from C=0 -> C=1
+	 * or vice versa for this memory range. Let's make sure caches are
+	 * flushed to ensure that guest data gets written into memory with
+	 * the correct C-bit.
+	 */
+	sev_clflush_pages(region->pages, region->npages);
+
+	sev_unpin_memory(kvm, region->pages, region->npages);
+	list_del(&region->list);
+	kfree(region);
+}
+
 static void sev_vm_destroy(struct kvm *kvm)
 {
 	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct list_head *head = &sev->regions_list;
+	struct list_head *pos, *q;
 
 	if (!sev_guest(kvm))
 		return;
 
+	/*
+	 * If userspace was terminated before unregistering the memory regions
+	 * then let's unpin all the registered memory.
+	 */
+	if (!list_empty(head)) {
+		list_for_each_safe(pos, q, head) {
+			__unregister_enc_region(kvm,
+				list_entry(pos, struct enc_region, list));
+		}
+	}
+
 	sev_unbind_asid(kvm, sev->handle);
 	sev_asid_free(kvm);
 }
@@ -5683,6 +5720,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
 
 	sev->active = true;
 	sev->asid = asid;
+	INIT_LIST_HEAD(&sev->regions_list);
 
 	return 0;
 
@@ -6395,6 +6433,78 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
+static int svm_register_enc_region(struct kvm *kvm,
+				   struct kvm_enc_region *range)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct enc_region *region;
+	int ret = 0;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	region = kzalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
+	if (!region->pages) {
+		ret = -ENOMEM;
+		goto e_free;
+	}
+
+	/*
+	 * The guest may change the memory encryption attribute from C=0 -> C=1
+	 * or vice versa for this memory range. Let's make sure caches are
+	 * flushed to ensure that guest data gets written into memory with
+	 * the correct C-bit.
+	 */
+	sev_clflush_pages(region->pages, region->npages);
+
+	region->uaddr = range->addr;
+	region->size = range->size;
+	list_add_tail(&region->list, &sev->regions_list);
+	return ret;
+
+e_free:
+	kfree(region);
+	return ret;
+}
+
+static struct enc_region *
+find_enc_region(struct kvm *kvm, struct kvm_enc_region *range)
+{
+	struct kvm_sev_info *sev = &kvm->arch.sev_info;
+	struct list_head *head = &sev->regions_list;
+	struct enc_region *i;
+
+	list_for_each_entry(i, head, list) {
+		if (i->uaddr == range->addr &&
+		    i->size == range->size)
+			return i;
+	}
+
+	return NULL;
+}
+
+
+static int svm_unregister_enc_region(struct kvm *kvm,
+				     struct kvm_enc_region *range)
+{
+	struct enc_region *region;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	region = find_enc_region(kvm, range);
+	if (!region)
+		return -EINVAL;
+
+	__unregister_enc_region(kvm, region);
+
+	return 0;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -6507,6 +6617,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.setup_mce = svm_setup_mce,
 
 	.mem_enc_op = svm_mem_enc_op,
+	.mem_enc_reg_region = svm_register_enc_region,
+	.mem_enc_unreg_region = svm_unregister_enc_region,
 };
 
 static int __init svm_init(void)
-- 
2.9.5


* [Part2 PATCH v7 36/38] KVM: SVM: Clear C-bit from the page fault address
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
                   ` (4 preceding siblings ...)
  2017-11-01 21:17 ` [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 37/38] KVM: SVM: Do not install #UD intercept when SEV is enabled Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 38/38] KVM: X86: Restart the guest when insn_len is zero and " Brijesh Singh
  7 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

When SEV is active, on #NPF the page fault address will contain the C-bit.
We must clear the C-bit before handling the fault.
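The masking itself is a one-liner. A sketch of the idea, where bit 47 is used only as an example position: the real C-bit location is reported by CPUID 0x8000001F and captured at boot in `sme_me_mask`, which `__sme_clr()` applies.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of clearing the C-bit from a #NPF fault address. The bit
 * position (47) is an example for the sketch; on real hardware it
 * comes from CPUID 0x8000001F[EBX] and is held in sme_me_mask.
 */
#define SME_ME_MASK_EXAMPLE	(1ULL << 47)

static uint64_t sme_clr_sketch(uint64_t fault_address)
{
	return fault_address & ~SME_ME_MASK_EXAMPLE;
}
```

Without this, the fault address handed to the MMU would look like a guest physical address far above the guest's actual memory.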

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6df96d4c164f..a0e5e22bb661 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2404,7 +2404,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
 
 static int pf_interception(struct vcpu_svm *svm)
 {
-	u64 fault_address = svm->vmcb->control.exit_info_2;
+	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
-- 
2.9.5


* [Part2 PATCH v7 37/38] KVM: SVM: Do not install #UD intercept when SEV is enabled
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
                   ` (5 preceding siblings ...)
  2017-11-01 21:17 ` [Part2 PATCH v7 36/38] KVM: SVM: Clear C-bit from the page fault address Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  2017-11-01 21:17 ` [Part2 PATCH v7 38/38] KVM: X86: Restart the guest when insn_len is zero and " Brijesh Singh
  7 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

On #UD, x86_emulate_instruction() fetches the data from guest memory and
decodes the instruction bytes to assist further. When SEV is enabled, the
instruction bytes will be encrypted using the guest-specific key and the
hypervisor will no longer be able to fetch the instruction bytes to
assist #UD handling. By not installing the intercept we let the guest
receive and handle #UD.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/svm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a0e5e22bb661..29c30ecde780 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1421,8 +1421,10 @@ static void init_vmcb(struct vcpu_svm *svm)
 		svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
 	}
 
-	if (sev_guest(svm->vcpu.kvm))
+	if (sev_guest(svm->vcpu.kvm)) {
 		svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+		clr_exception_intercept(svm, UD_VECTOR);
+	}
 
 	mark_all_dirty(svm->vmcb);
 
-- 
2.9.5


* [Part2 PATCH v7 38/38] KVM: X86: Restart the guest when insn_len is zero and SEV is enabled
  2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
                   ` (6 preceding siblings ...)
  2017-11-01 21:17 ` [Part2 PATCH v7 37/38] KVM: SVM: Do not install #UD intercept when SEV is enabled Brijesh Singh
@ 2017-11-01 21:17 ` Brijesh Singh
  7 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-01 21:17 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: bp, Brijesh Singh, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, Tom Lendacky, x86

On AMD platforms, under certain conditions insn_len may be zero on #NPF.
This can happen if a guest gets a page fault on a data access but the HW
table walker is not able to read the instruction page (e.g., the
instruction page is not present in memory).

Typically, when insn_len is zero, x86_emulate_instruction() walks the
guest page table and fetches the instruction bytes from guest memory.
When SEV is enabled, the guest memory is encrypted with a guest-specific
key, hence the hypervisor will not be able to fetch the instruction bytes.
In those cases we simply restart the guest.

I have encountered this issue when running kernbench inside the guest.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/kvm/mmu.c | 10 ++++++++++
 arch/x86/kvm/svm.c |  3 ++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7a69cf053711..0d4776b855bb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4954,6 +4954,16 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	if (mmio_info_in_cache(vcpu, cr2, direct))
 		emulation_type = 0;
 emulate:
+	/*
+	 * On AMD platforms, under certain conditions insn_len may be zero on #NPF.
+	 * This can happen if a guest gets a page-fault on data access but the HW
+	 * table walker is not able to read the instruction page (e.g. instruction
+	 * page is not present in memory). In those cases we simply restart the
+	 * guest.
+	 */
+	if (unlikely(insn && !insn_len))
+		return 1;
+
 	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
 
 	switch (er) {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 29c30ecde780..3299d0d2ee1d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2410,7 +2410,8 @@ static int pf_interception(struct vcpu_svm *svm)
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
-			svm->vmcb->control.insn_bytes,
+			static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
+			svm->vmcb->control.insn_bytes : NULL,
 			svm->vmcb->control.insn_len, !npt_enabled);
 }
 
-- 
2.9.5


* Re: [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command
  2017-11-01 21:17 ` [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command Brijesh Singh
@ 2017-11-06 11:24   ` Borislav Petkov
  0 siblings, 0 replies; 15+ messages in thread
From: Borislav Petkov @ 2017-11-06 11:24 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: kvm, linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86

On Wed, Nov 01, 2017 at 04:17:17PM -0500, Brijesh Singh wrote:
> The command is used for decrypting a guest memory region for debug
> purposes.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kvm/svm.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 157 insertions(+)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command
  2017-11-01 21:17 ` [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command Brijesh Singh
@ 2017-11-06 11:31   ` Borislav Petkov
  2017-11-06 16:38     ` Brijesh Singh
  0 siblings, 1 reply; 15+ messages in thread
From: Borislav Petkov @ 2017-11-06 11:31 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: kvm, linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86

On Wed, Nov 01, 2017 at 04:17:18PM -0500, Brijesh Singh wrote:
> The command copies a plaintext into guest memory and encrypts it using
> the VM encryption key. The command will be used for debug purposes
> (e.g setting breakpoints through gdbserver)

...

> @@ -6161,11 +6238,19 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
>  		d_off = dst_vaddr & ~PAGE_MASK;
>  		len = min_t(size_t, (PAGE_SIZE - s_off), size);
>  
> -		ret = sev_dbg_decrypt_user(kvm,
> -					   __sme_page_pa(src_p[0]) + s_off,
> -					   dst_vaddr,
> -					   __sme_page_pa(dst_p[0]) + d_off,
> -					   len, &argp->error);
> +		if (dec)
> +			ret = sev_dbg_decrypt_user(kvm,
> +						   __sme_page_pa(src_p[0]) + s_off,
> +						   dst_vaddr,
> +						   __sme_page_pa(dst_p[0]) + d_off,
> +						   len, &argp->error);
> +		else
> +			ret = __sev_dbg_encrypt_user(kvm,
> +						     __sme_page_pa(src_p[0]) + s_off,
> +						     vaddr,
> +						     __sme_page_pa(dst_p[0]) + d_off,
> +						     dst_vaddr,
> +						     len, &argp->error);

sev_dbg_decrypt_user but __sev_dbg_encrypt_user, with the "__" ??

>  
>  		sev_unpin_memory(kvm, src_p, 1);
>  		sev_unpin_memory(kvm, dst_p, 1);
> @@ -6186,6 +6271,11 @@ static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return sev_dbg_crypt(kvm, argp, true);
>  }
>  
> +static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +	return sev_dbg_crypt(kvm, argp, false);
> +}

Get rid of those silly wrappers:

---
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a60454afb4d2..68d398e72c4c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6261,16 +6261,6 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
 	return ret;
 }
 
-static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
-{
-	return sev_dbg_crypt(kvm, argp, true);
-}
-
-static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
-{
-	return sev_dbg_crypt(kvm, argp, false);
-}
-
 static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -6304,10 +6294,10 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 		r = sev_guest_status(kvm, &sev_cmd);
 		break;
 	case KVM_SEV_DBG_DECRYPT:
-		r = sev_dbg_decrypt(kvm, &sev_cmd);
+		r = sev_dbg_crypt(kvm, argp, true);
 		break;
 	case KVM_SEV_DBG_ENCRYPT:
-		r = sev_dbg_encrypt(kvm, &sev_cmd);
+		r = sev_dbg_crypt(kvm, argp, false);
 		break;
 	default:
 		r = -EINVAL;

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command
  2017-11-01 21:17 ` [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command Brijesh Singh
@ 2017-11-06 14:36   ` Borislav Petkov
  0 siblings, 0 replies; 15+ messages in thread
From: Borislav Petkov @ 2017-11-06 14:36 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: kvm, linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86

On Wed, Nov 01, 2017 at 04:17:19PM -0500, Brijesh Singh wrote:
> The command is used for injecting a secret into the guest memory region.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> 
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>  arch/x86/kvm/svm.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 68 insertions(+)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active
  2017-11-01 21:17 ` [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active Brijesh Singh
@ 2017-11-06 14:43   ` Borislav Petkov
  2017-11-06 16:55     ` Brijesh Singh
  0 siblings, 1 reply; 15+ messages in thread
From: Borislav Petkov @ 2017-11-06 14:43 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: kvm, linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86

On Wed, Nov 01, 2017 at 04:17:20PM -0500, Brijesh Singh wrote:
> The SEV memory encryption engine uses a tweak such that two identical
> plaintext pages at different location will have different ciphertext.
> So swapping or moving ciphertext of two pages will not result in
> plaintext being swapped. Relocating (or migrating) physical backing
> pages for a SEV guest will require some additional steps. The current SEV
> key management spec does not provide commands to swap or migrate (move)
> ciphertext pages. For now, we pin the guest memory registered through
> KVM_MEMORY_ENCRYPT_REGISTER_REGION ioctl.

...

> +static int svm_register_enc_region(struct kvm *kvm,
> +				   struct kvm_enc_region *range)
> +{
> +	struct kvm_sev_info *sev = &kvm->arch.sev_info;
> +	struct enc_region *region;
> +	int ret = 0;
> +
> +	if (!sev_guest(kvm))
> +		return -ENOTTY;
> +
> +	region = kzalloc(sizeof(*region), GFP_KERNEL);
> +	if (!region)
> +		return -ENOMEM;
> +
> +	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
> +	if (!region->pages) {
> +		ret = -ENOMEM;
> +		goto e_free;
> +	}
> +
> +	/*
> +	 * The guest may change the memory encryption attribute from C=0 -> C=1
> +	 * or vice versa for this memory range. Lets make sure caches are
> +	 * flushed to ensure that guest data gets written into memory with
> +	 * correct C-bit.
> +	 */
> +	sev_clflush_pages(region->pages, region->npages);
> +
> +	region->uaddr = range->addr;
> +	region->size = range->size;
> +	list_add_tail(&region->list, &sev->regions_list);
> +	return ret;

Nothing's protecting that list from concurrent modifications of adding
and removal of regions.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command
  2017-11-06 11:31   ` Borislav Petkov
@ 2017-11-06 16:38     ` Brijesh Singh
  0 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-06 16:38 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, kvm, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86



On 11/06/2017 05:31 AM, Borislav Petkov wrote:
...

>> -		ret = sev_dbg_decrypt_user(kvm,
>> -					   __sme_page_pa(src_p[0]) + s_off,
>> -					   dst_vaddr,
>> -					   __sme_page_pa(dst_p[0]) + d_off,
>> -					   len, &argp->error);
>> +		if (dec)
>> +			ret = sev_dbg_decrypt_user(kvm,
>> +						   __sme_page_pa(src_p[0]) + s_off,
>> +						   dst_vaddr,
>> +						   __sme_page_pa(dst_p[0]) + d_off,
>> +						   len, &argp->error);
>> +		else
>> +			ret = __sev_dbg_encrypt_user(kvm,
>> +						     __sme_page_pa(src_p[0]) + s_off,
>> +						     vaddr,
>> +						     __sme_page_pa(dst_p[0]) + d_off,
>> +						     dst_vaddr,
>> +						     len, &argp->error);
> 
> sev_dbg_decrypt_user but __sev_dbg_encrypt_user, with the "__" ??
> 

Yes, they both should have the "__" prefix. I overlooked it and will fix it in the next rev.


* Re: [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active
  2017-11-06 14:43   ` Borislav Petkov
@ 2017-11-06 16:55     ` Brijesh Singh
  0 siblings, 0 replies; 15+ messages in thread
From: Brijesh Singh @ 2017-11-06 16:55 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: brijesh.singh, kvm, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Tom Lendacky, x86



On 11/06/2017 08:43 AM, Borislav Petkov wrote:
...

>> +
>> +	region->uaddr = range->addr;
>> +	region->size = range->size;
>> +	list_add_tail(&region->list, &sev->regions_list);
>> +	return ret;
> 
> Nothing's protecting that list from concurrent modifications of adding
> and removal of regions.
> 

Ah, good catch. I will fix this in the next rev. Similar to mem_enc_op(),
we need to acquire kvm->lock when adding or removing regions.

-Brijesh


end of thread, other threads:[~2017-11-06 16:55 UTC | newest]

Thread overview: 15+ messages
2017-11-01 21:17 [Part2 PATCH v7 30/38] KVM: SVM: Add support for SEV LAUNCH_FINISH command Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 31/38] KVM: SVM: Add support for SEV GUEST_STATUS command Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 32/38] KVM: SVM: Add support for SEV DEBUG_DECRYPT command Brijesh Singh
2017-11-06 11:24   ` Borislav Petkov
2017-11-01 21:17 ` [Part2 PATCH v7 33/38] KVM: SVM: Add support for SEV DEBUG_ENCRYPT command Brijesh Singh
2017-11-06 11:31   ` Borislav Petkov
2017-11-06 16:38     ` Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 34/38] KVM: SVM: Add support for SEV LAUNCH_SECRET command Brijesh Singh
2017-11-06 14:36   ` Borislav Petkov
2017-11-01 21:17 ` [Part2 PATCH v7 35/38] KVM: SVM: Pin guest memory when SEV is active Brijesh Singh
2017-11-06 14:43   ` Borislav Petkov
2017-11-06 16:55     ` Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 36/38] KVM: SVM: Clear C-bit from the page fault address Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 37/38] KVM: SVM: Do not install #UD intercept when SEV is enabled Brijesh Singh
2017-11-01 21:17 ` [Part2 PATCH v7 38/38] KVM: X86: Restart the guest when insn_len is zero and " Brijesh Singh
